IoT requires a switch to dynamic security

At the recent annual conference organised by Techworks’ IoT Security Foundation, Fortinet chief security strategist and author of the RIoT Control book Tyson Macaulay argued that engineers working in the area need to start rethinking how they approach the problem. Rather than treating security as a static feature that – once implemented – is fixed, what is really needed is a framework that lets products react to changes in hacking techniques and, potentially, stay one step ahead.

“Security has got to go from static to dynamic. Static thinking is about set it and forget it. That type of thinking in the IoT just doesn’t work, partly because of safety considerations and also partly because of the privacy considerations,” Macaulay said.

In his keynote, Macaulay explained that he adopted a framework originally conceived by Ed Amoroso, founder and CEO of Tag Cyber and former chief security officer at AT&T. Amoroso has developed ideas such as “software-defined network security”, which borrows from cloud computing. Different systems need different kinds of protection. As the workloads on cloud servers are constantly shifting – sometimes on a minute-by-minute basis – why not tune the security mechanisms for these workloads individually?

Amoroso is keen to take the emphasis away from firewalls and other systems that tend to focus on having a secure perimeter, largely because once hackers are through the firewall they often find that they can do what they like. Instead, concepts such as threat-hunting become more important. This is where the protection software analyses what is happening on the network in real time to try to spot anomalous behaviour and, if necessary or possible, lock out the perpetrator.
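The core of threat-hunting — watching live behaviour and flagging deviations from a baseline — can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a stream of per-device event counts and flags a device whose latest count sits far outside its own rolling history.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flag devices whose event rate deviates sharply from a rolling baseline."""

    def __init__(self, window=20, threshold=3.0):
        self.window = window        # samples kept per device
        self.threshold = threshold  # standard deviations that count as anomalous
        self.history = {}           # device id -> recent event counts

    def observe(self, device_id, count):
        """Record one sample; return True if it looks anomalous."""
        hist = self.history.setdefault(device_id, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= 5:          # need a few samples before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(count - mu) > self.threshold * sigma:
                anomalous = True
        hist.append(count)
        return anomalous
```

A real deployment would of course watch many more signals (destinations, payload sizes, timing) and feed a response mechanism that can isolate the offending device, but the shape of the problem is the same: baseline, compare, react.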

Macaulay stressed the importance of not having a one-size-fits-all approach to security policies, especially with IoT systems. At the moment, he argued, there is a tendency to buy the lowest-cost offering with a fixed set of capabilities, because that is what many bidding processes favour. When you have a system with many moving parts – which is pretty much what a large-scale IoT system is going to be – the integrator is probably going to find that the security features in a particular brand of sensor or gateway do not mesh well with the features in other parts of the network.

Clearly, some suppliers are going to like that approach because it improves their chances of locking in a customer by providing most or all of the pieces. When you consider that IoT systems are likely to have to evolve over time and may well involve bringing in data that comes from partners, this is probably a poor idea. The buyer is likely to end up with the same type of situation faced by public authorities locked into private-finance initiative (PFI) contracts. Everything that was in the original bid document was cheap(ish) – but you want to add water-quality monitoring to your smart city system now? That’s going to cost. A lot more.

Many of the recommendations that Macaulay had are on the level of business models and how business decisions interact with technology. One that has consequences for the way systems are built is to bring dynamic behaviour to the underlying hardware and firmware. Macaulay called for what he terms “cryptographic agility”.

“It’s the ability to move from one type of cryptography to another in real time at will and without having to recode the app,” he explained. “It is something that IoT systems today do not have, at least not without reflashing the system.

“There is one reason why this is important: much of the crypto being used now is open-source and some of these libraries are shipping with algorithms that are 20-plus years old. They are deprecated and insecure,” Macaulay said.
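One way to approximate cryptographic agility in software — a sketch under my own assumptions, not Macaulay's design — is to put an indirection layer between application code and the crypto primitives, so the active algorithm is named in configuration rather than compiled into the caller. Here message authentication stands in for the general case, using only Python's standard library:

```python
import hashlib
import hmac

# Registry of available MAC algorithms. A firmware or library update can
# add entries here without touching the application code below.
_ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

# The active algorithm is configuration, not code: switching it does not
# require recompiling or reflashing the application logic.
_active = "hmac-sha256"

def set_algorithm(name):
    """Switch the whole system to a different algorithm at runtime."""
    if name not in _ALGORITHMS:
        raise ValueError(f"unknown algorithm: {name}")
    global _active
    _active = name

def authenticate(key, message):
    """Application-facing call: unchanged when the algorithm is swapped."""
    return _active, _ALGORITHMS[_active](key, message)
```

When `set_algorithm("hmac-sha3-256")` is called, every subsequent `authenticate()` uses the new primitive with no change to the calling code — which is the property Macaulay is asking for, albeit here at the software layer rather than in silicon.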

This is perhaps an argument for not using some of the open-source libraries out there. Dave Hughes, the head of UK embedded-software firm HCC Embedded, said several years ago that there is a strong argument for public money or a consortium to fund future open-source libraries so that the teams can adopt more stringent development and testing regimes rather than just relying on the idea that many eyes reveal all bugs. It was not long after I heard Hughes argue this that the Shellshock bug emerged. That had lain dormant for many years in the open-source command interpreter Bash before finally being discovered by a programmer with a long-standing interest in how such shell programs operate.

The problem is not restricted to public-source code. Proprietary libraries sport flaws that weaken their security, and the arrival of quantum computing will force a move to encryption that does not rely on large primes. Having a system that lets a software update alter the crypto library in an IoT device, without changing the code that depends on it, would help systems that need to operate in the field for decades survive these changes. However, there are questions of performance. Today's microcontrollers have accelerators designed for specific crypto algorithms. They probably will not work that well for next-generation quantum-safe techniques such as ‘Learning With Errors’. That may, in turn, call for hardware accelerators that can, themselves, be reconfigured to deal with novel algorithms. Such engines tend to be less energy efficient than their hardwired counterparts. Dynamic thinking does not come cheap.
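A decades-long migration also means old and new firmware must interoperate while the fleet is updated device by device. Tagging each message with an algorithm identifier is one common way to allow that coexistence. The wire format below is hypothetical, and hashing stands in for real encryption purely to keep the sketch self-contained:

```python
import hashlib

# Hypothetical algorithm registry: the identifier travels with the data, so
# receivers can verify messages from either generation of firmware. Both
# digests here happen to be 32 bytes, keeping the header a fixed size.
DIGESTS = {1: hashlib.sha256, 2: hashlib.sha3_256}

def seal(alg_id, payload):
    """Prefix the payload with its algorithm id and integrity digest."""
    digest = DIGESTS[alg_id](payload).digest()
    return bytes([alg_id]) + digest + payload

def open_sealed(message):
    """Verify a message using whichever algorithm it declares."""
    alg_id, digest, payload = message[0], message[1:33], message[33:]
    if DIGESTS[alg_id](payload).digest() != digest:
        raise ValueError("integrity check failed")
    return payload
```

Because the receiver dispatches on the declared identifier, retiring a deprecated algorithm is a matter of deleting its registry entry rather than rewriting the protocol handling around it.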

Chris Edwards, E&T News

https://eandt.theiet.org/content/articles/2018/12/iot-requires-a-switch-to-dynamic-security/
