
Notes on the UK IoT cybersec “Code of Practice”

The British government has released a voluntary “Code of Practice” for securing IoT devices. I thought I’d write some notes on it.

First, the good parts

Before I criticize the individual points, I want to praise it for having a clue. So many of these sorts of things are written by the clueless, those who want to be involved in telling people what to do, but who don’t really understand the problem.
The first part of the clue is restricting the scope. Consumer IoT is so vastly different from things like cars, medical devices, industrial control systems, or mobile phones that they should never really be talked about in the same guide.
The next part of the clue is understanding the players. It’s not just the device that’s a problem, but also the cloud and mobile app part that relates to the device. Though they do go too far and include the “retailer”, which is a bit nonsensical.
Lastly, while I’m critical of almost all the points on the list and how they are described, it’s probably a complete list. There’s not much missing, and at the same time, it includes little that isn’t necessary. In contrast, a lot of other IoT security guides lack important things, or take the “kitchen sink” approach and try to include everything conceivable.

1) No default passwords

Since the Mirai botnet of 2016 famously exploited default passwords, this has been at the top of everyone’s list. It’s the most prominent feature of the recent California IoT law. It’s the major feature of federal proposals.
But this is only a superficial understanding of what really happened. The issue wasn’t default passwords so much as Internet-exposed Telnet.
IoT devices are generally based on Linux, which maintains operating-system passwords in the /etc/passwd file. However, devices almost never use that. Instead, the web-based management interface maintains its own password database. The underlying Linux system is vestigial, like an appendix, and not really used.
But these devices exposed Telnet, providing a path to this otherwise unused functionality. I bought several of the Mirai-vulnerable devices, and none of them used /etc/passwd for anything other than Telnet.
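As an illustration of where the real problem lies, the check that matters is not “is there a default password?” but “does the device answer on the Telnet port at all?”. A minimal sketch in Python (the address is a made-up example, not any particular product):

    import socket

    def telnet_exposed(host, port=23, timeout=3):
        """Return True if the host accepts TCP connections on the Telnet port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical device address on a local network.
    if telnet_exposed("192.168.1.50"):
        print("Telnet reachable: the vestigial /etc/passwd logins are now attack surface")
    else:
        print("Telnet not exposed")
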
Another way default passwords get exposed in IoT devices is through debugging interfaces. Manufacturers configure the system one way for easy development, and then ship a separate “release” version. Sometimes they make a mistake and ship the development backdoors as well. Programmers often insert secret backdoor accounts into products for development purposes without realizing how easy it is for hackers to discover those passwords.
The point is that this focus on default passwords misunderstands the problem. Device makers can easily believe they are compliant with this directive while still having backdoor passwords.
As for the web management interface, saying “no default passwords” is useless. Users have to be able to set up the device the first time, so there has to be some means to connect to the device initially without passwords. Device makers don’t know how to do this without default passwords. Instead of mindless guidance about what not to do, a document needs to be written that explains how devices can do this both securely and in a way that’s easy enough for users.
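To be concrete about what such a document might say: one common answer is to generate a unique random password per device at first boot (printed on the label, or shown during out-of-box pairing) rather than shipping one shared default. The following is only a rough sketch of that idea; the file path and field names are invented for illustration.

    import json
    import os
    import secrets

    CRED_FILE = "/var/config/webui_credentials.json"  # hypothetical location

    def first_boot_setup():
        """On first boot, create a unique per-device admin password instead of a shared default."""
        if os.path.exists(CRED_FILE):
            return None  # already provisioned
        password = secrets.token_urlsafe(12)  # unique per unit, never shared across the product line
        with open(CRED_FILE, "w") as f:
            json.dump({"user": "admin", "password": password, "must_change": True}, f)
        # On a real device this would be printed on the label or displayed during
        # the pairing step, not written to a log.
        return password
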
Humorously, the footnotes in this section do reference external documents that might explain this, but they are the wrong documents, appropriate for things like website password policies, but inappropriate for IoT web interfaces. This again demonstrates how they have only a superficial understanding of the problem.

2) Implement a vulnerability disclosure policy

This is a clueful item, and it should be the #1 item on every list.
They do add garbage on top of this, such as demanding that companies respond in a “timely manner”, but overall this isn’t a bad section.

3) Keep software updated

This is another superficial understanding of the problem.
Software patching works for desktops and mobile phones because they have interfaces the user interacts with, ones that can notify the user of a patch and provide the functionality to apply it. IoT devices are usually stuck in a closet somewhere without such interfaces.
Software patching works for normal computers because they sell for hundreds of dollars and thus have sufficient memory and storage to reliably do updates. IoT devices sell at cut-throat margins and have barely enough storage to run. This either precludes updates altogether, or at least means updates aren’t reliable: with every update, a small percentage of customer devices will be “bricked”, rendered unusable. Adding $1 of flash memory to a $30 device is not a reasonable solution to the problem.
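For what a “reliable” update even looks like, the usual answer is an A/B scheme: write the new firmware to the inactive slot, verify it, and only then switch, keeping the old image as a fallback. That roughly doubles the flash needed to hold firmware, which is exactly the cost problem. A rough sketch, with made-up partition names:

    import hashlib

    # Hypothetical partition nodes in a two-slot ("A/B") firmware layout.
    SLOTS = {"A": "/dev/mtd_fw_a", "B": "/dev/mtd_fw_b"}

    def apply_update(image, expected_sha256, active_slot, set_boot_slot):
        """Write the new image to the inactive slot, verify it, then flip the boot flag.

        The old image stays untouched in the active slot, so a failed write or a
        corrupt download never bricks the device -- but only if there is enough
        flash to hold two copies of the firmware."""
        inactive = "B" if active_slot == "A" else "A"
        with open(SLOTS[inactive], "wb") as f:
            f.write(image)
        with open(SLOTS[inactive], "rb") as f:
            written = f.read(len(image))
        if hashlib.sha256(written).hexdigest() != expected_sha256:
            return False          # boot flag untouched; the old firmware keeps running
        set_boot_slot(inactive)   # e.g. update a bootloader environment variable
        return True
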
Software patching works for software because of its enormous margins and longevity. A software product is basically all profit. The same doesn’t apply to hardware, where devices are sold with slim margins. Device makers have a hard time charging more because there are always no-name makers of almost identical devices in Shenzhen willing to undercut them. (Indeed, looking at Mirai, it appears the majority of infected devices were not major brands but no-name knock-offs.)
The document says that device makers need to publish how long the device will be supported. This ignores the economics. Device makers cannot know how long they will support a device. As long as they are selling new ones, they’ve got the incentive and profits to keep supplying updates. After that, they don’t. There’s really no way for them to predict the long-term market success of their devices.
Guarantees cost money. If they guarantee security fixes for 10 years, then that’s a liability they have to account for on their balance sheet. It’s a huge risk: if the product fails to sell lots of units, then they are on the hook for a large cost without the necessary income to match it.
Lastly, the entire thing is a canard. Users rarely update firmware for devices. Blaming vendors for not providing security patches/updates means nothing without blaming users for not applying them.

4) Securely store credentials and security-sensitive data

Like many guides, this section makes the superficial statement “Hard-coded credentials in device software are not acceptable”. The reason this is silly is that public keys are a “credential”, and you indeed want “hard-coded” public keys. Hard-coded public-key credentials are how you do other security functions, like encryption and signature verification.
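For example, a firmware-update checker wants the vendor’s public key baked right into the image, which is a “hard-coded credential” by any plain reading of the rule. A minimal sketch using the third-party cryptography package; the key bytes are the published RFC 8032 Ed25519 test-vector key, standing in for a real vendor key:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # "Hard-coded" vendor public key compiled into the firmware.
    # (Placeholder value: the RFC 8032 Ed25519 test-vector public key.)
    VENDOR_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"))

    def update_is_genuine(image: bytes, signature: bytes) -> bool:
        """Verify a downloaded firmware image against the embedded vendor key."""
        try:
            VENDOR_PUBKEY.verify(signature, image)
            return True
        except InvalidSignature:
            return False
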
This section tells device makers to use the trusted-enclave features like those found on phones, but this is rather silly. For one thing, that’s a feature of only high-end CPUs, not the low-end CPUs found in such devices. For another thing, IoT devices don’t really contain anything that needs that level of protection.
Storing passwords in clear text on the device is almost certainly adequate security, and this section can be ignored.

5) Communicate securely

In other words, use SSL everywhere, such as on the web-based management interface.
But this is only a superficial understanding of how SSL works. You (generally) can’t use SSL for devices because there’s no secure certificate on the device. It forces users to bypass nasty warnings in the browser, which hurts the entire web ecosystem. Some IoT devices do indeed try to use SSL this way, and it’s bad, very bad.
On the other hand, IoT devices can and should use SSL when connecting outbound to the cloud.
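The distinction is easy to show in code. Outbound to the cloud, the device is an ordinary TLS client verifying the server against the public CA roots, so no self-signed-certificate problem arises. The hostname below is a placeholder for a vendor’s cloud endpoint, not a real service:

    import socket
    import ssl

    CLOUD_HOST = "cloud.example-vendor.com"  # hypothetical vendor endpoint

    def open_cloud_connection():
        """Outbound TLS to the vendor cloud: the certificate chain and hostname
        are checked against the normal public CA roots."""
        context = ssl.create_default_context()  # enables verification by default
        raw = socket.create_connection((CLOUD_HOST, 443), timeout=10)
        return context.wrap_socket(raw, server_hostname=CLOUD_HOST)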

6) Minimise exposed attack surfaces

This is certainly a good suggestion, but it’s a platitude rather than an action item. IoT devices already minimize as much as they can in order to reduce memory/storage requirements. Making this actionable requires a subtler understanding. A lot of exposed attack surface comes from accidents.
A lot of other exposed attack surface comes about because device makers know no better way. Actually helpful, meaningful advice would consist of telling them what to do in order to solve problems, rather than telling them what not to do.
The reason Mirai-vulnerable devices exposed Telnet was for things like “remote factory reset”. Mirai mostly infected security cameras, which don’t have factory-reset buttons, either because they are mounted high up out of reach or because, if they are in reach, the owner doesn’t want to allow the public to press a factory-reset button. Thus, doing a factory reset meant doing it remotely. That appears to be the major reason for Telnet and “hardcoded passwords”: to allow remote factory reset. Instead of telling device makers not to expose Telnet, you need a guide explaining how to securely do remote factory resets.
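As a hedged sketch of what such a guide might suggest instead of a Telnet backdoor: put the reset behind the existing web interface and tie it to a secret that is unique per unit, such as a code printed on the device’s label. The endpoint and label-code scheme below are assumptions for illustration, not anything the Code of Practice specifies.

    import hmac

    # Hypothetical per-unit reset code, printed on the device label at manufacture.
    LABEL_RESET_CODE = "7G4K-PQ2X-9ZRT"

    def handle_reset_request(supplied_code):
        """Handler behind, say, POST /factory-reset on the web management interface.

        Because the code is unique per unit, learning one device's code gains an
        attacker nothing against the rest of the fleet -- unlike a shared Telnet
        password."""
        if not hmac.compare_digest(supplied_code, LABEL_RESET_CODE):
            return False
        wipe_user_configuration()   # placeholder for the actual reset-and-reboot logic
        return True

    def wipe_user_configuration():
        pass
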
This guide discusses “ports”, but the reality is that the attack surface of the web-based management interface on port 80 is usually larger than that of all other ports put together. Focusing on “ports” reflects a superficial understanding of the problem.

7) Ensure software integrity

The guide says “Software on IoT devices should be verified using secure boot mechanisms”. No, they shouldn’t be. In the name of security, they should do the opposite.
First of all, getting “secure boot” done right is extraordinarily difficult. Apple does it best with the iPhone, and still they get it wrong. For another thing, it’s expensive. Like trusted enclaves in processors, most of the cheap low-end processors used in IoT don’t support it.
But the biggest issue is that you don’t want it. “Secure boot” means the only operating system the device can boot comes from the vendor, which will eventually stop supporting the product, making it impossible to fix any security problem. Not having secure boot means that customers will still be able to patch bugs without the manufacturer’s help.
Instead of secure boot, device makers should do the opposite and make it easy for customers to build their own software. They are required to do so under the GNU General Public License anyway. That doesn’t mean open-sourcing everything; they can still provide their proprietary code as binaries. But they should allow users to fix any bug in the open-source parts and repackage a new firmware update.

8) Ensure that personal data is protected

I suppose given the GDPR, this section is required, but GDPR is a pox on the Internet.

9) Make systems resilient to outages

Given the recent story of Yale locks locking people out of their houses due to a system outage, this seems like an obviously good idea.
But it should be noted that this is hard. Obviously such a lock should be resilient if the network connection is down or the vendor’s servers have crashed. But what happens when the lock can contact the servers, yet some other component within the organization has crashed, such that the servers give unexpected responses, neither completely down nor completely up and running?
We saw that in the Mirai attacks against Dyn. They left a lot of servers up and running but took down some other component that those servers relied upon, leaving things in an intermediate state that was neither fully functional nor completely broken.
It’s easy to stand on a soapbox and proclaim devices need to be resilient, but this is unhelpful. What would instead be helpful is a catalog of the failures that IoT devices will typically experience.
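Such a catalog would include at least: no network, DNS failure, connection refused, timeouts, and the “half-up” case where the server answers but with garbage. A rough sketch for a lock, with an invented endpoint and response format, showing one way to treat all of those as a cue to fall back to a local credential cache:

    import json
    import urllib.error
    import urllib.request

    AUTH_URL = "https://locks.example-vendor.com/v1/check"   # hypothetical endpoint
    LOCAL_CACHE = {"alice-keyfob-01", "bob-keyfob-02"}        # credentials that work offline

    def should_unlock(credential_id):
        """Ask the cloud, but treat any failure -- down, slow, or half-broken --
        as a reason to fall back to the locally cached allow-list."""
        try:
            req = urllib.request.Request(
                AUTH_URL,
                data=json.dumps({"credential": credential_id}).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=3) as resp:
                body = json.load(resp)
                if isinstance(body, dict) and isinstance(body.get("allow"), bool):
                    return body["allow"]
                # Server answered, but not with anything sensible: the "half-up" case.
        except (urllib.error.URLError, json.JSONDecodeError, OSError):
            pass  # no network, DNS failure, connection refused, timeout...
        return credential_id in LOCAL_CACHE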

10) Monitor system telemetry data

Security telemetry is a desirable feature in general. When a hack happens, you want to review logfiles to see how it happened. This item reflects various efforts to come up with such useful information.
But again we see something so devoid of technical details as to be useless. Worse, it’s going to be exploited by others, such as McAfee wanting you to have anti-virus on TV sets, which is an extraordinarily bad idea.
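To be concrete about the kind of detail that would make this item useful: even just recording failed logins on the management interface with timestamps gives an investigator something to review after a compromise. A minimal sketch; the log location is a placeholder:

    import json
    import time

    SECURITY_LOG = "/var/log/security_events.jsonl"  # hypothetical location

    def log_security_event(event, **details):
        """Append a structured, timestamped record so there is something to
        review after a hack."""
        record = {"ts": time.time(), "event": event, **details}
        with open(SECURITY_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: called from the web interface's login handler.
    # log_security_event("login_failed", user="admin", source_ip="203.0.113.7")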

11) Make it easy for consumers to delete personal data

This is kinda silly in that it’s simply a matter of doing a “factory reset”. Having methods other than a factory reset to delete personal details is bad.
The useful bit of advice is that factory resets don’t always “wipe” information; they just “forget” it in a way that can be recovered. Thus, we get printers containing old documents and voting machines with old votes.
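The difference, roughly: “forgetting” drops the reference to the data while the bytes stay on flash, while “wiping” overwrites the bytes first. A sketch of the distinction (overwriting at the file level is only approximate on flash with wear levelling, so treat it as illustrative):

    import os

    def forget(path):
        """'Forget' the data: remove the directory entry; the bytes remain on flash."""
        os.remove(path)

    def wipe(path):
        """'Wipe' the data: overwrite the contents before removing the file."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)
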
On the other hand, this is a guide for “consumer IoT”, so just the normal factory reset is probably sufficient, even if private details can be gleaned.

12) Make installation and maintenance of devices easy

Of course things should be easy, everyone agrees on this. The problem is they don’t know how. Companies like Microsoft and Apple spend billions on this problem and still haven’t cracked it.
My home network WiFi password uses quotes as punctuation to improve security. The Amazon Echo app uses Bluetooth to pair with the device and configure which WiFi password to use. This is well done from a security point of view.
However, their app uses an input field that changes quotes to curly quotes, making it impossible to type in the password. I instead had to go to a browser, type the password in the URL field, copy it, then go back to the Alexa app and paste it into the field. Only then could I get things to work.
Amazon is better than most at making devices easy and secure, and with the Echo they still get things spectacularly wrong.

13) Validate input data

Most security vulnerabilities are due to improper validation of input data. However, “validate input data” is stupid advice. It’s like how most phishing attacks come from strangers, yet telling people not to open emails from strangers is stupid advice. In both cases, it’s a superficial answer that doesn’t really understand how the problem came about.
Let’s take PHP and session cookies, for example. A lot of programmers think the session identifier in PHP is some internal feature of PHP. They therefore trust it, because it isn’t input. They don’t perceive that it’s not internal to PHP but external, part of HTTP, and something totally hackable by hackers.
Or take the famous Jeep hack, where hackers were able to remotely take control of the car and do mischievous things like turn it off on the highway. The designers didn’t understand how the private connection to the phone network was in fact “input” coming from the Internet. And then there was data from the car’s internal network, which wasn’t seen as “input” from an external source.
Then there is the question of what “validation” means. A lot of programmers try to solve SQL injection by “blacklisting” known bad characters. Hackers are adept at bypassing this, using other bad characters, especially Unicode. Whitelisting known good characters is a better solution. But even that is still problematic. The proper solution to SQL injection isn’t “input validation” at all, but using “parameterized queries” that don’t care about input.
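The contrast is easy to show in code, using SQLite as the stand-in database:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # classic injection payload

    # Wrong: building the query by string concatenation; the payload rewrites the SQL.
    rows_bad = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()

    # Right: a parameterized query; the input is treated as data, never as SQL,
    # with no character blacklist or whitelist needed.
    rows_good = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

    print(rows_bad)   # [('admin',)] -- the injection succeeded
    print(rows_good)  # [] -- no user is literally named "alice' OR '1'='1"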

Conclusion

Like virtually every other guide, this one is based upon platitudes and only a superficial understanding of the problem. It’s got more clue than most, but is still far from something that could actually be useful. The concept here is virtue signaling, declaring what would be virtuous and moral for an IoT device, rather than something that could be useful to device makers in practice.

*** This is a Security Bloggers Network syndicated blog from Errata Security authored by Robert Graham. Read the original post at: https://blog.erratasec.com/2018/10/notes-on-uk-iot-cybersec-code-of.html