California has passed an IoT security bill, now awaiting the governor’s signature or veto. It’s a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation.
It’s based on the misconception of adding security features. It’s like dieting, where people insist you should eat more kale, which does little to address the problem of pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add “security features” but to remove “insecure features”. For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical “magic pill” or “silver bullet” thinking that we spend much of our time in infosec fighting against.
We don’t want arbitrary features like firewalls and anti-virus added to these products. It’ll just increase the attack surface, making things worse. The one possible exception to this is “patchability”: some IoT devices can’t be patched, and that is a problem. But even here, it’s complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren’t like phones/laptops, which notify users about patching.
You might think a good solution to this is automated patching, but only if you ignore history. Many rate “NotPetya” as the worst, most costly, cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop.
The bill does target one insecure feature that should be removed: hardcoded passwords. But it gets the language wrong. A device doesn’t have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet another wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn’t mean it doesn’t also have a bug in Telnet.
That was the problem with devices infected by Mirai. Describing these as “hardcoded passwords” reflects only a superficial understanding of the problem. The real problem was that the web interface and other services like Telnet used different authentication systems. Most of the devices vulnerable to Mirai did the right thing on the web interface (meeting the language of this law), requiring the user to create a new password before operating. They just did the wrong thing elsewhere.
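To make the point concrete, here is a minimal sketch (all names and the account tables are invented for illustration, though "root"/"xc3511" was a real factory default in Mirai's dictionary): a device can fully comply with a unique-web-password rule while a wholly separate Telnet credential store still accepts the factory default.

```python
# Hypothetical IoT device with two independent authentication systems.

WEB_ACCOUNTS = {}          # populated when the user sets a password on first use

TELNET_ACCOUNTS = {        # baked into the firmware (/etc/passwd style),
    "root": "xc3511",      # a factory default the user never sees
    "admin": "admin",
}

def web_login(user, password):
    """Web UI: meets the law's language -- no account exists until the
    user creates a unique password."""
    return WEB_ACCOUNTS.get(user) == password

def telnet_login(user, password):
    """Telnet: a wholly separate check that still accepts the factory
    default -- the path Mirai-style worms actually used."""
    return TELNET_ACCOUNTS.get(user) == password

# The user dutifully sets a strong web password...
WEB_ACCOUNTS["admin"] = "correct horse battery staple"

# ...but the Telnet default still works.
assert not web_login("admin", "admin")
assert telnet_login("root", "xc3511")
```

The web-interface check and the Telnet check never consult each other's credential store, which is exactly why fixing one says nothing about the other.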
People aren’t really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn’t. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. This means those 20 billion won’t be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.
This law is backwards looking rather than forward looking. Forward looking, by far the most important thing that will protect IoT in the future is “isolation” mode on the WiFi access-point, which prevents devices from talking to each other (or infecting each other). This prevents “cross site” attacks in the home. It prevents infected laptops/desktops (which are much more under threat than IoT) from spreading to IoT. But lawmakers don’t think in terms of what will lead to the most protection, they think in terms of who can be blamed. Blaming IoT devices for the moral weakness of not doing “reasonable” things is satisfying, regardless of whether it’s effective.
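For what this looks like in practice: on access points built around hostapd, client isolation is a single configuration option. The fragment below is illustrative (interface name and SSID are placeholders):

```
# hostapd.conf fragment: WiFi client ("AP") isolation
interface=wlan0
ssid=HomeNetwork

# ap_isolate=1 blocks direct client-to-client frames, so an infected
# laptop on the same SSID cannot reach IoT devices over the local network.
ap_isolate=1
```

Consumer routers often expose the same feature under names like “AP isolation” or “client isolation” in their web UI.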
The law makes the vague requirement that devices have “reasonable” and “appropriate” security features. It’s impossible for any company to know what these words mean, impossible to know if they are compliant with the law. Like other laws that use these terms, it’ll have to be worked out in the courts. But security is not like other things. Rather than something static that can be worked out once, it’s always changing. This is especially true since the adversary isn’t something static like wear and tear on car parts, but dynamic: as defenders improve security, attackers change tactics, so what’s “reasonable” is constantly changing. Security struggles with hindsight bias, so what’s “reasonable” and “appropriate” seem more obvious after bad things occur rather than before. Finally, you are asking the lay public to judge reasonableness, so a jury can easily be convinced that “anti-virus” would be a reasonable addition to IoT devices despite experts believing it would be unreasonable and bad.
The intent is for the law to make some small static improvement, like making sure IoT products are patchable, after a brief period of litigation. The reality is that the issue is going to constantly be before the courts as attackers change tactics, causing enormous costs. It’s going to saddle IoT devices with encryption and anti-virus features that the public believe are reasonable but that make security worse.
Lastly, Mirai was only 200k devices that were primarily outside the United States. This law fails to address this threat because it only applies to California devices, not the devices purchased in Vietnam and Ukraine that, once they become infected, would flood California targets. If somehow the law influenced general improvement of the industry, you’d still be introducing unnecessary costs to 20 billion devices in an attempt to clean up 0.001% of those devices.
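The 0.001% figure follows directly from the numbers already given:

```python
# Mirai's footprint as a fraction of the projected device population.
mirai_devices = 200_000
projected_devices = 20_000_000_000

fraction = mirai_devices / projected_devices
print(f"{fraction:.3%}")   # 0.001%
```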
In summary, this law is based upon an obviously superficial understanding of the problem. It in no way addresses the real threats, but at the same time, introduces vast costs to consumers and innovation. Because of the changing technology with IPv4 vs. IPv6 and WiFi vs. 5G, such laws are unneeded: IoT of the future is inherently going to be much more secure than the Mirai-style security of the past.
Update: This tweet demonstrates the points I make above. It’s about how Tesla used an obviously unreasonable 40-bit key in its keyfobs.
Just one more thing. Everybody is making fun of Tesla for using a 40-bit key (and rightly so). But Tesla at least had a mechanism we could report to and fixed the problem once informed. @McLarenAuto, @KarmaAutomotive, and @UKTriumph use the same system and ignored us.
— Cryp·tomer (@TomerAshur) September 10, 2018
It’s obviously unreasonable, and they should’ve known about the weakness of 40-bit keys, but here’s the thing: every flaw looks this way in hindsight. There has never been a complex product created that didn’t have similarly “obvious” flaws.
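The arithmetic of why a 40-bit key is weak is worth spelling out. The guess rate below is a hypothetical round number, not a claim about any particular attack (the actual Tesla keyfob attack used precomputed tables rather than live brute force):

```python
# Size of a 40-bit keyspace, and how fast it falls to brute force.
keyspace = 2 ** 40                # ~1.1 trillion possible keys
print(keyspace)                   # 1099511627776

# At a hypothetical 1 billion guesses/second, exhausting the whole
# keyspace takes well under half an hour:
seconds = keyspace / 1_000_000_000
print(seconds / 60)               # ~18.3 minutes
```

By comparison, a modern 128-bit key has a keyspace 2^88 times larger, which is why 40 bits looks “obviously” indefensible today.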
*** This is a Security Bloggers Network syndicated blog from Errata Security authored by Robert Graham. Read the original post at: https://blog.erratasec.com/2018/09/californias-bad-iot-law.html