Monday, September 10, 2018

California's bad IoT law

California has passed an IoT security bill, awaiting the governor's signature/veto. It’s a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation.


It’s based on the misconception of adding security features. It’s like dieting, where people insist you should eat more kale, which does little to address the problem that you are pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add “security features” but to remove “insecure features”. For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical “magic pill” or “silver bullet” thinking that we spend much of our time in infosec fighting against.
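To make “removing insecure features” concrete, here is a minimal sketch (with a hypothetical device address and port list, not taken from any particular product) that enumerates the TCP ports an IoT device is listening on; every open port it reports is attack surface that should be justified or closed, not wrapped in additional security features.

```python
# Minimal sketch: find listening TCP ports on an IoT device. The address and
# port list are illustrative assumptions.
import socket

TARGET = "192.168.1.50"                   # hypothetical device on the local LAN
PORTS = [23, 80, 443, 2323, 7547, 8080]   # Telnet, web UIs, TR-069, alt-HTTP

def open_ports(host, ports, timeout=1.0):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print("Listening ports:", open_ports(TARGET, PORTS))
```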

We don’t want arbitrary features like firewalls and anti-virus added to these products. It’ll just increase the attack surface, making things worse. The one possible exception to this is “patchability”: some IoT devices can’t be patched, and that is a problem. But even here, it’s complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches or, even when they do, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren’t like phones/laptops, which notify users about patching.

You might think a good solution to this is automated patching, but only if you ignore history. Many rate “NotPetya” as the worst, most costly cyberattack ever. That was launched by subverting an automated update mechanism. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely that mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop.

The bill does target one insecure feature that should be removed: hardcoded passwords. But it gets the language wrong. A device doesn’t have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet another wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn’t mean it doesn’t also have a bug in Telnet.

That was the problem with the devices infected by Mirai. Describing these as hardcoded passwords reflects only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interface (meeting the language of this law), requiring the user to create new passwords before operating. They just did the wrong thing elsewhere.
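To make that separation concrete, here is a rough sketch (hypothetical host; the credential pairs are well-known entries from the Mirai dictionary) that checks whether a device still accepts factory Telnet logins, regardless of what password the web interface made the user set.

```python
# Rough sketch: does the Telnet service still accept factory credentials, even
# if the web UI forced a password change? The host is a hypothetical device.
import socket

HOST = "192.168.1.50"
DEFAULTS = [("root", "xc3511"), ("root", "vizxv"), ("admin", "admin")]

def telnet_login_succeeds(host, user, password, timeout=3.0):
    """Very crude check: send credentials and look for a shell prompt."""
    try:
        with socket.create_connection((host, 23), timeout=timeout) as s:
            s.settimeout(timeout)
            s.recv(1024)                              # banner / "login:" prompt
            s.sendall(user.encode() + b"\r\n")
            s.recv(1024)                              # "Password:" prompt
            s.sendall(password.encode() + b"\r\n")
            reply = s.recv(1024)
            return b"#" in reply or b"$" in reply     # crude prompt detection
    except OSError:
        return False

for user, password in DEFAULTS:
    if telnet_login_succeeds(HOST, user, password):
        print(f"Factory credential still works over Telnet: {user}/{password}")
```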

People aren't really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn’t. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. This means those 20 billion won’t be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.

This law is backwards looking rather than forward looking. Looking forward, by far the most important thing that will protect IoT in the future is “isolation” mode on the WiFi access point, which prevents devices from talking to (or infecting) each other. This prevents “cross site” attacks in the home. It prevents infected laptops/desktops (which are much more under threat than IoT) from spreading to IoT; a quick way to check whether isolation is actually working is sketched below. But lawmakers don’t think in terms of what will lead to the most protection; they think in terms of who can be blamed. Blaming IoT devices for the moral weakness of not doing “reasonable” things is satisfying, regardless of whether it’s effective.
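As a small illustration, the sketch below (run from one device on the home LAN, with a hypothetical neighbor address and port) simply checks whether lateral device-to-device connections are possible at all; with client isolation enabled on the access point, the attempt should fail.

```python
# Minimal sketch: test whether two devices on the same access point can reach
# each other. The neighbor address and port are illustrative assumptions.
import socket

NEIGHBOR = "192.168.1.60"   # hypothetical sibling device on the same WiFi network
PORT = 80                   # any service the neighbor is known to expose

def neighbor_reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if neighbor_reachable(NEIGHBOR, PORT):
    print("Devices can talk to each other: isolation is NOT in effect.")
else:
    print("Neighbor unreachable: isolation (or a firewall) is blocking lateral traffic.")
```

On access points built around hostapd, this is typically the ap_isolate option; consumer routers usually expose it as “AP isolation” or “client isolation”.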

The law makes the vague requirement that devices have “reasonable” and “appropriate” security features. It’s impossible for any company to know what these words mean, and thus impossible to know whether it is compliant with the law. Like other laws that use these terms, it’ll have to be worked out in the courts. But security is not like other things. Rather than something static that can be worked out once, it’s always changing. This is especially true since the adversary isn’t something static like wear and tear on car parts, but dynamic: as defenders improve security, attackers change tactics, so what’s “reasonable” is constantly changing. Security also struggles with hindsight bias, so what’s “reasonable” and “appropriate” seems more obvious after bad things occur than before. Finally, you are asking the lay public to judge reasonableness, so a jury can easily be convinced that “anti-virus” would be a reasonable addition to IoT devices despite experts believing it would be unreasonable and bad.

The intent is for the law to make some small static improvement, like making sure IoT products are patchable, after a brief period of litigation. The reality is that the issue is going to be constantly before the courts as attackers change tactics, causing enormous costs. It’s going to saddle IoT devices with encryption and anti-virus features that the public believes are reasonable but that make security worse.

Lastly, Mirai infected only around 200,000 devices, primarily outside the United States. This law fails to address that threat because it only applies to devices sold in California, not the devices purchased in Vietnam and Ukraine that, once infected, would flood California targets. Even if the law somehow influenced a general improvement across the industry, you’d still be introducing unnecessary costs to 20 billion devices in an attempt to clean up 0.001% of those devices.

In summary, this law is based upon an obviously superficial understanding of the problem. It in no way addresses the real threats, but at the same time, introduces vast costs to consumers and innovation. Because of the changing technology with IPv4 vs. IPv6 and WiFi vs. 5G, such laws are unneeded: the IoT of the future is inherently going to be much more secure than the Mirai-style devices of the past.

Update: This tweet demonstrates the points I make above. It's about how Tesla used an obviously unreasonable 40-bit key in its keyfobs.
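For a sense of scale (using an assumed guess rate for illustration; the actual research reportedly relied on precomputed tables rather than online brute force), a 40-bit keyspace is tiny by modern standards:

```python
# Back-of-the-envelope arithmetic on why a 40-bit key is weak. The guess rate
# below is an assumption for illustration only.
keyspace = 2 ** 40                  # ~1.1 trillion possible keys
guesses_per_second = 1_000_000_000  # assumed: a billion guesses per second offline

minutes = keyspace / guesses_per_second / 60
print(f"Keyspace: {keyspace:,} keys")
print(f"Exhaustive search at that rate: about {minutes:.0f} minutes")
```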

It's obviously unreasonable and they should've known about the weakness of 40-bit keys, but here's the thing: every flaw looks this way in hindsight. There has never been a complex product that didn't have similarly obvious flaws.

On the other hand, what Tesla does have, better than any other car maker, is the proper programs whereby they can be notified of such flaws in order to fix them in a timely manner. Better yet, they offer bug bounties. This isn't a "security feature" in the product, yet it is absolutely the #1 most important thing a company can have, more so than any security feature. What we are seeing with the IoT marketplace in general is that companies lack such notification/disclosure programs: companies can be compliant with the California law while still lacking such programs.

Finally, Tesla cars are "Internet connected devices" according to the law, so Tesla can be sued under that law for this flaw, even though the flaw poses none of the threats the law was intended to handle.

Again, the law wholly misses the point. A law demanding that IoT companies have disclosure programs would actually be far more effective at improving security than this current law, while not imposing the punitive costs the current law does.

3 comments:

Greg Nation said...

"Lawmakers don’t think in terms of what will lead to the most protection, they think in terms of who can be blamed."

Well said. This concept has occurred to me many times but I didn't know how to put it into words.

Not only does this legislation potentially increase attack surfaces, but it might also make CA computer systems more attractive targets.

Say I'm a company with a competitor located in CA. If I can break into their systems, I can get them bogged down in legal troubles, which may lead to more regulations, which may further expand the attack surface, which may make it an easier target to hack again... wash, rinse, repeat.

Patrick Moore said...

"Reasonable and appropriate" is there on purpose. It means that as the state of the art security standards change and improve - IoT vendors are expected to keep up. It is intentionally *not* a fixed term.

Furthermore, there are lots of laws that use that terminology: laws governing money handling, accounting, civil engineering, etc.

You also dismiss sections (1) and (2):
1798.91.04. (a) A manufacturer of a connected device shall equip the device with a reasonable security feature or features that are all of the following:
(1) Appropriate to the nature and function of the device.
(2) Appropriate to the information it may collect, contain, or transmit.

An IoT vendor can decide NOT to collect any data or transmit minimal things and use that minimal feature set as a defense for fewer security features.

Unknown said...

"The key to dieting is not eating more but eating less."

When you're going to use a metaphor, please understand the other domain first. Not all dieting is designed for losing weight, and those who want to lose weight sometimes need to eat more and more often than they already do. The key to dieting isn't eating more or eating less, it's eating appropriately for your body's needs. And you could easily map *that* metaphor back into the domain of information security.