Wednesday, July 22, 2015

Infosec's inability to quantify risk

Infosec isn't a real profession. Among the things missing is proper "risk analysis". Instead of quantifying risk, we treat it as an absolute: risk is binary, either there is risk or there isn't. We respond to risk emotionally rather than rationally, claiming all risk needs to be removed. This is why nobody listens to us. Business leaders quantify and prioritize risk; we don't, so our useless advice is ignored.

An example of this is the car hacking stunt by Charlie Miller and Chris Valasek, where they turned off the engine at freeway speeds. This has led to an outcry of criticism in our community from people who haven't quantified the risk. Any rational measure of the risk of that stunt shows that it's pretty small -- while the benefits are very large.

In college, I owned a poorly maintained VW bug that would occasionally lose power on the freeway, such as when an electrical connection vibrated loose. I caused more risk by not maintaining my car than these security researchers did.

Indeed, cars losing power on the freeway is a rather common occurrence. We often see cars on the side of the road. Few accidents are caused by such cars. Sure, they add risk, but so do people abruptly changing lanes.

No human is a perfect driver. Every time we get into our cars, instead of cycling or taking public transportation, we add risk to those around us. The majority of those criticizing this hacking stunt have caused more risk to other drivers this past year by commuting to work. They cause this risk not for some high ideal of improving infosec, but merely for personal convenience. Infosec is legendary for its hypocrisy; this is just one more example.

Google, Tesla, and other companies are creating "self-driving cars". Self-driving cars will always struggle to cope with unpredictable human drivers, and will occasionally cause accidents. However, in the long run, self-driving cars will be vastly safer. To reach that point, we need to quantify risk. We need to be able to show that for every life lost due to self-driving cars, two have been saved because they are inherently safer. But here's the thing: if we use the immature risk analysis from the infosec "profession", we'll always point to the one life lost, and never quantify the two lives saved. With infosec-style risk analysis, safer self-driving cars will never happen.
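As a rough illustration of the kind of arithmetic I mean -- a sketch with made-up rates and mileage, not real crash statistics -- the comparison looks something like this:

```python
# Illustrative sketch only: the rates and mileage below are assumptions
# invented for this example, not real crash statistics.

human_fatality_rate = 1.2e-8        # assumed deaths per mile, human drivers
selfdriving_fatality_rate = 0.4e-8  # assumed deaths per mile, self-driving cars
miles_per_year = 3.0e12             # assumed total miles driven per year

# Expected deaths = miles driven * fatality rate per mile.
human_deaths = miles_per_year * human_fatality_rate
selfdriving_deaths = miles_per_year * selfdriving_fatality_rate

print(f"expected deaths/year, human drivers:     {human_deaths:,.0f}")
print(f"expected deaths/year, self-driving cars: {selfdriving_deaths:,.0f}")
print(f"net lives saved per year:                {human_deaths - selfdriving_deaths:,.0f}")
```

The point isn't the particular numbers; it's that the lives saved only show up if you do this kind of expected-value math, instead of pointing at the single headline fatality.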

In hindsight, it's obvious to everyone that Valasek and Miller went too far. Renting a track for a few hours costs less than the plane ticket for the journalist to come out and visit them. Infosec is like a pride of lions that will leap on and devour one of its own members at the first sign of weakness. This minor mistake is weakness, so many in infosec have jumped on the pair, reveling in righteous rage. But any rational quantification of the risks shows that the mistake is minor compared to the huge benefit of their research. I, for one, praise these two, and hope they continue their research -- knowing full well that they'll likely continue to make other sorts of minor mistakes in the future.

7 comments:

martijn said...

I'm one of those who has criticised Chris and Charlie's research. Actually, I think it's amazing research; I'm jealous of them for having the technical understanding and experience that allows them to conduct it, and I am grateful to them for doing it. It's likely going to make the world safer.

My criticism is the way it was presented. The video suggests Chris and Charlie had little concern for Andy's safety, let alone that of other people on the same road. I'm sure a big part of this was just to make the video look more juicy - Chris and Charlie as Beavis and Butthead laughing at poor Andy is a lot funnier than two researchers in lab coats analysing packets sent to the car. But it could be interpreted the wrong way and result in people taking a less favourable view of car hacking, which would hurt both the hackers and security at large.

NB I work from home and I don't drive. But I'm sure I'm hypocritical in many other ways.

Daggar said...

I think most of the infosec community is just jealous that someone actually listened to them.

kurt wismer said...

could the demo have been done differently? yes

could the demo have been done more safely? yes

have car hacking demos been done more safely in the past? yes

were charlie and chris insulated from consequences of potentially poor risk estimation while leaving others to face those consequences head-on? yes

was their approach beyond reproach? no

Unknown said...

"Indeed, cars losing power on the freeway is a rather common occurrence. We often see cars on the side of the road."

Correlation doesn't equal causation. Do you have any stats to show that losing power on the freeway is common, or are you merely assuming that a car on the side of the road == lost power?

There can be a number of reasons including:
- flat tire
- engine overheated
- check engine light came on (could be minor like forgetting to put on the gas cap)
- ran out of gas (which yes, does end in losing power, but it's hardly immediate or abrupt)
- involved in an accident

What's next? Hacking helicopters and cutting power to show off vulnerabilities because pilots are trained in auto-rotation and should be able to land safely?

Charlie and Chris deserve praise for their research and condemnation for their obvious attempt at attracting attention for themselves and their efforts while ignoring the risks.

Ryan said...

You forget that if you get pwned, you get pwned. Even benign bugs could turn out to be exploitable in combination with others, allowing for a greater escalation of privileges or something.

Gabor Szathmari said...
This comment has been removed by the author.
Unknown said...

I have far less control over the impact of an exploit than I do over likelihood. To ignore likelihood in analysis means our focus won’t be on what we can control, but on what we fear.

When patches, fixes and manufacturer response can take days or weeks to eliminate an exploit, I minimize risk by focusing on raising the difficulty of deploying an exploit and reducing the avenues by which a bad actor can launch an attack, thereby reducing the likelihood. By my math, Likelihood * Impact = Risk. In my experience, this means we are best positioned to protect our companies by lowering likelihood rather than by waiting for patches and updates from manufacturers to affect impact.
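To make that concrete -- with numbers that are purely made up for illustration, not drawn from any real incident:

```python
# Sketch of Likelihood * Impact = Risk, using invented example values.

def risk(likelihood, impact):
    """Expected loss: probability the exploit lands times the cost if it does."""
    return likelihood * impact

impact = 500_000  # assumed dollar cost if the exploit succeeds

# Waiting weeks for a vendor patch leaves the likelihood where it is.
baseline = risk(likelihood=0.10, impact=impact)   # $50,000 expected loss

# Hardening now (fewer exposed avenues, a harder exploit) cuts the likelihood.
hardened = risk(likelihood=0.02, impact=impact)   # $10,000 expected loss

print(f"baseline risk: ${baseline:,.0f}")
print(f"hardened risk: ${hardened:,.0f}")
```

Same impact in both cases; only the likelihood moved, and that's the lever we actually control.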

Think of this – had you bought a few six packs of beer for a few of your buddies one summer day, and passed out some metric wrenches in the driveway, you could have reduced the likelihood of the Classic VW electrical gremlin. The only way to reduce the impact would be to not drive anywhere, and unfortunately, many companies without a thorough understanding of risk analysis often find themselves operating from a paralyzed and reactionary stance. Good work Robert on your articles, and thanks for spawning a thoughtful conversation!