Thursday, May 29, 2014

Can I drop a pacemaker 0day?

Can I drop a pacemaker 0day at DefCon that is capable of killing people?

Computers now run our cars. It's now possible for a hacker to infect your car with a "virus" that slams on the brakes in the middle of the freeway. Computers now run medical devices like pacemakers and insulin pumps, and it's becoming possible to assassinate somebody by stopping their pacemaker with a Bluetooth exploit.

The problem is that manufacturers are 20 years behind in terms of computer "security". They don't just have vulnerabilities, they have obvious vulnerabilities. That means not only can these devices be hacked, they can easily be hacked by teenagers. Vendors do something like put a secret backdoor password in a device believing nobody is smart enough to find it -- then a kid finds it in under a minute using a simple program like "strings".
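
To illustrate how low that bar is: all a tool like "strings" does is scan a raw binary for runs of printable characters. Below is a minimal sketch of that idea in Python, filtering for anything that looks credential-related. The file name firmware.bin and the keyword list are hypothetical, purely for illustration.

import re
import sys

# A minimal re-implementation of the "strings" idea: pull runs of
# printable ASCII out of a raw binary, then flag anything that looks
# credential-related. "firmware.bin" is a hypothetical example file.
MIN_LEN = 6
KEYWORDS = ("pass", "pwd", "secret", "key", "login")  # illustrative only

def printable_strings(data, min_len=MIN_LEN):
    # Yield runs of printable ASCII (space through tilde) at least min_len long.
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

def main(path):
    with open(path, "rb") as f:
        data = f.read()
    for s in printable_strings(data):
        if any(k in s.lower() for k in KEYWORDS):
            print(s)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "firmware.bin")

In practice a kid doesn't even need that much: something like "strings firmware.bin | grep -i pass" run against a downloaded firmware image is often enough to turn up a hardcoded backdoor password.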

Telling vendors about the problem rarely helps because vendors don't care. If they cared at all, they wouldn't have put the vulnerabilities in their products to begin with. 30% of such products have easily discovered backdoors, which is something they should already care about, so telling them you've discovered they are one of the 30% won't help.

Historically, we've dealt with vendor unresponsiveness through the process of "full disclosure". If a vendor was unresponsive after we gave them a chance to fix the bug first, we simply published the bug ("dropped 0day"), either on a mailing list or during a talk at a hacker convention like DefCon. Only after full disclosure does the company take the problem seriously and fix it.

This process has worked well. If we look at the evolution from Windows to Chrome, the threat of 0day has caused vendors to vastly improve their products. Moreover, they now court 0day: Google pays you a bounty for Chrome 0day, with no strings attached on how you might also maliciously use it.

So let's say I've found a pacemaker with an obvious Bluetooth backdoor that allows me to kill a person, and a year after notifying the vendor, they still ignore the problem, continuing to ship vulnerable pacemakers to customers. What should I do? If I do nothing, more and more such pacemakers will ship, endangering more lives. If I disclose the bug, then hackers may use it to kill some people.

The problem is that dropping a pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But, at the same time, without the threat of 0day, vendors will ignore the problem.

This is the question for groups that defend "coders' rights", like the EFF. Will they really defend coders in this hypothetical scenario, declaring that releasing 0day code is free speech that reveals problems of public concern? Or will they agree that such code should be suppressed in the name of public safety?

I ask this question because right now they are avoiding the issue, because whichever stance they take will anger a lot of people. This paper from the EFF on the issue seems to support disclosing 0days, but only in the abstract, not in the concrete scenario I describe. The EFF has a history of backing away from previous principles when they become unpopular. For example, they once fought against regulating the Internet as a public utility; now they fight for it in the name of net neutrality. Another example is selling 0days to the government, which the EFF criticizes. I doubt the EFF will continue to support disclosing 0days when they can kill people. The first time a child dies in a car crash caused by a hacker, every organization is going to run from "coders' rights".

By the way, it should be clear from the above post which side of this question I stand on: for coders' rights.



Update: Here's another scenario. In Twitter discussions, people have said that the remedy for unresponsive vendors is to contact an organization like ICS-CERT, the DHS organization responsible for "control systems". That doesn't work, because ICS-CERT is itself a political, unresponsive organization.

The ICS-CERT doesn't label "default passwords" as a "vulnerability", despite the fact that they're a leading cause of hacks and a common feature of exploit kits. They claim that it's the user's responsibility to change the password, and not the fault of the vendor if they don't.

Yet, disclosing default passwords is one of the things that vendors try to suppress. When a researcher reveals a default password in a control system, and a hacker exploits it to cause a power outage, it's the researcher who will get blamed for revealing information that was not-a-vulnerability.

I say this because the FBI personally threatened me in order to suppress something that was not-a-vulnerability, yet which they claimed would hurt national security if I revealed it to Chinese hackers.

Again, the only thing that causes change is full disclosure. Everything else allows politics to suppress information vital to public safety.



Update: Some have suggested that moral and legal are two different arguments, that someone can call full disclosure immoral without necessarily arguing that it should be illegal.

That's not true. That's like saying that speech is immoral when Nazis do it. It isn't -- the content may be vile, but the act of speaking is never immoral.

The "moral but legal" argument is too subtle for politics, you really have to pick one or the other. We saw that happen with the EFF. They originally championed the idea that the Internet should not be regulated. They, they championed the idea of net neutrality -- which is Internet regulation. They original claimed there was no paradox, because they were saying merely that net neutrality was moral not that it should be law. Now they've discarded that charade, and are actively lobbying congress to make net neutrality law.

Sure, sometimes full disclosure will have bad results, but more often, those with political power will seek to suppress vital information for reasons that sound good at the time, like "think of the children!". We need to firmly defend full disclosure as free speech, in all circumstances.



Update: Some have suggested that instead of disclosing details, a researcher can inform the media.

This has been tried. It doesn't work. Vendors have more influence on the media than researchers.

We saw this happen in the Apple WiFi fiasco. It was an obvious bug (SSIDs longer than 97 bytes), but at the time Apple kernel exploitation wasn't widely known. Therefore, the researchers tried to avoid damaging Apple by not disclosing the full exploit. Thus, people could know about the bug without being able to exploit it.

This didn't work. Apple's marketing department claimed the entire thing was fake. They did later fix the bug -- claiming it was something they had found themselves, unrelated to the "fake" vulns from the researchers.

Another example was two years ago when researchers described bugs in airplane control systems. The FAA said the vulns were fake, and the press took the FAA's line on the problem.

The history of "going to the media" has demonstrated that only full disclosure works.

19 comments:

John Moehrke said...

What works every time: tell the vendor's customers. Give them the knowledge, and they can present and prove the problem to the vendor. This is nothing more than good old... follow the money.

Anonymous said...

Would the FDA be an appropriate channel for this? If the FDA agreed the product was unsafe, they would pull it, correct?

You have to hit the vendor in the wallet if you want to get their attention.

Ajay Pillai said...

The FDA ought to be sought out and informed in these situations. Regulating medical devices is their jurisdiction/charter, and they don't take it as laxly as people unfamiliar with the agency might think. Has anyone tried this?

Unknown said...


> That's not true. That's like saying that speech is immoral when Nazis do it. It isn't -- the content may be vile, but the act of speaking is never immoral.

I don't think you can just dismiss that line of thought. The difference between legality and morality exists all the time; just look at (I suspect) your own opinions on intellectual property.

The doctrine of free speech does not mean that all speech is good or moral. It just means that it's legal. The Nazis can say whatever they want, but I'll be damned if it isn't immoral.

Unknown said...

Something can be legal and immoral. It's interesting that you mention the Nazis. The killing of the Jews was sanctioned by the Nazi government and was legal. But clearly not moral. We hope that morality and legality are connected, but that is not always the case.

Unknown said...
This comment has been removed by the author.
Jason M said...

Something absolutely can be immoral while making it illegal would also be immoral. A lot of speech that I think should be protected is absolutely immoral; it's just that a 3rd party suppressing that speech would be worse.

Karl Fogel said...

Some prior art on this whole issue, FWIW:

"Killed by Code: Software Transparency in Implantable Medical Devices" (Karen Sandler et al)

softwarefreedom.org/resources/2010/transparent-medical-devices.html

Gordon Mohr said...

Perhaps:

Release the exploit to the manufacturer, and an encrypted disclosure to the net.

Arrange for the decryption key to be revealed in the future via an uncancellable dead man's switch, such as a time-lock encryption scheme. (Also provide the manufacturer enough info to verify that the switch has been deployed.)

A long-enough 'fuse' – such as years – might serve to sway public opinion and even liability/legal-process against the manufacturer.

Sure, you started the trolley. But the manufacturer both tied their customers to the tracks, *and* refused to throw the patch-lever that would have saved them.

Unknown said...

From the perspective of a biomedical engineer, one of the main reasons the vendors keep shipping their products unfixed is that you cannot simply push a software update to a pacemaker.
After developing a medical device such as a pacemaker, the device has to be approved by the health authorities. And the authorities approve the device as a whole, which means that no component of the device may be modified afterwards. If you change something, you have to go through the whole approval process again, which carries considerable cost, time, and a huge bureaucratic overhead. As a result, most manufacturers put the fixes into the newer models of their devices rather than updating their older devices.

Unknown said...

You can't assume that these other avenues will just never work; to me that is criminal. I think someone (EFF, someone else) should establish a guideline of best efforts to contact the appropriate people, give them the appropriate time to respond, and then if nothing happens you should release the info. Saying things like "contacting the government doesn't work" means that you think they can't ever change, and that you will never give them the opportunity to learn from past mistakes like everyone else. Not all vendors are the same, and not all people you contact in the government will be the same either.

Anonymous said...

My favourite example for the immoral/illegal debate is cheating[0] on your partner/spouse. Most people would agree that it's immoral, but very few would suggest that it should be illegal.


[0] Where "cheating" is defined as going beyond the agreed[1] boundaries of a relationship, and therefore intrinsically works differently for people openly in polyamorous relationships.
[1] Agreement may be unspoken.

Jared Kells said...

Word is doing the rounds that you've found a pacemaker 0day. Are you talking hypothetically?

W. said...

A huge problem with the development of medical devices is the "standards" (ANSI Z136, IEC 62304, DIN 62304, and all the stuff referenced therein) that the process normally follows.

Technically, those are mere guidelines. If you can reasonably explain, in theory and experiment, why your own quality assurance methods and safety measures are appropriate, you can get your product approved by the FDA and other national regulatory bodies. However, doing it this way (almost) never works.

But if you follow those standards meticulously in the development of a new medical product, doing everything by the letter, then you can "cash in" FDA approval for sure.

One problem with those standards is that they follow outdated development models (waterfall) and make the flawed assumption that every risk and problem associated with a product can be identified and addressed in the initial design phase. Of course things don't work that way; the only way to identify problems in a product is to have a high development iteration rate and to build several prototypes, addressing the problems as they are found. But under those standards, every time a flaw is found you essentially have to start development of a whole new product from zero and go through the whole process again. This is the main reason why the development of medical products is so damn expensive.

Another problem is that those standards are not really helpful when it comes to solving the actual problems. Most of the stuff written in e.g. IEC/DIN-62304 is very vague, and mostly it's just an amalgamation of buzzwords like "security is important, verification, etc." So in the end you produce a lot of paper like "we verified the software by foo and bar, and security is implemented by this and that". But unfortunately those are all tests of whether the thing is well behaved in all the expected scenarios (waterfall model), falling for the fallacy that after a thorough initial phase nothing unexpected can happen.

Pentesting is a word unknown to most medical-device developers, and also to the consultants offering their services in that business.

I've been dealing with this shit for the past two years now. Three colleagues from university and I founded a spin-off to commercialize our laser technology, whose main application is Optical Coherence Tomography, a major diagnostic tool in ophthalmology (eye medicine). It also happens that I and the other software guys are crypto and security nerds, and we develop our software with that knowledge.

But the whole medical approval process is so fucked up that we no longer aim to enter the hospital market at all, and are going for a research-system approach instead.

Unknown said...

Your rationale is missing one key fact. If you found it, the odds are pretty small you're the first - far more probable that it's just not public yet, and there's a measurable probability that it's not public because somebody wants to do something bad - think of the relative white/gray/black hat ratios. I would submit it to CERT and the RISKS digest - I would not drop it at a conference like DefCon, because of the number of people whose tools are more powerful than their brains.

Darren Pauli said...

Drop disclosures to the press and demonstrate exploits to refute the PR. Shonky scribes may swallow the easy-to-digest vendor spin more readily than complex technical explanations, but there are plenty worth their salt who will not.

George said...

Telling the customers absolutely does NOT work. In fact it will get you in trouble because the customers don't want it disclosed. If it's not disclosed, they have no duty to fix it. If it's disclosed, you're rocking the boat.

Unknown said...

My preference would be to go the money route and use civil court to get it.

1. Report it to the manufacturer.
2. Provide reasonable time for response.
3. Find someone willing to participate with the device in question.
4. In a regulated environment - such as public display of the exploit with an ambulance on hand - carry out the attack.
5. Sue the manufacturer for gross negligence.

You get to make money *and* force the problem to be fixed. What more can you ask for?
