Saturday, May 02, 2015

How to fix the CFAA

Someone on Twitter asked for a blogpost on how I'd fix the CFAA, the anti-hacking law. So here is my proposal.

The major problem with the law is that the term "authorized" isn't clearly defined. You non-technical people think the meaning is obvious, because you can pick up a dictionary and simply point to a definition. However, in the computer world, things are a bit more complicated.

It's like a sign on a store window saying "No shirt, no shoes, no service" -- but you walk in anyway wearing neither. You know your presence is unwanted, but are you actually trespassing? Is your presence not "authorized"? Or should we demand a higher standard, such as requiring that the store owner ask you to leave (and you refuse) before you are trespassing/unauthorized?

What happens on the Internet is that websites routinely make public data they actually don't want people to access. Is accessing such data unauthorized? We saw this a couple of days ago, when Twitter accidentally published their quarterly results an hour early on their website. An automated script discovered this and republished Twitter's results to a wider audience, ironically using Twitter to do so. This caused $5 billion to drop off their market valuation. It was obviously unintentional on the part of those running the automated script, so not against the law, but it still makes us ask whether the access was "authorized".

Consider if I'd been browsing Twitter's investor page, as the script did, and noticed the link. I would've thought to myself "this is odd, the market doesn't close for another hour, I'll bet this is a mistake". Would I be authorized in clicking on that link, seeing the quarterly results, and trading stocks/options based on that information? In other words, I know that Twitter made a mistake and doesn't want me to see the data, but since they made the information public, it's not clear my access is unauthorized. What if I write a script to check Twitter's investor page every quarter, hoping they make a mistake again, so I can profit from it?
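To make the hypothetical concrete, here is a minimal sketch of the kind of polling script I mean. The URL and the link pattern are hypothetical stand-ins, not the real investor page and not the real script from the Twitter incident:

import re
import time
import urllib.request

# Hypothetical investor-relations page; a placeholder, not the real URL.
INVESTOR_PAGE = "https://investor.example.com/results/"

seen = set()
while True:
    html = urllib.request.urlopen(INVESTOR_PAGE).read().decode("utf-8", "replace")
    # Report any link mentioning quarterly results that we haven't seen before.
    for link in re.findall(r'href="([^"]*quarterly[^"]*)"', html, re.IGNORECASE):
        if link not in seen:
            seen.add(link)
            print("new results link published:", link)
    time.sleep(60)  # poll once a minute

Everything this script does is an ordinary HTTP GET of a public page. The legal question is whether fetching a link the publisher regrets posting counts as "unauthorized".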

As a techy, I often encounter similar scenarios. I cannot read the statute and figure out whether my proposed conduct would violate it. I talk with many lawyers who are experts on the statute, and they cannot tell me if my proposed conduct is permissible under it. This isn't some small area of ambiguity between two clear sides of the law; this is a gaping hole where nobody can answer the question. The true answer is this: it depends upon how annoyed people will be if my automated script moves Twitter's stock price by a large amount.

You'd think that this is an obvious candidate for the "void for vagueness" doctrine. The statute is written in such a way that reasonable people cannot figure out what is permissible under the law. This allows the law to be arbitrarily and prejudicially applied, as indeed it was in the Weev case.

The reason for this confusion comes from the 1980s origin of the law. Back then, computers were closed, and you needed explicit authorization to access them, such as a password. The web changed that to open, public computers that required no password or username. Authorization is implicit. I did not explicitly give you authorization to download this blogpost from my server, but you intentionally did so anyway.

This is legal, but I'm not a lawyer and I don't know how it's legal. Some lawyers have justified it as "social norms", but that's bogus. It's the social norm now, but it wasn't then. If implicit authorization had been the norm back then, it would've been included in the law. The better answer is "nerd norms". Only nerds accessed computers back then, and implicit authorization was the norm for nerds. Now we have iPads, and everyone thinks they are a nerd, so nerd norms prevailed, and nobody went to jail for accessing the web while social norms were changing.

But sometimes "iPad user norms" differ from "nerd norms", and that's where we see trouble in the cases involving Weev and Swartz. I could write a little script to automatically scrape all the investor pages of big companies, in case any make the same mistake Twitter did. I might get prosecuted because now I've done something iPad users consider abnormal: they might click on a link, but they would never write a script, so script writing is evil.

This brings me to the definition of "authorization". It should be narrowed according to "nerd norms". Namely, it should refer only to explicit authorization. If a website is public and serves up content without demanding authorization from anybody, then it's implicit that pretty much anything is authorized -- even when the website owners mistakenly publish something. In other words, following RFC2616 implicitly authorizes those who likewise follow that RFC.
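To illustrate what "following the RFC" looks like on the wire, here is a minimal sketch of an ordinary RFC2616 (HTTP/1.1) exchange. The hostname is a placeholder; the point is simply that the client asks and the server answers, with no password or login anywhere:

import socket

host = "blog.example.com"  # placeholder hostname

# An ordinary RFC2616 request: no credentials, no special headers.
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode()

sock = socket.create_connection((host, 80))
sock.sendall(request)
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

# The server's willingness to answer (e.g. "HTTP/1.1 200 OK") is the
# only "authorization" that ever happens.
print(response.split(b"\r\n", 1)[0].decode())

The server never asks who you are; its answer to the request is the implicit authorization.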

I am not a lawyer, but Orin Kerr is. His proposal adds the following language:
"to circumvent technological access barriers to a computer or data without the express or implied permission of the owner or operator of the computer"
This sounds reasonable to me. It would clear up the confusion in my hypothetical investor-page scenario above: because I'm bypassing no technological barrier, and permission is implied, I'm not guilty.
By the way, technically I'm only asking for clarification. If lawmakers want to define "unauthorized" broadly to include all "unwanted" access, then that would satisfy my technical concerns. But politically, I want the definition to go the other way, narrowly, so that I'm not violating the law by accessing information you accidentally made public, even though I know you don't want me to access it.


My second concern is with the penalties of the law. Currently we are seeing a 14-year-old kid in Florida charged with a (state) felony for a harmless prank on his teacher's computer. There's no justification for such harsh penalties, especially since, if they could catch them all, it'd make felons out of hundreds of thousands of kids every year. Misdemeanors are good punishments for youthful exuberance. This is especially true since 90% of those who'll go on to be the defenders of computers in the future will have gone through such a phase in their development. Youth is about testing boundaries. We should have a way of informing youth when they've gone too far, but in a way that doesn't destroy their lives.

Most of the computer crimes committed are already crimes in their own right. If you steal money, that's a crime, and it should not matter if a computer was violated in the process. There's no need for additional harsh penalties in an anti-hacking law.

Orin's proposed changes also include reducing the penalties, bringing things down to misdemeanors. I don't understand the legalese, but they sound good. From what I understand, though, there is a practical problem. Computer crime is almost always across state lines, but federal prosecutors don't want to get involved in misdemeanors. This ends up meaning that a federal law about misdemeanors has little practical effect -- or at least, that's what I'm told.

In the coming election, an issue for both Democrats and Republicans is the number of felons in jail in this country, which is 10 times higher than in any other civilized country. It's a race thing, but even if you are white, the incarceration rate is 5 times that of Europe. I point this out because politically, I oppose harsh jail sentences in general. Being a technical expert is the reason for wanting the first change above, but my desire for this second change is purely due to my personal politics.


Summary

I am not a lawyer or policy wonk, so I could not possibly draft specific language. My technical concern is that the definition of "authorized" in the current statute is too vague when applied to public websites and needs to be clarified. My personal political desire is that this definition should be narrow, and that the penalties for violating the law should be lighter.


Friday, May 01, 2015

Ultron didn't save the world

The movie Avengers: Age of Ultron has a message for us in cybersec: In our desire to save the world, we are likely to destroy it.

Tony Stark builds "Ultron" to save the world, to bring peace in our time. As a cybernetic creation, Ultron takes this literally, and decides the best way to bring peace is to kill all humans.

The problem, as demonstrated by the movie, isn't that there was a bug in Stark's code. The problem was the hubris of thinking that Stark could protect everyone. Inevitably, protecting everyone meant ruling everyone, bringing peace by force. It's the same hubris behind the USA's effort to bring peace to Iraq and Afghanistan.

I mention this because in the cybersecurity industry, there are many who propose to bring security through authority. They want government mandated rules on how to write code, imposed liability requirements, and so on.

This sounds reasonable. After all, nobody wants medical equipment like pacemakers to be hacked. But here's the thing. Computer-controlled devices have the potential to vastly improve health, whether it's watches monitoring your heart, pacemakers, insulin pumps, and so on. While these devices can be hacked, the practical reality is that if you want to kill people, bombs and bullets are still easier than hacking medical devices. Standards and liability, on the other hand, will chill innovations that save lives. The fallacy of asserting authority to bring security is that it ends up killing people: the innovation that would've saved them gets delayed because developers worried too much about hackers.

The cybersecurity industry is weird. We are the first to point out the hollow rhetoric of the surveillance and police state. Yet, we are the first to become totalitarian when we think we'll be the ones in control. No, we should learn from Tony Stark: even when it's us "good guys" who are running the show, we should still resist the urge to impose our authority by force. The tradeoffs from the security we demand are often worse than the hackers it would stop.

Review: Avengers, Age of Ultron

Today was the opening of the movie "Avengers: Age of Ultron". The best way to describe it is this. On the first date, you went and saw "The Avengers". You felt the rush of something new, and you were quite satisfied. This movie, "Age of Ultron", is the second date. You already know what to expect, but that doesn't matter, because you progress past the holding-hands stage. You didn't go all the way, but you know that's coming on the third date, with "Avengers: Infinity Wars".

Remember that this movie is part of the Marvel Avengers arc, consisting of Ironman (3 movies), Captain America (2), Thor (2), Hulk, and Avengers (2). The arc also includes two TV series and a (so far) unrelated Guardians of the Galaxy movie. Everything is leading to the Infinity Wars movies.

Wednesday, April 29, 2015

Some notes on why crypto backdoors are unreasonable

Today, a congressional committee held hearings about 'crypto backdoors' that would allow the FBI to decrypt text messages, phone calls, and data on phones. The thing to note about this topic is that it's not anywhere close to reasonable public policy. The technical and international problems are unsolvable with anything close to the proposed policy. Even if the policy were reasonable, it's unreasonable that law enforcement should be lobbying for it.

Monday, April 27, 2015

The hollow rhetoric of nation-state threats

The government is using the threat of nation-state hackers to declare a state-of-emergency and pass draconian laws in congress. However, as the GitHub DDoS shows, the government has little interest in actually defending us.

It took 25 days to blame North Korea for the Sony hack, between the moment "Hacked by the #GOP" appeared on Sony computers and when President Obama promised retaliation in a news conference -- based on flimsy evidence of North Korea's involvement. In contrast, it's been more than 25 days since we've had conclusive proof the Chinese government was DDoSing GitHub, and our government has remained silent. China stopped the attacks after two weeks of its own volition, because GitHub defended itself, not because of anything the United States government did.

The reason for the inattention is that GitHub has no lobbyists. Sony spends several million dollars every year in lobbying, as well as hundreds of thousands in campaign contributions. When Sony gets hacked, politicians listen. In contrast, GitHub spends zero on either lobbying or contributions.

It's not that GitHub isn't important -- it's actually key infrastructure to the Internet. All computer nerds know the site. It's the largest repository of source-code on the net. It's so important that China couldn't simply block the IP address, because China needs the site, too. That's why China had to use a convoluted attack in order to pressure GitHub to censor content.

Despite GitHub's importance, few in Washington D.C. have heard of it. If you don't spend money on lobbying and donors, you just don't exist. Even if the government had heard of GitHub, it still wouldn't respond. We have over half a trillion dollars of trade with China every year, not to mention a ton of diplomatic disputes. Our government won't risk upsetting any of those issues, which do have tons of lobbying dollars behind them, in order to defend GitHub. At most, GitHub will become a bargaining chip, such as encouraging China to stop subsidizing tire exports in order to satisfy the steelworkers union.

The point of this post isn't to bang the drums of cyberwar, to claim that our government should retaliate against China for their nation-state attack. Quite the opposite. I'm trying to point out the hollow rhetoric of "nation-state threats". You can't use "nation-state defense" to justify sanctions on North Korea while ignoring the nation-state attack on GitHub.

The next time somebody uses "nation-state threats" to justify government policy that expands the police state and military-industrial complex, the first thing we should ask is for them to explain the government's inaction over the nation-state attack against GitHub.

Thursday, April 16, 2015

Solidarity

The government's zealous War on Hackers threatens us, the good hackers who stop the bad ones. They can't tell the good witches from the bad witches. When members of our community get threatened by the system, we should probably do more to stand in solidarity with them. I mention this because many of you will be flying to SFO this coming week for the RSA Conference, which gives us an opportunity to show solidarity.

Today, a security researcher tweeted a joke while on a plane. When he landed, the FBI grabbed him and confiscated all his stuff. The tweets are here:

[embedded tweets from Chris Roberts]

Chris Roberts' area of research is embedded control systems like those on planes. The FBI didn't grab some random person on a plane over a tweet; they grabbed him specifically because he's a security researcher. He's been on the FBI's radar (so to speak) for things like this Fox News interview.

I suggest we all start tweeting jokes along these lines from our airplanes, like:

DFW->SFO. Playing with airplane wifi. I hope the pilots enjoy the Rick Astley video playing on their EICAS system.
LGA->SFO. Note to self. Don't fuzz the SATCOM unit while on Twitter. Takes GoGo an hour to come back up. 
NRT->SFO. Yup, the IFE will grab corrupt MP3 from my iPhone and give a shell. I wonder if nmap will run on it. 
PDX->SFO. HackRF says there's a strong 915 MHz qpsk 64k symbol/second signal. I wonder what'll happen if I replay it.
The trick is to write jokes, not to actually threaten anything -- like the original tweet above. Those of us with technical knowledge and skills should be free to express our humor without the FBI confiscating all our stuff when we land.




BTW, I know you can all steal in-flight WiFi more easily than you can pay for it, but do pay for it :)


Wednesday, April 15, 2015

Masscanning for MS15-034

So Microsoft has an important web-server bug, and naturally I'd like to scan the Internet for it. I'm running the scan now, but I'm not sure it's going to give any useful results.

The bug is triggered by adding the following header to a web request:
Range: bytes=0-18446744073709551615
As you can see, it's a standard (64-bit) integer overflow: 18446744073709551615 is the maximum unsigned 64-bit value, which is the same bit pattern as -1 in a signed integer.
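If you want to see that reinterpretation for yourself, here's a quick sketch (Python, purely for illustration):

import struct

n = 18446744073709551615
print(n == 2**64 - 1)    # True: all 64 bits set
print(hex(n))            # 0xffffffffffffffff
# Pack as an unsigned 64-bit integer, then unpack the same bytes as signed.
print(struct.unpack("<q", struct.pack("<Q", n))[0])    # -1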

That specific header is harmless; it appears that other variations are the ones that may cause a problem. However, it serves as a useful check to see if the server is patched. If the server is unpatched, it'll return the following error:
HTTP/1.1 416 Requested Range Not Satisfiable
From what the PoCs say, a response that looks like the following means that the server is patched:
The request has an invalid header name
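Here is a rough sketch of that per-host check (my own illustration, not the actual masscan configuration I'm running). The hostname, function name, and classification strings are mine, and as discussed below, anything other than a 416 is really inconclusive:

import socket

def check_ms15_034(host, path="/", port=80):
    # Send the harmless overflow Range header and look at the response.
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Range: bytes=0-18446744073709551615\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    ).encode()
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    status_line = response.split(b"\r\n", 1)[0].decode("latin-1")
    if "416" in status_line:
        return "likely unpatched: " + status_line
    if b"The request has an invalid header name" in response:
        return "likely patched: rejects the header"
    return "inconclusive: " + status_line

print(check_ms15_034("www.example.com"))  # placeholder hostname
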
However, when I run the scan across the Internet, I'm getting the following sorts of responses from servers claiming to be IIS:

HTTP/1.1 200 OK
HTTP/1.1 206 Partial Content
HTTP/1.1 301 Moved Permanently
HTTP/1.1 302 Object moved
HTTP/1.1 302 Found
HTTP/1.1 302 Redirect
HTTP/1.1 401 Unauthorized
HTTP/1.1 403 Forbidden
HTTP/1.1 404 Object Not Found
HTTP/1.1 416 Requested Range Not Satisfiable
HTTP/1.1 500 URL Rewrite Module Error.
HTTP/1.1 500 Internal Server Error

I suspect the biggest reason is that the "Range" header is only parsed when a static file is being served. If a script generates the page, then the server ignores the range. I also suspect that virtual-hosting gets in the way -- unless the correct DNS name is provided, the server rejects the request.

Thus, the testing is inconclusive. While I can find some vulnerable responses, just because a server gives me some other response doesn't mean it's not also vulnerable. So I can't really say anything about the extent of the vulnerability.

This is an important lesson for companies using various tools to check whether they are completely patched. Just because the tools can't find vulnerable systems doesn't mean you aren't vulnerable.




Saturday, April 11, 2015

Should billionaires have to disclose their political contributions?

The Intercept has a story about a billionaire sneaking into a Jeb Bush fundraiser. It points out:
"Bush’s campaign operation has taken steps to conceal the names of certain big-money donors. ... Bush’s Right to Rise also formed a 501(c)(4) issue advocacy wing, which, like a Super PAC, can raise and spend unlimited amounts of money — but unlike a Super PAC, never has to reveal donor names."
This leads me to ask two questions:

  1. Should billionaires be allowed to spend unlimited amounts of money promoting their politics?
  2. If they can spend unlimited amounts, should they be forced to disclose them?

If you know me, you know that I'm asking a trick question. I'm not referring to venture capitalist Ron Conway, the billionaire mentioned in the story. I'm instead referring to Pierre Omidyar, the billionaire founder of eBay who funds The Intercept, which blatantly promotes his political views, such as those on NSA surveillance. Can Omidyar spend endless amounts on The Intercept? Should he be forced to disclose how much?

This question is at the heart of the Supreme Court decision in Citizens United. It comes down to this: the Supreme Court was unable to craft rules that could tell the difference between what Omidyar is doing with The Intercept and what Jeb Bush is doing with his Super PAC. Yet, restricting Omidyar is also not an option -- despite being clearly political, he's nonetheless acting like a typical press organization. As the opinion says:
Differential treatment of media corporations and other corporations cannot be squared with the First Amendment, and there is no support for the view that the Amendment’s original meaning would permit suppressing media corporations’ political speech. 
As a libertarian, I think Omidyar should be free to use his slush fund for The Intercept as he pleases, without having to report the amounts. However, I still believe the information is in the public interest. Nobody wants to live in a plutocracy, so all info about the rich and powerful is of public interest. If Anonymous doxxes it, or an employee leaks it, then the information is fair game and should be reported on in exactly the same way that The Intercept discloses the Snowden leaks.