The major problem with the law is that the term "authorized" isn't clearly defined. You non-technical people think the meaning is obvious, because you can pick up a dictionary and simply point to a definition. However, in the computer world, things are a bit more complicated.
It's like a sign on a store window saying "No shirt, no shoes, no service" -- but you walk in anyway with neither. You know your presence is unwanted, but are you actually trespassing? Is your presence not "authorized"? Or, should we demand a higher standard, such as when the store owner asks you to leave (and you refuse) that you now are trespassing/unauthorized?
What happens on the Internet is that websites routinely make public data they actually don't want people to access. Is accessing such data unauthorized? We saw this a couple of days ago, when Twitter accidentally published its quarterly results on its website an hour early. An automated script discovered this and republished Twitter's results to a wider audience, ironically using Twitter to do so. This knocked $5 billion off Twitter's market valuation. It was obviously unintentional on the part of the script, so not against the law, but it still makes us ask whether the access was "authorized".
Consider if I'd been browsing Twitter's investor page, as the script did, and noticed the link. I would've thought to myself, "this is odd, the market doesn't close for another hour, I'll bet this is a mistake". Would I be authorized in clicking on that link, seeing the quarterly results, and trading stocks/options based on that information? In other words, I know that Twitter made a mistake and doesn't want me to access the results, but since they made the information public, does that really make my access unauthorized? What if I write a script to check Twitter's investor page every quarter, hoping they make the same mistake again, so I can profit from it?
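To be concrete about what such a script would do, here's a minimal sketch. Everything in it is hypothetical (the sample HTML, the link pattern, the idea that "earnings" or "Q3" in a URL marks a results document); the point is just that the script does nothing a browser doesn't: it reads a public page and looks at the links.

```python
import re

def check_for_early_earnings(html: str) -> list[str]:
    """Return hrefs in the page that look like quarterly-earnings documents.

    Naive heuristic for illustration: any link containing 'earnings'
    or a quarter marker like 'Q3'.
    """
    pattern = re.compile(r'href="([^"]*(?:earnings|Q[1-4])[^"]*)"', re.IGNORECASE)
    return pattern.findall(html)

# Usage sketch (not run here): fetch the investor page periodically and
# flag anything new, e.g. with urllib.request.urlopen(...) on the page URL.

sample = '<a href="/reports/Q3-earnings.pdf">Q3 results</a> <a href="/about">About</a>'
print(check_for_early_earnings(sample))  # -> ['/reports/Q3-earnings.pdf']
```

Nothing here "breaks in" anywhere, which is exactly why the statute's notion of "authorized" is so hard to apply to it.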
As a techy, I often encounter similar scenarios. I cannot read the statute in order to figure out whether my proposed conduct would be in violation. I talk with many lawyers who are experts on the statute, and they cannot tell me if my proposed conduct is permissible under the statute. This isn't some small area of ambiguity between two clear sides of the law, this is a gaping hole where nobody can answer the question. The true answer is this: it depends upon how annoyed people will be if my automated script moves Twitter's stock price by a large amount.
You'd think that this is an obvious candidate for the "void for vagueness" doctrine. The statute is written in such a way that reasonable people cannot figure out what is permissible under the law. This allows the law to be arbitrarily and prejudicially applied, as indeed it was in the Weev case.
The reason for this confusion comes from the 1980s origin of the law. Back then, computers were closed, and you needed explicit authorization to access them, such as a password. The web changed that to open, public computers that required no password or username. Authorization is implicit. I did not explicitly give you authorization to download this blogpost from my server, but you intentionally did so anyway.
This is legal, but I'm not a lawyer and I don't know why it's legal. Some lawyers have justified it as "social norms", but that's bogus. It's the social norm now, but it wasn't then. If implicit authorization had been the norm back then, it would've been included in the law. The real answer is "nerd norms". Only nerds accessed computers back then, and implicit authorization was the norm for nerds. Now we have iPads, and everyone thinks they are a nerd, so nerd norms prevailed and nobody went to jail for accessing the web while social norms were changing.
But sometimes "iPad user norms" differ from "nerd norms", and that's where we see trouble in the cases involving Weev and Swartz. I could write a little script to automatically scrape all the investor pages of big companies, in case any make the same mistake Twitter did. I might get prosecuted because now I've done something iPad users consider abnormal: they might click on a link, but they would never write a script, so script writing is evil.
This brings me to the definition of "authorization". It should be narrowed according to "nerd norms". Namely, it should refer only to explicit authorization. If a website is public and hands out data while demanding authorization from nobody, then it's implicit that pretty much anything is authorized -- even when the website owners mistakenly publish something. In other words, following RFC2616 implicitly authorizes those who likewise follow that RFC.
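To make the RFC2616 point concrete, here's the entire transaction a browser performs when it "accesses" a public page (the host and path below are hypothetical). Note what's missing: credentials. Under the RFC, the way a server demands authorization is to answer with "401 Unauthorized" and a WWW-Authenticate challenge; answering "200 OK" is the protocol granting the request.

```python
# A complete HTTP/1.1 request per RFC 2616 -- host and path are hypothetical.
# There is no Authorization header: the client never claims credentials,
# and a public server never demands any.
request = (
    "GET /investor/earnings.html HTTP/1.1\r\n"  # hypothetical path
    "Host: example.com\r\n"                     # hypothetical host
    "\r\n"                                      # blank line ends the request
)

# A server that answers "200 OK" rather than "401 Unauthorized" has,
# by the protocol's own rules, granted the request.
print(request)
```

That's the whole "ask": if the server wanted to refuse, the protocol gives it a standard way to say so.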
I am not a lawyer, but Orin Kerr is. His proposal adds the following language:

"to circumvent technological access barriers to a computer or data without the express or implied permission of the owner or operator of the computer"

This sounds reasonable to me. It would clear up the confusion in my hypothetical investor-page scenario above: because I'm bypassing no technological barrier, and permission is implied, I'm not guilty.

By the way, technically I'm only asking for clarification. If lawmakers want to define "unauthorized" broadly to include all "unwanted" access, that would satisfy my technical concerns. But politically, I want the definition to go the other way, narrowly, so that I'm not violating the law by accessing information you accidentally made public, even though I know you don't want me to access it.
My second concern is with the penalties of the law. Currently we are seeing a 14-year-old kid in Florida charged with a (state) felony for a harmless prank on his teacher's computer. There's no justification for such harsh penalties, especially since, if prosecutors could catch them all, it'd make felons out of hundreds of thousands of kids every year. Misdemeanors are good punishments for youthful exuberance. This is especially true since 90% of those who'll go on to be the defenders of computers in the future will have gone through such a phase in their development. Youth is about testing boundaries. We should have a way of informing youth when they've gone too far, but in a way that doesn't destroy their lives.
Most of the computer crimes committed are already crimes in their own right. If you steal money, that's a crime, and it should not matter if a computer was violated in the process. There's no need for additional harsh penalties in an anti-hacking law.
Orin's proposed changes also include reducing the penalties, bringing things down to misdemeanors. I don't understand the legalese, but they sound good. From what I understand, though, there is a practical problem. Computer crime is almost always across state lines, but federal prosecutors don't want to get involved in misdemeanors. This ends up meaning that a federal law about misdemeanors has little practical effect -- or at least, that's what I'm told.
In the coming election, an issue for both Democrats and Republicans is the number of felons in jail in this country, which is 10 times higher than in any other civilized country. It's partly a race thing, but even if you are white, the incarceration rate is 5 times that of Europe. I point this out because, politically, I oppose harsh jail sentences in general. Being a technical expert is the reason for wanting the first change above, but my desire for this second change is purely a matter of my personal politics.
I am not a lawyer or policy wonk, so I cannot draft specific language. My technical concern is that the definition of "authorized" in the current statute is too vague when applied to public websites and needs to be clarified. My personal political desire is that this definition should be narrow, and that the penalties for violating the law should be lighter.
Rob, I don't agree with your conclusions all of the time, but you've always got a clear grasp of the facts. I think you'd probably do a pretty good job helping craft a better law, starting with a clear understanding of what anonymous access is, a la Swartz.
Great post, I would certainly agree that "unauthorized access" needs a less vague definition. The vagueness of that term is certainly the tap root of the harmful aspects of the CFAA.
Kerr's revised definition succeeds at protecting vulnerability researchers from unfair treatment. However, the definition is so narrow and specific that it doesn't protect the public. We've merely moved the legal ambiguity from "unauthorized access" to "technological access barriers".
In the case of Shellshock and other zero-days, was there an access barrier that was truly bypassed? What if I wrote a script to exploit Shellshock, then accessed and published a company's earnings within a week of the vulnerability becoming known to the world? Clearly this is computer "abuse". What if the company in question was my competitor, and I did it explicitly to harm their market standing?
The revised law also needs to serve the needs of a VICTIM of a cyber attack. Failing to do so will lead us to different kinds of legal absurdity:
1.) "barriers" that do not secure technology and exist purely to enable prosecution
2.) the legalization and enablement of a corporate cyber-espionage industry (I'm surprised the gov contractors running exploit sweatshops aren't lobbying for this change already)
(I'm not suggesting that someone should be prosecuted for accessing public information, just that the constitution of a sufficient barrier becomes questionable after said barrier has been bypassed)
As a nerd I would also quibble about the term "access." When I "visit" a web site, I am merely sending a polite, well-mannered request, one that the web site is explicitly set up to handle, and I am given some data in return, data I examine on my own computer. I am not getting onto that web host in any traditional sense: running processes, examining the innards of the thing, modifying its contents. I am no more gaining access than I was back in the day when I mailed in an order to Sears, Roebuck & Co.