Wednesday, March 04, 2015

Cliché: Safen Up!

RSA Conference is often a mockery of itself. Yesterday, they posted this tweet:



This is similar to the Simpsons episode where Germans buy the power plant. In fear for his job, Homer (the plant's Safety Inspector) starts going around telling people to "Stop being so unsafe!".



Security is not a platitude; insecurity is not a moral weakness. It's a complex set of tradeoffs. Going around telling people to "safen up" will not improve the situation, but will instead breed resentment. Infosec people are widely disliked because of their moralizing.

The only way to be perfectly secure is to cut the cables, turn off the machines, thermite the drives, and drop the remnants in a deep ocean trench. Anything less and you are insecure. Learn to deal with insecurity instead of blaming people for their moral weaknesses.

Saturday, February 21, 2015

Exploiting the Superfish certificate

As discussed in my previous blogpost, it took about three hours to reverse engineer the Lenovo/Superfish certificate and crack the password. In this blog post, I describe how I used that certificate to pwn victims using a rogue WiFi hotspot. This also took me about three hours.

The hardware

You need a computer to be the WiFi access-point. Notebook computers are good choices, but for giggles I chose the "Raspberry Pi 2", a tiny computer that fits in the palm of your hand which costs roughly $35. You need two network connections, one to the Internet, and one to your victims. I chose Ethernet to the Internet, and WiFi to the victims.

The setup is shown above. You see the little Raspberry Pi 2 computer, with a power connection at the upper left, an Ethernet at the lower-left, and the WiFi to the right. I chose an "Alfa AWUS050NH" WiFi adapter, but a lot of different ones will work. Others tell me this $15 TP-Link adapter works well. You can probably find a good one at Newegg or Amazon for $10. Choose one with external antennas, though, for better signal strength. You can't really see it in this picture, but at the top of the circuit board is a micro-SD card acting as the disk drive. You'll need to buy at least a 4-gigabyte card, which costs $4, though consider getting an 8-gig or even 16-gig card since they don't cost much more.

Thursday, February 19, 2015

Extracting the SuperFish certificate

I extracted the certificate from the SuperFish adware and cracked the password ("komodia") that encrypted it. I discuss how down below. The consequence is that I can intercept the encrypted communications of SuperFish's victims (people with Lenovo laptops) while hanging out near them at a cafe wifi hotspot. Note: this is probably trafficking in illegal access devices under the proposed revisions to the CFAA, so get it now before they change the law.

Some notes on SuperFish

What's the big deal?

Lenovo, a huge maker of laptops, bundles software on laptops for the consumer market (it doesn't for business laptops). Much of this software is from vendors who pay Lenovo to be included. Such software is usually limited versions, hoping users will pay to upgrade. Other software is ad supported. Some software, such as the notorious "Ask.com Toolbar", hijacks the browser to display advertisements.

Such software is usually bad, especially the ad-supported software, but the SuperFish software is particularly bad. It's designed to intercept all encrypted connections, things it shouldn't be able to see. It does this so poorly that it leaves the system open to hackers or NSA-style spies. For example, it can spy on your private bank connections, as shown in this picture.

Marc Rogers has a post where he points out that the software hijacks your connections, monitors them, collects personal information, injects advertising into legitimate pages, and causes pop-up advertisements.

Thursday, February 12, 2015

Technical terms are not ambiguous

I see technical terms like "interference" and "authorization" in laws. As a technical person, this confuses me. I have a different understanding of these terms than how the courts might interpret them. Courts insist that these words must be interpreted using their common everyday meanings, not their technical meanings. Yet the situations are inherently technical, so the common meanings are ambiguous.


Take for example the law that forbids causing radio interference:
No person shall willfully or maliciously interfere with or cause interference to any radio communications of any station licensed or authorized by or under this chapter or operated by the United States Government.
Interference seems like a common, non-technical term, but it's unlikely that's the meaning here. Interference has a very technical meaning, as demonstrated by this long Wikipedia article on "radio interference". There are entire books dedicated to this subject. It's such a big technical deal that it's unreasonable to think the law means anything else.

This is important when looking at the recent "Marriott WiFi Jamming" case, because Marriott did not cause "radio interference" or "jamming". Instead, what they did was send "deauth" packets. Using a real world analogy, jamming is like a locked door, blocking access against your will. On the other hand, a "deauth packet" is merely a Keep Out sign -- you can choose to ignore it. Indeed, I've configured my WiFi devices to ignore deauth packets, so I would not be affected by Marriott's "jamming".

The debate here isn't really over whether the definition of "interference" is technical or common. Instead, the issue is that the situation is technical. Radio interference is important because it's against your will, and there is nothing you can do to avoid it. The FCC recognizes that deauths are different from "interference". It therefore allows deauth packets in most situations, only singling out Marriott's case as being disallowed by the statute. It's clearly being vague about the term in order to pursue arbitrary and prejudicial enforcement of the statute.


The same thing happens with "authorization" in the CFAA, the anti-hacking law. Authorization is a technical term, yet judges insist juries should use the common meaning of the term, such as in this recent case. This creates an unsolvable ambiguity. The Internet is defined by technical documents that declare what is "authorized" and "not authorized". This is at odds with what an average person might consider "authorized", and it's impossible for a technical person to understand the common meaning.

I have a fantasy that Tim Berners-Lee gets arrested and stands trial. The prosecution argues that his access of a website was unauthorized according to the common meaning. Berners-Lee then counters that it was authorized according to the technical meaning, and cites RFC2616 as proof. RFC2616 is the document Berners-Lee wrote defining the "web". He invented the thing. It's unreasonable to think that a jury should find something "unauthorized" that he clearly labeled as "authorized" when creating the web.

In other words, when you attach a website to the Internet, you implicitly agree to RFC2616. Likewise, when I access the website, I also implicitly agree with this document. The document delineating what "authorization" means creates an implicit agreement between us. It boggles my mind that this document doesn't have the same weight as things like Terms of Service (ToS). This document should be cited at least as often in court cases as ToS documents.

The Weev case hinged partly on whether forging a "User-agent" string allowed "unauthorized" access. Reading the RFC, it's clear that the User-Agent is not an authorization mechanism. Weev would not have perceived it that way. More importantly, the owners of the website would not have seen it that way. Checking for an iPad User-Agent was a way of customizing content for the iPad, not for authorizing iPads. In the broader context, all web browsers forge User-Agent strings. Websites create better content for certain browsers, so browsers lie about their identity so their users get the better content.

The point is that it's impossible for the average person in the jury to tell if forging a User-Agent string is "unauthorized" without referring back to RFC2616 as to what "authorization" means on the web.


I'm writing this post because of this case where the judge said the following:
The root term, however — “authorization” — is not defined by the statute, and has been the subject of robust debate. One point of agreement is that “without authorization” should be given its “common usage, without any technical or ambiguous meaning.” 
The judge is wrong. It's the common usage that is hopelessly ambiguous; the technical meaning is relatively clear. It's the common usage of "authorization" that has led to prejudicial and arbitrary prosecution under the CFAA. It's impossible for a technical person to know what is prohibited by the statute. Moreover, it's really impossible for anybody to know what is prohibited -- nobody knows whether forging User-Agents violates the statute without a technical discussion.

No, you can't make things impossible to reverse-engineer

I keep seeing this Wired article about somebody announcing a trick to make software "nearly impossible" to reverse-engineer. It's hype. The technique is no better at stopping reverse-engineering than many existing techniques, but it imposes an enormous cost on the system that makes it a lot worse.

We already have deterrents to reverse-engineering. Take Apple iTunes, for example, which has successfully resisted reverse-engineering for years. I think the last upgrade to patch reverse-engineered details was in 2006. Its anti-reverse-engineering techniques aren't wonderful, but are instead simply "good enough". It does dynamic code generation, so I can't easily reverse engineer the static code in IDApro. It does anti-debugging tricks, so I can't attach a debugger to the running software. I'm sure if I spent more time at it, I could defeat these mechanisms, but I'm just a casual reverse-engineer who is unwilling to put in the time.

The technique described by Wired requires that the software install itself as a "hypervisor", virtualizing parts of the system. This is bad. This is unacceptable for most commercial software, like iTunes, because it would break a lot of computers. It might be acceptable for really high-end software that costs more than the computer, in which the computer is essentially dedicated to a single task, but wouldn't be acceptable for normal software. Also, virus/malware writers would avoid this, because security software commonly detects viruses trying to become hypervisors. A virus that goes "all in" and becomes a hypervisor, which provides lots of other benefits to the virus, might use this technique, though.

By the way, this isn't a "crypto trick" as promised in the title. Instead, it's a hypervisor/kernel trick, or a CPU/memory-management trick. That's what's exceptional about this technique, the actual encryption is largely immaterial.

The Wired article lacks details. Presumably, one could easily reverse engineer the code that does the encryption -- then write an IDApro script that decrypts the other code. Or, one could reverse-engineer the mock hypervisor, and make the simple change of setting the "execute/read" flag instead of "execute-only", then dump the raw memory.

For us geeks, this "split-TLB" thing is really interesting. The content is probably solid. It's just that it probably doesn't live up to the hype in that Wired story. The above story uses the trope of some "major new advance changes everything". Such stories have never proven themselves in practice. It's unlikely that this one will either.

Tuesday, February 03, 2015

Explaining the Game of Sony Attribution

Attribution is a blame game. It’s not about who did it, but who is best to blame. Ambulance chasing lawyers sue whoever has the most money, not who is most responsible. I point this out because while the U.S. “attributes” the Sony hack to North Korea, this doesn’t mean North Korea did the attack. Instead, it means that North Korea was involved enough to justify sanctions. It still leaves the question of “who did it” unresolved.

A lesson in the corrupt press

In the last few days, both President Obama and Republican presidential candidate Chris Christie made similar statements about vaccination. They both said that parents should absolutely vaccinate their children, but that it's still ultimately the parent's choice (and not government's). While the statements were similar, the press reported these stories completely differently. They praised Obama for calling for vaccination, and lambasted Christie for siding with anti-vaxxers on parental choice.

The White House's statement is the following:
The President certainly believes that these kinds of decisions are decisions that should be made by parents, because ultimately when we’re talking about vaccinations, we’re typically talking about vaccinations that are given to children.  But the science on this, as our public health professionals I’m sure would be happy to tell you, the science on this is really clear.
Christie's statement is the following:
Mary Pat and I have had our children vaccinated and we think that it’s an important part of being sure we protect their health and the public health. I also understand that parents need to have some measure of choice in things as well, so that’s the balance that the government has to decide.
The thing is, not only is Chris Christie not siding with the anti-vaxxers, he's actually siding against them. Many Republicans are "small-government" types who believe that it's ultimately the parent's decision. Christie is more a "big-government" Republican. He believes that, when necessary, the government can override a parent's decision. He did something similar recently, forcibly quarantining a nurse who was infected with ebola.

Members of the press are overwhelmingly Democrats. Therefore, they will tar Republicans with this anti-vax nonsense regardless of what Republicans say. This will hit hardest on the "small-government" types who, while being pro-vaccine, nonetheless agree with Obama's statement that it's ultimately the parent's choice.

It's not that Republicans do themselves any favors. Last week, Rand Paul made some comments on vaccination. He described how he vaccinated his own kids, and he called vaccination one of the greatest medical advances ever. Nonetheless, he threw the anti-vax side a bone, suggesting there might indeed be a link between vaccines and autism. 30% of any group is a bunch of UFO-believing wackos, but wackos who nonetheless have the same right to vote as normals. No politician needlessly antagonizes them. All politicians, both Democrat and Republican, will choose their words carefully on this issue. Some, like Rand Paul, will cross the line. However, the story isn't that he secretly leans in the anti-vax direction -- he's just as likely to lean in the anti-wacko direction.

I point this out because the situation is only going to get worse as the presidential election season progresses. Right now, anti-vaxxers are if anything a left-wing phenomenon, which is why California is the center of the measles outbreaks. But, by the time the election season comes around, it'll have become a right-wing issue. In the primary debates, the Democrats won't discuss vaccines, but Republicans will. The left-wing press will be calling all Republican candidates anti-vaxxers. And the Republicans, not wanting to alienate 30% of the electorate, won't sufficiently defend themselves against the charge.

In other words, it'll become just like Global Warming, where "everyone knows" in the press that Republicans deny the basic science, despite almost all Republicans having declared they support the basic science.

Wednesday, January 28, 2015

Nobody thought BlackPhone was secure -- just securer

An exploitable bug was found in BlackPhone, a "secure" Android phone. This is wildly misinterpreted. BlackPhone isn't a totally secure phone; such a thing is impossible. Instead, it's simply a more secure phone. I mention this because journalists can't tell the difference.


BlackPhone is simply a stock version of Android with the best settings and with secure apps installed. It's really nothing different than what you can do with your own phone. If you have the appropriate skill/knowledge, you can configure your own Android phone to be just like BlackPhone. It also comes with subscriptions to SilentCircle, a VPN service, and a cloud storage service, which may be cheaper as a bundle than purchased separately.

BlackPhone does fork Android with their "PrivateOS", but such a fork is of limited utility. Google innovates faster than a company like BlackPhone can keep up, including security innovations. A true fork would quickly become out of date with Google's own patches, and hence be insecure. BlackPhone is still new, so I don't know how they plan on dealing with this. Continually forking the latest version of Android seems the most logical plan, if not convincing Android to accept their changes.

The upshot is this: if you don't know anything about Android security, and you want the most secure Android phone, then get a BlackPhone. You'll likely be more secure than trying to do everything yourself. Your calls over SilentCircle will likely be secure. You'll likely not get pwned at WiFi hotspots. But that doesn't mean you'll be perfectly secure -- Android is a long way away from that.


Some notes on GHOST

I haven't seen anybody compile a list of key points about the GHOST bug, so I thought I'd write up some things. I get this from reading the code, but mostly from the advisory.

Most things aren't vulnerable. Modern software uses getaddrinfo() instead. Software that uses gethostbyname() often does so in a way that can't be exploited, such as checking inet_addr() first. Therefore, just because software uses the vulnerable function doesn't mean it's actually vulnerable.

Most vulnerable things aren't exploitable. This bug is hard to exploit, only overwriting a few bytes. Most of the time, hackers will only be able to crash a program, not gain code execution.

Many exploits are local-only. The exploit needs a domain name of a thousand zeroes. The advisory identified many SUID programs (which give root when exploited) that accept such names on the command-line. However, it's really hard to deliver such names remotely, especially to servers.

Is this another Heartbleed? Maybe, but even Heartbleed wasn't a Heartbleed. This class of bugs (Heartbleed, Shellshock, GHOST) is hard to exploit. The reason we care is because they are pervasive, in old software often going back for more than a decade, in components used by other software, and impossible to stamp out completely. With that said, hackers are far more likely to be able to exploit Shellshock and Heartbleed than GHOST. This can change quickly, though, if hackers release exploits.

Should I panic? No. This is a chronic bug that'll annoy you over the next several years, but not something terribly exploitable that you need to rush to fix right now.

Beware dynamically and statically linked libraries. Most software dynamically links glibc, which means you update it once, and that fixes all software (after a reboot). However, some software links statically, using its own private copy of glibc instead of the system copy. This software needs to be updated individually.

There's no easy way to scan for it. You could scan for bugs like Heartbleed quickly, because they were remote facing. Since this bug isn't, it'd be hard to scan for. Right now, about the only practical thing to scan for would be Exim on port 25. Even robust vulnerability scanners will often miss vulnerable systems, either because they can't log on locally, or because while they can check for dynamic glibc libraries, they can't find static ones. This makes this bug hard to eradicate -- but luckily it's not terribly exploitable (as mentioned above).

You probably have to reboot. This post is a great discussion about the real-world difficulties of patching. The message is that restarting services may not be enough -- you may need to reboot.

You can run a quick script to check for vulnerability. In the advisory, and described here, there is a quick program you can run to check if the dynamic glibc library is vulnerable. It's probably something good to add to a regression suite. Over time, you'll be re-deploying old VM images, for example, that will still be vulnerable. Therefore, you'll need to keep re-checking for this bug over and over again.

It's a Vulnerability-of-Things. A year after Heartbleed, over 200,000 web servers are still vulnerable to it. That's because they aren't traditional web-servers, but web interfaces built into devices and appliances -- "things". In the Internet-of-Things (IoT), things tend not to be patched, and will remain vulnerable for years.

This bug doesn't bypass ASLR or NX. Qualys was able to exploit this bug in Exim, despite ASLR and NX, but that's a property of Exim, not GHOST. Somewhere in Exim is the ability to run an arbitrary command-line string. That's the code being executed, not native x86 code that you'd expect from the typical buffer overflow, so the NX bit doesn't apply. The vuln leaks information in the strings Exim produces in response, so the hacker can find where the "run" command is, thus defeating ASLR.

Some pages worth bookmarking:
http://chargen.matasano.com/chargen/2015/1/27/vulnerability-overview-ghost-cve-2015-0235.html
I'll add more here eventually as I come across them.

Tuesday, January 27, 2015

You shouldn't be using gethostbyname() anyway

Today's GHOST vulnerability is in gethostbyname(), a Sockets API function from the early 1980s. That function has been obsolete for a decade. What you should be using is getaddrinfo() instead, a newer function that can also handle IPv6.

The great thing about getaddrinfo() is the fact that it allows writing code that is agnostic to the IP version. You can see an example of this in my heartleech.c program.

x = getaddrinfo(hostname, port, 0, &addr);
fd = socket(addr->ai_family, SOCK_STREAM, 0);
x = connect(fd, addr->ai_addr, (int)addr->ai_addrlen);

What you see here is that your normal calls to socket() and connect() just use the address family returned by getaddrinfo(). It doesn't care if that is IPv4, IPv6, or IPv7.

The function actually returns a list of addresses, which may contain a mixture of IPv4 and IPv6 addresses. An example is when you look up www.google.com:

[ ] resolving "www.google.com"
[+]  74.125.196.105:443
[+]  74.125.196.147:443
[+]  74.125.196.99:443
[+]  74.125.196.104:443
[+]  74.125.196.106:443
[+]  74.125.196.103:443
[+]  [2607:f8b0:4002:801::1014]:443

My sample code just chooses the first one in the list, whichever is returned by the DNS server. Sometimes Google returns an IPv6 address as the first one. If you prefer a specific version, you can search through the list as demonstrated below:

while (addr && args->ip_ver == 4 && addr->ai_family != AF_INET)
    addr = addr->ai_next;

Or, instead of searching the results yourself for an IPv4 or IPv6 address, you can use the third parameter to specify which one you want.
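That third parameter is a "hints" structure. A minimal sketch: zero it out, set ai_family to AF_INET (or AF_INET6), and getaddrinfo() filters the result list for you. The numeric address and port below are just for illustration:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *addr;

    /* Ask getaddrinfo() to return only IPv4 stream addresses. */
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* use AF_INET6 for IPv6-only */
    hints.ai_socktype = SOCK_STREAM;

    int x = getaddrinfo("127.0.0.1", "443", &hints, &addr);
    if (x != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(x));
        return 1;
    }

    /* Every entry in the list is now guaranteed to be IPv4. */
    for (struct addrinfo *p = addr; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof(ip));
        printf("%s\n", ip);
    }
    freeaddrinfo(addr);
    return 0;
}
```

Note the freeaddrinfo() call: since getaddrinfo() allocates the list, you must free it when done, something the short excerpt above omits for brevity.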

This function has been a POSIX standard for more than a decade. The above code works on WinXP as well as fairly old versions of Linux and Mac OS X. Even if your code is aggressively portable, there is little reason not to use this function. Only in insanely portable code, such as when you worry about 16-bit pointers, should you have to worry about backing off to gethostbyname(). Conversely, gethostbyname() is no longer part of POSIX, and thus officially no longer "standard".

If you learn Sockets programming at the university, they still teach gethostbyname(). That's because as far as Internet programming is concerned, academia is decades out of date.