Wednesday, January 28, 2015

Nobody thought BlackPhone was secure -- just securer

An exploitable bug was found in BlackPhone, a "secure" Android phone. This has been wildly misinterpreted. BlackPhone isn't a totally secure phone -- such a thing is impossible. Instead, it's simply a more secure phone. I mention this because journalists can't tell the difference.


BlackPhone is simply a stock version of Android with the best settings and with secure apps installed. It's really nothing different from what you can do with your own phone. If you have the appropriate skill/knowledge, you can configure your own Android phone to be just like BlackPhone. It also comes with subscriptions to SilentCircle, a VPN service, and a cloud storage service, which may be cheaper as a bundle than if purchased separately and installed on the phone yourself.

BlackPhone does fork Android with their "PrivateOS", but such a fork is of limited utility. Google innovates faster than a company like BlackPhone can keep up, including security innovations. A true fork would quickly fall behind Google's own patches, and hence become insecure. BlackPhone is still new, so I don't know how they plan on dealing with this. Continually re-forking the latest version of Android seems the most logical plan, if not convincing Google to accept their changes upstream.

The upshot is this: if you don't know anything about Android security, and you want the most secure Android phone, then get a BlackPhone. You'll likely be more secure than trying to do everything yourself. Your calls over SilentCircle will likely be secure. You'll likely not get pwned at WiFi hotspots. But that doesn't mean you'll be perfectly secure -- Android is a long way away from that.

Some notes on GHOST

I haven't seen anybody compile a list of key points about the GHOST bug, so I thought I'd write up some things. I get this from reading the code, but mostly from the advisory.

Most things aren't vulnerable. Modern software uses getaddrinfo() instead. Software that uses gethostbyname() often does so in a way that can't be exploited, such as checking inet_addr() first. Therefore, the mere fact that software uses the vulnerable function doesn't mean it's actually vulnerable.
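
For example, the common safe pattern looks roughly like the following. This is a hypothetical sketch of my own (using inet_aton(), the cleaner cousin of inet_addr(); the function name is mine). The point is that a GHOST trigger name consists only of digits and dots, so the numeric parser always consumes it before the vulnerable function ever sees it.

#include <string.h>
#include <netdb.h>
#include <arpa/inet.h>

/* Hypothetical example: try the name as a numeric IPv4 address first.
 * A GHOST trigger name contains only digits and dots, so inet_aton()
 * always consumes it, and gethostbyname() never sees it. */
int resolve_ipv4(const char *name, struct in_addr *out)
{
    if (inet_aton(name, out))
        return 0;                      /* numeric address: done */

    struct hostent *he = gethostbyname(name);
    if (he == NULL || he->h_addrtype != AF_INET)
        return -1;
    memcpy(out, he->h_addr_list[0], sizeof(*out));
    return 0;
}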

Most vulnerable things aren't exploitable. This bug is hard to exploit, only overwriting a few bytes. Most of the time, hackers will only be able to crash a program, not gain code execution.

Many exploits are local-only. Triggering the bug requires a domain name of roughly a thousand zeroes. The advisory identified many SUID programs (which give root when exploited) that accept such names on the command-line. However, it's really hard to deliver such names remotely, especially to servers.

Is this another Heartbleed? Maybe, but even Heartbleed wasn't a Heartbleed. This class of bugs (Heartbleed, Shellshock, GHOST) is hard to exploit. The reason we care is that they are pervasive, in old software often going back more than a decade, in components used by other software, and impossible to stamp out completely. With that said, hackers are far more likely to be able to exploit Shellshock and Heartbleed than GHOST. This can change quickly, though, if hackers release exploits.

Should I panic? No. This is a chronic bug that'll annoy you over the next several years, but not something terribly exploitable that you need to rush to fix right now.

Beware dynamically and statically linked libraries. Most software dynamically links glibc, which means you update it once, and that fixes all software (after a reboot). However, some software links statically, using its own private copy of glibc instead of the system copy. Such software needs to be updated individually.

There's no easy way to scan for it. You could scan for bugs like Heartbleed quickly, because they were remote facing. Since this bug isn't, it'd be hard to scan for. Right now, about the only practical thing to scan for remotely would be Exim on port 25. Even robust vulnerability scanners will often miss vulnerable systems, either because they can't log on locally, or because while they can check the dynamically linked glibc, they can't find static copies. This makes this bug hard to eradicate -- but luckily it's not terribly exploitable (as mentioned above).

You probably have to reboot. This post is a great discussion about the real-world difficulties of patching. The message is that restarting services may not be enough -- you may need to reboot.

You can run a quick script to check for vulnerability. In the advisory, and described here, there is a quick program you can run to check if the dynamic glibc library is vulnerable. It's probably something good to add to a regression suite. Over time, you'll be re-deploying old VM images, for example, that will still be vulnerable. Therefore, you'll need to keep re-checking for this bug over and over again.
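
From memory, the advisory's test works something like the sketch below (paraphrased -- grab the canonical GHOST.c from the Qualys advisory for real use). It asks gethostbyname_r() to resolve a long all-digit name into a deliberately undersized buffer, then checks whether a canary placed right after the buffer got overwritten:

#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

static struct {
    char buffer[1024];
    char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void)
{
    struct hostent resbuf;
    struct hostent *result;
    int herrno;

    /* Size the name so the copy runs just past the end of temp.buffer,
     * landing on the canary if glibc is unpatched. */
    size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char)
                 - 2 * sizeof(char *) - 1;
    char name[sizeof(temp.buffer)];
    memset(name, '0', len);
    name[len] = '\0';

    int retval = gethostbyname_r(name, &resbuf, temp.buffer,
                                 sizeof(temp.buffer), &result, &herrno);

    if (strcmp(temp.canary, CANARY) != 0) {
        puts("vulnerable");
        exit(EXIT_SUCCESS);
    }
    if (retval == ERANGE) {
        puts("not vulnerable");        /* patched glibc reports overflow */
        exit(EXIT_SUCCESS);
    }
    puts("should not happen");
    exit(EXIT_FAILURE);
}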

It's a Vulnerability-of-Things. A year after Heartbleed, over 200,000 web servers are still vulnerable to it. That's because they aren't traditional web-servers, but web interfaces built into devices and appliances -- "things". In the Internet-of-Things (IoT), things tend not to be patched, and will remain vulnerable for years.

This bug doesn't bypass ASLR or NX. Qualys was able to exploit this bug in Exim, despite ASLR and NX, but that's a property of Exim, not GHOST. Somewhere in Exim is the ability to run an arbitrary command-line string. That command string is what gets executed, not the native x86 shellcode you'd expect from a typical buffer overflow, so the NX bit doesn't apply. The vuln also leaks information through the strings Exim produces in response, so the hacker can figure out where the "run" command is, thus defeating ASLR.

Some pages worth bookmarking:
http://chargen.matasano.com/chargen/2015/1/27/vulnerability-overview-ghost-cve-2015-0235.html
I'll add more here eventually as I come across them.

Tuesday, January 27, 2015

You shouldn't be using gethostbyname() anyway

Today's GHOST vulnerability is in gethostbyname(), a Sockets API function from the early 1980s. That function has been obsolete for a decade. What you should be using is getaddrinfo() instead, a newer function that can also handle IPv6.

The great thing about getaddrinfo() is the fact that it allows writing code that is agnostic to the IP version. You can see an example of this in my heartleech.c program.

struct addrinfo *addr = NULL;
x = getaddrinfo(hostname, port, 0, &addr);  /* no hints: any address family */
fd = socket(addr->ai_family, SOCK_STREAM, 0);
x = connect(fd, addr->ai_addr, (int)addr->ai_addrlen);

What you see here is that the normal calls to socket() and connect() just use the address family returned by getaddrinfo(). The code doesn't care whether that's IPv4, IPv6, or IPv7.

The function actually returns a list of addresses, which may contain a mixture of IPv4 and IPv6 addresses. An example is when you lookup www.google.com:

[ ] resolving "www.google.com"
[+]  74.125.196.105:443
[+]  74.125.196.147:443
[+]  74.125.196.99:443
[+]  74.125.196.104:443
[+]  74.125.196.106:443
[+]  74.125.196.103:443
[+]  [2607:f8b0:4002:801::1014]:443

My sample code just chooses the first one in the list, whichever is returned by the DNS server. Sometimes Google returns an IPv6 address as the first one. If you prefer a specific version, you can search through the list as demonstrated below:

while (addr && args->ip_ver == 4 && addr->ai_family != AF_INET)
    addr = addr->ai_next;

Or, instead of searching the results yourself for an IPv4 or IPv6 address, you can use the third parameter -- the "hints" structure -- to specify which one you want.
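
For example, here's a sketch reusing the variables from above (memset() needs string.h):

struct addrinfo hints;
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_INET;         /* ask for IPv4 only; AF_INET6 for IPv6 */
hints.ai_socktype = SOCK_STREAM;

x = getaddrinfo(hostname, port, &hints, &addr);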

This function has been a POSIX standard for more than a decade. The above code works on WinXP as well as fairly old versions of Linux and Mac OS X. Even if your code is aggressively portable, there is little reason not to use this function. Only in insanely portable code, such as when you worry about 16-bit pointers, should you have to worry about backing off to gethostbyname(). Conversely, gethostbyname() is no longer part of POSIX, and thus officially no longer "standard".
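
Putting it all together, here's a self-contained sketch of my own (not from heartleech.c; the function name is mine) that adds error handling and the idiomatic walk through the returned list, trying each address until one connects:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

/* Connect to hostname:port over TCP, IPv4 or IPv6, whichever works.
 * Returns a connected socket, or -1 on failure. */
int connect_to(const char *hostname, const char *port)
{
    struct addrinfo hints;
    struct addrinfo *list = NULL;
    struct addrinfo *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;   /* TCP; any address family */

    int x = getaddrinfo(hostname, port, &hints, &list);
    if (x != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(x));
        return -1;
    }

    /* The list may mix IPv4 and IPv6; try each until one connects. */
    for (ai = list; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd == -1)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                     /* success */
        close(fd);
        fd = -1;
    }

    freeaddrinfo(list);
    return fd;
}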

If you learn Sockets programming at a university, they still teach gethostbyname(). That's because, as far as Internet programming is concerned, academia is decades out of date.

Thursday, January 22, 2015

Needs more Hitler

Godwin's Law doesn't apply to every mention of Hitler, as the Wikipedia page explains:
Godwin's law applies especially to inappropriate, inordinate, or hyperbolic comparisons with Nazis. The law would not apply to mainstays of Nazi Germany such as genocide, eugenics, racial superiority, or to a discussion of other totalitarian regimes, if that was the explicit topic of conversation, because a Nazi comparison in those circumstances may be appropriate.
Last week, I wrote a piece about how President Obama's proposed cyber laws were creating a Cyber Police State. The explicit topic of my conversation is totalitarian regimes.

This week, during the State of the Union address, I compared the text of Mein Kampf to the text of President Obama's speech. Specifically, Mein Kampf said this:
The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation.
Obama's speech in support of his cyber legislation says this:
No foreign nation, no hacker, should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families, especially our kids. We are making sure our government integrates intelligence to combat cyber threats, just as we have done to combat terrorism. And tonight, I urge this Congress to finally pass the legislation we need to better meet the evolving threat of cyber-attacks, combat identity theft, and protect our children’s information.
There is no reason to mention children here. None of the big news stories about hacker attacks have mentioned children. None of the credit card scandals, or the Sony attack, involved children. Hackers don't care about children, have never targeted children in the past, and are unlikely to target children in the future. Children are wholly irrelevant to the discussion.

Tuesday, January 20, 2015

Drums of cyberwar: North Korea's cyber-WMDs

People ask me if today's NYTimes story changes my opinion that North Korea didn't do the Sony hack. Of course it doesn't. Any rational person can tell that the story is bogus. Indeed, such stories hint that the government is hiding something.

Wednesday, January 14, 2015

Notes on the CIA spying case

The CIA announced it wasn't going to punish those responsible for spying/hacking on Senate computers. Since journalists widely get this story wrong, I thought I'd write up some notes getting it right. That's because while the CIA organization is guilty of gross misconduct, it's actually likely that no individual employees did anything wrong. The organization is guilty, but (possibly) the people aren't.

The first thing to note is that no hacking happened. These were CIA computers, at a CIA facility, managed by CIA sysadmins, who had the admin passwords.

That's the complicated bit. In 2009 when the Intelligence committee demanded to look at the torture/interrogation documents, the CIA balked about the security issues of staffers taking documents offsite. Therefore, they came to an agreement with the Senate: the CIA would set up a special secured network at their building, disconnected from the rest of the CIA network. The Senate staffers would go there to work. Documents would be transferred from the CIA's main network onto this special network by hand (probably USB flash drive or something).

The Senate committee didn't have to agree to this. By law, they have oversight, and can make decisions that screw the CIA. But the Senate committee recognized this was a legitimate concern, and agreed to the compromise. However, they demanded concessions from the CIA, such as not "spying" on their staffers.

I say "spying" here because that's the word used in the press, but it was more complex than that. Spying on employees is routine within the CIA. There are always compliance officers running around checking computers to make sure they don't have documents on them they shouldn't. So "compliance" is a better word than "spying" -- it sounds much nicer.

But the agreement was specifically that only IT techies would have access to the computers purely for the purposes of IT techy stuff, and that nobody else at the CIA would have access -- not even for compliance purposes.

Well, in the course of events, other people at the CIA did access these computers, did do compliance checks. Judging from Dianne Feinstein's comments, it appears that most of these incidents were just honest mistakes, at least, she's not concerned by them. The one incident she's concerned about involves the Panetta report -- the internal CIA investigation that found gross misconduct in the torturing/interrogation.

The Panetta report wasn't one of the documents the Senate staffers were supposed to see. Nobody knows how it got onto these special computers. The staffers just found it there accidentally. At least, that's the information we have publicly. The CIA accuses the staffers of doing nefarious things, but we outsiders can't know really what happened. (Maybe somebody at the CIA leaked it to the staffers).

When the CIA heard the staffers had the Panetta document, they did what they always do when things like this happen: their normal compliance checks and investigation. Among the things they would do in such situations is thoroughly scan the computers they'd given the Senate staffers, read their emails, search their files, and so forth. Yes, at the top level, the head of the CIA agreed that this would not happen -- but the employees didn't necessarily know this. Apparently, nobody told them about the agreement -- they didn't get the memo.

The problem is ultimately this: while the CIA as an organization broke the rules here, it's possible that no individual person did anything intentionally bad.

Personally, I think this is bullshit. I think lower level flunkies knew what they were doing was wrong, that high-level managers gave them direction, and that many at the CIA deliberately pushed the rules as much as they could in order to interfere with the Senate investigation. But I don't have proof of this, and no such proof has been made public.


I don't like the CIA. I think their torture is a stain on our national honor. I think it's a travesty that the torturers aren't punished. It's clear I don't support the CIA, and that I have no wish to defend them. But I still defend truth, and the truth is this: the CIA did not "hack Senate computers" as many claim.

These notes were compiled mostly from Dianne Feinstein's description of events: http://www.feinstein.senate.gov/public/index.cfm/2014/3/feinstein-statement-on-intelligence-committee-s-cia-detention-interrogation-report.


Obama's War on Hackers


In next week's State of the Union address, President Obama will propose new laws against hacking that could make either retweeting or clicking on the above (fictional) link illegal. The new laws make it a felony to intentionally access unauthorized information even if it's been posted to a public website. The new laws make it a felony to traffic in information like passwords, where "trafficking" includes posting a link.

You might assume that things would never become that bad, but it’s already happening even with the current laws. Prosecutors went after Andrew “weev” Auernheimer for downloading a customer list AT&T negligently made public. They prosecuted Barrett Brown for copying a URL to the Stratfor hack from one chatroom to another. A single click is all it takes. Prosecutors went after the PayPal-14 for clicking on a single link they knew would flood PayPal’s site with traffic. The proposed changes make such prosecutions much easier.

Even if you don't do any of this, you can still be guilty if you hang around with people who do. Obama proposes upgrading hacking to a "racketeering" offense, which means you can be guilty of being a hacker by simply acting like a hacker (without otherwise committing a specific crime). Hanging out in an IRC chat room giving advice to people now makes you a member of a "criminal enterprise", allowing the FBI to sweep in and confiscate all your assets without charging you with a crime. If you innocently clicked on the link above, and think you can defend yourself in court, prosecutors can still use the 20-year sentence of a racketeering charge to force you to plea bargain down to a 1-year sentence for hacking. (Civil libertarians hate the police-state nature of racketeering laws.)

Obama’s proposals come from a feeling in Washington D.C. that more needs to be done about hacking in response to massive data breaches of the last couple years. But they are blunt political solutions which reflect no technical understanding of the problem.

Most hacking is international and anonymous. Law enforcement can't catch the perpetrators no matter how much the activities are criminalized. This War on Hackers is likely to be no more effective than the War on Drugs, where after three decades the prison population has skyrocketed from 0.1% of the population to a staggering 1%. With 5% of the world's population, we have 25% of the world's prisoners -- and this has done nothing to stop drugs. Likewise, while Obama's new laws will dramatically increase hacking prosecutions, they'll be of largely innocent people rather than the real hackers that matter.

Internet innovation happens by trying things first and asking for permission later. Obama's law will change that. For example, a search engine like Google downloads a copy of every website in order to create a search "index". This sort of thing is grandfathered in, but if "copying the entire website" were a new idea, it would be something made illegal by the new laws. Such copies knowingly get information that website owners don't intend to make public. Similarly, had today's hacking laws been around in the 1980s, the founders of Apple might still be in jail today, serving out long sentences for trafficking in illegal access devices.

The most important innovators this law would affect are the cybersecurity professionals that protect the Internet. If you cared about things such as "national security" and "cyberterrorism", then this should be your biggest fear. Because of our knowledge, we do innocent things that look to outsiders like "hacking". Protecting computers often means attacking them. The more you crack down on hackers, the more of a chilling effect you create in our profession. This creates an open-door for nation-state hackers and the real cybercriminals.

Along with its Hacking Prohibition law, Obama is also proposing a massive Internet Surveillance law. Companies currently monitor their networks, using cybersecurity products like firewalls, IPSs, and anti-virus. Obama wants to strong-arm companies into sharing that information with the government, creating a virtualized or “cloud” surveillance system.

In short, President Obama’s War on Hackers is a bad thing, creating a Cyber Police State. The current laws already overcriminalize innocent actions and allow surveillance of innocent people. We need to roll those laws back, not extend them.

Monday, January 12, 2015

A Call for Better Vulnerability Response

Microsoft forced a self-serving vulnerability disclosure policy on the industry 10 years ago, but cries foul when Google does the same today.

Ten years ago, Microsoft dominated the cybersecurity industry. It employed, directly or through consultancies, the largest chunk of security experts. The ability to grant or withhold business meant influence over those consulting companies -- Microsoft didn't even have to explicitly ask for consulting companies to fire Microsoft critics for that to happen. Every product company depended upon Microsoft's goodwill in order to develop security products for Windows -- engineering and marketing help that could be withheld on a whim.

This meant, among other things, that Microsoft dictated the "industry standard" of how security problems ("vulnerabilities") were reported. Cybersecurity researchers who found such bugs were expected to tell the vendor in secret, and give the vendor as much time as they needed in order to fix the bug. Microsoft sometimes sat on bugs for years before fixing them, relying upon their ability to blacklist researchers to keep them quiet. Security researchers who didn't toe the line found bad things happening to them.

I experienced this personally. We found a bug in a product called TippingPoint that allowed us to decrypt their "signatures", which we planned to release at the BlackHat hacker convention, after giving the vendor months to fix the bug. According to rumors, Microsoft had a secret program with TippingPoint, with special signatures designed to track down cybercriminals. Microsoft was afraid that if we disclosed how to decrypt those signatures, their program would be found out.

Microsoft contacted our former employer, ISS, which sent us legal threats. Microsoft sent FBI agents to threaten us in the name of national security. A Microsoft consultant told the BlackHat organizer, Jeff Moss, that our research was made up, that it didn't work, so I had to sit down with Jeff at the start of the conference to prove it worked before I was allowed to speak.

My point is that a decade ago in the cybersecurity industry, Microsoft dictated terms.

Today, the proverbial shoe is on the other foot. Microsoft's products are now legacy, so Windows security is becoming as relevant as IBM mainframe security. Today's cybersecurity researchers care about Apple, Google Chrome, Android, and the cloud. Microsoft is powerless to threaten the industry. It's now Google who sets the industry's standard for reporting vulnerabilities. Their policy is that after 90 days, vulnerabilities will be reported regardless of whether the vendor has fixed the bug. This applies even to Google itself when researchers find bugs in products like Chrome.

This is a nasty trick, of course. Google uses modern "agile" processes to develop software. That means that after making a change, the new software is tested automatically and shipped to customers within 24 hours. Microsoft is still mired in antiquated 1980s development processes, so that it takes three months and expensive manual testing before a change is ready for release. Google's standard doesn't affect everyone equally -- it hits old vendors like Microsoft the hardest.

We saw the effect of this last week: 90 days after notifying Microsoft of a bug, Google dumped the 0day (the information hackers need to exploit the bug) on the Internet before Microsoft could release a fix.

I enjoyed reading Microsoft's official response to this event, full of high-minded rhetoric about why Google is bad, and why Microsoft should be given more time to fix bugs. It's just whining -- Microsoft's alternative disclosure policy is even more self-serving than Google's. They are upset over their inability to adapt and fix bugs in a timely fashion. They resent how Google exploits its unfair advantage. Since Microsoft can't change their development process, they try to change public opinion to force Google to change.

But Google is right. Since we can't make perfect software, we must make fast and frequent fixes the standard. Nobody should be in the business of providing "secure" software that can't turn around bugs quickly. Rather than 90 days being too short, it's really too long. Microsoft either needs to move forward with the times and adopt "agile" methodologies, or just accept its role of milking legacy for the next few decades as IBM does with mainframes.

Monday, January 05, 2015

Platitudes are only skin deep

I overdosed on Disney Channel over the holidays, because of course children control the remote. It sounds like it's teaching kids wholesome lessons, but if you pay attention, you'll realize it's not. It just repeats meaningless platitudes with no depth, and sometimes gets the platitudes wrong.

For example, it had a segment on the importance of STEAM education. This sounds a lot like "STEM", which stands for "science, technology, engineering, and math". Many of us believe in getting kids interested in STEM. It's good for them, because they'll earn twice as much as other college graduates. It's good for society, because there aren't enough technical graduates coming out of college to maintain our technology-based society. It's also particularly important for girls, because we still have legacy sexism that discourages girls from pursuing technical careers.

But Disney adds an 'A' in the middle, making STEM into STEAM. The 'A' stands for "Arts", meaning the entire spectrum of Liberal Arts. This is nonsense, because at this point, you've now included pretty much all education. The phrase "STEAM education" is redundant, conveying nothing more than simply "education".

What's really going on is that Disney attacks the very idea it pretends to promote. Proponents of STEM claim those subjects deserve special emphasis over the Arts, and Disney slyly says the opposite, without parents noticing.

Another example of this is a show featuring the school's debate team. They say that debate is important in order to understand all sides of an issue. But the debate topic they have is "beauty is only skin deep", and both "sides" of the debate agree with the proposition.

This is garbage. Two sides to a debate means two opposing sides. It's the very basis of enlightenment, the proposition that reasonable people can disagree. It means that if you are Protestant, that while you disagree with Catholics, you accept the fact that they are reasonable people, and not devil worshippers who eat babies. In real school debate, you are forced to debate both sides -- you can't choose which side you want to debate. This means debate isn't about your opinion, but your ability to cite support for every claim you make.

What Disney implicitly teaches kids is that there is only one side to a debate, the correct side, and that anybody who disagrees is unreasonable.


The problem with Disney, ultimately, is that the writers are stupid. They aren't deep thinkers; they don't really understand the platitudes they want to teach children, so they end up teaching children the wrong thing.