Wednesday, March 30, 2016

Sometimes techy details matter

How terrorists use encryption is going to become central to the Cryptowars 2.0 debate. Both sides are going to cite the case of Reda Hame described in this NYTimes article. On one hand, it shows that terrorists do indeed use encryption. On the other hand, the terrorists used TrueCrypt, which can't be stopped, no matter how many "backdoor" laws the police-state tries to pass.

The problem with the NYTimes article is that the technical details are garbled. (Update: I correct them at the bottom.) Normally, that's not a problem, because we experts can fill in the details using basic assumptions. But the technique ISIS used is bizarre, using TrueCrypt containers uploaded to a file-sharing site. This is a horrible way to pass messages -- the assumptions we make trying to fill in the blanks are likely flawed.

Tuesday, March 29, 2016

How to detect TrueCrypt blobs being passed around

So, challenge accepted:

tl;dr: The NSA should be able to go back through its rolling 90-day backlog of Internet metadata and find all other terrorist cells using this method.

From what we can piece together from the NYTimes article, it appears that ISIS is passing around TrueCrypt container files as a way of messaging. This is really weird. It has the property of security through obscurity: it evades detection for a while, because we'd never consider that ISIS would do such a strange thing. But it has the bad property that once discovered, it becomes easier to track. With the keys found on the USB drive, we can now start decrypting things that were a mystery before.
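For the curious, the sort of heuristic such a scanner might use is straightforward, because TrueCrypt containers are unusual files: they have no magic bytes, their size is always a multiple of 512, and every sector (including the first, where ordinary file formats put a recognizable header) is indistinguishable from random noise. Here's a minimal sketch in C; the function names and the 128-distinct-bytes threshold are my own inventions for illustration, not calibrated values:

```c
#include <stddef.h>
#include <stdint.h>

/* Count distinct byte values in a buffer. Encrypted data looks uniform:
 * the first 512 bytes of a TrueCrypt container typically contain over
 * 200 distinct byte values, while the headers of ordinary file formats
 * and plain text contain far fewer. */
static int distinct_bytes(const uint8_t *buf, size_t len)
{
    int seen[256] = {0};
    int n = 0;
    for (size_t i = 0; i < len; i++) {
        if (!seen[buf[i]]) {
            seen[buf[i]] = 1;
            n++;
        }
    }
    return n;
}

/* Heuristic: a TrueCrypt container has no magic bytes, a size that's a
 * multiple of 512, and a first sector that looks like random noise
 * (where other formats put a recognizable header). */
int looks_like_truecrypt(const uint8_t *buf, size_t len)
{
    if (len < 512 || len % 512 != 0)
        return 0;
    return distinct_bytes(buf, 512) > 128;
}
```

A real classifier would also check entropy over the whole file and whitelist known high-entropy formats, but this captures the core idea: TrueCrypt's very featurelessness is itself a detectable feature.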

We are going off of very little information at the moment, but let's imagine some fictional things.

Some other comments on the ISIS dead-drop system

So, by the time I finished this, the New York Times article had more details. Apparently, it really is just TrueCrypt. What's still missing is how the messages are created. Presumably, it's just notepad. Also missing is the protocol used. Is it HTTP/FTP file upload? Or do they log on via SMB? Or is it a service like DropBox?

Anyway, I think the method for sending messages that I describe below is better:

Old post:

CNN is reporting on how the Euro-ISIS terrorists are using encryption. The details are garbled, because neither the terrorists, the police, nor the reporters understand what's going on. @thegrugq tries to untangle this nonsense in his post, but I have a different theory. It's pure guesswork, trying to create something that is plausibly useful that somehow fits the garbled story.

I assume what's really going on is this.

Monday, March 28, 2016

Comments on the FBI success in hacking Farook's iPhone

Left-wing groups like the ACLU and the EFF have put out "official" responses to the news that the FBI cracked Farook's phone without help from Apple. I thought I'd give a response from a libertarian/technologist angle.

First, thank you FBI for diligently trying to protect us from terrorism. No matter how much I oppose you on the "crypto backdoors" policy question, and the constitutional questions brought up in this court case, I still expect you to keep trying to protect us.

Saturday, March 26, 2016

How the media really created Trump

This NYTimes op-ed claims to diagnose the press's failings with regard to Trump, but in its first sentence demonstrates how little the press understands the problem. The problem isn't with Trump, but with the press.

The reason for Trump is that the press has discarded its principle of "objectivity". Reasonable people disagree. The failing of the press is that they misrepresent one side, the Republicans, as being unreasonable. You see that in the op-ed above, where the very first sentence decries the "Republican Party’s toxic manipulation of racial resentments". In fact, both parties are equally reasonable, or unreasonable as the case may be, with regards to race.

Thursday, March 24, 2016

I'm skeptical of NAND mirroring

Many have proposed "NAND mirroring" as the solution to the FBI's troubles in recovering data from the San Bernardino shooter's iPhone. Experts don't see any problem with this approach, but that doesn't mean experts know it will work, either. There are problems.

The problem is that iPhones erase the flash after 10 failed guesses. The solution is therefore to create a backup, or "mirror", of the flash chips. When they get erased, just restore from backup, and try again.

The flaw with this approach is that it's time consuming. After every 10 failed attempts, the chips need to be removed from the phone, reflashed, and reinserted back into the phone. Then the phone needs to be rebooted.
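To put numbers on that: a 4-digit PIN has 10,000 possibilities, and with 10 guesses per erase, that's up to 1,000 desolder/reflash/reboot cycles. A back-of-envelope calculation (the minutes-per-cycle figure below is purely my assumption for illustration, not a measurement):

```c
/* Worst-case cost of the mirror-and-retry attack. Each cycle is up to
 * 10 passcode guesses, then removing, reflashing, and reinserting the
 * chips and rebooting the phone. */
double worst_case_hours(int total_pins, int tries_per_cycle,
                        double minutes_per_cycle)
{
    /* round up: a final partial batch of guesses still costs a cycle */
    int cycles = (total_pins + tries_per_cycle - 1) / tries_per_cycle;
    return cycles * minutes_per_cycle / 60.0;
}
```

At a hypothetical 15 minutes per cycle, the worst case is 1,000 cycles, or about 250 hours of tedious, error-prone lab work -- and a 6-digit PIN multiplies that by 100.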

Tuesday, March 22, 2016

There's no conspiracy behind the FBI-v-Apple postponement

The FBI says it may have found another way to get data off an iPhone, and thus asked to postpone a hearing about whether Apple can be forced to do it. I thought I'd write a couple of comments. Specifically, people are looking for reasons to believe that the FBI, or Apple, or both are acting in bad faith, and that everything that happens is some sort of conspiracy. As far as I can tell, all evidence is that they are acting in good faith.

Sunday, March 20, 2016

Why we are upset with the NYTimes Paris terrorist article

On the Twitters, we've been mocking that NYTimes article on the Paris terrorists and how they used "encryption". I thought I'd write up a brief note as to why.

It's a typical example of yellow journalism. The public isn't familiar with "encryption", so it's easy to sensationalize it, to make it seem like something sinister is going on.

Friday, March 11, 2016

No, you back off on backdoors or else

Speaking at #SXSW, President Obama threatened the tech community, telling us to backdoor our encryption ourselves or else Congress will mandate a worse solution later.

No, Mr. President, it works the other way around. You'd better back off on your encryption demands, or else the tech community will revolt. That's what's already happened with Apple's encryption efforts, as well as with app developers like Signal and Wickr. Every time you turn the screws, we techies increase the encryption.

It's not a battle you can win without going full police-state. Sure, you can force Apple to backdoor its stuff, but then what about the encrypted apps? You'd have to lock them down as well. But what about encrypted apps developed in foreign countries? What about software I write myself? You aren't going to solve the "going dark" problem until you control all crypto.

If you succeed in achieving your nightmare Orwellian scenario, I promise you this: I'll emigrate to an extradition-free country, to continue the fight against the American government.

Your crypto backdoors create a police-state beyond what even police-state advocates like Michael Hayden and Lindsey Graham can tolerate. Your point on "balance" is a lie. We've become radically unbalanced toward mass surveillance, and the courts have proven toothless to stop it. We techies won't tolerate it. Back off on this, or else.

Can the Apple code be misused? (Partly Retracted)

Dan Guido (@DGuido), who knows more about iOS than I do, wants me to retract this post. I'm willing to retract it based solely on his word, but he won't give me any details as to what specifically he objects to. I'm an expert in reverse-engineering and software development, but I admit there may be something specific to iOS (such as how it encrypts firmware) that I may not know.

This post will respond to the tweet by Orin Kerr:

The government is right that the software must be signed by Apple and made to only work on Farook's phone, but the situation is more complicated than that.

The basic flaw in this picture is jailbreaks. A jailbreak is a process of finding some hack that gets around Apple's "signing" security layer. Jailbreaks are popular in the user community, especially in China, where people want to run software not approved by Apple. When the government says "intact security", it means "non-jailbroken".

Each new version of iOS requires the discovery of some new hack to enable jailbreaking. Hacking teams compete to see who can ship a new jailbreak to users, and other companies sell jailbreaks to intelligence agencies. Once jailbroken, the signing is bypassed, as is the second technique of locking the software specifically to Farook's phone.

The details are more complicated than this. The issue isn't that jailbreaks will allow this software to run. Instead, the issue is that jailbreakers can reverse-engineer this software to grab its secrets, and then use those secrets on other phones.

A more important flaw in this reasoning is the creation of the source code itself. This is the human readable form of the code written by the Apple engineers. This will later be compiled into "binary code" then signed. It's at the source code stage that Apple is most in danger of losing secrets.

Let's assume that Apple is infiltrated by spies from the NSA and the Chinese. Some secrets can still be kept, such as the signing keys for the software. Other secrets cannot be kept, such as source code. It's likely the NSA and/or Chinese have stolen Apple's source code multiple times. Indeed, most of the source is public anyway (the Darwin operating system, Webkit, etc.). It's not something Apple is too concerned about -- as long as the source doesn't get published.

When Apple writes this specific tool for the FBI, it'll be very hard to keep that source out of the hands of such spies. It's possible to keep it secret, but only through burdensome heroic efforts on Apple's part that certainly weren't part of its initial estimate.

More important than the source code, though, are the ideas. Code is expressive speech that communicates ideas. Even when engineers forget the details of source code, they can still retain these ideas. Years later, they can recall those ideas and use them. I give a real example of this in my previous post on expressiveness of code. Apple cannot contain these ideas. The engineers in question, after building the code, can immediately quit Apple and go to work for Chinese jailbreak companies or American defense contractors at twice the salary. And it's completely legal.

It's like a failed Hollywood movie project. In the end, the studio decides not to move forward with the project, shutting it down. The employees then go off to different companies, taking those ideas with them, using them in unrelated movie projects. That's the story told in the award-winning documentary Jodorowsky's Dune, which ties that production to other unrelated movies, like Alien, Star Wars, and Terminator.

Orin goes on to ask:
It will likely take more than a few days to write the code. The FBI misrepresents the task as consisting of only a few lines of code. But Apple estimates a much larger project. Though to be fair, some of that is testing, packaging, and documentation unrelated to the amount of code written.

The task will likely require different skills from multiple engineers, rather than being the output of a single engineer. That's because it's possible no single engineer has all the necessary skills. However, all the engineers involved will still walk away with the entire picture, able to recreate the work on their own when working for the Chinese or Booz-Allen.

In the end, it's not a huge secret that Apple will be losing. For the most part, the "backdoor" already exists, the only question is how best to exploit it. It's likely something the jailbreak community can figure out for themselves. But at the same time, Apple does have a point that there is the fundamental burden that producing this software will slightly (though not catastrophically) weaken the security of their existing phones.

Thursday, March 10, 2016

Code is expressive. Full Stop. (FBIvApple)

I write code. More than a billion dollars of products have been sold where my code is the key component. I've written more than a million lines of it. I point this out because I want to address this FBIvApple fight from the perspective of a coder -- from the perspective of somebody whom the FBI proposes to conscript into building morally offensive code. Specifically, I want to address the First Amendment issue, whether code is expressive speech. I demonstrate expressiveness far beyond what the government in this case imagines.

Consider Chris Valasek (@NudeHabasher), most recently famous for his car-hacking stunt of hacking into a Jeep from the Internet (along with Charlie Miller @CharlieMiller).

As Chris tells the story, he was on an airplane without WiFi, writing code for his "CANbus-hack" tool that would hack the car. Without the Internet, he didn't have access to reference information, such as for strtok(). But he did remember working, years earlier, on my (closed-source) code, and used the ideas he remembered to solve his immediate problem. No, he didn't remember the specifics of the code itself, and in any case, his CANbus-hack was unrelated to that code. Instead, it was the ideas expressed by my code that he remembered.

What he came up with was this:

While this is CAN-bus functionality, you'll notice a certain similarity with code in my open-source masscan port-scanner:

The first piece of code hacks computers inside your car. The second piece of code scans the entire Internet. They are wholly unrelated in every way that two pieces of code can be unrelated -- except that both share an idea. That idea, state-machine parsers, was communicated by my original code, then adopted for a wholly different purpose by Chris many years later.
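For readers who haven't seen the technique: a state-machine parser consumes input one byte at a time, keeps all of its context in a small state variable, and can therefore pause at any buffer boundary and resume later without reassembling the stream. Here's a toy sketch of the idea (my own example, not the actual masscan or CANbus code) that recognizes the prefix "HTTP/" across arbitrarily split buffers:

```c
#include <stddef.h>

/* States: one per expected byte of the pattern, plus terminal states.
 * All parsing context lives in this one integer, so the caller can
 * hand us input in fragments of any size. */
enum { S_H, S_T1, S_T2, S_P, S_SLASH, S_MATCH, S_FAIL };

int parse_http_prefix(int state, const unsigned char *buf, size_t len)
{
    static const char pattern[] = "HTTP/";
    for (size_t i = 0; i < len; i++) {
        if (state == S_MATCH || state == S_FAIL)
            break;              /* terminal states absorb further input */
        if (buf[i] == (unsigned char)pattern[state])
            state++;            /* advance to the next expected byte */
        else
            state = S_FAIL;
    }
    return state;               /* caller saves this for the next buffer */
}
```

The point is that nothing is buffered: feed it "HT" in one packet and "TP/..." in the next, and it picks up exactly where it left off, which is why the same pattern works equally well for port-scanner banners and CAN-bus messages.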

The government claims that computer code has limited expressiveness:

That's wrong. My code expressed an important idea to Chris Valasek, unrelated to the variable names or comments.

Only JK Rowling could've created the Harry Potter books. Only Joss Whedon could've created the first Avengers movie. Only Frank Lloyd Wright could've created Fallingwater. Only I could've made something specifically like masscan (though other similar tools exist, like zmap). Only Chris Valasek could've created his specific car hacking code (though other related code exists). These are artistic, creative works, unique to their creators. They express unique ideas, far beyond the mechanics of code.

It's art, but it's also revolution. How universities teach this sort of code is wrong. Many of us, especially those focused on the field of "LANGSEC", like Sergey Bratus (@SergeyBratus) and Meredith Patterson (@maradydd), are trying to change that with different ideas. State-machine parsers are how I tackle this. I could explain these ideas with a 500-page book, but it's easier with 1000 lines of code.

I've cited here a specific example of expressive code, even if you strip the comments and randomize the variable names. Code is creative speech. The government has asserted, without evidence, that it's not significantly expressive. They are wrong.

While my code is designed for defenders, it's used by hackers and "cyber-terrorists". It's licensed with the GPL. I can imagine that some day a court will compel code/speech out of me, when going after hackers/terrorists. That would be a violation of my rights.

So, below is somebody who read my X.509 code today. The point isn't that my code is "cool", but that it's so expressive that it will cause some people to say "Duuuuuuuude!!". It's been known to trigger the opposite reaction from other people, who think I'm an idiot. Either way: expressive.

Captain America Civil War -- it's us

The next Marvel movie is Captain America: Civil War (May 6, 2016). The plot is this: after the Avengers keep blowing things up, there is pushback demanding accountability. Government should be in control of when to call in the Avengers, and superhumans should be forced to register with the government. Iron Man is pro-accountability, as you've seen his story arc evolve toward this point in the movies. Captain America is anti-accountability.

This story arc is us, in cybersecurity. Last year, Charlie Miller and Chris Valasek proved they could, through the "Internet", remotely hack in and control a car driving down the freeway. In the video, we see a frightened reporter as the engine stalls in freeway traffic. Should researchers be able to probe cars, medical equipment, and IoT devices, accountable to nobody but themselves? Or should they be accountable to the public, and to rules set up by government?

This story is about us personally, too. In cyberspace, many of us have superhuman powers. Should we be free to do whatever we want, without accountability, or should we be forced to register with the government, so they can watch us? For example, I scan the Internet (the entire Internet) with relative impunity. This is what I tweeted when creating my masscan tool, an apt analogy:
Finally, this is related to the #FBIvApple debate on crypto backdoors. Should law-enforcement be able to get into all our electronics when they have a warrant upon probable cause? Or should citizens be able to encrypt their data with impunity, so that nobody (not even the NSA codebreakers) can read it?

I'm totally #TeamCap on this one, as most of you know. It's car companies and medical device manufacturers who should be held accountable for defects. They evade responsibility because they can pay for government lobbyists. Only a free security research community will ever hold them accountable. Similarly, as Snowden showed, warrants are not enough to hold the government and law enforcement accountable, and thus unfettered crypto must be a right of the people that government cannot abridge. Lastly, I'll never "register" or "get certified" by the government. I'll leave the country before that happens.

Wednesday, March 02, 2016

An open letter to Sec. Ashton Carter


For security research, I regularly "mass scan" the entire Internet. For example, my latest scan shows between 250,000 and 300,000 devices still vulnerable to Heartbleed. This is legal. This is necessary security research. Yet, I still happily remove those who complain and want me to stop scanning them.

The Department of Defense didn't merely complain, but made threats, forcing me to stop scanning them. You guys were quite nasty about it, leaving me to figure out for myself which address ranges belong to the DoD.

These threats are likely standard procedure at the DoD: investigate every major source of scans and shut down those you might have power over. But the effect of this is typical government corruption, preventing me from reporting the embarrassing detail of how many DoD systems are still vulnerable to Heartbleed (without stopping the Chinese or Russians from knowing this detail).

Please remove your threats, so that I can scan the DoD in the same way I scan the rest of the Internet. This weekend I'll be scanning the Internet for systems susceptible to the DROWN attack. I would like to include the DoD in those scans.

I write to you now because you are making overtures to Silicon Valley, and offering bug bounties. Fixing this problem would help in this process.

Robert Graham