Wednesday, February 03, 2016

Lawfare thinks it can redefine π, and backdoors

There is a gulf between how people believe law works (from watching TV shows like Law and Order) and how law actually works. You lawyer people know what I'm talking about. It's laughable.

The same is true of cyber: there's a gulf between how people think it works and how it actually works.

This Lawfare blog post thinks it has come up with a clever method to get its way in the crypto-backdoor debate: make carriers like AT&T responsible only for the what ("deliver interpretable signal in response to lawful wiretap order") without defining the how (crypto backdoors, etc.). The pressure would come in the form of removing the liability protections carriers now enjoy, which shield them from responsibility for what customers transmit across their networks. Or as the post paraphrases the proposal:
Don’t expect us to protect you from liability for third-party conduct if you actively design your systems to frustrate government efforts to monitor that third-party conduct.
The post is proud of its own smarts, as if its author has figured out how to outwit mathematicians and redefine pi (π). But the solution is nonsense, based on a hopelessly naive understanding of how the Internet works. It appears all the author knows about the Internet is what they learned from watching CSI: Cyber.

The Internet is end-to-end. End-to-end is the technology shift that made the Internet happen, as compared to alternative directions cyberspace might have taken.

What that means is that AT&T doesn't encrypt traffic. Apple's iPhone doesn't encrypt traffic. Instead, it's the app installed on the phone that does the encryption. Neither AT&T nor Apple can stop encryption from happening.
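
To make that concrete, here's a minimal sketch of what "the app does the encryption" means. It's in Python using the third-party cryptography library, purely for illustration (nothing here is AT&T's or Apple's actual code): the message is already ciphertext before it ever touches the carrier's network, so the carrier has nothing to backdoor.

```python
# pip install cryptography   -- illustrative sketch only, not any vendor's real code
from cryptography.fernet import Fernet

# The two apps share a key out-of-band; a real app would run a key exchange.
key = Fernet.generate_key()
alice_app = Fernet(key)
bob_app = Fernet(key)

# The app encrypts *before* handing bytes to the network stack.
ciphertext = alice_app.encrypt(b"meet at the usual place")

def carrier_relay(packet: bytes) -> bytes:
    """All the carrier ever sees: opaque bytes it can neither read nor strip the encryption from."""
    print("carrier sees:", packet[:32], b"...")
    return packet

# The receiving app decrypts on the other end.
print(bob_app.decrypt(carrier_relay(ciphertext)))   # b'meet at the usual place'
```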

You think that because most people use iMessage or Snapchat, all you have to do is turn the screws on those companies to force them to comply with backdoors. That won't work, because the bad guys will stop using those apps and install different encrypted apps, like Signal. You imagine that it's just a game of whack-a-mole, and that eventually you'll pressure all apps into compliance. But Signal is open-source. If it disappeared tomorrow, I'd still have a copy of the source, which I can compile into my own app I'll call Xignal. I'll continue making encrypted phone calls with my own app. Even if no source existed today, I could write my own within a couple of months. Indeed, writing an encrypted chat app is a typical homework assignment that colleges assign computer science students. (You people still haven't come to grips with the fact that in cyberspace, we are living with the equivalent of physicists able to whip up A-bombs in their basements.)
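
For a sense of how small that homework assignment is, here's a toy version of the core of an "Xignal"-style app, sketched with the PyNaCl library (my choice for the example; this is not Signal's actual protocol or code, just the same end-to-end idea): each side generates a keypair on its own device, swaps public keys, and from then on everything crossing the network is authenticated ciphertext.

```python
# pip install pynacl   -- a toy sketch, not Signal's protocol
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; no carrier involved.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Only the public keys ever need to cross the network.
alice_box = Box(alice_secret, bob_secret.public_key)
bob_box = Box(bob_secret, alice_secret.public_key)

# What crosses AT&T's wires is authenticated ciphertext with a random nonce.
wire_bytes = alice_box.encrypt(b"switching apps is easier than you think")

# Tampering in transit makes decryption fail loudly instead of yielding garbage.
print(bob_box.decrypt(wire_bytes))
```

A real app needs key verification, forward secrecy, and careful engineering on top of this, but the point stands: the building blocks are a free library and a homework deadline.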

Running arbitrary software is a loose end that will defeat every solution you can come up with. It's math. The only way to fix the "going dark" problem is to ban software code. But you can't do that without destroying the economy and converting the country into a dystopian, Orwellian police state.

You think that those of us who oppose crypto backdoors are hippies with a knee-jerk rejection of any government technological mandate. That's not true. The populists at the EFF love technological mandates that run in their favor, such as NetNeutrality mandates, or bans on exporting viruses to evil regimes (though they've recently walked that one back).

Instead, we reject this specific technological mandate, because we know cyber. We know it won't work. We can see that you'll never solve your "going dark" problem, but in trying to, you'll cause a constant erosion of both the economic utility of the Internet and our own civil liberties.

I apologize for the tone of this piece, saying you are stupid about cyber, but that's what it always comes down to. The author of that piece has impressive Washington, D.C. think-tanky credentials, but misfires on the basic end-to-end problem. And all think-tanky pieces on this debate are going to end the same way, because as soon as their authors bring technologists in to consult on the problem, the desired op-eds become stillborn before anybody sees them.

Note: I get the π analogy from a tweet by @quinnorton; I don't know who came up with the analogy originally.

4 comments:

  1. Counterpoint: While it's easy (and true) to say that people will be able to continue to use encrypted apps and write their own, the intelligence agencies are pretty happy to play whack-a-mole. Their reasoning goes like this:

    1) Most people won't care enough about security, and for various reasons (network effects: using whatever their friends use instead of being that weirdo who demands people install a new app to talk) will use the largest and most popular chat program, whose makers can generally be pressured. Just look at the Paris attackers using SMS, of all things.

    2) Perhaps we won't be able to successfully pressure all the big companies, but since we won't let the ones who cooperate talk, people won't know which big programs to avoid anyway.

    3) While it is a typical assignment to create encrypted chat, it's also incredibly common to make security or crypto errors when building such an app, so many of the homebrew things will be broken anyway, or difficult to use (ugh, PGP/GPG), or difficult to use properly.

    4) The small percentage of people who do go out of their way to use rare unbroken apps can be treated as perspective criminals / surveillance targets, because the very fact of going out of your way creates a sort of probable cause. That will justify scooping up that traffic (which will be small enough that we can handle it) and looking at it in depth later. If the metadata leaks any identifying info (and we can get metadata from the cellular network, etc., even if the application is encrypted end-to-end), we can use that to surveil the same people when they're using less secure methods.

    It's a stronger argument than I'd like. Most of us like to point out that what they're proposing is logically impossible, but they don't care about that so much as the practical implications. The Paris attackers using SMS actually boosts their argument from the practical point of view.

  2. "Just look at the Paris attackers using SMS, of all things."

    Yes, let's look at that.

    Yeah, a known-backdoored, intentionally weakened piece of crypto tech that GCHQ has all the keys for. No intel was forthcoming. Yeah, they're too scared to use the data they collect to actually make people safer - and they harm privacy in the first place.

    Hang on, why are we even talking about this again?

    "people won't know which big programs to avoid anyway"

    - they'll trust none of them, and everybody will move to OSS with reproducible binaries. Wait, why are we doing this again - to make commercial software impossible? Seems sensible.

    "The small percentage of people who do go out of their way to use rare unbroken apps can be treated as perspective criminals / surveillance targets"

    Except this stuff is being designed into technical standards like HTTP/2, which de facto *requires* crypto, and everybody is using OpenSSL et al.

    There's no small number of users here; what they've done is push everybody toward more secure communications where before everybody was complacent. Good if you're a luser, bad if you're in the legitimate intelligence game.

  3. I think you mean prospective.

  4. The matter of redefining pi is an old joke about the "Indiana Pi Bill" (Wikipedia article)

