Thursday, February 12, 2015

No, you can't make things impossible to reverse-engineer

I keep seeing this Wired article about somebody announcing a trick to make software "nearly impossible" to reverse-engineer. It's hype. The technique is no better at stopping reverse-engineering than many existing techniques, but it imposes an enormous cost on the system that makes it a lot worse.

We already have deterrents to reverse-engineering. Take Apple iTunes, for example, which has successfully resisted reverse-engineering for years. I think the last upgrade to patch reverse-engineered details was in 2006. Its anti-reverse-engineering techniques aren't wonderful, but are instead simply "good enough". It does dynamic code generation, so I can't easily reverse engineer the static code in IDApro. It does anti-debugging tricks, so I can't attach a debugger to the running software. I'm sure if I spent more time at it, I could defeat these mechanisms, but I'm just a casual reverse-engineer who is unwilling to put in the time.
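Anti-debugging checks of this kind are usually simple. As a hypothetical illustration (not iTunes' actual mechanism, which I haven't examined), a program on Linux can notice an attached debugger or tracer by reading the `TracerPid` field of its own `/proc/self/status`:

```python
def debugger_attached():
    """Return True if a tracer (e.g., gdb or strace) is attached.

    Linux-only sketch: /proc/self/status contains a TracerPid line
    that is nonzero while a ptrace-based tool is attached.
    """
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("TracerPid:"):
                    return int(line.split()[1]) != 0
    except OSError:
        pass  # non-Linux or /proc unavailable: assume no debugger
    return False

print("debugger detected" if debugger_attached() else "no debugger")
```

A real protector would react more aggressively (corrupting its own state, exiting silently), and a determined reverse-engineer can patch the check out — which is exactly the "good enough, not impossible" point.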

The technique described by Wired requires that the software install itself as a "hypervisor", virtualizing parts of the system. This is bad. It's unacceptable for most commercial software, like iTunes, because it would break a lot of computers. It might be acceptable for really high-end software that costs more than the computer, where the machine is essentially dedicated to a single task, but it wouldn't be acceptable for normal software. Virus/malware writers would mostly avoid it, too, because anti-virus products commonly detect attempts to become a hypervisor. That said, a virus that goes "all in" and becomes a hypervisor, which provides lots of other benefits to the virus, might use this technique.

By the way, this isn't a "crypto trick" as promised in the title. Instead, it's a hypervisor/kernel trick, or a CPU/memory-management trick. That's what's exceptional about this technique; the actual encryption is largely immaterial.

The Wired article lacks details. Presumably, one could easily reverse-engineer the code that does the encryption, then write an IDApro script that decrypts the rest of the code. Or, one could reverse-engineer the hypervisor component, make the simple change of marking pages "execute/read" instead of "execute-only", then dump the raw memory.
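The "write a script that decrypts the other code" idea is routine. As a toy illustration (the key, the single-byte XOR scheme, and the bytes below are all made up, not the researcher's actual cipher), the core of such a script is just a loop that undoes the transformation on the protected region:

```python
KEY = 0x5A  # hypothetical single-byte XOR key, recovered from the unpacker stub

def decrypt(blob: bytes, key: int = KEY) -> bytes:
    """Undo a toy XOR 'encryption' layer, byte by byte.

    XOR is self-inverse, so the same function encrypts and decrypts.
    """
    return bytes(b ^ key for b in blob)

# Round-trip demo on a fake code snippet (two NOPs and a RET).
plaintext = b"\x90\x90\xc3"
ciphertext = decrypt(plaintext)
assert decrypt(ciphertext) == plaintext
```

Inside IDA, the same loop would read the encrypted bytes from the database and patch the decrypted ones back, after which static analysis proceeds normally.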

For us geeks, this "split-TLB" thing is really interesting. The research is probably solid; it just doesn't live up to the hype in the Wired story, which leans on the trope that some "major new advance changes everything". Such stories have never proven out in practice, and it's unlikely this one will either.


Unknown said...

Every time someone asks me for a "secure" way for people to watch videos but not be able to download or record them, or something similar... I say "You can't make things impossible for a reverse engineer, you can only make them entertaining."

tourountzis said...

Thanks for the very sane blog post ... Wired hyped the technique up a little bit too much :) Nothing is impossible, especially in IT

bobby said...

The researcher also has some clarifications on his blog:

Unknown said...

Thanks for your thoughts. I didn't write the article, and wasn't allowed to proof-read it and the seemingly magical claims it made. I urge you to check out my FAQ, which has a bit more information and a bit less hype. I would like to raise an objection to your claim about a VMM making it unusable for all but expensive, single-purpose systems. Windows 8 and 10 come with a type-I VMM that is always running (when enabled and on supported hardware), and that's on commodity consumer machines. The performance impact we're seeing is close to nil for the overall system, and ~2% on our test suite and the existing applications we've protected as part of our tests (notepad, paint, calc.exe, etc.), certainly not such a terrible impact that it's only for "software that costs more than the computer". If you have any questions, please don't hesitate to reach out on Twitter (@JacobTorrey).

Cheers, Jacob