Friday, May 27, 2016

Hiroshima is a complex memorial

In the news is Obama's visit to the Hiroshima atomic bomb memorial. The memorial is more complex than you think. It's not a simple condemnation of the bomb. Instead, it's a much more subtle presentation of the complexity of what happened.

I mention this because of articles like this one at Foreign Policy magazine, in which the author starts by claiming he frequently revisits Hiroshima. He claims that the memorial has a clear meaning, a message that he takes away from the site. It doesn't.

The museum puts the bombing into context. It shows how Japan had been in a constant state of war since the 1890s, with militaristic roots going back further into Samurai culture. It shows how Japan would probably have continued its militaristic ways had the United States not demanded complete surrender, a near abdication of the emperor, and imposed a pacifist constitution on the country.

In other words, Japan accepts partial responsibility for having been bombed.

It doesn't shy away from the horror of the bomb. It makes it clear that such bombs should never again be used on humans. But even that has complexity. More people were killed in the Tokyo firebombing than Hiroshima and Nagasaki combined. Had Hiroshima not been nuked, it would've instead been flattened with conventional bombs, causing more devastation and killing nearly as many people. The Japanese were likely more afraid of the Russian invasion than American nukes when they surrendered unconditionally.

When I left the memorial, I was left with the profound sense that I just didn't know the answers.

The truth is that few have "real" opinions about the Hiroshima atomic blast. Their opinions are just proxies for how they feel about America's military might today. The Foreign Policy article above, which claims to have gotten the "message" from the memorial, is a lie. Nobody comes away from the memorial with clarity and focus on the issue, just a new appreciation of the problem.

The EFF is Orwellian as fuck

As this blog has documented many times, the Electronic Frontier Foundation (EFF) is exactly the sort of populist demagogue that Orwell targets in his books 1984 and Animal Farm. Today, the EFF performed yet another amusingly Orwellian stunt. Urging the FCC to regulate cyberspace, it cites the exact law that it had previously repudiated.

Specifically, the EFF frequently champions the document Declaration of Independence of Cyberspace, written by one of its founders, John Perry Barlow. This document says:
"Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather."
Specifically, Barlow is talking about a then recent act of Congress:
In the United States, you have today created a law, the Telecommunications Reform Act, which repudiates your own Constitution and insults the dreams of Jefferson, Washington, Mill, Madison, DeToqueville, and Brandeis. These dreams must now be born anew in us.
That 1996 Act adds sections to the telecom laws, such as this portion:
Title II is amended by inserting after section 221 (47 U.S.C. 221) the following new section:
Today, though, the EFF cites this section as to why the FCC should regulate Internet privacy:
The Commission has the Statutory Authority Under Both Section 222 and Section 705 to Protect Consumer Privacy.
So which is it? Does the EFF repudiate the law, and want government to avoid regulating cyberspace? Or does the EFF use that law to encourage government to regulate cyberspace? Both have their pros and cons, but you really can have only one.

Of course, EFF supporters claim both. It's fascinating watching their doublethink follow the precise lines described by Orwell. They have no problems believing both ideas simultaneously. This shows that the real danger of totalitarianism isn't the evil dictators who impose it from above, but the willing populace (like EFF supporters) who champion it from below.

In any case, it's Barlow's document that was correct. The world is rapidly moving to SSL by default, defeating broadband providers' ability to invade their customers' privacy. That broadband providers are invading customer privacy is mostly just a strawman argument by the EFF, fearmongering in an attempt to pass unneeded regulation.

Wednesday, May 18, 2016

Technology betrays everyone

Kelly Jackson Higgins has a story about how I hacked her 10 years ago, by sniffing her email password via WiFi and displaying it on screen. It wasn't her fault -- it was technology's fault. Sooner or later, it will betray you.

The same thing happened to me at CanSecWest around the year 2001, for pretty much exactly the same reasons. I think it was HD Moore who sniffed my email password. The thing is, I'm an expert, who writes tools that sniff these passwords, so it wasn't like I was an innocent party here. Instead, simply opening my laptop with Outlook running in the background was enough for it to automatically connect to WiFi, then connect to a POP3 server across the Internet. I thought I was in control of the evil technology -- but this incident proved I wasn't.

Friday, May 06, 2016

Freaking out over the DBIR

Many in the community are upset over the recent "Verizon DBIR" because it claims widespread exploitation of the "FREAK" vulnerability. They know this is impossible, because of the vulnerability details. But really, the problem lies in misconceptions about how "intrusion detection" (IDS) works. As a sort of expert in intrusion detection (by which I mean the expert), I thought I'd describe what really went wrong.

First let's talk FREAK. It's a man-in-the-middle attack. In other words, you can't attack a web server remotely by sending bad data at it. Instead, you have to break into a network somewhere and install a man-in-the-middle computer. This fact alone means it cannot be the most widely exploited attack.

Second, let's talk FREAK. It works by downgrading RSA to 512-bit keys, which can be cracked by supercomputers. This fact alone means it cannot be the most widely exploited attack -- even the NSA does not have sufficient compute power to crack as many keys as the Verizon DBIR claims were cracked.

Now let's talk about how Verizon calculates when a vulnerability is responsible for an attack. They use this methodology:
  1. look at a compromised system (identified by AV scanning, IoCs, etc.)
  2. look at which unpatched vulnerabilities the system has (vuln scans)
  3. see if the system was attacked via those vulnerabilities (IDS)
In other words, if you are vulnerable to FREAK, and the IDS tells you people attacked you with FREAK, and indeed you were compromised, then it seems only logical that they compromised you through FREAK.
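This correlation is easier to see as code. Below is a minimal sketch, with made-up host names and alert data (none of this is from the actual DBIR), of the three steps above. Note that it's pure co-occurrence: nothing in it actually ties the IDS alert to the compromise.

```javascript
// Hypothetical hosts and alert data -- none of this is real DBIR data.
const compromised = new Set(["hostA"]);            // step 1: IoCs found
const vulnerable = { hostA: ["FREAK", "CVE-X"] };  // step 2: vuln scan
const idsAlerts = { hostA: ["FREAK"] };            // step 3: IDS events

// The DBIR-style conclusion is just the intersection of the three lists:
function blamedVulns(host) {
  if (!compromised.has(host)) return [];
  return (vulnerable[host] || []).filter((v) =>
    (idsAlerts[host] || []).includes(v)
  );
}

console.log(blamedVulns("hostA")); // [ 'FREAK' ] -- co-occurrence, not causation
```

Nothing here establishes that the "FREAK" alert had anything to do with the compromise; it only establishes that both strings appeared in logs for the same host.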

This sounds like a really good methodology -- but only to stupids. (Sorry for being harsh; I've been pointing out that this methodology sucks for 15 years, and am getting frustrated that people still believe in it.)

Here's the problem with all data breach investigations. Systems get hacked, and we don't know why. Yet, there is enormous pressure to figure out why. Therefore, we seize on any plausible explanation. We then run through the gauntlet of logical fallacies, such as "confirmation bias", to support our conclusion. We torture the data until it produces the right results.

In the majority of breach reports I've seen, the identified source of the compromise is bogus. That's why I never believed North Korea was behind the Sony attack -- I've read too many data breach reports fingering the wrong cause. Political pressure to come up with a cause, any cause, is immense.

This specific logic, "vulnerable to X and attacked with X == breached with X", has been around for a long time. 15 years ago, IDS vendors integrated with vulnerability scanners to produce exactly these sorts of events. It's nonsense that never produced actionable data.

In other words, in the Verizon report, things went in this direction. FIRST, they investigated a system and found IoCs (indicators that the system had been compromised). SECOND, they did the vuln/IDS correlation. They didn't do it the other way around, because such a system produces too much false data. But false data is false data: if the vuln/IDS correlation isn't reliable on its own, there's no reason to believe it becomes reliable just because you start with the IoCs and apply it afterwards.

One of the reasons the data isn't robust is that IDS events do not mean what you think they mean. Most people in our industry treat them as "magic": if an IDS triggers on a "FREAK" attack, then that's what happened.

But that's not what happened. First of all, there is the issue of false-positives, whereby the system claims a "FREAK" attack happened when nothing related to the issue occurred. Looking at various IDSs, this should be rare for FREAK, but it happens for other kinds of attacks.

Then there is the issue of another level of false-positives. It's plausible, for example, that older browsers, email clients, and other systems may accidentally be downgrading to "export" ciphers simply because these are the only ciphers old and new computers have in common. Thus, you'll see a lot of "FREAK" events, where this downgrade did indeed occur, but not for malicious reasons.

In other words, this is not truly a false-positive, because the bad thing really did occur, but it is a semi-false-positive, because it was not malicious.

Then there is the problem of misunderstood events. For FREAK, both client and server must be vulnerable -- and clients reveal their vulnerability in every SSL request. Therefore, some IDSs trigger on that, telling you about vulnerable clients. The EmergingThreats rules have one called "ET POLICY FREAK Weak Export Suite From Client (CVE-2015-0204)". The key word here is "POLICY" -- it's not an attack signature but a policy signature.

But a lot of people are confused and think it's an attack. For example, this website lists it as an attack.

If somebody has these POLICY events enabled, then it will appear that their servers are under constant attack with the FREAK vulnerability, as random people around the Internet with old browsers/clients connect to their servers, regardless of whether the server itself is vulnerable.

Another source of semi-false-positives are vulnerability scanners, which simply scan for the vulnerability without fully exploiting/attacking the target. Again, this is a semi-false-positive, where it is correctly identified as FREAK, but incorrectly identified as an attack rather than a scan. As other critics of the Verizon report have pointed out, people have been doing Internet-wide scans for this bug. If you have a server exposed to the Internet, then it's been scanned for "FREAK". If you have internal servers, but run vulnerability scanners, they have been scanned for "FREAK". But none of these are malicious "attacks" that can be correlated according to the Verizon DBIR methodology.

Lastly, there are "real" attacks. There are no real FREAK attacks, except maybe twice in Syria when the NSA needed to compromise some SSL communications. And the NSA never does something where they can get caught. Therefore, no IDS event identifying "FREAK" has ever been a true attack.

So here's the thing. Knowing all this, we can reduce the factors in the Verizon DBIR methodology. The factor "has the system been attacked with FREAK?" reduces to "does the system support SSL?", because according to the IDS, every SSL-supporting system has been attacked with FREAK. Furthermore, since people apply either all or none of the Microsoft patches, we don't ask "is the system vulnerable to FREAK?" so much as "has it been patched recently?".

Thus, the Verizon DBIR methodology becomes:

1. has the system been compromised?
2. has the system been patched recently?
3. does the system support SSL?

If all three answers are "yes", then it claims the system was compromised with FREAK. As you can plainly see, this is idiotic methodology.

In the case of FREAK, we already knew the right answer, and worked backward to find the flaw. But in truth, all the other vulnerabilities have the same flaw, for related reasons. The root of the problem is that people just don't understand IDS information. They, like Verizon, treat the IDS as some sort of magic black box or oracle, and never question the data.


An IDS is a wonderfully useful tool if you pay attention to how it works and why it triggers on the things it does. It's not, however, an "intrusion detection" tool, whereby every event it produces should be acted upon as if it were an intrusion. It's not a magical system -- you really need to pay attention to the details.

Verizon didn't pay attention to the details. They simply dumped the output of an IDS inappropriately into some sort of analysis. Since the input data was garbage, no amount of manipulation and analysis would ever produce a valid result.

False-positives: Notice I list a range of "false-positives", from things that might trigger that have nothing to do with FREAK, to things that are FREAK but aren't attacks, and which cannot be treated as "intrusions". Such subtleties are why we can't have nice things in infosec. Everyone studies "false-positives" when cramming for their CISSP exam, but few truly understand them.

That's why when vendors claim "no false positives" they are blowing smoke. The issue is much more subtle than that.

Wednesday, May 04, 2016

Vulns are sparse, code is dense

The question posed by Bruce Schneier is whether vulnerabilities are "sparse" or "dense". If they are sparse, then finding and fixing them will improve things. If they are "dense", then all this work put into finding/disclosing/fixing them is really doing nothing to improve things.

I propose a third option: vulns are sparse, but code is dense.

In other words, we can secure specific things, like OpenSSL and Chrome, by researching the heck out of them, finding vulns, and patching them. The vulns in those projects are sparse.

But, the amount of code out there is enormous, considering all software in the world. And it changes fast -- adding new vulns faster than our feeble efforts at disclosing/fixing them.

So measured across all software, no, the security community hasn't found any significant amount of bugs. But when looking at critical software, like OpenSSL and Chrome, I think we've made great strides forward.

More importantly, let's ignore the actual benefits/costs of fixing bugs for the moment. What all this effort has done is teach us about the nature of vulns. Critical software is written today in a vastly more secure manner than it was in the 1980s, 1990s, or even the 2000s. Windows, for example, is vastly more secure. Sure, others are still lagging (car makers, medical device makers), but they are quickly learning the lessons and catching up. Finding a vuln in an iPhone is hard -- so hard that hackers can earn $1 million from the NSA for doing it, rather than stealing your credit card info. 15 years ago, the opposite was true. The NSA didn't pay for Windows vulns because they fell out of trees, and hackers made more money from hacking your computer.

My point is this: the "are vulns sparse/dense?" framing puts a straitjacket around the debate. I see it from an orthogonal point of view.

Tuesday, May 03, 2016

Satoshi: how Craig Wright's deception worked

My previous post shows how anybody can verify Satoshi using a GUI. In this post, I'll do the same, with command-line tools (openssl). It's just a simple application of crypto (hashes, public-keys) to the problem.

I go through this step-by-step discussion in order to demonstrate Craig Wright's scam. Dan Kaminsky's post and the redditors come to the same point through a different sequence, but I think my way is clearer.

Step #1: the Bitcoin address

We know certain Bitcoin addresses correspond to Satoshi Nakamoto him/her self. For the sake of discussion, we'll use the address 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP. It's actually my address, but we'll pretend it's Satoshi's. In this post, I'm going to prove that this address belongs to me.

The address isn't the public-key, as you'd expect, but the hash of the public-key. Hashes are a lot shorter, and easier to pass around. We only pull out the public-key when we need to do a transaction. The hashing algorithm is basically base58(ripemd160(sha256(public-key))).

Step #2: You get the public-key

Hashes are one-way, so given a Bitcoin address, we can't immediately convert it into a public-key. Instead, we have to look it up in the blockchain, the vast public ledger that is at the heart of Bitcoin. The blockchain records every transaction, and is approaching 70 gigabytes in size.

To find an address's matching public-key, we have to search for a transaction where the bitcoin is spent. If an address has only received Bitcoins, then its matching public-key won't appear in the Blockchain. In that case, a person trying to prove their identity will have to tell you the public-key, which is fine, of course, since the keys are designed to be public.

Luckily, there are lots of websites that store the blockchain in a database and make it easy for us to browse. The URL to my address is:

There is a list of transactions here where I spend coin. Let's pick the top one, at this URL:

Toward the bottom are the "scripts". Bitcoin has a small scripting language, allowing complex transactions to be created, but most transactions are simple. There are two common formats for these scripts, an old format and a new format. In the old format, you'll find the public-key in the Output Script. In the new format, you'll find the public-key in the Input Scripts. It'll be a long number starting with "04".

In this case, my public-key is:


You can verify that this hashes to my Bitcoin address using the algorithm mentioned above.

Step #3: You format the key according to OpenSSL

OpenSSL wants the public-key in its own format (wrapped in ASN.1 DER, then encoded in BASE64). I should just embed a JavaScript form to do it directly in this post, but I'm lazy. Instead, use the following code in the file "foo.js":

// foo.js: convert a raw secp256k1 public-key (hex, from the command-line)
// into PEM format, writing the result to pub.pem
KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
args = process.argv.slice(2);
pemKey = sec.encodePublic(args[0], 'raw', 'pem');
require('fs').writeFileSync('pub.pem', pemKey);

Then run:

npm install key-encoder

node foo.js 04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099

This will output the following file pub.pem:

-----END PUBLIC KEY-----

To verify that we have a correctly formatted OpenSSL public-key, we do the following command. As you can see, the hex of the OpenSSL public-key agrees with the original hex above 04b19ffb... that I got from the Blockchain: 

$ openssl ec -in pub.pem -pubin -text -noout
read EC key
Private-Key: (256 bit)
ASN1 OID: secp256k1

Step #4: I create a message file

What we are going to do is sign a message. That could be a message you create and ask me to sign, or I can simply create my own message file.

In this example, I'm going to use the file message.txt:

Robert Graham is Satoshi Nakamoto

Obviously, if I can sign this file with Satoshi's key, then I'm the real Satoshi.

There's a problem here, though. The message I choose might be too long (such as when choosing a large work of Sartre). Or, in this case, depending on how you copy/paste the text into a file, it may end with varying "line-feeds" and "carriage-returns".

Therefore, at this stage, I may instead just choose to hash the message file into something smaller and more consistent. I'm not going to in my example, but that's what Craig Wright does in his fraudulent example. And it's important.

BTW, if you just echo from the command-line, or use 'vi' to create a file, it'll automatically append a single line-feed. That's what I assume for my message. In hex you should get:

$ xxd -i message.txt
unsigned char message_txt[] = {
  0x52, 0x6f, 0x62, 0x65, 0x72, 0x74, 0x20, 0x47, 0x72, 0x61, 0x68, 0x61,
  0x6d, 0x20, 0x69, 0x73, 0x20, 0x53, 0x61, 0x74, 0x6f, 0x73, 0x68, 0x69,
  0x20, 0x4e, 0x61, 0x6b, 0x61, 0x6d, 0x6f, 0x74, 0x6f, 0x0a
};
unsigned int message_txt_len = 34;
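You can check those exact bytes programmatically. A quick node.js sketch (the same language as the other code in this post):

```javascript
// The exact bytes matter: a single trailing line-feed gives 34 bytes.
// A CR-LF ending, or no newline at all, changes the hash -- and therefore
// the signature -- entirely.
const msg = Buffer.from("Robert Graham is Satoshi Nakamoto\n");
console.log(msg.length);                   // 34
console.log(msg[msg.length - 1] === 0x0a); // true: trailing line-feed
```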

Step #5: I grab my private-key from my wallet

To prove my identity, I extract my private-key from my wallet file, and convert it into an OpenSSL file in a method similar to that above, creating the file priv.pem (the sister of the pub.pem that you create). I'm skipping the steps, because I'm not actually going to show you my private key, but they are roughly the same as above. Bitcoin-qt has a little "dumprivkey" command that'll dump the private key, which I then wrap in OpenSSL ASN.1. If you want to do this, I used the following node.js code, with the "base-58" and "key-encoder" dependencies.

// Convert a Bitcoin WIF private-key (base58, from the command-line)
// into PEM format, writing the result to priv.pem
Base58 = require("base-58");
KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
var args = process.argv.slice(2);
var x = Base58.decode(args[0]);
x = x.slice(1);              // strip the leading version byte
if (x.length == 36)
    x = x.slice(0, 32);      // drop the 4-byte checksum, keeping the 32-byte key
pemPrivateKey = sec.encodePrivate(x, 'raw', 'pem');
require('fs').writeFileSync('priv.pem', pemPrivateKey);

Step #6: I sign the message.txt with priv.pem

I then sign the file message.txt with my private-key priv.pem, and save the base64 encoded results in sig.b64.

openssl dgst -sign priv.pem message.txt | base64 >sig.b64

This produces the file sig.b64, which has the following contents:


How signing works is that it first creates a SHA256 hash of the file message.txt, then signs that hash with the secp256k1 private-key (ECDSA). It wraps the result in an ASN.1 DER binary file. Sadly, there's no native BASE64 output format, so I have to encode it in BASE64 myself in order to post it on this page, and you'll have to BASE64 decode it before you use it.

Step #7: You verify the signature

Okay, at this point you have three files. You have my public-key pub.pem, my message message.txt, and the signature sig.b64.

First, you need to convert the signature back into binary:

base64 -d sig.b64 > sig.der

Now you run the verify command:

openssl dgst -verify pub.pem -signature sig.der message.txt

If I'm really who I say I am, then you'll see the result:

Verified OK

If something has gone wrong, you'll get the error:

Verification Failure

How we know the Craig Wright post was a scam

This post is structured similarly to Craig Wright's post, and from the differences we'll figure out how he did his scam.

As I point out in Step #4 above, a large file (like a work from Sartre) would be difficult to work with, so I could just hash it, and put the binary hash into a file. It's really all the same, because I'm creating some arbitrary un-signed bytes, then signing them.

But here's the clever bit. If you've been paying attention, you'll notice that the Sartre file has been hashed twice by SHA256 before the hash was signed. In other words, the signed value looks like:

SHA256( SHA256( Sartre ) )
Now let's go back to Bitcoin transactions. Transactions are signed by first hashing twice:

SHA256( SHA256( transaction ) )
Notice that the algorithms are the same. That's how Craig Wright tried to fool us. Unknown to us, he grabbed a transaction from the real Satoshi, and grabbed the initial hash (see Update below for contents). He then claimed that his "Sartre" file had that same hash:


Which, signed (hashed again, then run through the signing step), becomes:


That's a lie. How are we supposed to know? After all, we aren't going to type in a bunch of hex digits then go search the blockchain for those bytes. We didn't have a copy of the Sartre file to calculate the hash ourselves.

Now, when hashed and signed, the results from openssl exactly match the results from that old Bitcoin transaction. Craig Wright magically appears to have proven he knows Satoshi's private-key, when in fact he's copied the inputs/outputs and made us think we calculated them.

It would've worked, too, but there are too many damn experts in the blockchain community who immediately picked up on the subtle details. There are too many people willing to type in all those characters. Once typed in, it's a simple matter of googling them to find them in the blockchain.

Also, it looks as suspicious as all hell. He explains the trivial bits, like "what is hashing", with odd references to old publications, but then leaves out important bits. I had to write code in order to extract my own private-key from my wallet and make it into something that OpenSSL would accept -- a step he didn't actually have to go through, and thus didn't have to document.


Both Bitcoin and OpenSSL are just straightforward applications of basic crypto. It's because they share those basics that this crossover works. And it's by applying our basic crypto knowledge to the problem that we catch him in the lie.

I write this post not really to catch Craig Wright in a scam, but to help teach basic crypto. Working backwards from this blogpost, learning the bits you didn't understand, will teach you the important basics of crypto.


To verify that I have that Bitcoin address, you'll need the three files:


-----END PUBLIC KEY-----

Robert Graham is Satoshi Nakamoto


Now run the following command, and verify it matches the hex value for the public-key that you found in the transaction in the blockchain:

openssl ec -in pub.pem -pubin -text -noout

Now verify the message:

base64 -d sig.b64 > sig.der
openssl dgst -verify pub.pem -signature sig.der message.txt


The lie can be condensed into two images. The first is excerpts from his post, where he claims the file "Sartre" has the specific sha256sum and contains the shown text:

But, we know that this checksum matches instead an intermediate step in the 2009 Bitcoin transaction, which if put in a file, would have the following contents:

The sha256sum result is the same in both cases, so either I'm lying or Craig Wright is. You can verify for yourself which one is lying by creating your own Sartre file from this base64 encoded data (copy/paste into file, then base64 -d > Sartre to create binary file).


I got this file after spending an hour looking for the right tool. Transactions are verified using a script within the transaction itself. At some intermediate step, it transmogrifies the transaction into something else, then verifies it. It's this transmogrified form of the transaction that we need to grab for the contents of the "Sartre" file.

Monday, May 02, 2016

Satoshi: That's not how any of this works

In this WIRED article, Gavin Andresen says why he believes Craig Wright's claim to be Satoshi Nakamoto:
“It’s certainly possible I was bamboozled,” Andresen says. “I could spin stories of how they hacked the hotel Wi-fi so that the insecure connection gave us a bad version of the software. But that just seems incredibly unlikely. It seems the simpler explanation is that this person is Satoshi.”
That's not how this works. That's not how any of this works.

The entire point of Bitcoin is that it's decentralized. We don't need to take Andresen's word for it. We don't need to take anybody's word for it. Nobody needs to fly to London and check it out on a private computer. Instead, you can just send somebody the signature, and they can verify it themselves. That the story was embargoed means nothing -- either way, Andresen was constrained by an NDA. Since they didn't do it the correct way, and were doing it the roundabout way, the simpler explanation is that he was being bamboozled.

Below is an example of this, using the Electrum Bitcoin wallet software:

This proves that the owner of the Bitcoin Address has signed the Message, producing the Signature. I typed the first two fields, then hit the "Sign" button. The wallet looked up the address in my wallet (which can have many addresses), found the matching private key that only I possess, then signed the message by filling in the bottom window.

If you had reason to believe that this address belonged to Satoshi Nakamoto, such as if it had been the first blocks, then I would have just proven to you that I am indeed Satoshi. You wouldn't need to take anybody's word for it. You'd simply type in the fields (or copy/paste), hit "verify", and verify for yourself.

So you can verify me, here are the strings you can copy/paste:

Robert Graham is Satoshi Nakamoto
You should get either a "Signature verified" or "Wrong signature" message when you click the "Verify" button.

There may be a little strangeness since my original message is ASCII, but if you copy it out of this webpage, it'll go in as Unicode. It appears that this formatting information is ignored in the verification process, though, so it'll still work.


Occam's Razor is that Andresen was tricked. There was no reason to fly him to London otherwise. They could've just sent him an email with a message, a signature, and an address, and Andresen could've verified it himself.

Sunday, May 01, 2016

Touch Wipe: a question for you lawyers

Whether the police can force you to unlock your iPhone depends upon technicalities. They can't ask you for your passcode, because that would violate the 5th Amendment right against "self incrimination". On the other hand, they can force you to press your finger on the TouchID button, or (as it has been demonstrated) unlock the phone themselves using only your fingerprint.

So I propose adding a new technicality into the mix: "Touch Wipe". In addition to recording fingerprints to unlock the phone, Apple/Android should add the feature where users record fingerprints to wipe (erase) the phone. For example, I may choose my thumb to unlock, and my forefinger to wipe.

Indeed, I may record only one digit to unlock, and all nine remaining digits to wipe. Or even, I may decide to record all 10 digits on both hands to wipe, and not use Touch ID at all to unlock (relying solely on the passcode).

This now presents the problem for the police. They can't force me to unlock the phone. They can't get around that by using my fingerprints, because they might inadvertently destroy evidence.

The legal system is resilient against legal trickery such as this. If you think you've figured out a way to beat the system, it's usually because you just don't understand the system well enough. But I think I've figured out how to beat this system, so I write this up so that lawyers can explain why I'm wrong.