Friday, October 16, 2015

DH-1024 in Bitcoin terms

The recent paper on Diffie-Hellman "precomputation" estimates a cost of 45 million core-years. Of course, the NSA wouldn't buy that many computers; it would instead build ASICs to do the work. The most natural analogy is how Bitcoin works. Bitcoin hashes were originally computed on CPU cores, then moved to graphics co-processors, then FPGAs, then finally ASICs.

The current hashrate of Bitcoin is 460,451,594,000 megahashes/second. An Intel x86 core computes about 3 megahashes/second, so the network is equivalent to 153,483,864,667 CPU cores. Divide that by the 45 million core-years needed to precompute a 1024-bit DH prime, and you get about 3,410 DH precomputations per year. Thus, we get the following result:
The ASIC power in the current Bitcoin network could do all the necessary precomputation for a single 1024-bit Diffie-Hellman prime with 154 minutes' worth of work. Or, the precomputation effort is roughly equal to 15 bitcoin blocks, at the current rate.
(Update: I did some math wrong, it's 154 minutes not 23 minutes)
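The arithmetic above can be sketched in a few lines. This is just a back-of-envelope check using the figures from the post; the 3 megahashes/second per-core rate is the post's own assumption, not a measured benchmark.

```python
# Back-of-envelope: Bitcoin network hashrate expressed as CPU-core-equivalents,
# then divided by the paper's 45-million-core-year precomputation estimate.
hashrate_mhs = 460_451_594_000        # Bitcoin network, megahashes/second
core_rate_mhs = 3                     # one x86 core, megahashes/second (assumption)
cores = hashrate_mhs / core_rate_mhs  # ~153.5 billion core-equivalents

precomp_core_years = 45_000_000       # paper's estimate for one 1024-bit DH prime
precomps_per_year = cores / precomp_core_years   # ~3,410 per year

minutes_per_year = 365.25 * 24 * 60
minutes_per_precomp = minutes_per_year / precomps_per_year
print(round(minutes_per_precomp))        # ~154 minutes
print(round(minutes_per_precomp / 10))   # ~15 ten-minute bitcoin blocks
```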

Another way of comparing is to use the website "", which places the equivalent effort of cracking 1024-bit DH at 72 to 80 bits of symmetric crypto. At the current Bitcoin rate, 72 bits of crypto comes out to 15 bitcoin blocks, matching the estimate above. (I assume precomputation is roughly the same amount of work as computing 1024-bit DH.)
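The 72-bit figure can be sanity-checked directly: count the hashes the network performs over 15 blocks and take the base-2 log. This sketch assumes each block represents roughly 10 minutes (600 seconds) of the whole network's hashing and ignores difficulty variance.

```python
import math

# Hashes performed by the whole Bitcoin network over 15 ten-minute blocks,
# expressed as a symmetric-key work factor (bits).
hashrate_hs = 460_451_594_000 * 1_000_000     # hashes/second (4.6e17)
hashes_in_15_blocks = hashrate_hs * 15 * 600  # 15 blocks * 600 seconds each
print(round(math.log2(hashes_in_15_blocks), 1))  # ~71.8, i.e. roughly 72 bits
```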


Pawel Krawczyk said...

I once wrote a passphrase generator with a strength estimator, and after looking at different methods of password-strength estimation I ended up just calculating "how long would it take to crack using today's BTC hash rate". I find it a very reasonable estimate of humanity's current computational power, and you can also pretty well estimate the cost of reproducing that effort for any other hash or crypto algorithm using average prices of mining rigs on the market.

Unknown said...

Unless you can precompute Diffie-Hellman by performing nothing but SHA256 hashing, Bitcoin ASICs will be of no use in this endeavor.

George said...

I recall an old RSA paper from 10+ years ago defending RSA key strength against elliptic-curve methods. The argument was that while the processing estimates for RSA key lengths were correct, you needed some obscene amount of RAM to factor RSA-1024. Have some new methods of factoring been invented that aren't RAM intensive?