Wednesday, September 28, 2016

Some technical notes on the PlayPen case

In March of 2015, the FBI took control of a Tor onion child-porn website ("PlayPen"), then used an 0day exploit to upload malware to visitors' computers, in order to identify them. There is some controversy over the warrant they used, and over government mass hacking in general. However, much of the discussion misses some technical details, which I thought I'd discuss here.

IP address

In a post on the case, Orin Kerr claims:
retrieving IP addresses is clearly a search
He is wrong, at least in the general case. Uploading malware to gather other things (hostname, username, MAC address) is clearly a search. But discovering the IP address is a different thing.

Today's homes contain many devices behind a single router. The home has only one public IP address, that of the router. All the other devices have local IP addresses. The router then does network address translation (NAT) in order to convert outgoing traffic to all use the public IP address.

The FBI sought the public IP address of the NAT/router, not the local IP address of the perp's computer. The malware ("NIT") didn't search the computer for the IP address. Instead the NIT generated network traffic, destined to the FBI's computers. The FBI discovered the suspect's public IP address by looking at their own computers.
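The mechanics are easy to demonstrate: the address a server records comes from the arriving packets themselves, not from anything read off the client's disk. Here's a minimal sketch using Python sockets; a loopback connection stands in for the NAT'd path, so the address seen here is 127.0.0.1 rather than a router's public IP.

```python
import socket
import threading

# "FBI server" side: the peer address comes from the connection itself.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

seen = []
def accept_one():
    conn, addr = srv.accept()       # addr[0] is the address seen at *this* end
    seen.append(addr[0])            # behind NAT, that's the router's public IP
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# "Client" side: merely making the connection reveals the address; nothing
# is searched for on the client machine.
cli = socket.create_connection(("127.0.0.1", port))
cli.close()
t.join()
srv.close()

print(seen[0])   # 127.0.0.1 here; across a real NAT, the public address
```

Across a real NAT, `addr[0]` would be the router's public address -- exactly the thing the FBI read off its own servers.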

Historically, there have been similar ways of getting this IP address (from a hidden Tor user) without "hacking". In the past, Tor used to leak DNS lookups, which would often lead to the user's ISP, or to the user's IP address itself. Another technique was to provide rich-content files (like PDFs) or video files that the user would have to download to view, and which would then contact the Internet (i.e., the FBI's computers) directly, bypassing Tor.

Since the Fourth Amendment is about where the search happens, and not what is discovered, it's not a search to find the IP address in packets arriving at FBI servers. How the FBI discovered the IP address may be a search (running malware on the suspect's computer), but the public IP address itself doesn't necessarily mean a search happened.

Of course, uploading malware just to transmit packets to an FBI server, so that the IP address can be read from those packets, is still problematic. It's gotta be something that requires a warrant, even though it's not precisely the malware searching the machine for its IP address.

In any event, even setting aside the IP address, PlayPen searches still happened for the hostname, username, and MAC address. Imagine the FBI gets a search warrant, shows up at the suspect's house, and finds no child porn. They then look at the WiFi router, and find that the suspected MAC address is indeed connected. They then use other tools to find that the device with that MAC address is located in the neighbor's house -- a neighbor who has been piggybacking off the WiFi.


It's a pre-crime warrant (#MinorityReport)

The warrant allows the exploit/malware/search to be used whenever somebody logs in with a username and password.

The key thing here is that the warrant includes people who have not yet created an account on the server at the time the warrant is written. They will connect, create an account, log in, then start accessing the site.

In other words, the warrant includes people who have never committed a crime when the warrant was issued, but who first commit the crime after the warrant. It's a pre-crime warrant. 

Sure, it's possible in any warrant to catch pre-crime. For example, a warrant for a drug dealer may also catch a teenager making their first purchase of drugs. But this seems quantitatively different. It's not targeting the known/suspected criminal -- it's targeting future criminals.

This could easily be solved by limiting the warrant to only accounts that have already been created on the server.


It's more than an anticipatory warrant

People keep saying it's an anticipatory warrant, as if this explains everything.

I'm not a lawyer, but even I can see that this explains only that the warrant anticipates future probable cause. "Anticipatory warrant" doesn't explain that the warrant also anticipates a future place to be searched. As far as I can tell, "anticipatory place" warrants don't exist, and would be a clear violation of the Fourth Amendment. It makes this look like a "general warrant", which the Fourth Amendment was designed to prevent.

Orin's post includes some "unknown place" examples -- but those specify something else in particular. A roving wiretap names a person, and the "place" is whatever phone they use. In contrast, this PlayPen warrant names no person. Orin thinks that the problem may be that more than one person is involved, but he is wrong. A warrant can (presumably) name multiple people, or you can have multiple warrants, one for each person. Instead, the problem here is that no person is named. It's not "Rob's computer", it's "the computer of whoever logs in". Even if the warrant were ultimately for a single person, it'd still be problematic because the person is not identified.

Orin cites another case, where the FBI places a beeper into a package in order to track it. The place, in this case, is the package. Again, this is nowhere close to this case, where no specific/particular place is mentioned, only a type of place. 

This could easily have been resolved. Most accounts were created before the warrant was issued. The warrant could simply have listed all the usernames, saying the computers of those using these accounts are the places to search. It's a long list of usernames (1,500?), but if you can't include them all in a single warrant, in this day and age of automation, I'd imagine you could easily create 1,500 warrants.

It's malware

As a techy, the name for what the FBI did is "hacking", and the name for their software is "malware" not "NIT". The definitions don't change depending upon who's doing it and for what purpose. That the FBI uses weasel words to distract from what it's doing seems like a violation of some sort of principle.



Conclusion

I am not a lawyer, I am a revolutionary. I care less about precedent and more about how a Police State might abuse technology. That a warrant can be issued whose condition amounts to "whoever logs into the server" seems like a scary potential for abuse. That a warrant can be designed to catch pre-crime seems even scarier, like science fiction. That a warrant might not be issued for something called "malware", but would be issued for something called a "NIT", scares me the most.

This warrant could easily have been narrower. It could have listed all the existing account holders. It could've been even narrower, for account holders where the server logs prove they've already downloaded child porn.

Even then, we need to be worried about FBI mass hacking. I agree that FBI has good reason to keep the 0day secret, and that it's not meaningful to the defense. But in general, I think courts should demand an overabundance of transparency -- the police could be doing something nefarious, so the courts should demand transparency to prevent that.

Beware: Attribution & Politics

tl;dr - Digital location data can be inherently wrong, and it can be spoofed. Blindly assuming that it is accurate can make an ass out of you on Twitter and when regulating drones.

Guest contributor and friend of Errata Security Elizabeth Wharton (@LawyerLiz) is an attorney and host of the technology-focused weekly radio show "Buzz Off with Lawyer Liz" on America's Web Radio (listen live  each Wednesday, 2-3:00pm eastern; find  prior podcasts here or via iTunes - Lawyer Liz) This post is merely her musings and not legal advice.

Filtering through various campaign and debate analysis on social media, a tweet caught my eye. The message itself was not the concern, and the underlying image has since been determined to be fake. Rather, I was stopped by the 140-character tweet's absolute certainty that internet user location data is infallible. The author presented a data map as proof without question, caveat, or other investigation. Boom, mic drop - attribution!

According to the tweeting pundit, "Russian trollbots" are behind the #TrumpWon hashtag trending on Twitter.
The proof? The Twitter post claims that Trendsmap showed the initial hashtag tweets as originating from accounts located in Russia. Within the first hour, the tweet and accompanying map graphic were "liked" 1,400 times and retweeted 1,495 times. A gotcha moment, because a pew-pew map showed that the #TrumpWon hashtag originated from Twitter accounts located in Russia.

Except, not so fast. First, Trendsmap has since clarified that the map and data in the tweet above are not theirs (the Washington Post details the faked data/map). Moreover, location data is tricky. According to the Trendsmap FAQ page, they use the location provided in a user's profile and GeoIP provided by Google. Google's GeoIP is crafted using a proprietary system and other databases such as MaxMind. IP mapping is not an exact art. Kashmir Hill, editor of Fusion's Real Future, and David Maynor delved into the issues and inaccuracies of IP mapping earlier this year. Kashmir wrote extensively on their findings and on how phantom IP addresses and MaxMind's use of randomly selected default locations created digital hells for individuals all over the country - Internet Mapping Glitch Turned Random Farm into Digital Hell.

Reliance on such mapping and location information as an absolute has tripped up law enforcement and is poised to trip up the drone industry. Certain lawmakers like to point to geofencing and other location applications as security and safety cure-all solutions. Sen. Schumer (D-N.Y.) previously included geofencing as a key element of his 2015 drone safety bill.  Geofencing as a safety measure was mentioned during Tuesday's U.S. House Small Business Committee hearing on Commercial Drone Operations. With geofencing, the drone is programmed to prohibit operations above a certain height or to keep out of certain locations.  Attempt to fly in a prohibited area and the aircraft will automatically shut down.  Geofencing relies on location data, including geospatial data collected from a variety of sources.  As seen with GeoIP, data can be wrong.  Additionally, the data must be interpreted and analyzed by the aircraft's software systems.  Aircraft systems are not built with security first, in some cases basic systems security measures have been completely overlooked.  With mandatory geofencing, wrong data or spoofed (hacked) data can ground the aircraft.
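To see why this matters, here's a toy geofence check (the no-fly zone coordinates and positions are hypothetical, invented for illustration). The aircraft can only compare the position its sensors report against the forbidden zone; if that position data is wrong or spoofed, the check is garbage-in, garbage-out.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical no-fly zone: center latitude, longitude, radius in km.
NO_FLY = (38.8977, -77.0365, 25.0)

def may_fly(reported_lat, reported_lon):
    """The drone trusts whatever position it is given -- that's the flaw."""
    lat, lon, radius = NO_FLY
    return haversine_km(reported_lat, reported_lon, lat, lon) > radius

true_position = (38.90, -77.04)      # actually inside the zone
spoofed_position = (40.00, -80.00)   # what a hacked GPS feed reports

print(may_fly(*true_position))       # False: honest data grounds the drone
print(may_fly(*spoofed_position))    # True: spoofed data sails through
```

The geofence logic itself can be perfectly correct and still be defeated, because it never sees the drone's true position, only the reported one.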

Location mapping is helpful, one data point among many.  Beware of attribution and laws predicated solely on information that can be inaccurate by design. One errant political tweet blaming Russian twitter users based on bad data may lead to a "Pants on Fire" fact check.  Even if initially correct, a bored 400lb hacker may have spoofed the data.

(post updated to add link to "Buzz Off with Lawyer Liz Show" website and pic per Rob's request)


Sunday, September 18, 2016

Why Snowden won't be pardoned

Edward Snowden (the NSA leaker/whistleblower) won’t be pardoned. I’m not arguing that he shouldn’t be pardoned, but that he won’t be pardoned. The chances are near zero, and the pro-pardon crowd doesn't seem to be doing anything to change this. This post lists a bunch of reasons why. If your goal is to get him pardoned, these are the sorts of things you’ll have to overcome.

The tl;dr list is this:
  • Obama hates whistleblowers
  • Obama loves the NSA
  • A pardon would be betrayal
  • Snowden leaked because he was disgruntled, not because he was a man of conscience (***)
  • Snowden hasn’t yet been convicted
  • Snowden leaked too much
  • Snowden helped Russian intelligence
  • Nothing was found to be illegal or unconstitutional

Saturday, September 17, 2016

Review: "Snowden" (2016)

tldr:

  • If you are partisan toward Snowden, you'll like the movie.
  • If you know little about Snowden, it's probably too long/slow -- you'll be missing the subtext.
  • If you are anti-Snowden, you'll hate it of course.


The movie wasn't bad. I was expecting some sort of over-dramatization, a sort of Bourne-style movie doing parkour through Hong Kong ghettos. Or, I expected a The Fifth Estate sort of movie that was based on the quirky character of Assange. But instead, the movie was just a slight dramatization of the events you (as a Snowden partisan) already know. Indeed, Snowden is a boring protagonist in the movie -- which makes the movie good. All the other characters in the movie are more interesting than the main character. Even the plot isn't all that interesting -- it's just a simple dramatization of what happens -- it's that slow build-up of tension toward the final reveal that keeps your attention.

In other words, it's clear that if you like Snowden, understand the subtext, you'll enjoy riding along on this slow buildup of tension.

Those opposed to Snowden, however, will of course gag on the one-sided nature of the story. There are always two sides to every story. While the film didn't go overboard hyping Snowden's side, it was still partisan, mostly ignoring the opposing side. I can imagine all my friends who work for government walking out in anger, not being able to tolerate this one-sided view of the events. I point this out because with the release of this movie, there's also been a surge in the "Pardon Snowden" movement. No, the chances of that are nil. Even though such a thing seems obvious to you, it's only because you haven't seen the other side.

So if you don't like Snowden, at best you'll be bored, at worst you'll be driven out of the theater in anger.

I don't think the movie stands alone, without all this subtext we already know. So if you haven't been following along with the whole story, I don't think you'll enjoy it.

Finally, there's watching everyone else in the audience. They seemed to like it, and they seemed to "get" the key points the director was trying to make. It was a rather slow Friday night for all the movies being shown, so the theater wasn't empty, but neither was it very full. I'd guess everyone there already had some interest in Snowden. Obviously, from the sign out front, they don't expect as much interest in this film as they do in Bridget Jones' Baby and Blair Witch 2.

Anyway, as I said, if you like Edward Snowden, you'll like Snowden. It's not over the top; it's a fair treatment of his story. I'm looking forward to the sequel.






Sunday, September 11, 2016

What's the testimonial of passwords?

In this case described by Orin Kerr, the judge asks if entering a password has any testimonial value other than "I know the password". Well, rather a lot. A password is content. While it's a foregone conclusion that this encrypted drive here in this case belongs to the suspect, the password may unlock other things that currently cannot be tied to the suspect. Maybe the courts have an answer to this problem, but in case they haven't, I thought I'd address it from a computer-science point of view.


Firstly, we have to address the phrasing of entering a password, rather than disclosing the password. Clearly, the court is interested in only the content of the disk drive the password decrypts, and uninterested in the password itself. Yet, entering a password is the same as disclosing it. Technically, there's no way to enter a password in such a way that it can't be recorded. I don't know the law here, and whether courts would protect this disclosure, but for the purposes of this blog post, "entering" is treated the same as "disclosing".

Passwords have content. This post focuses on one real, concrete example, but let's consider some hypothetical cases first.

As is well-known, people often choose the birth dates of their children as the basis for passwords. Imagine a man has a password "emily97513" -- and that he has an illegitimate child named "Emily" who was born on May 13, 1997. Such a password would be strong evidence in a paternity suit.

As is well-known, people base passwords on sports teams. Imagine a password is "GoBears2017", strong evidence the person is a fan of the Chicago Bears, despite testimony in some case that he's never been to Chicago.

Lastly, consider a password "JimmyHoffaDieDieDie" in a court case where somebody is suspected of having killed Jimmy Hoffa.

But these are hypotheticals; now let's consider a real situation with passwords. Namely, good passwords are unique. By unique we mean that a good password is so strange that nobody else would ever have chosen it.

For example, Wikileaks published many "insurance" files -- encrypted files containing leaks that nobody could decrypt. This allowed many people to mirror leak data without actually knowing the contents of the leaks. In a book on Wikileaks, the Guardian inadvertently disclosed that the password to the Manning leaks was ACollectionOfDiplomaticHistorySince_1966_ToThe_PresentDay#. It was then a simple matter of attempting to decrypt the many Wikileaks insurance files until the right one was found.

In other words, the content of the password was used to discover the files it applied to.
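A sketch of that idea: given a disclosed password, test it against many encrypted blobs until one decrypts sensibly. The filenames, passwords, and "magic" header here are hypothetical, and a toy XOR keystream stands in for the real AES encryption; the point is only the search pattern.

```python
import hashlib

MAGIC = b"LEAK"   # hypothetical plaintext header marking a correct decryption

def keystream(password, n):
    """Derive n pseudo-random bytes from a password (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(password, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, keystream(password, len(plaintext))))

decrypt = encrypt   # XOR with the same keystream is its own inverse

# Three "insurance files", each encrypted under a different password.
files = {
    "insurance1.aes": encrypt("pw-a", MAGIC + b" cable archive"),
    "insurance2.aes": encrypt("pw-b", MAGIC + b" the big leak"),
    "insurance3.aes": encrypt("pw-c", MAGIC + b" something else"),
}

disclosed = "pw-b"   # the password that leaked
matches = [name for name, blob in files.items()
           if decrypt(disclosed, blob).startswith(MAGIC)]
print(matches)   # the password identifies which file it opens
```

With real AES the check would be a successful authenticated decryption rather than a magic header, but the search is the same: the password's content selects the file.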

Another example is password leaks. Major sites like LinkedIn regularly get hacked and have their account details dumped on the Internet. Sites like HaveIBeenPwned.com track such leaks. Given a password, it's possible to search these dumps for corresponding email addresses. Thus, hypothetically, once law enforcement knows a person's password, they can search for email accounts the user might hold that they might not previously have known about.
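That kind of search is trivial to sketch. The dump below is hypothetical (invented emails and passwords, in a simple "email:password" form, the way many real dumps are formatted):

```python
# Hypothetical breach dump; given a suspect's known password, find the
# accounts (email addresses) that used the same password.
dump = [
    "alice@example.com:hunter2",
    "bob@example.com:kaJVD7VqcR",
    "carol@example.com:letmein",
    "bob.other@example.net:kaJVD7VqcR",
]

known_password = "kaJVD7VqcR"   # learned via the compelled disclosure

accounts = [line.split(":", 1)[0] for line in dump
            if line.split(":", 1)[1] == known_password]
print(accounts)   # ['bob@example.com', 'bob.other@example.net']
```

Because good passwords are unique, any account sharing the password is almost certainly the same person.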

Statistically, passwords are even more unique (sic) than fingerprints, DNA testing, and other things police regularly rely upon (though often erroneously) as being "unique". Consider the password kaJVD7VqcR. While it's only 10 characters long, it's completely unique. I just googled it to make sure -- and got zero hits. The chances of another random 10-character password matching this one are about one in 10^18. In other words, if a billion people each chose a billion random passwords, only then would you have a chance that somebody would pick this same random password.
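The arithmetic behind that figure, assuming the 10 characters are drawn from the 62 upper/lowercase letters and digits:

```python
import math

# 10 characters from a 62-symbol alphabet (A-Z, a-z, 0-9).
keyspace = 62 ** 10
print(keyspace)               # ~8.4e17 possible passwords
print(math.log10(keyspace))   # just under 18 decimal digits

# A billion people each choosing a billion random passwords:
guesses = 10**9 * 10**9       # 1e18 attempts
# Expected number of hits on one specific password:
print(guesses / keyspace)     # ~1.2 -- roughly one hit, as claimed
```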


Thus consider the case where the court forces a child porn suspect to enter the password in order to decrypt a TrueCrypt encrypted drive found in his house. The court may consider it a foregone conclusion that the drive is his, and thus Fifth Amendment protections may not apply. However, the content of the password is itself testimonial about all sorts of other things. For example, maybe child pornographers swap drives, so law enforcement tests this password against all other encrypted drives in their possession. They then test this password against all user account information in their possession, such as hidden Tor forums or public LinkedIn-style account dumps. The suspect's unique password is testimonial about all these other things which, before the disclosure of the password, could not be tied to the suspect.

Wednesday, August 31, 2016

A quick lesson in Political Correctness

It's hard to see Political Correctness in action when it's supporting your own political beliefs. It's easier seen from the other side. You can see it in the recent case of football player Colin Kaepernick, who has refused to stand for the national anthem. Many are condemning him, on the grounds that his speech is not politically correct.

For example, ex-teammate Alex Boone criticizes him for disrespecting the flag, because his brother has friends who died in the wars in Iraq. Others in the NFL like Burgess Owen and coach Ron Rivera have made similar statements.

If you think Kaepernick is wrong, then argue that he's wrong. Don't argue that he shouldn't speak on the grounds that he's not Politically Correct, offending veterans, or is a bad citizen.

We live in a country of freedom, where anyone is free to not stand and salute the flag or sing the anthem. So many have grievances of some sort or another that you'd think more would be availing themselves of this freedom. The problem here is not that Kaepernick does it, but that so few others do it as well. The problem here is Political Correctness.

Friday, August 26, 2016

Notes on that StJude/MuddyWatters/MedSec thing

I thought I'd write up some notes on the StJude/MedSec/MuddyWaters affair. Some references: [1] [2] [3] [4].


The story so far

tl;dr: hackers drop 0day on medical device company hoping to profit by shorting their stock

St Jude Medical (STJ) is one of the largest providers of pacemakers (a.k.a. cardiac devices) in the country, with around $2.5 billion in revenue from them, which accounts for about half their business. They provide "smart" pacemakers with an on-board computer that talks via radio waves to a nearby monitor that records the functioning of the device (and health data). That monitor, "Merlin@Home", then talks back up to St Jude (via phone lines, 3G cell phone, or wifi). Pretty much all pacemakers work that way (my father's does, although his is from a different vendor).

MedSec is a bunch of cybersecurity researchers (white-hat hackers) who have been investigating medical devices. In theory, their primary business is to sell their services to medical device companies, to help companies secure their devices. Their CEO is Justine Bone, a long-time white-hat hacker. Despite Muddy Waters garbling the research, there's no reason to doubt that there's quality research underlying all this.

Muddy Waters is an investment company known for investigating companies, finding problems like accounting fraud, and profiting by shorting the stock of misbehaving companies.

Apparently, MedSec did a survey of many pacemaker manufacturers, chose the one with the most cybersecurity problems, and went to Muddy Waters with their findings, asking for a share of the profits Muddy Waters got from shorting the stock.

Muddy Waters published their findings in [1] above. St Jude published their response in [2] above. They are both highly dishonest. I point that out because people want to discuss the ethics of using 0day to short stock when we should talk about the ethics of lying.

Thursday, August 25, 2016

Notes on the Apple/NSO Trident 0days

I thought I'd write up some comments on today's news of the NSO malware using 0days to infect human rights activist phones. For full reference, you want to read the Citizen's Lab report and the Lookout report.


Press: it's news to you, it's not news to us

I'm seeing breathless news articles appear. I dread the next time I talk to my mom, when she's going to ask about it (including "were you involved?"). I suppose it is news to those outside the cybersec community, but for those of us insiders, it's not particularly newsworthy. It's just more government malware going after activists. It's just one more set of 0days.

I point this out in case the press wants to contact me for some awesome-sounding quote about how exciting/important this is. I'll have the opposite quote.


Don't panic: all patches fix 0days

We should pay attention to context: all patches (for iPhone, Windows, etc.) fix 0days that hackers can use to break into devices. Normally these 0days are discovered by the company itself or by outside researchers intending to fix (and not exploit) the problem. What's different here is that where most 0days are just a theoretical danger, these 0days are an actual danger -- currently being exploited by the NSO Group's products. Thus, there's maybe a bit more urgency in this patch compared to other patches.


Don't panic: NSA/Chinese/Russians using secret 0days anyway

It's almost certain the NSA, the Chinese, and the Russians have similar 0days. That means applying this patch makes you safe from the NSO Group (for a while, until they find new 0days), but it's unlikely this patch makes you safe from the others.


Of course it's multiple 0days

Some people are marveling how the attack includes three 0days. That's been the norm for browser exploits for a decade now. There's sandboxes and ASLR protections to get through. There's privilege escalation to get into the kernel. And then there's persistence. How far you get in solving one or more of these problems with a single 0day depends upon luck.


It's actually four 0days

While it wasn't given a CVE number, there was a fourth 0day: the persistence mechanism, which uses the JavaScriptCore binary to run a JavaScript text file. The JavaScriptCore program appears to be only a tool for developers, not needed for the functioning of the phone, and it appears that the iOS 9.3.5 patch disables it. While technically it's not a coding "bug", it's still a design bug. 0days solving the persistence problem (where the malware/implant keeps running when the phone is rebooted) are worth over a hundred thousand dollars all on their own.


That about wraps it up for VEP

VEP is the Vulnerability Equities Process, which is supposed to, but doesn't, manage how the government uses the 0days it acquires.

Agitators like the EFF have been fighting against the NSA's acquisition and use of 0days, as if this makes us all less secure. What today's incident shows is that the acquisition/use of 0days will be widespread around the world, regardless of what the NSA does. It'd be nice to get more transparency about what the NSA is doing through the VEP process, but the reality is the EFF is never going to get anything close to what it's agitating for.


That about wraps it up for Wassenaar

Wassenaar is an international arms-control "treaty". Left-wing agitators convinced the Wassenaar folks to add 0days and malware to the treaty -- with horrific results. There is essentially no difference between bad code and good code, only in how it's used, so the Wassenaar extensions have essentially outlawed all good code and security research.

Some agitators are convinced Wassenaar can still be fixed (it can't). Israel, where NSO Group is based, is not a member of Wassenaar, and thus whatever limitations Wassenaar could come up with would not stop the NSO.

Some have pointed out that Israel frequently adopts Wassenaar rules anyway, but NSO would then simply move the company somewhere else, such as Singapore.

The point is that 0day development is intensely international. There are great 0day researchers throughout the non-Wassenaar countries. It's not like precision tooling for aluminum cylinders (for nuclear enrichment) that can only be made in an industrialized country. Some of the best 0day researchers come from backwards countries, growing up with only an Internet connection.


BUY THAT MAN AN IPHONE!!!

The victim in this case, Ahmed Mansoor, has apparently been hacked many times, including with HackingTeam's malware and Finfisher malware -- notorious commercial products used by evil governments to hack into dissidents' computers.

Obviously, he'll be hacked again. He's a gold mine for researchers in this area. The NSA, anti-virus companies, Apple jailbreak companies, and the like should be jumping over themselves offering this guy a phone. One way this would work is giving him a new phone every 6 months in exchange for the previous phone to analyze.

Apple, of course, should head the list of companies doing this, providing "activist phones" to activists with their own secret monitoring tools installed, so that they can regularly check whether some new malware/implant has been installed.


iPhones are still better, suck it Android

Despite the fact that everybody and their mother is buying iPhone 0days to hack phones, it's still the most secure phone. Androids are open to any old hacker -- iPhones are open only to nation-state hackers.


Use Signal, use Tor

I didn't see Signal on the list of apps the malware tapped into. There's no particular reason for this, other than that NSO hasn't gotten around to it yet. But I thought I'd point out how, yet again, Signal wins.


SMS vs. MitM

Some have pointed to SMS as the exploit vector, which gave Citizen's Lab the evidence that the phone had been hacked.

It's a Safari exploit, so getting the user to visit a web page is required. This can be done over SMS, over email, over Twitter, or over any other messaging service the user uses. Presumably, SMS was chosen because users are more paranoid of links in phishing emails than of links in SMS messages.

However, the way it should be done is with man-in-the-middle (MitM) tools in the infrastructure. Such a tool would wait until the victim visited any webpage via Safari, then magically append the exploit to the page. As Snowden showed, this is apparently how the NSA does it, which is probably why they haven't gotten caught yet after exploiting iPhones for years.

The UAE (the government that is almost certainly trying to hack Mansoor's phone) has, in theory, the control over its infrastructure needed to conduct such a hack. We've already caught other governments doing similar things (like Tunisia). My guess is they were just lazy, and wanted to do it the easiest way for them.





Another lesson in confirmation bias

The biggest problem with hacker attribution is the confirmation bias problem. Once you develop a theory, your mind shifts to distorting evidence trying to prove the theory. After a while, only your theory seems possible as one that can fit all your carefully selected evidence.

You can watch this happen in two recent blogposts [1] [2] by Krypt3ia attributing bitcoin payments to the Shadow Broker hackers as coming from the government (FBI, NSA, TAO). These posts are absolutely wrong. Nonetheless, the press has picked up on the story and run with it [*]. [Note: click on the pictures in this post to blow them up so you can see them better].


The Shadow Brokers published their bitcoin address (19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK), asking for donations to release the rest of their tools. They've received 66 transactions so far, totaling 1.78 bitcoin, or roughly $1000 at today's exchange rate.

Bitcoin is not anonymous but pseudonymous. Bitcoin is a public ledger, with all transactions visible to everyone. Sometimes we can't tie addresses back to people, but sometimes we can. There are a lot of researchers who have spent a lot of time on "taint analysis", trying to track down the real identities of evildoers. Thus, it seems plausible that we might be able to discover the identities of those people making contributions to Shadow Brokers.
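Because the ledger is public, anyone can tally the donations to an address. Here's a sketch of that kind of tally; the transaction records below are hypothetical stand-ins, where a real analysis would walk the actual blockchain or a block-explorer dump.

```python
# Tally donations to one address from (hypothetical) public ledger records.
SB_ADDR = "19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK"

ledger = [
    {"to": SB_ADDR,             "btc": 0.05},
    {"to": "1F1AotherAddress",  "btc": 1.00},   # unrelated transaction
    {"to": SB_ADDR,             "btc": 0.001},
    {"to": SB_ADDR,             "btc": 0.0123},
]

donations = [tx for tx in ledger if tx["to"] == SB_ADDR]
total = sum(tx["btc"] for tx in donations)
print(len(donations), round(total, 4))   # 3 0.0633
```

Taint analysis is the same idea run in reverse and at scale: follow the inputs of each donating transaction back through the ledger until they touch an address with a known owner.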

The first of Krypt3ia's errant blogposts tries to use the Bitcoin taint analysis plugin within Maltego in order to do some analysis on the Shadow Broker address. What he found was links to the Silk Road address -- the address controlled by the FBI since they took down that darknet marketplace several years ago. Therefore, he created the theory that the government (FBI? NSA? TAO?) was up to some evil tricks, such as trying to fill the account with money so that they could then track where the money went in the public blockchain.

But he misinterpreted the links. (He was wrong.) There were no payments from the Silk Road accounts to the Shadow Broker account. Instead, there were people making payments to both accounts. As a prank.

To demonstrate how this prank works, I made my own transaction, where I pay money to the Shadow Brokers (19BY2...), to Silk Road (1F1A...), and to a few other well-known accounts controlled by the government.


The point here is that anybody can do these shenanigans. That government-controlled addresses are involved means nothing. They are public, and anybody can send coin to them.

That blogpost points to yet more shenanigans, such as somebody "rick rolling", as supposed confirmation that TAO hackers were involved. What you see in the picture below is a series of transactions using bitcoin addresses containing the phrase "never gonna give you up", the title of Rick Astley's song (I underlined the words in red).



Far from the government being involved, somebody else took credit for the hack, with the Twitter handle @MalwareTechBlog. In a blogpost [*], he describes what he did. He then proves his identity by signing a message at the bottom of his post, using the same key (the 1never.... key above) in his tricks. Below is a screenshot of how I verified (and how anybody can verify) the key.


Moreover, these pranks should be seen in context. Goofball shenanigans on the blockchain are really, really common. An example is the following transaction:


Notice the vanity bitcoin address transferring money to the Silk Road account. There is also a "Public Note" on this transaction, a feature unique to Blockchain.info -- which recently removed the feature because it was so extensively abused.

Bitcoin also has a feature (the OP_RETURN opcode) whereby 40 bytes of arbitrary data can be added to a transaction. The first transaction sending bitcoins to both Shadow Brokers and Silk Road was this one. If you tell it to "show scripts", you see that it contains an email address for Cryptome, the biggest and oldest Internet leaks site (albeit not as notorious as Wikileaks).
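Such an output script is trivially simple: the OP_RETURN opcode followed by a pushdata byte and the payload. A sketch of how one is built (the email address shown is a placeholder, not Cryptome's real one; the 40-byte limit matches the limit described above, though Bitcoin Core later raised it):

```python
OP_RETURN = 0x6a  # Bitcoin script opcode for provably-unspendable data outputs

def op_return_script(message: bytes) -> bytes:
    """Build an OP_RETURN output script carrying `message`."""
    if len(message) > 40:
        raise ValueError("payload too large for a standard OP_RETURN")
    # opcode, then a one-byte push length (valid for pushes <= 75 bytes),
    # then the payload itself
    return bytes([OP_RETURN, len(message)]) + message

# placeholder email, standing in for the address seen in the real transaction
script = op_return_script(b"cryptome@example.com")
print(script.hex())
```

Since an OP_RETURN output is provably unspendable, nodes don't have to keep it in the UTXO set -- it exists purely to park a message on the blockchain, which is exactly what the prankster did here.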


The point is this: shenanigans and pranks are common on the Internet. What we see with Shadow Brokers is normal trickery. If you are unfamiliar with Bitcoin culture, it may look like some extra special trickery just for Shadow Brokers, but it isn't.


After much criticism of why his first blogpost was wrong, Krypt3ia published a second. The point of the second was to lambaste his critics -- just because he jotted down some idle thoughts in a post, he argues, doesn't make him responsible for journalists like ZDNet picking it up and passing it around as a story.

But he continues to claim that there is somehow evidence of government involvement, even though his original claim of payments from Silk Road was wrong. As he says:
However, my contention still stands that there be some fuckery going on here with those wallet transactions by the looks of it and that the likely candidate would be the government
Krypt3ia then goes on to claim, about the Rick Astley trick:
So yeah, these accounts as far as I can tell so far without going and spending way to many fucking hours on bitcoin.ifo or some such site, were created to purposely rick roll and fuck with the ShadowBrokers. Now, they may be fractions of bitcoins but I ask you, who the fuck has bitcoin money to burn here? Any of you out there? I certainly don’t and the way it was done, so tongue in cheek kinda reminds me of the audacity of TAO…
Who has bitcoin money to burn? The answer is: everyone. Krypt3ia obviously isn't paying attention to the value of the bitcoins here, which is pennies. Each transaction of 0.0001337 bitcoins is worth about 10 cents at current exchange rates, meaning this Rick Roll cost less than $1. It takes minutes to open an account (like at Circle.com) and use your credit card (or debit card) to buy $1 worth of bitcoin and carry out this prank.
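The arithmetic is worth spelling out, since it's the entire rebuttal. Assuming an exchange rate of roughly $600 per bitcoin (approximately the rate at the time) and a handful of prank transactions:

```python
# Back-of-the-envelope cost of the Rick Roll.
btc_per_tx = 0.0001337   # amount seen in each prank transaction
usd_per_btc = 600        # assumed approximate exchange rate at the time
num_txs = 7              # assumed number of prank transactions

per_tx = btc_per_tx * usd_per_btc
total = per_tx * num_txs
print(f"each transaction: ${per_tx:.2f}, whole prank: ${total:.2f}")
```

However you tweak the assumed rate or transaction count, the total stays well under a dollar -- hardly evidence of a nation-state budget.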

He goes on to say:
If you also look at the wallets that I have marked with the super cool “Invisible Man” logo, you can see how some of those were actually transfering money from wallet to wallet in sequence to then each post transactions to Shadow. Now what is that all about huh? More wallets acting together? As Velma would often say in Scooby Doo, JINKY’S! Something is going on there.
Well, no, it's just normal bitcoin transactions. (I've made this mistake too -- learned about it, then forgot about it, then had to relearn it.) A Bitcoin transaction must consume, in whole, each of the previous outputs it refers to. This invariably leaves some bitcoin left over, which has to be transferred back into the user's own wallet as change. Thus, in my hijinx at the top of this post, you see that the address 1HFWw... receives most of the bitcoin. That address was newly created by my wallet back in 2014 to receive the unspent portions of transactions. While it looks strange, it's perfectly normal.
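The change mechanism described above can be sketched in a few lines. Addresses and amounts here are illustrative; real wallets also select which UTXOs to spend, but the accounting is the same:

```python
def build_transaction(utxos, payments, fee):
    """utxos/payments: lists of (address, amount) pairs. Returns the outputs.

    A transaction must consume its input UTXOs in full, so whatever is left
    after the payments and the miner fee goes back to the sender as change.
    """
    total_in = sum(amount for _, amount in utxos)
    total_out = sum(amount for _, amount in payments)
    change = total_in - total_out - fee
    if change < 0:
        raise ValueError("insufficient funds")
    outputs = list(payments)
    if change > 0:
        # leftover coins return to a fresh address in the sender's own wallet
        outputs.append(("1ChangeAddrOwnedBySender", change))
    return outputs

outs = build_transaction(
    utxos=[("1MyOldCoin", 0.01)],                       # spent in full
    payments=[("19BY2ShadowBrokers", 0.0001337),        # the prank payments
              ("1F1ASilkRoad", 0.0001337)],
    fee=0.0001,
)
print(outs[-1])  # the change output carries most of the coin back
```

This is why a blockchain explorer shows money apparently hopping "from wallet to wallet in sequence": each hop is just change returning to a new address the same wallet controls.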

It's easy to point out that Krypt3ia just doesn't understand much about bitcoin, and is getting excited by Maltego output he doesn't understand.

But the real issue is confirmation bias. He's developed a theory, and searches for confirmation of that theory. He says "there are connections that cannot be discounted", when in fact all the connections can easily be discounted with more research and more knowledge. When he gets attacked, he becomes even more motivated to search for reasons why he's actually right. He's not motivated to be proven wrong.


And this is the case with most "attribution" in the cybersec field. We don't have smoking guns (such as bitcoin coming from the Silk Road account), and must make do with flimsy data (like here, bitcoin going to the Silk Road account). Sometimes our intuition is right, and the flimsy data does indeed point us to the hacker. In other cases, it leads us astray, as I've documented before on this blog. The less we understand something, the more it seems to confirm our theory, rather than confirming that we just don't understand. That "we just don't know" is rarely an acceptable answer.

I point this out because I'm always the skeptic when the government attributes attacks to North Korea, China, Russia, Iran, and so on. I've seen them be right sometimes, and I've seen them be absolutely wrong. And when they are wrong, it's easy to figure out why -- because of things like confirmation bias.

Maltego plugin showing my Bitcoin hijinx transaction from above

Creating vanity addresses, for rickrolling or other reasons