Sunday, February 28, 2021

We are living in 1984 (ETERNALBLUE)

In the book 1984, the protagonist questions his sanity, because his memory differs from what appears to be everybody else's memory.

The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed—if all records told the same tale—then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. ‘Reality control’, they called it: in Newspeak, ‘doublethink’.

I know that EternalBlue didn't cause the Baltimore ransomware attack. When the attack happened, the entire cybersecurity community agreed that EternalBlue wasn't responsible.

But this New York Times article said otherwise, blaming the Baltimore attack on EternalBlue. And there are hundreds of other news articles [eg] that agree, citing the New York Times. There are no news articles that dispute this.

In a recent book, the author of that article admits it's not true, that EternalBlue didn't cause the ransomware to spread. But they defend the story as essentially true: EternalBlue is responsible for a lot of bad things, even if, technically, not in this case. Such errors are justified, they argue, as generalizations and simplifications needed for a mass audience.

So we are left with the situation Orwell describes: all records tell the same tale -- when the lie passes into history, it becomes the truth.

Orwell continues:

He wondered, as he had many times wondered before, whether he himself was a lunatic. Perhaps a lunatic was simply a minority of one. At one time it had been a sign of madness to believe that the earth goes round the sun; today, to believe that the past is inalterable. He might be ALONE in holding that belief, and if alone, then a lunatic. But the thought of being a lunatic did not greatly trouble him: the horror was that he might also be wrong.

I'm definitely a lunatic, alone in my beliefs. I sure hope I'm not wrong.




Update: Other lunatics document their struggles with Minitrue:

Saturday, February 27, 2021

Review: Perlroth's book on the cyberarms market

New York Times reporter Nicole Perlroth has written a book on zero-days and nation-state hacking entitled “This Is How They Tell Me The World Ends”. Here is my review.


I’m not sure what the book intends to be. The blurbs from the publisher imply a work of investigative journalism, in which case it’s full of unforgivable factual errors. However, it reads more like a memoir, in which case errors are expected and forgivable, with content often drawn from memory rather than rigorously fact-checked notes.


But even with this more lenient interpretation, there are important flaws that should be pointed out. For example, the book claims the Saudis hacked Bezos with a zero-day. I claim that’s bunk. The book claims zero-days are “God mode” compared to other hacking techniques; I claim they are no better than the alternatives, usually worse, and rarely used.


But I can’t really list all the things I disagree with. It’s no use. She’s a New York Times reporter, impervious to disagreement.


If this were written by a tech journalist, then criticism would be the expected norm. Tech is full of factual claims, like whether 2+2=5, that can be conclusively settled. All journalists make errors -- tech journalists are constantly publishing small revisions correcting their errors after publication.


The best example of this is Ars Technica. They pride themselves on their reader forums, where readers comment, opine, criticize, and correct stories. Sometimes readers add more interesting information to the story, providing free content to other readers. Sometimes they fix errors.


It’s often unpleasant for the journalists, who steel themselves after hitting “Submit…”. They have a lot of practice defending or correcting every assertion they make, against both legitimate and illegitimate criticism. This makes them astoundingly good journalists -- mistakes editors miss, readers don’t. They get trained fast to deal with criticism.


The mainstream press doesn’t have this tradition. To be fair, it couldn’t. Tech forums have techies with knowledge and experience, while the mainstream press has ignorant readers with opinions. Regardless of the story’s original content, it’ll devolve into people arguing about whether Epstein was murdered (for example).


Nicole Perlroth is a mainstream reporter on a techy beat, so you see a conflict between the expectations each side has for the other. Techies expect a tech journalist who’ll respond to factual errors; she doesn’t expect all this criticism. She doesn’t see techie critics for what they are -- subject matter experts who would be useful sources to make her stories better. She sees them as enemies to be ignored. This makes her stories sloppy by technical standards. I hate that this sounds like a personal attack when it’s really more a NYTimes problem -- most of their cyber stories struggle with technical details, regardless of author.


This problem is made worse by the fact that the New York Times doesn’t have “news stories” so much as “narratives”. They don’t have neutral stories reporting what happened, but narratives explaining a larger point.


A good example is this story that blames the Baltimore ransomware attack on the NSA’s EternalBlue. The narrative is that EternalBlue is to blame for damage all over the place, and it uses the Baltimore ransomware as an example. However, EternalBlue wasn’t responsible for that particular ransomware -- as techies point out.


Perlroth doesn’t fix the story. In her book, she instead criticizes techies for focusing on “the technical detail that in this particular case, the ransomware attack had not spread with EternalBlue”, and that techies don’t acknowledge “the wreckage from EternalBlue in towns and cities across the country”.


It’s a bizarre response from a journalist, refusing to fix a falsehood in a story because the rest of the narrative is true.


Some of the book is correct, telling you real details about the zero-day market. I can't say it won't be useful to some readers, though the useful bits are buried in a lot of non-useful stuff. But most of the book is wrong about the zero-day market, a slave to the narrative that zero-days are going to end the world. To be precise, I disagree with the narrative and her policy ideas -- it's up to you to decide for yourself whether that makes them "wrong". Apart from the inaccuracies, a lot is missing -- for example, you really can't understand what a "zero-day" is without also understanding the 40-year history of vuln-disclosure.


I could go on a long spree of corrections, and others have their own long list of inaccuracies, but there’s really no point. She's already defended her book as being more of a memoir than a work of journalistic integrity, so her subjective point of view is what it's about, not facts. Her fundamental narrative of the Big Bad Cyberarms Market is a political one, so any discussion of accuracy will be in service of political sides rather than the side of truth.


Moreover, she’ll just attack me for my “bruised male ego”, as she has already done to other expert critics.


Thursday, February 25, 2021

No, 1,000 engineers were not needed for SolarWinds

Microsoft estimates it would take 1,000 engineers to carry out the famous SolarWinds hacks. In reality, that means it was probably fewer than 100 skilled engineers. I base this claim on the following Tweet:


Yes, it would take Microsoft 1,000 engineers to replicate the attacks. But it takes a large company like Microsoft 10-times the effort to replicate anything. This is partly because Microsoft is a big, stodgy corporation. But this is mostly because this is a fundamental property of software engineering, where replicating something takes 10-times the effort of creating the original thing.

It's like painting. The effort to produce a work is often less than the effort to reproduce it. I can throw some random paint strokes on canvas with almost no effort. It would take you an immense amount of work to replicate those same strokes -- even to figure out the exact color of paint that I randomly mixed together.

Software Engineering

The process of software engineering is about creating software that meets a certain set of requirements, or a specification. Verifying that the software meets that specification is an extremely costly process. It's like building a bridge: forget one piece, and the entire thing collapses.

But code slinging by hackers and open-source programmers works differently. They aren't building toward a spec. They are building whatever they can and whatever they want. It takes a tenth, or even a hundredth of the effort of software engineering. Yes, it usually builds things that few people (other than the original programmer) want to use. But sometimes it produces gems that lots of people use.

Take my most popular code-slinging effort, masscan. I've spent about six months of total effort writing it so far. But if you run code-analysis tools on it, they'll tell you it would take several million dollars to replicate the amount of code I've written. And that's just measuring the bulk code, not the numerous clever capabilities and innovations in it.
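For the curious, here's roughly how such tools arrive at those numbers: most apply a COCOMO-style formula that converts line counts into person-months. This is a sketch of the basic "organic" model; the line count and monthly cost below are illustrative assumptions, not masscan's actual figures.

```python
# Rough sketch of how code-analysis tools price a codebase using the
# basic COCOMO "organic" model. The 100k line count and $10k/month
# cost are illustrative assumptions, not masscan's real numbers.

def cocomo_organic(kloc: float, monthly_cost: float = 10_000.0) -> dict:
    """Estimate effort and cost for `kloc` thousand lines of code
    using the basic COCOMO organic-mode coefficients."""
    effort_pm = 2.4 * kloc ** 1.05          # person-months of effort
    schedule_m = 2.5 * effort_pm ** 0.38    # calendar months
    return {
        "person_months": effort_pm,
        "schedule_months": schedule_m,
        "dollars": effort_pm * monthly_cost,
    }

if __name__ == "__main__":
    estimate = cocomo_organic(kloc=100)     # assume ~100k lines of code
    print(f"~{estimate['person_months']:.0f} person-months, "
          f"~${estimate['dollars'] / 1e6:.1f}M to replicate")
```

Run on a hypothetical 100k-line codebase, the model lands in the "several million dollars" range, which is why these tools make a solo project look like a multi-million-dollar engineering effort.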

According to these metrics, I'm either a 100x engineer (a hundred times better than the average engineer) or my claim is true that "code slinging" is a fraction of the effort of "software engineering".

The same is true of everything the SolarWinds hackers produced. They didn't have to software engineer code according to Microsoft's processes. They only had to sling code to satisfy their own needs. They don't have to train/hire engineers with the skills necessary to meet a specification, they can write the specification according to what their own engineers can produce. They can do whatever they want with the code because they don't have to satisfy somebody else's needs.

Hacking

Something is similarly true with hacking. Hacking a specific target, a specific way, is very hard. Hacking any target, any way, is easy.

Like most well-known hackers, I regularly get emails asking me to hack somebody's Facebook account. That is very hard. I can try a lot of things, and in the end, chances are I won't succeed. On the other hand, if you ask me to hack anybody's Facebook account, I can do that in seconds. I can download one of the many hacker dumps of email addresses, then try to log into Facebook with every address using the password "Password1234". Eventually I'll find somebody who has that password -- I just don't know who.
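The math behind this asymmetry is simple. Assuming (hypothetically) that one account in 10,000 uses the guessed password, the odds look like this:

```python
# Back-of-the-envelope math for why "hack anybody" is easy while
# "hack this specific person" is hard. The 1-in-10,000 prevalence
# is a hypothetical assumption for illustration.

p = 1 / 10_000          # chance any one account uses "Password1234"
n = 1_000_000           # leaked email addresses to try

# Probability that a *specific* target uses that password: just p.
targeted = p

# Probability that at least one of n accounts does: 1 - (1 - p)^n.
opportunistic = 1 - (1 - p) ** n

print(f"specific target succeeds: {targeted:.4%}")
print(f"at least one of a million: {opportunistic:.6f}")
```

With a million addresses, the opportunistic attack succeeds with near certainty, while the targeted attack almost always fails.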

Hacking is overwhelmingly opportunistic. Hackers go into it not being sure who they'll hack, or how they'll hack. They just try a bunch of things against a bunch of targets and see what works. No two hacks are the same. You can't look at one hack and reproduce it exactly against another target.

Well, you reproduce things a bit. Some limited techniques have become "operationalized". A good example is "phishing", sending emails tricking people into running software or divulging a password. But that's usually only the start of a complete attack, getting the initial foothold into a target, rather than the full hack itself.

In other words, hacking is based a lot on luck. You can create luck for yourself by trying lots of things. But it's hard reproducing luck.

This principle of hacking is why Stuxnet is such an incredible achievement. It wasn't opportunistic hacking. It had a very narrow target that could only be hacked in a very narrow way, jumping across an "airgap" to infect the controllers in order to subtly destabilize the uranium centrifuges. With my lifetime of experience with hacking, I'm amazed at Stuxnet.

But SolarWinds was no Stuxnet. Instead, it shows a steady effort over a number of years, capitalizing on the lucky result of one step to then move ahead to the next step. Replicating that chain of luck would be nearly impossible.

Business

Now let's talk about big companies vs. startups. Every month, big companies like Apple, Microsoft, Cisco, etc. are acquiring yet another small startup that has done something that a big company cannot do. These companies often have small (but growing) market share, so it's rarely for the market share alone that big companies acquire small ones.

Instead, it's for the thing that the startup produced. The reason big companies acquire outsiders is again because of the difficulty that insiders would have in reproducing the work. The engineering managers are asked how much it would cost insiders to reproduce the work of the outsiders, the potential acquisition candidate. The answer is almost always "at least 10-times more than what the small company invested in building the thing".

This is reflected in the purchase price, which is often 10 times what the original investors put into the company to build the thing. In other words, Microsoft regularly buys a company for 10 times all the money the original investors put in -- meaning much more than 10 times the effort it would take its own engineers to replicate the product in question.

Thus, the question people should ask Brad Smith of Microsoft is not simply how many skilled Microsoft engineers it would take to reproduce SolarWinds, but also how many skilled Microsoft engineers it would take to reproduce the engineering effort of their last 10 acquisitions.

Conclusion

I've looked at the problem three different ways, from the points of view of software engineering, hacking, and business. If it would take 1,000 Microsoft engineers to reproduce the SolarWinds hacks, then that means fewer than 100 skilled engineers were involved in the actual hacks.

SolarWinds is probably the most consequential hack of the last decade. There are many eager to exaggerate it to serve their own agendas, and those types have been pushing this "1,000 engineer" claim. I'm an expert in all three of these areas: software engineering, hacking, and business. I've written millions of lines of code, I'm well known for my hacking, and I've sold startups. I can assure you: Microsoft's estimate means that likely fewer than 100 skilled engineers were involved.


Wednesday, December 09, 2020

The deal with DMCA 1201 reform

There are two fights in Congress now against the DMCA, the "Digital Millennium Copyright Act". One is over Section 512 covering "takedowns" on the web. The other is over Section 1201 covering "reverse engineering", which weakens cybersecurity.

Even before digital computers, since the 1880s, an important principle of cybersecurity has been openness and transparency ("Kerckhoffs' Principle"). Only by making details public can security flaws be found, discussed, and fixed. This includes reverse-engineering to search for flaws.

Cybersecurity experts have long struggled against the ignorant, who hold the naive belief that we should instead cover up information so that evildoers cannot find and exploit flaws. Surely, they believe, giving just anybody access to critical details of our security weakens it. The ignorant have little faith in technology, that it can be made secure. They have more faith in government's ability to control information.

Technologists believe this information coverup hinders well-meaning people and protects the incompetent from embarrassment. When you hide how something works, you prevent people on your own side from discovering and fixing flaws. It also means you can't hold vendors accountable for their security, since it's impossible to notice security flaws until after they've been exploited. At the same time, the coverup does little to stop evildoers. Technology can work, it can be perfected, but only if we can search for flaws.

It seems counterintuitive that revealing your encryption algorithms to your enemy is the best way to secure them, but history has proven time and again that this is true. Encryption algorithms your enemy cannot see are insecure. The same is true of the rest of cybersecurity.

Today, I'm composing and posting this blogpost securely from a public WiFi hotspot because the technology is secure. It's secure because of two decades of security researchers finding flaws in WiFi, publishing them, and getting them fixed.

Yet in the year 1998, ignorance prevailed with the "Digital Millennium Copyright Act". Section 1201 makes reverse-engineering illegal. It attempts to secure copyright not through strong technological means, but by the heavy hand of government punishment.

The law was not completely ignorant. It includes an exception allowing what it calls "security testing" -- in theory. But that exception does not work in practice, imposing too many conditions on such research to be workable.

The U.S. Copyright Office has authority under the law to add its own exemptions every 3 years. It has repeatedly added exceptions for security research, but the process is unsatisfactory. It's a protracted political battle every 3 years to get the exception back on the list, and each time it can change slightly. These exemptions are still less than what we want. This causes a chilling effect on permissible research. It would be better if such exceptions were put directly into the law.

You can understand the nature of the debate by looking at those on each side.

Those lobbying for the exceptions are those trying to make technology more secure, such as Rapid7, Bugcrowd, Duo Security, Luta Security, and HackerOne. These organizations have no interest in violating copyright -- their only concern is cybersecurity, finding and fixing flaws.

The opposing side includes the copyright industry, as you'd expect, such as the "DVD" association, which doesn't want hackers breaking the DRM on DVDs.

However, much of the opposing side has nothing to do with copyright as such.

This notably includes the three major voting machine suppliers in the United States: Dominion Voting, ES&S, and Hart InterCivic. Security professionals have been pointing out security flaws in their equipment for the past several years. These vendors are explicitly trying to cover up their security flaws by using the law to silence critics.

This goes back to the struggle mentioned at the top of this post. The ignorant and naive believe that we need to cover up information so that hackers can't discover flaws. This is expressed in their filing opposing the latest 3-year exemption:

The proponents are wrong and misguided in their argument that the Register’s allowing independent hackers unfettered access to election software is a necessary – or even appropriate – way to address the national security issues raised by election system security. The federal government already has ways of ensuring election system security through programs conducted by the EAC and DHS. These programs, in combination with testing done in partnership between system providers, independent voting system test labs and election officials, provide a high degree of confidence that election systems are secure and can be used to run fair and accurate elections. Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest. 

Not only does this blatantly violate Kerckhoffs' Principle stated above, it was proven a fallacy at the last two DEF CON cybersecurity conferences. Organizers bought voting machines off eBay and presented them at the conference for anybody to hack. Widespread and typical vulnerabilities were found. These systems were certified as secure by state and federal governments, yet teenagers were able to trivially bypass their security.

The danger these companies are afraid of is not a nation-state actor being able to play with these systems, but teenagers playing with their systems at DEF CON, embarrassing them by pointing out their laughable security. This proves Kerckhoffs' Principle.

That's why the leading technology firms take the opposite approach to security from election-system vendors. This includes Apple, Amazon, Microsoft, Google, and so on. They've gotten over their embarrassment. They are every bit as critical to modern infrastructure as election systems or the power grid. They publish their flaws roughly every month, along with patches that fix them. That's why you end up having to patch your software every month. Far from trying to cover up flaws and punish researchers, they publicly praise researchers and, in many cases, offer "bug bounties" to encourage them to find more bugs.

It's important to understand that the "security research" we are talking about is always "ad hoc" rather than formal.

These companies already do "formal" research and development. They invest billions of dollars in securing their technology. But no matter how much formal research they do, informal poking around by users, hobbyists, and hackers still finds unexpected things.

One reason is simply a corollary to the Infinite Monkey Theorem that states that an infinite number of monkeys banging on an infinite number of typewriters will eventually reproduce the exact works of William Shakespeare. A large number of monkeys banging on your product will eventually find security flaws.

A common example is a parent who brings their kid to work, who then plays around with a product doing things no reasonable person would ever conceive of, and accidentally breaks into the computer. Formal research and development focuses on known threats, but has trouble imagining unknown threats.

Another reason informal research is successful is how the modern technology stack works. Whether it's a mobile phone, a WiFi enabled teddy bear for the kids, a connected pacemaker jolting the grandparent's heart, or an industrial control computer controlling manufacturing equipment, all modern products share a common base of code.

Somebody can be an expert in an individual piece of code used in all these products without understanding anything about these products.

I experience this effect myself. I regularly scan the entire Internet looking for a particular flaw. All I see is the flaw itself, exposed to the Internet, but not anything else about the system I've probed. Maybe it's a robot. Maybe it's a car. Maybe it's somebody's television. Maybe it's any one of the billions of IoT ("Internet of Things") devices attached to the Internet. I'm clueless about the products -- but an expert about the flaw.
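As a toy illustration of this flaw-centric view, here's a sketch of the kind of check such a scan performs: classify a service purely from its banner, knowing nothing about the product behind it. The service name, banner format, and version cutoff here are invented for illustration.

```python
# Toy illustration of the "expert in the flaw, clueless about the
# product" effect: decide whether a service is vulnerable from its
# banner alone. "ExampleServerd" and the 7.4 fix version are
# hypothetical, invented for this sketch.

import re

VULNERABLE_BEFORE = (7, 4)   # hypothetical: the flaw was fixed in 7.4

def banner_is_vulnerable(banner: str) -> bool:
    """Return True if the banner advertises a version older than the fix.

    We learn nothing about whether the device is a robot, a car, or a
    television -- only the version number matters."""
    m = re.search(r"ExampleServerd[ /](\d+)\.(\d+)", banner)
    if not m:
        return False                      # not the service we study
    major, minor = int(m.group(1)), int(m.group(2))
    return (major, minor) < VULNERABLE_BEFORE

print(banner_is_vulnerable("SSH-2.0-ExampleServerd 7.2 Ubuntu"))  # old: flagged
print(banner_is_vulnerable("SSH-2.0-ExampleServerd 7.4"))         # patched
```

A real Internet-wide scan is mostly plumbing around a check like this: billions of banners in, a list of flawed version strings out, with no idea what products they belong to.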

A company, even as big as Apple or Microsoft, cannot hire enough people to be experts in every piece of technology they use. Instead, they can offer bounties encouraging those who are experts in obscure bits of technology to come forward and examine their products.

This ad hoc nature is important when looking at the solution to the problem. Many think this can be formalized, such as with the requirement of contacting a company asking for permission to look at their product before doing any reverse-engineering.

This doesn't work. A security researcher will buy a bunch of used products off eBay to test out a theory. They don't know enough about the products or the original vendor to know who they should contact for permission. This would take more effort to resolve than the research itself.

It's solely informal and ad hoc "research" that needs protection. It's the same as with everything else that preaches openness and transparency. Imagine if we had freedom of the press, but only for journalists who first were licensed by the government. Imagine if it were freedom of religion, but only for churches officially designated by the government.

Those companies selling voting systems they promise as being "secure" will never give permission. It's only through ad hoc and informal security research, hostile to the interests of those companies, that the public interest will be advanced.

The current exemptions have a number of "gotchas" that seem reasonable, but which create an unacceptable chilling effect.

For example, they allow informal security research "as long as no other laws are violated". That sounds reasonable, but with so many laws and regulations, it's usually possible to argue that a researcher violated some obscure and meaningless law along the way. It means a security researcher can be threatened with years in jail for violating a regulation that, on its own, would've resulted in a $10 fine.

Exceptions to the DMCA need to be clear and unambiguous that finding security bugs is not a crime. If the researcher commits some other crime during research, then prosecute them for that crime, not for violating the DMCA.

The strongest opposition to a "security research exemption" in the DMCA is going to come from the copyright industry itself -- those companies who depend upon copyright for their existence, such as movies, television, music, books, and so on.

The United States' position in the world is driven by intellectual property. Hollywood is not simply the center of the American film industry, but of the world's. Congress has an enormous incentive to protect these industries. Industry organizations like the RIAA and MPAA have enormous influence on Congress.

Many of us in tech believe copyright is already too strong. Congress has made a mockery of the Constitution's statement that copyrights are for a "limited time", which now means works copyrighted decades before you were born will still be under copyright decades after you die. Section 512 takedown notices are widely abused to silence speech.

Yet the copyright-protected industries perceive themselves as too weak. Once a copyrighted work is posted to the Internet for anybody to download, it becomes virtually impossible to remove (like removing pee from a pool). Takedown notices only remove content from the major websites, like YouTube. They do nothing to remove content from the "dark web".

Thus, they jealously defend against any attempt that would weaken their position. This includes "security research exemptions", which threaten the "DRM" technologies that prevent copying.

One fear is of security researchers themselves: that in the process of doing legitimate research, they'll find and disclose other secrets, such as the encryption keys that protect DVDs from being copied, which are built into every DVD player on the market. There is some truth to that, as security researchers have indeed published information the industries didn't want published, such as the DVD encryption algorithm.

The bigger fear is that evildoers trying to break DRM will be free to do so, claiming their activities are just "security research". They would be free to openly collaborate with each other, because it's simply research, while privately pirating content.

But these fears are overblown. Commercial piracy is already forbidden by other laws, and underground piracy happens regardless of the law.

This law has less impact on whether reverse-engineering happens than on whether the fruits of that research are published. And that's the key point: we call it "security research", but all that's meaningful is "published security research".

In other words, we are talking about a minor cost to copyright compared with a huge cost to cybersecurity. The cybersecurity of voting machines is a prime example: voting security is bad, and it's not going to improve until we can publicly challenge it. But we can't easily challenge voting security without being prosecuted under the DMCA.

Conclusion

The only credible encryption algorithms are public ones. The only cybersecurity we trust is cybersecurity that we can probe and test, where most details are publicly available. That such transparency is necessary to security has been recognized since the 1880s with Kerckhoffs' Principle. Yet the naive still believe in coverups. As the election industry claimed in its brief: "Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest". Giving anonymous hackers ad hoc, informal access to probe critical infrastructure like voting machines not only serves the public interest, it is necessary to the public interest. As has already been proven, voting machines have cybersecurity weaknesses that their vendors are covering up, weaknesses that can only be revealed by anonymous hackers.

This research needs to be ad hoc and informal. Attempts at reforming the DMCA, like the Copyright Office's exemptions, tend to get watered down into exemptions for formal research only. This ends up having the same chilling effect while still claiming to allow research.

Copyright, like other forms of intellectual property, is important, and it's proper for government to protect it. Even radical anarchists in our industry want government to protect "copyleft", the use of copyright to keep open-source code open.

But it's not so important that it should permit abuse to silence security research. Transparency and ad hoc testing are critical to research, and are more and more often being silenced using copyright law.


Sunday, October 25, 2020

Why Biden: Principle over Party

There exist many #NeverTrump Republicans who agree that while Trump would best achieve their Party's policies, he must nonetheless be opposed on Principle. The Principle in question isn't about character flaws, such as being a liar, a misogynist, or a racist. The Principle isn't about political policies, such as how to handle the coronavirus pandemic, or the policies Democrats want. Instead, the Principle is that he's a populist autocrat who is eroding our liberal institutions ("liberal" in the classic sense).

Countries don't fail when there's a leftward shift in government policies. Many prosperous, peaceful European countries are to the left of Biden. What makes prosperous countries fail is when civic institutions break down, when a party or dear leader starts ruling by decree, such as in the European countries of Russia or Hungary.

Our system of government is like football. While the teams (parties) compete vigorously against each other, they largely respect the rules of the game, both written and unwritten traditions. They respect each other -- while doing their best to win (according to the rules), they nonetheless shake hands at the end of the match, and agree that their opponents are legitimate.

The rules of the sport we are playing are described in the Wikipedia page on "liberal democracy".

Sports matches can be enjoyable even if you don't understand the rules. The same is true of liberal democracy: there's little civic education in this country, so most don't know the rules of the game. Most are unaware even that there are rules.

You see that in action with the concern over Trump conceding the election, his unwillingness to commit to a "peaceful transfer of power". His supporters widely believe this is a made-up controversy, a "principle" created on the spot as just another way to criticize Trump.

But it's not a new principle. A "peaceful transfer of power" is the #1 bedrock principle from which everything else derives. It's the first measure of whether a country is actually the "liberal democracy" it claims to be. For example, the fact that Putin has been in power for 20 years makes us doubt that Russia is really the "liberal democracy" it claims to be. The reason you haven't heard of this principle, the reason it isn't discussed much, is that it's so unthinkable that a politician would reject it the way Trump has.

The historic importance of this principle can be seen when you go back and read the concession speeches of Hillary, McCain, Gore, Bush Sr., and Carter: all of them stressed the legitimacy of their opponent's win, and a commitment to a peaceful transfer of power. (It goes back further than that, to the founding of our country, but I can't link every speech.) The following quote from Hillary's concession to Trump demonstrates this principle:

But I still believe in America and I always will. And if you do, then we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead.

Our constitutional democracy enshrines the peaceful transfer of power and we don't just respect that, we cherish it. It also enshrines other things; the rule of law, the principle that we are all equal in rights and dignity, freedom of worship and expression. We respect and cherish these values too and we must defend them.

If this were Trump's only failure, then we could excuse it and work around it. As long as he defended all the other liberal institutions, then we could accept one aberration.

The problem is that he's attacking every institution. He's doing his best to act like the populist autocrats we see in non-democratic nations. Our commitment to liberal institutions is keeping him in check -- but less and less well as time goes on. For example, when Jeff Sessions refused to politicize the DoJ, Trump replaced him with Barr, who notoriously has corrupted the DoJ to serve Trump's political interests. I mean this only as yet another example -- a complete enumeration of his long train of abuses and usurpations would take many more pages than I intend for this blogpost.

Four more years of Trump means four more years of erosion of our liberal democratic institutions.

The problem isn't just what Trump can get away with, but the precedent he sets for his successor.

The strength of our liberal institutions to hold the opposing Party in check comes only from our defense of those institutions when our own Party is in power. When we cross the line, it means the opposing party will feel justified in likewise crossing the line when they get power.

We see that with the continual erosion of the Supreme Court over the last several decades. It's easy to blame the other Party for this, but the reality is that both parties have been going back and forth corrupting this institution. The Republicans' refusal to confirm Garland and their eagerness to confirm Barrett is egregious, but justified by the Democrats' use of the nuclear option when they were in power. When Biden gets power, he's going to try to pack the court, which historically has been taught to school children as a breakdown of liberal democratic institutions, but which will be justified by the Republicans' bad behavior in eroding those institutions. We might be able to avert court packing if Biden gets into power now, but we won't after four more years of Trump court appointments.

It's not just the politicization of the Supreme Court, it's the destruction of all our institutions. Somebody is going to have to stand for Principle over Party and put a stop to this. That is the commitment of the #NeverTrump. The Democrats are going to be bad when they get into power, but stopping them means putting our own house in order first.

This post makes it look like an attempt to convince fellow Republicans to vote against Trump, and I suppose it is. However, my real purpose is to communicate with Democrats. My Twitter feed is full of leftists who oppose liberal democratic institutions even more than Trump does. I want evidence to prove that I actually stand for Principle, and not just Party.


Friday, October 16, 2020

No, that's not how warranty expiration works

The NYPost Hunter Biden story has triggered a lot of sleuths obsessing on technical details trying to prove it's a hoax. So far, these claims are wrong. The story is certainly bad journalism aiming to misinform readers, but it has not yet been shown to be a hoax.

In this post, we look at the claim that the timelines don't match up with the manufacturing dates of the drives. Sleuths claim to prove the drives were manufactured after the events in question, based on serial numbers.

What this post will show is that the theory is wrong. Manufacturers pad warranty periods. Thus, you can't assume a date of manufacture based upon the end of a warranty period.


The story starts with Hunter Biden (or associates) dropping off a laptop at a repair shop because of water damage. The repair shop made a copy of the laptop's hard drive, stored on an external drive. Later, the FBI swooped in and confiscated both the laptop and that external drive.

The serial numbers of both devices are listed in the subpoena published by the NYPost:


You can enter these serial numbers in the support pages at Apple (FVFXC2MMHV29) and Western Digital (WX21A19ATFF3) to discover precisely what hardware this is, and when the warranty periods expire -- and presumably, when they started.

In the case of that external drive, the 3-year warranty expires May 17, 2022 -- meaning the drive was manufactured on May 17, 2019 (or so they claim). This is a full month after the claimed date of April 12, 2019, when the laptop was dropped off at the repair shop.


There are lots of explanations for this. One of which is that the drive subpoenaed by the government (on Dec 9, 2019) was a copy of the original drive.

But a simpler explanation is this: warranty periods are padded by the manufacturer by several months. In other words, if the warranty ends May 17, it means the drive was probably manufactured in February.

I can prove this. Coincidentally, I purchased a Western Digital drive a few days ago. If we used the same logic as above to work backward from warranty expiration, then it means the drive was manufactured 7 days in the future.

Here is a screenshot from Amazon.com showing I purchased the drive Oct 12.


Here is a picture of the drive itself, from which you can read the serial number:


The Date of Manufacture (DOM) is printed right on the device as July 31, 2020.

But let's see what Western Digital reports as the end of the warranty period:


We can see that the warranty ends on Oct 23, 2025. According to Amazon, where I purchased the drive, the warranty period is 5 years:


Thus, if we insist on working back precisely 5 years from the expiration date, this drive was manufactured 7 days in the future: today's date is Oct 16, and the warranty starts Oct 23.

The reality is that Western Digital has no idea when the drive will arrive at the customer, and hence when I (as the consumer) expect the warranty period to start. Thus, they pad the period by a few months to account for how long they expect the device to sit in the sales channel, the period between manufacture and arrival at the customer. Computer devices rapidly depreciate, so they are unlikely to be in the channel more than a few months.
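To make the arithmetic concrete, here's a short sketch of the two models, using the dates from this post. Working back exactly 5 years from the warranty's end gives an impossible "manufacture" date after the purchase; measuring from the Date of Manufacture printed on the drive shows roughly three months of channel padding:

```python
from datetime import date

warranty_end = date(2025, 10, 23)   # end of warranty per Western Digital's support page
purchase     = date(2020, 10, 12)   # Amazon order date
dom          = date(2020, 7, 31)    # Date of Manufacture printed on the drive

# Naive model: assume the 5-year warranty clock started at manufacture.
naive_start = warranty_end.replace(year=warranty_end.year - 5)
print(naive_start)                   # 2020-10-23 -- a week after the purchase date!

# Real model: the clock starts later than manufacture, padded for channel time.
padding = (naive_start - dom).days
print(padding)                       # 84 days between actual manufacture and warranty start
```

The same reasoning applies to the subpoenaed drive: a warranty ending May 17, 2022 implies manufacture a few months before May 17, 2019, comfortably before the April 12 drop-off date.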

Thus, instead of proving the timeline wrong, the serial number and warranty expiration show the timeline is right. This is exactly the sort of thing you'd expect if the repair shop recovered the files onto a new external drive.

Another issue in the thread is the "recovery" of files, which the author claims is improbable. In Apple's latest MacBooks, if the motherboard is damaged, then it's impractical to recover the data from the drive. These days, in the year 2020, the SSDs inside notebooks are soldered right onto the motherboard and, besides, encrypted with a TPM chip on the motherboard.

But here we are talking about a 2017 MacBook Pro, which apparently had a removable SSD. Other notebooks by Apple have had special connectors for reading SSDs from dead motherboards. Thus, recovery of files from notebooks of that era is not as impossible as it sounds.

Moreover, maybe the repair shop fixed the notebook. "Water damage" varies in extent. It may have been possible to repair the damage and boot the device, at least in some sort of recovery mode.


Conclusion

Grabbing serial numbers and looking them up is exactly what hackers should be doing in stories like this. Challenging the narrative is great -- especially with regard to the NYPost story, which is clearly bad journalism.

On the other hand, it goes both ways. We should be even more diligent in challenging the things that agree with us. This is a great example: it appeared we'd found conclusive evidence that the NYPost story was a hoax. We need to carefully challenge that, too.


No, font errors mean nothing in that NYPost article

The NYPost has an article on Hunter Biden emails. Critics claim that these don't look like emails, and that there are errors with the fonts, thus showing they are forgeries. This is false. This is how Apple's "Mail" app prints emails to a PDF file. The font errors are due to viewing PDF files within a web browser -- you don't see them in a PDF app.

In this blogpost, I prove this.

I'm going to do this by creating a forged email. The point isn't to prove the email wasn't forged -- it could easily have been; the NYPost didn't do the due diligence needed to prove otherwise. The point is simply that these inexplicable problems aren't evidence of forgery. All emails printed by the Mail app to a PDF, then displayed with Scribd, will look the same way.

To start with, we are going to create a simple text file on the computer called "erratarob-conspire.eml". That's what email messages are at the core -- text files. I use Apple's "TextEdit" app on my MacBook to create the file.

The structure of an email is simple. It has a block of "metadata" consisting of fields, each a name and value separated by a colon ":" character. This block ends with a blank line, after which we have the contents of the email.
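As a sketch of that structure, here's how Python's standard email module parses such a text file. The addresses and subject below are made-up placeholders, not the ones from the story:

```python
from email import message_from_string

# A raw email is just text: "Name: value" metadata fields,
# a blank line, then the message contents.
raw = """\
From: alice@example.com
To: bob@example.com
Subject: Test message
Date: Wed, 14 Oct 2020 12:00:00 -0400

This is the body of the message.
"""

msg = message_from_string(raw)
print(msg["Subject"])        # Test message
print(msg.get_payload())     # This is the body of the message.
```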

Clicking on the file launches Apple's "Mail" app. It opens the email and renders it on the screen like this:
Notice how the "Mail" app has reformatted the metadata. In addition to displaying the email, it's making it simple to click on the names to add them to your address book. That's why there is a (VP) to the right on the screen -- it creates a placeholder icon for every account in your address book. I note this because in my version of Mail, the (VP) doesn't get printed to the PDF, but it does appear in the PDF on the NYPost site. I assume this is because their Mail app is 3 years older than mine.

One thing I can do with emails is to save them as a PDF document.

This creates a PDF file on the disk that we can view like any other PDF file. Note that yet again, the app has reformatted the metadata, different from both how it displayed it on the screen and how it appears in the original email text.

Sometimes web pages, such as this one, want to display the PDF within the web page. The Scribd website can be used for this purpose, causing PDFs to appear like below:

Erratarob Conspire by asdfasdf




How this shows up on your screen will vary depending on a lot of factors. Most people, though, will see slight font problems, especially in the name "Hunter Biden". Below is a screenshot of how it appears in my browser. You can clearly see how the 'n' and 't' characters crowd each other in the name "Hunter".


Again, while this is a fake email message, any real email message would show the same problems. It's a consequence of the process of generating a PDF and using Scribd. You can just click through on Scribd to download the original PDF (either mine or the one on the NYPost site), and then use your favorite PDF viewing app. This gets rid of Scribd's rendering errors.

Others have claimed that this isn't how email works, that email clients always show brackets around email addresses, using the < and > characters. Usually, yes, but not in all cases. Here, Apple's "Mail" app is clearly doing a lot more work to make things look pretty, rather than showing the raw brackets.

There are some slight differences between what my 2020 MacBook produces and what the original NYPost article shows. As we can see from the metadata on their PDF, it was produced by a 2017 MacBook. My reproduction isn't exact, but it's close enough that we don't need to doubt it.

We would just apply Occam's Razor here. Let's assume that the emails were forged. Then the easiest way would be to create a text document like I've shown above and open it in an email client to print out the message. It took me less than a minute, including carefully typing an unfamiliar Russian name. The hardest way would be to use Photoshop or some other technique to manipulate pixels, causing those font errors. Therefore, if you see font problems, the most likely explanation is simply "something I don't understand" and not "evidence of the conspiracy".

Conclusion

The problem with conspiracy theories is that everything not explained is used to "prove" the conspiracy.

We see that happening here. If there are unexplained formatting errors in the information the NYPost published, and the only known theory that explains them is a conspiracy, then they prove the conspiracy.

That's stupid. Unknown things may simply be unknown; the fact that you can't explain them doesn't mean they are unexplainable. That's what we see here: people have convinced themselves they have "proof" because of unexplainable formatting errors, when in fact such formatting can be explained.

The NYPost story has many problems. It is data taken out of context in an attempt to misinform the reader. We know it's a garbage story, even if all the emails are authentic. We don't need to invent conspiracy theories to explain it.

Wednesday, October 14, 2020

Yes, we can validate leaked emails

When emails leak, we can know whether they are authentic or forged. It's the first question we should ask of today's leak of emails of Hunter Biden. It has a definitive answer.

Today's emails have "cryptographic signatures" inside the metadata. Such signatures have been common for the past decade as one way of controlling spam, to verify the sender is who they claim to be. These signatures verify not only the sender, but also that the contents have not been altered. In other words, it authenticates the document, who sent it, and when it was sent.

Crypto works. The only way to bypass these signatures is to hack into the servers. In other words, when we see a 6-year-old message with a valid Gmail signature, we know either (a) it's valid or (b) they hacked into Gmail to steal the signing key. Since (b) is extremely unlikely -- if they could hack Google, they could do far more important things with that access -- we have to assume (a).

Your email client normally hides this metadata from you, because it's boring and humans rarely want to see it. But it's still there in the original email document. An email message is simply a text document consisting of metadata followed by the message contents.

It takes no special skills to see metadata. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. If they can upload the PDF to Scribd (as in the story), they can upload the email source. I show how to below.

To show how this works, I send an email using Gmail to my private email server (from gmail.com to robertgraham.com).

The NYPost story shows the email printed as a PDF document. Thus, I do the same thing when the email arrives on my MacBook, using the Apple "Mail" app. It looks like the following:

The "raw" form originally sent from my Gmail account is simply a text document that looked like the following:

This is rather simple. Clients insert details like a "Message-ID" that humans don't care about. There are also internal formatting details, like the fact that this is a "plain text" message rather than an "HTML" email.

But this raw document was the one sent by the Gmail web client. It then passed through Gmail's servers, then was passed across the Internet to my private server, where I finally retrieved it using my MacBook.

As email messages pass through servers, the servers add their own metadata.

When it arrived, the "raw" document looked like the following. None of the important bits changed, but a lot more metadata was added:
The bit you care about here is the "DKIM-Signature:" metadata.
This is added by Gmail's servers, for anything sent from gmail.com. It "authenticates" or "verifies" that this email actually did come from those servers, and that the essential content hasn't been altered. The long strings of random-looking characters are the "cryptographic signature". That's what all crypto is based upon -- long chunks of random-looking data.
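As a sketch of what's inside that header, DKIM-Signature values are just "tag=value" pairs separated by semicolons (RFC 6376). The abbreviated header below is invented for illustration -- a real Gmail signature is much longer:

```python
# Parse the tag=value pairs of a DKIM-Signature header.
# This header is abbreviated and made up for illustration.
header = (
    "v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; "
    "h=mime-version:from:date:subject:to; bh=Base64BodyHash=; b=Base64Sig=="
)

tags = {}
for field in header.split(";"):
    name, _, value = field.strip().partition("=")
    tags[name] = value

print(tags["d"])   # gmail.com -- the domain that signed the message
print(tags["s"])   # 20161025  -- the DNS selector where the public key lives
print(tags["h"])   # the list of headers the signature covers
```

A verifier fetches the public key from DNS at `<s>._domainkey.<d>`, then checks the `b=` signature over the listed headers and the `bh=` body hash.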

To extract this document, I used Apple's "Mail" client program and selected "Save As..." from the "File" menu, saving as "Raw Message Source".




I uploaded this document to Scribd so that anybody can download and play with it, such as verifying the signature.

To verify the email signature, I simply open the email document using Thunderbird (Mozilla's email client) with the "DKIM Verifier" extension, which validates that the signature is indeed correct. Thus we see it's a valid email sent by Gmail and that the key headers have not been changed:
The same could be done with those emails from the purported Hunter Biden laptop. If they can be printed as a PDF (as in the news story) then they can also be saved in raw form and have their DKIM signatures verified.

This sort of thing is extraordinarily easy, something anybody with minimal computer expertise can accomplish. It would go a long way toward establishing the credibility of the story, proving that the emails were not forged. The lack of such verification leads me to believe that nobody with even minimal computer expertise was involved in the story.

The story contains the following paragraph about one of the emails recovered from the drive (the smoking gun claiming Pozharskyi met Joe Biden), claiming how it was "allegedly sent". Who alleges this? If they have the email with a verifiable DKIM signature, no "alleging" is needed -- it's confirmed. Since Pozharskyi used Gmail, we know the original would have had a valid signature.


Leaving allegations unconfirmed when they could easily be confirmed seems odd for a story of this magnitude.

Note that the NYPost claims to have a copy of the original, so they should be able to do this sort of verification:

However, while they could in theory, it appears they didn't in practice. The PDF displayed in the story is up on Scribd, allowing anybody to download it. PDF's, like email, also have metadata, which most PDF viewers will show you. It appears this PDF was not created after Sunday when the NYPost got the hard drive, but back in September when Trump's allies got the hard drive.
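A sketch of that kind of check: PDF metadata such as /CreationDate is stored as plain text inside the file, so you can often find it by scanning the raw bytes. The fragment below is fabricated for illustration; a real PDF's info dictionary embeds the same pattern:

```python
import re

# A made-up fragment of a PDF "document information dictionary".
# Real PDFs store /CreationDate in this same D:YYYYMMDD... form.
pdf_bytes = b"<< /Producer (macOS Quartz) /CreationDate (D:20200913104500Z) >>"

m = re.search(rb"/CreationDate\s*\(D:(\d{4})(\d{2})(\d{2})", pdf_bytes)
if m:
    year, month, day = (g.decode() for g in m.groups())
    print(f"{year}-{month}-{day}")   # 2020-09-13
```

Most PDF viewers show the same fields under "Document Properties", so no scripting is strictly necessary.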





Conclusion

It takes no special skills to do any of this. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. Instead of "Export to PDF", select "Save As ... Raw Message Source". Instead of uploading the .pdf file, upload the resulting .txt to Scribd.

At this point, a journalist wouldn't need to verify DKIM themselves, or consult an expert: anybody could verify it. There are a ton of tools out there that can simply load that raw source email and verify it, such as the Thunderbird example above.