Thursday, January 22, 2015

Needs more Hitler

Godwin's Law doesn't apply to every mention of Hitler, as the Wikipedia page explains:
Godwin's law applies especially to inappropriate, inordinate, or hyperbolic comparisons with Nazis. The law would not apply to mainstays of Nazi Germany such as genocide, eugenics, racial superiority, or to a discussion of other totalitarian regimes, if that was the explicit topic of conversation, because a Nazi comparison in those circumstances may be appropriate.
Last week, I wrote a piece about how President Obama's proposed cyber laws were creating a Cyber Police State. The explicit topic of my conversation is totalitarian regimes.

This week, during the State of the Union address, I compared the text of Mein Kampf to the text of President Obama's speech. Specifically, Mein Kampf said this:
The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation.
Obama's speech in support of his cyber legislation says this:
No foreign nation, no hacker, should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families, especially our kids. We are making sure our government integrates intelligence to combat cyber threats, just as we have done to combat terrorism. And tonight, I urge this Congress to finally pass the legislation we need to better meet the evolving threat of cyber-attacks, combat identity theft, and protect our children’s information.
There is no reason to mention children here. None of the big news stories about hacker attacks have mentioned children. None of the credit card scandals, or the Sony attack, involved children. Hackers don't care about children, have never targeted children in the past, and are unlikely to target children in the future. Children are wholly irrelevant to the discussion.

Tuesday, January 20, 2015

Drums of cyberwar: North Korea's cyber-WMDs

People ask me if today's NYTimes story changes my opinion that North Korea didn't do the Sony hack. Of course it doesn't. Any rational person can tell that the story is bogus. Indeed, such stories hint that the government is hiding something.

Wednesday, January 14, 2015

Notes on the CIA spying case

The CIA announced it wasn't going to punish those responsible for spying/hacking on Senate computers. Since journalists widely get this story wrong, I thought I'd write up some notes getting it right. That's because while the CIA as an organization is guilty of gross misconduct, it's actually likely that no individual employee did anything wrong. The organization is guilty, but (possibly) the people aren't.

The first thing to note is that no hacking happened. These were CIA computers, at a CIA facility, managed by CIA sysadmins, who had the admin passwords.

That's the complicated bit. In 2009, when the Intelligence Committee demanded to look at the torture/interrogation documents, the CIA balked at the security issues of staffers taking documents offsite. Therefore, they came to an agreement with the Senate: the CIA would set up a special secured network at their building, disconnected from the rest of the CIA network. The Senate staffers would go there to work. Documents would be transferred from the CIA's main network onto this special network by hand (probably by USB flash drive or something).

The Senate committee didn't have to agree to this. By law, they have oversight, and can make decisions that screw the CIA. But the Senate committee recognized this was a legitimate concern, and agreed to the compromise. However, they demanded concessions from the CIA, such as not "spying" on their staffers.

I say "spying" here because that's the word used in the press, but it was more complex than that. Spying on employees is routine within the CIA. There's always compliance officers running around checking computers to make sure they don't have documents on them they shouldn't. So "compliance" is the better word than "spying", it sounds much nicer.

But the agreement was specifically that only IT techies would have access to the computers purely for the purposes of IT techy stuff, and that nobody else at the CIA would have access -- not even for compliance purposes.

Well, in the course of events, other people at the CIA did access these computers, and did do compliance checks. Judging from Dianne Feinstein's comments, it appears that most of these incidents were just honest mistakes; at least, she's not concerned by them. The one incident she's concerned about involves the Panetta report -- the internal CIA investigation that found gross misconduct in the torture/interrogation program.

The Panetta report wasn't one of the documents the Senate staffers were supposed to see. Nobody knows how it got onto these special computers. The staffers just found it there accidentally. At least, that's the information we have publicly. The CIA accuses the staffers of doing nefarious things, but we outsiders can't really know what happened. (Maybe somebody at the CIA leaked it to the staffers.)

When the CIA heard the staffers had the Panetta document, they did what they always do when things like this happen: their normal compliance checks and investigation. Among the things they do in such situations is thoroughly scanning the computers they'd given the Senate staffers, reading their emails, searching their files, and so forth. Yes, at the top level, the head of the CIA agreed that this would not happen -- but the employees didn't necessarily know this. Apparently, nobody told them about the agreement -- they didn't get the memo.

The problem is ultimately this: while the CIA as an organization broke the rules here, it's possible that no individual person did anything intentionally bad.

Personally, I think this is bullshit. I think lower level flunkies knew what they were doing was wrong, that high-level managers gave them direction, and that many at the CIA deliberately pushed the rules as much as they could in order to interfere with the Senate investigation. But I don't have proof of this, and no such proof has been made public.


I don't like the CIA. I think their torture is a stain on our national honor. I think it's a travesty that the torturers aren't punished. It's clear I don't support the CIA, and that I have no wish to defend them. But I still defend truth, and the truth is this: the CIA did not "hack Senate computers" as many claim.




These notes were compiled mostly from Dianne Feinstein's description of events: http://www.feinstein.senate.gov/public/index.cfm/2014/3/feinstein-statement-on-intelligence-committee-s-cia-detention-interrogation-report.


Obama's War on Hackers


In next week's State of the Union address, President Obama will propose new laws against hacking that could make either retweeting or clicking on the above (fictional) link illegal. The new laws make it a felony to intentionally access unauthorized information even if it's been posted to a public website. The new laws make it a felony to traffic in information like passwords, where "trafficking" includes posting a link.

You might assume that things would never become that bad, but it’s already happening even with the current laws. Prosecutors went after Andrew “weev” Auernheimer for downloading a customer list AT&T negligently made public. They prosecuted Barrett Brown for copying a URL to the Stratfor hack from one chatroom to another. A single click is all it takes. Prosecutors went after the PayPal-14 for clicking on a single link they knew would flood PayPal’s site with traffic. The proposed changes make such prosecutions much easier.

Even if you don’t do any of this, you can still be guilty if you hang around with people who do. Obama proposes upgrading hacking to a “racketeering” offense, which means you can be guilty of being a hacker by simply acting like a hacker (without otherwise committing a specific crime). Hanging out in an IRC chat room giving advice to people now makes you a member of a “criminal enterprise”, allowing the FBI to sweep in and confiscate all your assets without charging you with a crime. If you innocently clicked on the link above, and think you can defend yourself in court, prosecutors can still use the 20-year sentence of a racketeering charge to force you to plea bargain down to a 1-year sentence for hacking. (Civil libertarians hate the police-state nature of racketeering laws.)

Obama’s proposals come from a feeling in Washington D.C. that more needs to be done about hacking in response to massive data breaches of the last couple years. But they are blunt political solutions which reflect no technical understanding of the problem.

Most hacking is international and anonymous. Law enforcement can’t catch the perpetrators no matter how much the activity is criminalized. This War on Hackers is likely to be no more effective than the War on Drugs, where after three decades the prison population has skyrocketed from 0.1% of the population to a staggering 1%. With 5% of the world’s population, we have 25% of the world’s prisoners – and this has done nothing to stop drugs. Likewise, while Obama’s new laws will dramatically increase hacking prosecutions, they’ll be of largely innocent people rather than the real hackers that matter.

Internet innovation happens by trying things first, then asking for permission later. Obama’s law will change that. For example, a search engine like Google downloads a copy of every website in order to create a search “index”. This sort of thing is grandfathered in, but if “copying the entire website” were a new idea, it would be something made illegal by the new laws. Such copies knowingly get information that website owners don’t intend to make public. Similarly, had hacking laws been around in the 1980s, the founders of Apple might’ve still been in jail today, serving out long sentences for trafficking in illegal access devices.
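To make the indexing point concrete, here's a minimal sketch (my own illustration, not Google's code or anything from the proposed legislation; the URL is a placeholder) of what "downloading a copy of a website to build an index" amounts to in practice:

    # Minimal sketch of how a search engine "copies" pages to build an index.
    # Illustration only -- not Google's actual crawler; the URL is a placeholder.
    import re
    import urllib.request
    from collections import defaultdict

    index = defaultdict(set)   # word -> set of URLs whose copy contains that word

    def crawl_and_index(url):
        # Download a full copy of the page, exactly as a search-engine spider does.
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        # Strip the tags and file every remaining word under this URL.
        text = re.sub(r"<[^>]+>", " ", html)
        for word in re.findall(r"[a-z]{3,}", text.lower()):
            index[word].add(url)

    crawl_and_index("http://example.com/")   # placeholder URL
    print(sorted(index)[:10])                # a peek at the resulting index

Every entry in that index exists only because the program kept a wholesale copy of somebody else's page -- which is the kind of activity the paragraph above is talking about.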

The most important innovators this law would affect are the cybersecurity professionals who protect the Internet. If you care about things such as "national security" and "cyberterrorism", then this should be your biggest fear. Because of our knowledge, we do innocent things that look to outsiders like "hacking". Protecting computers often means attacking them. The more you crack down on hackers, the more of a chilling effect you create in our profession. This creates an open door for nation-state hackers and the real cybercriminals.

Along with its Hacking Prohibition law, Obama is also proposing a massive Internet Surveillance law. Companies currently monitor their networks, using cybersecurity products like firewalls, IPSs, and anti-virus. Obama wants to strong-arm companies into sharing that information with the government, creating a virtualized or “cloud” surveillance system.

In short, President Obama’s War on Hackers is a bad thing, creating a Cyber Police State. The current laws already overcriminalize innocent actions and allow surveillance of innocent people. We need to roll those laws back, not extend them.



Monday, January 12, 2015

A Call for Better Vulnerability Response

Microsoft forced a self-serving vulnerability disclosure policy on the industry 10 years ago, but cries foul when Google does the same today.

Ten years ago, Microsoft dominated the cybersecurity industry. It employed, directly or through consultancies, the largest chunk of security experts. The ability to grant or withhold business meant influencing those consulting companies -- Microsoft didn't even have to explicitly ask a consulting company to fire a Microsoft critic for that to happen. Every product company depended upon Microsoft's goodwill in order to develop security products for Windows, relying on engineering and marketing help that could be withheld on a whim.

This meant, among other things, that Microsoft dictated the "industry standard" of how security problems ("vulnerabilities") were reported. Cybersecurity researchers who found such bugs were expected to tell the vendor in secret, and give the vendor as much time as they needed in order to fix the bug. Microsoft sometimes sat on bugs for years before fixing them, relying upon their ability to blacklist researchers to keep them quiet. Security researchers who didn't toe the line found bad things happening to them.

I experienced this personally. We found a bug in a product called TippingPoint that allowed us to decrypt their "signatures", which we planned to release at the BlackHat hacker convention, after giving the vendor months to fix the bug. According to rumors, Microsoft had a secret program with TippingPoint involving special signatures designed to track down cybercriminals. Microsoft was afraid that if we disclosed how to decrypt those signatures, their program would be found out.

Microsoft contacted our former employer, ISS, which sent us legal threats. Microsoft sent FBI agents to threaten us in the name of national security. A Microsoft consultant told the BlackHat organizer, Jeff Moss, that our research was made up, that it didn't work, so I had to sit down with Jeff at the start of the conference to prove it worked before I was allowed to speak.

My point is that a decade ago in the cybersecurity industry, Microsoft dictated terms.

Today, the proverbial shoe is on the other foot. Microsoft's products are now legacy, so Windows security is becoming as relevant as IBM mainframe security. Today's cybersecurity researchers care about Apple, Google Chrome, Android, and the cloud. Microsoft is powerless to threaten the industry. It's now Google who sets the industry's standard for reporting vulnerabilities. Their policy is that after 90 days, vulnerabilities will be reported regardless of whether the vendor has fixed the bug. This applies even to Google itself when researchers find bugs in products like Chrome.

This is a nasty trick, of course. Google uses modern "agile" processes to develop software. That means that after making a change, the new software is tested automatically and shipped to customers within 24 hours. Microsoft is still mired in antiquated 1980s development processes, so that it takes three months and expensive manual testing before a change is ready for release. Google's standard doesn't affect everyone equally -- it hits old vendors like Microsoft the hardest.

We saw the effect of this last week: 90 days after notifying Microsoft of a bug, Google dumped the 0day (the information hackers need to exploit the bug) on the Internet before Microsoft could release a fix.

I enjoyed reading Microsoft's official response to this event, full of high-minded rhetoric about why Google is bad, and why Microsoft should be given more time to fix bugs. It's just whining -- Microsoft's alternative disclosure policy is even more self-serving than Google's. They are upset over their inability to adapt and fix bugs in a timely fashion. They resent how Google exploits its unfair advantage. Since Microsoft can't change its development process, it tries to change public opinion to force Google to change.

But Google is right. Since we can't make perfect software, we must make fast and frequent fixes the standard. Nobody should be in the business of providing "secure" software that can't turn around bugs quickly. Rather than 90 days being too short, it's really too long. Microsoft either needs to move forward with the times and adopt "agile" methodologies, or just accept its role of milking legacy for the next few decades as IBM does with mainframes.

Monday, January 05, 2015

Platitudes are only skin deep

I overdosed on Disney Channel over the holidays, because of course children control the remote. It sounds like it's teaching kids wholesome lessons, but if you pay attention, you'll realize it's not. It just repeats meaningless platitudes with no depth, and sometimes gets the platitudes wrong.

For example, it had a segment on the importance of STEAM education. This sounds a lot like "STEM", which stands for "science, technology, engineering, and math". Many of us believe in getting kids interested in STEM. It's good for them, because they'll earn twice as much as other college graduates. It's good for society, because there aren't enough technical graduates coming out of college to maintain our technology-based society. It's also particularly important for girls, because we still have legacy sexism that discourages girls from pursuing technical careers.

But Disney adds an 'A' in the middle, making STEM into STEAM. The 'A' stands for "Arts", meaning the entire spectrum of Liberal Arts. This is nonsense, because at this point, you've now included pretty much all education. The phrase "STEAM education" is redundant, conveying nothing more than simply "education".

What's really going on is that they attack the very idea they pretend to promote. Proponents of STEM claim those subjects are more valuable than the Arts, and Disney slyly says the opposite, without parents noticing.

Another example of this is a show featuring the school's debate team. They say that debate is important in order to understand all sides of an issue. But the debate topic they have is "beauty is only skin deep", and both "sides" of the debate agree with the proposition.

This is garbage. Two sides to a debate means two opposing sides. It's the very basis of the Enlightenment, the proposition that reasonable people can disagree. It means that if you are Protestant, while you disagree with Catholics, you accept the fact that they are reasonable people, and not devil worshippers who eat babies. In real school debate, you are forced to debate both sides -- you can't choose which side you want to debate. This means debate isn't about your opinion, but about your ability to cite support for every claim you make.

What Disney implicitly teaches kids is that there is only one side to a debate, the correct side, and that anybody who disagrees is unreasonable.


The problem with Disney is ultimately that the writers are stupid. They aren't deep thinkers; they don't really understand the platitudes they want to teach children, so they end up teaching children the wrong thing.

Thursday, January 01, 2015

Anybody can take North Korea offline

A couple days after the FBI blamed the Sony hack on North Korea, that country went offline. Many suspected the U.S. government, but the reality is that anybody can do it -- even you. I mention this because of a Vox.com story that claims "There is no way that Anonymous pulled off this scale of an attack on North Korea". That's laughably wrong, overestimating the scale of North Korea's Internet connection, and underestimating the scale of Anonymous's capabilities.

North Korea has roughly a 10-gbps link to the Internet for its IP addresses. That's only about ten times what Google Fiber provides a single home. In other words, 10 American households can have as much bandwidth as the entire country. Anonymous's capabilities exceed this, scaling past 1-terabit/second, or a hundred times more than needed to take down North Korea.

Attacks are made easier due to amplifiers on the Internet, which can increase the level of traffic by about 100 times. Thus, in order to overload your target's 10-gbps link, you only need a 100-mbps link yourself. This is well within the capabilities of a single person.
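As a quick sanity check of that arithmetic (the 100x figure is the rough amplification factor from the paragraph above; real-world factors vary by protocol), the upload bandwidth an attacker needs is just the target link divided by the amplification:

    # Back-of-the-envelope check of the amplification arithmetic above.
    # The 100x factor is the rough figure from the text; real factors vary.
    TARGET_LINK_BPS = 10e9      # North Korea's ~10-gbps uplink
    AMPLIFICATION = 100         # approximate traffic multiplier from amplifiers

    needed_upload_bps = TARGET_LINK_BPS / AMPLIFICATION
    print("attacker upload needed: %.0f mbps" % (needed_upload_bps / 1e6))   # -> 100 mbps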

Such attacks are difficult to do from your home, because your network connection is asymmetric. A 100-mbps connection from Comcast refers to the download speed -- it's only about 20-mbps in the other direction. You'll probably need to use web hosting services that sell high upload speeds. You can cheaply get a 100-mbps or even 1-gbps upload connection for about $30 per month in bitcoin. You'll need to find one that doesn't do egress filtering, because you'll be spoofing North Korea's addresses, but that's rarely a problem.

You need some familiarity with command-line tools. In this age of iPads, the command line seems like Dark Magic to some people, but it's something all computer geeks use regularly. Thus, to do these attacks, you'll need some basic geek skills, but they can be acquired in a week.

How I would do it is roughly shown by the following command line. This uses some software I wrote for port-scanning, but as a side effect, it can also be used for these sorts of "amplified DDoS" attacks.
What we see in this command line is the following:

  • use spoofing as part of the attack
  • targeting the North Korean IP addresses around 175.45.176.0
  • bouncing the packets off a list of amplifiers
  • building a custom NTP monlist packet that causes amplification
  • sending to port 123 (NTP)
  • sending at a rate of one million packets/second
  • repeating the attack infinitely (never stopping)

For this attack to work, you'll need a list of amplifiers. You can find these lists in hacker forums, or you can just find the amplifiers yourself using masscan (after all, that's what port scanners are supposed to do).

I use masscan in my example because it's my tool, so it's how I'd do it, but no special tool is needed. You can write your own code to do it pretty easily, and there are tons of other tools that can be configured to do this. I stress this because people have this belief in the power of cyberweapons, that powerful effects like disabling a country can't happen without powerful weapons. This belief is nonsense.

It's unknown whether Anonymous-style hackers actually DDoSed North Korea, like the "Lizard Squad" that claims responsibility, but it's easily within their capabilities. What's actually astonishing, given that millions of people can so easily DDoS North Korea, is that it doesn't happen more often.




Note: This only takes down one aspect of the North Korean Internet. Satellite links, other telephony links, cell phones, and the ".kp" domain names would remain unaffected. It would take some skill to attack all those possibilities, but it appears that the hackers only did the simple DDoS.

The GOP pastebin hoax

Neither the FBI nor the press is terribly honest or competent when discussing "hackers". That's demonstrated by yesterday's "terrorists threaten CNN" story.

It started with Glenn Greenwald's publication The Intercept, which reported:
The cyberterrorists who hacked Sony Pictures Entertainment’s computer servers have threatened to attack an American news media organization, according to an FBI bulletin obtained by The Intercept.
They were referring to this bulletin, which says:
On 20 December, the GOP posted Pastebin messages that specifically taunted the FBI and USPER2 for the "quality" of their investigations and implied an additional threat. No specific consequence was mentioned in the posting.
Which was referring to this pastebin, with the vague threat:
P.S. You have 24 hours to give us the Wolf.
Today, @DavidGarrettJr took credit for the Pastebin, claiming it was a hoax. He offered some evidence in the form of the following picture of his browser history:


Of course, this admission of a hoax could itself be a hoax, but it's more convincing than the original Pastebin. It demonstrates we have no reason to believe the original pastebin.

In the hacker underground, including pastebin, words get thrown around a lot. There was nothing in the pastebin that deserved the FBI's attention, not even the extremely lukewarm warning the bulletin gave. The FBI was clear that they didn't consider this a big deal; The Intercept was clearly out of line blowing this up into a "cyberterrorist" threat. They later edited their article, downgrading it to mere "hackers", but they still exaggerated the threat.



Update: A picture of the full original pastebin is below. I hereby place this in the public domain; please use it any way you want. I mention this because Mathew Keys plasters his name all over the image, which is a douchey thing to do when you've put essentially zero effort into it (ok, it might be an automated feature of the website, or just habit, but it's crappy in this context). Moreover, instead of reporting the URL (http://pastebin.com/6mYBrp96) so anybody can see for themselves, news stories hide the URL so that you have to rely upon the images.

Sunday, December 28, 2014

That Spiegel NSA story is activist nonsense

Yet again activists demonstrate they are less honest than the NSA. Today, Der Spiegel has released more documents about the NSA. They largely confirm that the NSA is actually doing, in real-world situations, what we've suspected it can do. The text of the article describing these documents, however, wildly distorts what the documents show. A specific example is a discussion of something called "TUNDRA".

It is difficult to figure out why TUNDRA is even mentioned in the story. It's cited to support some conclusion, but I'm not sure what that conclusion is. It appears the authors wanted to discuss the "conflict of interest" problem the NSA has, but had nothing new to support this, so they just inserted something at random. They are exploiting the fact that the average reader can't understand what's going on. In this post, I'm going to describe the context around this.

TUNDRA was an undergraduate student project, as the original document makes clear, not some super-secret government cryptography program. The purpose of such programs is to fund students and find recruits, not to create major new advances in cryptography.

It's given a code-name "TUNDRA" and the paragraph in the document is labeled "TOP SECRET". The public has the misconception that this means something important is going on. The opposite is true: the NSA puts codenames on nearly everything. Among the reasons: putting codenames even on trivial things prevents adversaries from knowing which codenames are important. The NSA routinely overclassifies things. That's why so many FOIA requests come back with the "TOP SECRET" items crossed out -- you classify everything as highly as you can first, then relax the restriction later. Thus, unimportant student projects get classified codenames.

The Spiegel article correctly says that the "agency is actively looking for ways to break the very standard it recommends", and it's obvious from context that the Spiegel is implying this is a bad thing. But it's a good thing, part of the effort to improve encryption. You secure things by trying to break them. That's why this student project was funded by the IAD side of the NSA -- the side dedicated to improving cryptography. Most of us in the cybersecurity industry are trying to break things -- we only trust things that we've tried to break but couldn't.

The Spiegel document talks about AES, but it's not AES being attacked. Instead, it's all block ciphers in "electronic codebook" mode that are being attacked. The NSA, like all cryptographers, recommends that you don't use the basic "electronic codebook" mode, because it reveals information about the encrypted data, as the well-known "ECB penguin" shows. In that famous image, when you encrypt a bitmap of a penguin, you can still see that it's a penguin despite the encryption. Finding appropriate modes other than "electronic codebook" is an important area of research. [***]
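You don't need the penguin bitmap to see the leak. Here's a short sketch (my own illustration, using the third-party Python "cryptography" package, not anything from the NSA documents): ECB encrypts identical plaintext blocks into identical ciphertext blocks, which is exactly the pattern the penguin image makes visible.

    # Sketch: why ECB mode leaks patterns. Identical 16-byte plaintext blocks
    # encrypt to identical ciphertext blocks, so structure survives encryption.
    # Requires the third-party "cryptography" package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    plaintext = b"SAME 16 BYTE BLK" * 4      # four identical blocks, like a flat region of an image

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    blocks = [ciphertext[i:i+16] for i in range(0, len(ciphertext), 16)]
    print(len(set(blocks)), "distinct ciphertext block(s) out of", len(blocks))   # prints: 1 out of 4

Run the same plaintext through a randomized mode like CBC or CTR and all four ciphertext blocks come out different, which is why the standard advice is to avoid raw ECB.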

The NSA already has ways of attacking ECB mode, as the penguin image demonstrates. I point this out because if the NSA already has a "handful of ways" of doing something, adding one more really isn't a major new development. Thus, even if you don't understand cryptography, it should be obvious that the inclusion of TUNDRA in this story is pretty stupid.

Journalism is supposed to be different from activism. Journalists are supposed to be accurate and fair, to communicate rather than convince. The activist has the opposite goal, to convince the reader, even if that means exploiting misinformation. We see that in this Der Spiegel article, where the TUNDRA item is distorted in order to convince the reader that the NSA is doing something evil.



Update: [***] There has been some discussion on Twitter about the ECB penguin above. Where the document says "electronic codebook", it may not necessarily be referring to ECB mode (even though ECB stands for "electronic codebook"). "Codebook" is also just another name for "block cipher", the more common/modern name for encryption algorithms like AES.

Regardless, the principle still holds: it's not AES that TUNDRA attacks, but the underlying "codebook" property, whatever that refers to, whether it's "block ciphers" in general or "block ciphers in ECB mode". Either way, since it's an undergraduate project designed for recruitment, it's probably something basic (like the ECB penguin) rather than a major advancement in cryptography.