Monday, November 24, 2014

That wraps it up for end-to-end

The defining feature of the Internet back in 1980 was "end-to-end", the idea that all the intelligence was on the "ends" of the network, and not in the middle. This feature is becoming increasingly obsolete.

This was a radical design at the time. Big corporations and big government still believed in the opposite model, with all the intelligence in big "mainframe" computers at the core of the network. Users would just interact with "dumb terminals" on the ends.

The reason the Internet was radical was the way it gave power to the users. Take video phones, for example. AT&T had been promising them since the 1960s, as the short segment in "2001: A Space Odyssey" showed. However, getting that feature to work meant replacing all the equipment inside the telephone network. Telephone switches would need to know the difference between a normal phone call and a video call. Moreover, there could be only one standard, worldwide, so that calling Japan or Europe would work with their video telephone systems. Users were powerless to develop video calling on their own -- they would have to wait for the big telecom monopolies to develop it, however long it took.

That changed with the Internet. The Internet carries packets without knowing their content. Video calling with Facetime or Skype or LINE is just an app, from your iPhone or Android or PC. People keep imagining new applications for the Internet every day, and implement them, without having to change anything in core Internet routing hardware.

I've used Facetime, Skype, and LINE to talk to people in Japan. That's because there is no real international standard for video calling. Each person I call requires me to install whichever app they are using. Traditional thinking is that government ought to create standards, so that every app would be compatible with every other app, so that I could Skype from Windows to somebody's iPhone using Facetime. This tradition is nonsense. If we waited for government standards, it'd take forever. Teenagers who heavily use video today would be grown up with kids of their own before government got around to creating the right standard. Lack of standards means freedom to innovate.

Such freedom was almost not the case. You may have heard of something called the "OSI 7 Layer Model". Everything you know about that model is wrong. It was an attempt by Big Corporations and Big Government to enforce their model of core-centric networking. It demanded such things as a "connection oriented network protocol", meaning smart routers rather than the dumb ones we have today. It demanded that applications be standardized, so that there would be only one video conferencing standard, for example. Governments in the US, Japan, and Europe mandated that the computers they bought support OSI-conformant protocols. (The Internet's TCP/IP protocols do not conform to the OSI model.) Such rules were on the books into the late 1990s dot-com era, when many in government still believed that the TCP/IP Internet was just a brief experiment on the way to a Glorious Government OSI Internetwork.

The Internet did have standards, of course, but they were developed in the opposite manner. Individuals innovated first, on the ends of the network, developing apps. Only when such apps became popular did they finally get documented as a "standard". In other words, Internet standards were more de facto than de jure. People innovated first, on their own ends of the network, and the infrastructure and standards caught up later.

But here's the thing: the Internet ideal of end-to-end isn't perfect, either. There are reasons why not all innovation happens on the ends.

Take your home network as an example. The way your home likely works is that you have a single home router with cable/fiber/DSL on one side talking to the Internet, and WiFi on the other side talking to the devices in your home. Attached to your router you have a desktop computer, a couple notebooks, an iPad, your phones, an Xbox/Playstation, and your TV.

In the true end-to-end model, all these devices would be on the Internet directly -- such that they could be "pinged" from the Internet. In today's reality, though, that's not the way things work. Your home router is a firewall. It blocks incoming connections, so that devices in your home can connect outwards, but nothing on the Internet can connect inwards. This fundamentally breaks the ideal of end-to-end, as a smart device sits in the network controlling access to the ends.

This is done for two reasons. The first is security, so that hackers can't hack the devices in your home. Blocking inbound traffic blocks 99% of hacker attacks against devices.

The second reason for smart home routers is the well-known limitation on Internet addresses: there are only 4 billion of them. However, there are more than 4 billion devices connected to the Internet. To fix this, your home router does address translation. Your router has only a single public Internet address. All the devices in your home have private addresses that wouldn't work on the Internet. As packets flow in/out of your home, your router transparently changes the private addresses in the packets into the single public address.
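The translation step can be sketched as a lookup table. Here's a minimal Python sketch with made-up addresses -- an illustration of the idea, not how any real router is implemented:

```python
# Minimal sketch of NAT (network address translation), using made-up
# addresses. The router maps each (private address, port) pair to a
# unique port on its single public address, and reverses the mapping
# for replies coming back in.

PUBLIC_ADDR = "203.0.113.7"   # the one public address your ISP assigned

class NatTable:
    def __init__(self):
        self.next_port = 40000
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (PUBLIC_ADDR, self.out[key])

    def translate_inbound(self, public_port):
        # Replies to an existing mapping get forwarded; anything else
        # is dropped -- which is why NAT "fails closed" by default.
        return self.back.get(public_port)

nat = NatTable()
print(nat.translate_outbound("192.168.1.10", 51000))  # ('203.0.113.7', 40000)
print(nat.translate_inbound(40000))                   # ('192.168.1.10', 51000)
print(nat.translate_inbound(40001))                   # None -- unsolicited, dropped
```

Note how the table only ever grows from outbound traffic: an inbound packet that matches no existing mapping has nowhere to go.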

Thus, when you google "what's my IP address", you'll see a different address than your local machine reports. Your machine will have a private address like 10.x.x.x or 192.168.x.x, but servers on the Internet won't see that -- they'll see the public address you've been assigned by your ISP.
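Those private ranges are reserved by RFC 1918, and you can check them programmatically; for instance, Python's standard ipaddress module knows which ranges are private:

```python
# Check which addresses fall in the reserved private ranges (RFC 1918)
# that home routers hand out, using Python's standard library.
import ipaddress

for addr in ["10.0.0.5", "192.168.1.10", "172.16.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
# 10.0.0.5 private
# 192.168.1.10 private
# 172.16.0.1 private
# 8.8.8.8 public
```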

According to Gartner, nearly a billion smartphones were sold in 2013. These are all on the Internet. That represents a quarter of the Internet address space used up in only a single year. Yet, virtually none of them are assigned real Internet addresses. Almost all of them are behind address translators -- not the small devices like you have in your home, but massive translators that can handle millions of simultaneous devices.

The consequence is this: there are more devices with private addresses, that must go through translators, than there are devices with public addresses. In other words, less than 50% of the Internet is end-to-end.

The "address space exhaustion" of traditional Internet addresses inspired an update to the protocol to use larger addresses, known as IPv6. It uses 128-bit addresses, or 4 billion times 4 billion times 4 billion times 4 billion. This is enough to assign a unique address to all the grains of sand on all the beaches on Earth. It's enough to restore end-to-end access to every device on the Internet, times billions and billions.
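The arithmetic checks out: multiplying the 32-bit IPv4 space by itself four times gives exactly the 128-bit IPv6 space.

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits. So the IPv6
# space is the IPv4 space ("4 billion") multiplied by itself four times.
ipv4_space = 2**32           # 4,294,967,296 addresses
ipv6_space = 2**128

assert ipv6_space == ipv4_space ** 4
print(ipv6_space)  # 340282366920938463463374607431768211456
```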

My one conversation with Vint Cerf (one of the key Internet creators) was over this address space issue. Back in 1992, every Internet engineer knew for certain that the Internet would run out of addresses by around the year 2000. Every engineer knew this would cause the Internet to collapse. At the IETF meeting, I tried to argue otherwise. I used the Simon-Ehrlich Wager as an analogy. Namely, the 4 billion addresses weren't a fixed resource, because we would become increasingly efficient at using them. For example, "dynamic" addresses would use space more efficiently, and translation would reuse addresses.

Cerf's response was the tautology "but that would break the end-to-end principle".

Well, yes, but no such principle should be a straitjacket. The end-to-end principle is already broken by hackers. Even with IPv6, when all your home devices have a public rather than private address on the Internet, you still want a firewall breaking the end-to-end principle by blocking inbound connections. Once you've decided to firewall a network, it no longer matters whether it's using IPv6 or address translation of private addresses. Indeed, address translation is better for firewalling, as it defaults to "fail closed". That means if a failure occurs, all communication is blocked. With IPv6, firewalls become "fail open", where failures allow communication to continue.

Firewalls are only the start in breaking end-to-end. It's the "cloud" where we see a radical reversion back to old principles.

Your phone is no longer a true "end" of the network. Sure, your phone has a powerful processor that's faster than supercomputers of the last decade, but that power is used primarily for display, not for computation. Your data and computation instead live in the cloud. Indeed, when you lose or destroy your phone, you simply buy a new one and "restore" it from the cloud.

Thus, we are right back to the old world of smart core network with "mainframes", and "dumb terminals" on the ends. That your phone has supercomputer power doesn't matter -- it still does just what it's told by the cloud.

But the last nail in the coffin to the "end-to-end" principle is the idea of "net neutrality". While many claim it's a technical concept, it's just a meaningless political slogan. Congestion is an inherent problem of the Internet, and no matter how objectively you try to solve it, it'll end up adversely affecting somebody -- somebody who will then lobby politicians to rule in their favor. The Comcast-NetFlix issue is a good example where the true technical details are at odds with the way this congestion issue has been politicized. Things like "fast-lanes" are everywhere, from content-delivery-networks to channelized cable/fiber. Rhetoric creates political distinctions among various "fast-lanes" when there are no technical distinctions.

This politicization of the Internet ends the personal control over the Internet that was promised by end-to-end. Instead of being able to act first and ask for forgiveness later, you must first wait for permission from Big Government. Instead of being able to create your own services, you must wait for Big Corporations (the only ones that can afford lawyers to lobby government) to deliver those services to you.


We aren't going to regress completely to the days of mainframes, of course, but we've given up much of the territory of individualistic computing. In some ways, this is a good thing. I don't want to manage my own data, losing it when a hard drive crashes because I forgot to back it up. In other ways, it's a bad thing. The more we regulate the Internet to ensure good things, the more we stop innovations that don't fit within our preconceived notions. Worse, the more it's regulated, the more companies have to invest in lobbying the government for favorable regulation, rather than developing new technology.

Monday, November 10, 2014

Don't mistake masturbation for insight [NOT SAFE FOR WORK]

Stroking prejudices isn't insight. I mention this because people keep sending me this Oatmeal cartoon that does nothing but furiously stroke its supporters until they ejaculate all over the screen.

The comic claims NetNeutrality is a bipartisan issue. By bipartisan it means that Democrats and the Green Party overwhelmingly support it. The comic is certainly not referring to Republicans, who overwhelmingly oppose NetNeutrality, as any googling of "republican net neutrality" would demonstrate. I suspect the problem here is that Oatmeal readers are in a filter-bubble (a technical term for "sitting in a circle jerking each other off") and therefore don't seriously believe Republicans exist.

The comic seriously says this: support for NetNeutrality is bipartisan, but opposition is partisan. I suspect they like words like "shit smear" because they are so accustomed to having their heads up their own asses.

The Oatmeal claims NetNeutrality won't mean the feds can dictate how much your ISP charges. I suspect that's because the comic's fingering of his own ass distracts him from reading. Obama's proposal today is to reclassify the Internet as a common carrier under Title II of the Communications Act. Luckily, we have something called the "Internet" where we can immediately click on a link and read the fucking act, which starts with "Service and Charges", declaring that the government can indeed outlaw charges it deems "unreasonable". Obama acknowledges this in his speech, saying that while Title II puts the Internet in the hands of the FCC so that they can dictate prices, they should "forbear" from doing so.

The Oatmeal shows a graph as "proof" that Comcast was "throttling" Netflix traffic. But all the data comes from Netflix -- a highly biased source. Moreover, the graph doesn't show throttling -- it shows how Netflix's rapid growth has overloaded interconnection points. On relevant links, the amount of Netflix traffic exceeds all other traffic combined. Some companies are willing to pay to upgrade the links and let Netflix free-ride. Others refuse to put up with nonsense, and want Netflix to pay for its own traffic. Seriously, not even Netflix claims Comcast is "throttling" its content. I suspect the Oatmeal picked that word out of thin air because its reference to auto-asphyxiation gets its readers off.

The premise of the Oatmeal cartoon is that Ted Cruz is stupid, unlike its readers who are good looking, special, and just cleverer than everybody else. It pretends to use simple language to explain an obvious issue so that even a mere politician can understand. Only, it gets things fundamentally wrong. NetNeutrality is just a political slogan. Slogans don't work, laws do -- and here's the thing: NetNeutrality isn't currently the law. There is nothing now, nor has there ever been, anything stopping a company like Comcast from doing the evil scenarios outlined in the comic. And indeed, some companies do block things like that. I suspect that if the writer of the Oatmeal comic stopped admiring his cock in the mirror long enough to actually read something, he'd know more about whether NetNeutrality rules were actually in force, or how Title II works.

Ted Cruz's tweet isn't bad. Obamacare is an apt (albeit exaggerated) analogy for a change that heaps tons of regulation on an industry. However, it is the same sort of mutual masturbation. If you hate Obamacare, you'll hate NetNeutrality regulation. If you love Obamacare, you'll love NetNeutrality. Thus, Cruz's tweet is there just to stroke his supporters, rather than change minds.

Please please please, in the future when you think you have something clever to say, don't link me an Oatmeal cartoon. Neither it nor you are as smart as you think. Even Ted Cruz is smarter.

This Vox NetNeutrality article is wrong

There is no reasoned debate over NetNeutrality because the press is so biased. An example is this article by Timothy B. Lee at Vox "explaining" NetNeutrality. It doesn't explain, it advocates.

1. Fast Lanes

Fast-lanes have been an integral part of the Internet since the beginning. Whenever somebody was unhappy with their speeds, they paid money to fix the problem. Most importantly, Facebook pays for fast-lanes, contrary to the example provided.

One prominent example of fast-lanes is "channels" in the local ISP network to avoid congestion. This allows ISPs to provide VoIP and streaming video over their own private TCP/IP network that won't be impacted by the congestion that everything else experiences. That's why during prime time (7pm to 10pm), your NetFlix streams are low-def (to reduce bandwidth), while your cable TV video-on-demand is hi-def.

Historically, these channels were all "MPEG-TS", transport streams based on the MPEG video standard. Even your Internet packets would be contained inside the MPEG streams on channels.

Today, the situation is usually reversed. New fiber-optic services have TCP/IP networks everywhere, putting MPEG streams on top of TCP/IP. They just separate the channels into their private TCP/IP network that doesn't suffer congestion (for voice and video-on-demand), and the public Internet access that does. Their services don't suffer congestion; other people's services do.

The more important fast-lanes are known as "content delivery networks" or "CDNs". These companies pay ISPs to co-locate servers on their network, putting servers in every major city. Companies like Facebook then pay the CDNs to host their data.

If you monitor your traffic, you'll see that the vast majority goes to CDNs located in your city. When you access different, often competing companies like Facebook and Apple, your traffic may in fact go to the same IP address of the CDN server.

Smaller companies that cannot afford CDNs must host their content in just a couple locations. Since these locations are thousands of miles from most of their customers, access is slower than CDN-hosted content like Facebook's. Pay-for-play, with preferred and faster access, has been an integral part of the Internet since the very beginning.

This demonstrates that the Vox example of Facebook is a complete lie. Their worst-case scenario already exists, and has existed since before the dot-com era even started, and has enabled competition and innovation rather than hindering it.

2. Innovation

Vox claims: "Advocates say the neutrality of the internet is a big reason there has been so much online innovation over the last two decades".

No, it's opponents who claim the lack of government regulation is the reason there has been so much online innovation in the last decades.

NetNeutrality means sweeping government regulation that forces companies to ask permission before innovating. NetNeutrality means spending money lobbying the government for special rules, surviving or failing based on the success of paying off politicians rather than on their own merits.

Take GoGo Inflight broadband Internet service on airplanes. They block NetFlix in favor of their own video streaming service. This is exactly the sort of thing that NetNeutrality regulations are supposed to block. However, it's technically necessary. A single person streaming video from NetFlix would overload the connection for everyone else. To satisfy video customers, GoGo puts servers on the plane for its streaming service -- allowing streaming without using the Internet connection to the ground.

If NetNeutrality became law, such things would be banned. But of course, since that would kill Internet service on airplanes, the FCC would immediately create rules to allow this. But then everyone would start lobbying the FCC for their own exceptions. In the end, you'd have the same thing with every other highly regulated industry, where companies with the most lobbying dollars win.

Innovation happens because companies innovate first and ask for permission (or forgiveness) later. A few years ago, Comcast throttled BitTorrent traffic during prime time. NetNeutrality proponents think this is bad, and use it as an example of why we need regulation. But no matter how bad it is, it's a healthy sign of innovation. Not all innovations are good, sometimes companies will try things, realize they are bad, then stop doing them. Under NetNeutrality regulations, nothing bad will happen ever again, because government regulators won't allow it. But that also means good innovations won't happen either -- companies won't be able to freely try them out without regulators putting a stop to it.

Right now, you can start a company like Facebook without spending any money lobbying the government. In the NetNeutrality future, that will no longer be possible. A significant amount of investor money will go toward lobbying the government for favorable regulation, to ask permission.

3. What's Taking So Long

Vox imagines that NetNeutrality is such a good idea that the only thing stopping it is technicalities.

The opposite is true. The thing stopping NetNeutrality is that it's a horrible idea that kills innovation. It's not a technical idea, but a political one. It's pure left-wing politics that demands the government run everything. The thing stopping it is right-wing politics that wants the free market to run things.

The refusal of Vox to recognize that this is a left-wing vs. right-wing debate demonstrates their overwhelming political bias on this issue.

4. FCC Bypassing Congress

The Internet is new and different. If regulating it like a utility is a good idea, then it's Congress who should pass a law to do this.

What Obama wants to do is bypass Congress and seize control of the Internet himself.

5. Opponent's arguments

Vox gets this partly right, but fundamentally wrong.

The fundamental argument by opponents is that nothing bad is happening now. None of the evil scenarios of what might happen are actually happening now.

Sure, sometimes companies do bad things, but the market immediately corrects. That's the consequence of permission-free innovation: innovate first, and ask for permission (or forgiveness) later. That sometimes companies have to ask for forgiveness is a good sign.

Let's wait until Comcast actually permanently blocks content, or charges NetFlix more than other CDNs, or any of the other hypothetical evils, then let's start talking about the government taking control.

6. Red Tape

Strangling with red-tape isn't a binary proposition.

What red-tape means is that network access becomes politicized, as only those with the right political connections get to act. What red-tape means is that only huge corporations can afford the cost. If you like a world dominated by big, connected corporations, then you want NetNeutrality regulations.

While it won't strangle innovation, it'll drastically slow it down.

7. YouTube

Vox claims that startups like YouTube would have difficulty getting off the ground with NetNeutrality regulation. The opposite is true: companies like YouTube would no longer be able to get off the ground without lobbying the government for permission.

8. Level Playing Field

Vox's description of the NetFlix-Comcast situation is completely biased and wrong, taking NetFlix's and the left's description at face value. It's not true.

Descriptions of the NetFlix-Comcast issue completely ignore the technical details, but the technical details matter. For one thing, it doesn't stream "across the Internet". The long-distance links between cities cannot support that level of traffic. Instead, NetFlix puts servers in every major city to stream from. These servers are often co-located in the same building as Comcast's major peering points.

In other words, what we are often talking about is how to get video streaming from NetFlix servers from one end of a building to another.

During prime time (7pm to 10pm), NetFlix's bandwidth requirements are many times greater than all non-video traffic put together. That essentially means that companies like Comcast have to specially engineer their networks just to handle NetFlix. So far, NetFlix has been exploiting loopholes in "peering agreements" designed for non-video traffic in order to get a free ride.

Re-architecting the Internet to make NetFlix work requires a lot of money. Right now, those costs are borne by all Comcast subscribers -- even those who don't watch NetFlix. The 90% of customers with low-bandwidth needs are subsidizing the 10% who watch NetFlix at prime time. We like to think of Comcast as having monopolistic power, but it doesn't. The truth is that Comcast has very little pricing power. It can't meter traffic, charging those who abuse the network during prime time to account for their costs. Thus, instead of charging those NetFlix abusers directly, it just passes its costs to NetFlix.

Converting the Internet into a public-utility wouldn't change this. It simply means that instead of fighting in the market place, the Comcast-NetFlix battle would be decided by regulators. And, the result of the decision would be whichever company did the best job lobbying the FCC and paying off politicians -- which would probably be Comcast.

Tuesday, November 04, 2014

Voters are jerks

Out and about today, jerks are proudly displaying "I Voted!" stickers. My twitter feed is likewise full of people proudly declaring they voted. They only serve to perpetuate the problem.

Most voted for incumbents, while spending the rest of the year bitching about how bad the incumbents are.

Most base their voting on vapid political rhetoric, rather than understanding the issues. Their political analysis comes from late-night comedians rather than serious sources. Those like Vox or the Economist do a good job with analysis, but of course, few read them because that would require thinking. It's much easier watching Jon Stewart or Stephen Colbert and laughing about how stupid other people are.

Though, understanding the issues is really just a smokescreen. What people really vote for is to take money from other groups and give it to themselves. They mask it in issues like national defense or the environment, but it's really just a money grab.

People proudly vote in this election, where few contests are competitive. These same people ignored the primaries, where their votes could have made a difference.

People waste their vote on major parties. Frankly, we live in a one Party state with two factions, where the factions share power and collude to exclude outsiders. People proudly claim to support democracy while voting for the Parties that subvert it.

You might proudly display an "I Voted" sticker today, but I think you are just a douchebag.

Saturday, November 01, 2014

Adding protocols to masscan

The unique feature of Masscan is that it has its own TCP/IP stack, bypassing the kernel's stack. This has interesting benefits, such as being able to maintain a TCP connection with all 30 million HTTPS servers on the Internet simultaneously. However, it means that (at the moment) it's difficult to write your own protocols. At some point I'm going to add LUA scripting to the system and this technical detail won't matter, but in the meanwhile, if you want to write your own protocols, you'll have to know the tricks.


The issue Masscan solves is scalability, such as maintaining 30 million concurrent TCP connections. In a standard Linux environment, the system requires about 40 kilobytes per TCP connection, meaning a system would need 1.2 terabytes of RAM to hold all the connections. This is beyond what you can get for standard servers.

Masscan reduces this. At the moment, it uses only 442 bytes per TCP connection -- including the memory for difficult protocols like SSL. That's less than 16 gigabytes of RAM for 30 million concurrent connections.
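A quick back-of-envelope check of those numbers:

```python
# Sanity-check the memory figures above for 30 million connections.
connections = 30_000_000

kernel_stack = 40 * 1024 * connections   # ~40 KB per connection in Linux
print(kernel_stack / 10**12)             # ~1.23 terabytes

masscan_stack = 442 * connections        # 442 bytes per connection
print(masscan_stack / 2**30)             # ~12.3 gigabytes, under 16 GB
```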

This is a little excessive, because connections are quick. Even a fast scan of the Internet takes long enough that at any point in time, fewer than 100,000 connections are needed. Therefore, there is no technical reason why masscan should be so paranoid about reducing memory consumption. I do it this way to try out other things.

TCP reassembly

Masscan’s stack does no TCP reassembly. It does handle overlaps and ordering, but it doesn’t reassemble fragments.

Protocol parsers are written as “state-machines”. This means they don’t need reassembly. The state-machine pauses when it runs off the end of one packet and resumes where it left off at the start of the next packet.

The lack of reassembly conserves a lot of memory in the system, and increases speed. Instead of buffering incoming packets and waiting for the application to read the buffer, Masscan forces the application to parse each packet immediately as it arrives, because the packet will be discarded immediately afterward.
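To illustrate the idea (a toy sketch in Python -- masscan itself is C, and this is not its actual code): the parser's only memory is a small state structure, so it can stop at a packet boundary and pick up again in the next packet, with no reassembly buffer.

```python
# Toy byte-at-a-time parser in the masscan style. The only per-connection
# memory is this small State object; incoming packets are parsed and
# discarded, never buffered for reassembly.

BANNER, DONE = range(2)

class State:
    def __init__(self):
        self.state = BANNER
        self.banner = bytearray()

def parse(s, packet):
    for byte in packet:
        if s.state == BANNER:
            if byte == ord("\n"):
                s.state = DONE       # newline ends the banner line
            else:
                s.banner.append(byte)
        # once DONE, remaining bytes are ignored

# The server's hello arrives split across two TCP segments:
s = State()
parse(s, b"220 ftp.example")      # first packet ends mid-line...
parse(s, b".com ready\r\n")       # ...parser resumes where it left off
print(bytes(s.banner))            # b'220 ftp.example.com ready\r'
```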

State-machine parsers

All parsers are "state-machines" in theory. The way masscan does parsers is to make this explicit. The parser reads a stream of bytes from the input and parses them one-by-one. Each byte causes a transition from one state to the next.

A model of this is the SSL parser. Put a breakpoint at the start of 'ssl_parse_record()' and run masscan under a debugger with the "--selftest" parameter. This will exercise the SSL protocol by sending a dummy packet to it. Simultaneously, look at Wireshark and how it decodes the initial SSL packet. In the debugger, you'll see how masscan does this a byte-at-a-time in a state-machine fashion, eventually decoding everything Wireshark does, but in a very strange manner.


After the TCP connection has been established, the next step is the "hellos" from either side of the connection. Sometimes the server initiates this, as in the case of FTP, SMTP, SSH, and VNC. Sometimes the client initiates this, as in the case of HTTP and SSL.

Masscan waits three seconds before sending client-hellos, in case the server sends a hello first. That way, when scanning for SSL or HTTP, it can detect that the port is actually being used for SSH or VNC. In other words, when you scan for HTTP, you’ll get some SSH and VNC records in response.

The file "proto-banner1.c" currently contains the list of patterns in server-hellos, and the logic used to decide which protocol parser should handle a TCP connection.

The file “proto-ftp.c” is a good example of a server-hello protocol. If you just search everywhere for “FTP” in the source, you’ll see how to write a similar protocol for yourself. Yes, it’s an ugly hack that needs to be cleaned up.

For a client-hello protocol, use HTTP as your example.


For simple “banner” checking, all you need is to either send or receive the hello. More complicated tasks require additional transmits, with back-and-forth exchanges with the server.

These exchanges are stateless. In other words, you write your TCP parser for the data coming back from the server without regard to what you think you’ve transmitted. All the state is on the server side.

The best example of this is the “proto-vnc.c” parser. It must do several back-and-forth exchanges with the server. You’ll see that at several points it must call the “tcp_transmit()” function when parsing the response in order to send a request to the server.
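The shape of such a parser can be sketched like this -- a hypothetical Python simplification of a VNC-style exchange, not the real C code in "proto-vnc.c". The point is that the parser reacts only to what the server sends, transmitting its next request from inside the parse routine, with all session state in a small per-connection structure:

```python
# Sketch of a stateless back-and-forth exchange in the masscan style
# (hypothetical names and simplified protocol). The parser is driven
# purely by server data; it transmits mid-parse via a callback, the
# analogue of masscan's tcp_transmit().

WAIT_VERSION, WAIT_AUTH, DONE = range(3)

def parse(state, data, transmit):
    if state["state"] == WAIT_VERSION and data.startswith(b"RFB "):
        transmit(b"RFB 003.008\n")    # reply with a protocol version
        state["state"] = WAIT_AUTH
    elif state["state"] == WAIT_AUTH:
        state["auth"] = data[0]       # record the offered auth type
        state["state"] = DONE

sent = []
state = {"state": WAIT_VERSION}
parse(state, b"RFB 003.008\n", sent.append)  # server hello triggers our reply
parse(state, b"\x01", sent.append)           # server's auth-type byte
print(sent)           # [b'RFB 003.008\n']
print(state["auth"])  # 1
```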

Long term direction

The reason the model sucks right now is that I'm working on adding LUA scripting in the long run.

Of all the scripting languages, it looks like LUA will have the least overhead per TCP connection.

Of all the scripting languages, it looks like LUA has the easiest support for “coroutines”. That means when a script calls “read()” to read bytes from the network, I can do a user-mode context switch. Thus, while LUA parsers appear synchronous, they are in fact asynchronous.
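Python generators can illustrate the same trick: the script body looks like a blocking read loop, but each read actually suspends the script until the event loop delivers the next packet.

```python
# Sketch of the coroutine idea using a Python generator (Lua coroutines
# work similarly): the script looks synchronous, but each "read" is a
# yield back to the event loop, which resumes the script per packet.

def banner_script():
    banner = b""
    while not banner.endswith(b"\n"):
        banner += yield          # "read()" -- suspends until data arrives
    print(banner)

script = banner_script()
next(script)                       # start it, running to the first read
script.send(b"SSH-2.0-")           # event loop delivers packet 1
try:
    script.send(b"OpenSSH_6.6\n")  # packet 2 completes the line
except StopIteration:
    pass
# prints b'SSH-2.0-OpenSSH_6.6\n'
```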

But of course, the real reason is to get nmap NSE compatibility.


This is a short guide for hacking your own protocol interactions into masscan. I seriously need to get working on the LUA integration, but in the meanwhile, this is how you'd add stuff.

The best way is to contact me, describe your problem, then have me integrate a prototype for your protocol that you can then fill out at your leisure.

Friday, October 31, 2014

Appropriate Halloween costumes

There has been some debate over Halloween costumes, whether ISIS terrorist garb or hazmat suits are appropriate. Of course they are. Culture responds to current events; everything is fair game.

An example of this is Afghan "war rugs", as pictured below. The one on the left is in response to the old Soviet invasion, and the one on the right is in response to the post-9/11 invasion by the U.S. Rug weavers incorporated things from their environment into the rugs. In particular, the design of the rug on the right comes from leaflets we carpet-bombed the country with before invading.

Indeed, the entire "Halloween" theme is about death. It's just that it got incorporated into our culture long before the Internet enabled wimpy nitpickers to debate what was, or wasn't, appropriate. I'm not sure the holiday would've survived in the modern politically correct climate.

Tuesday, October 28, 2014

No evidence feds hacked Attkisson

Former CBS journalist Sharyl Attkisson is coming out with a book claiming the government hacked her computer in order to suppress reporting on Benghazi. None of her "evidence" is credible. Instead, it's bizarre technobabble. Maybe her book is better, but those with advance copies quoting excerpts make it sound like the worst "ninjas are after me" conspiracy theory.

Your electronics are not possessed by demons

Technology doesn't work by magic. Each symptom has a specific cause.

Attkisson says "My television is misbehaving. It spontaneously jitters, mutes, and freeze-frames". This is not a symptom of hackers. Instead, it's a common consumer complaint caused by the fact that cables leading to homes (and inside the home) are often bad. My TV behaves like this on certain channels.

She says "I call home from my mobile phone and it rings on my end, but not at the house", implying that her phone call is being redirected elsewhere. This is a common problem with VoIP technologies. Old analog phones echoed back the ring signal, so the other side had to actually ring for you to hear it. New VoIP technologies can't do that. The ringing is therefore simulated and has nothing to do with whether it's ringing on the other end. This is a common consumer complaint with VoIP systems, and is not a symptom of hacking.

She says that her alarm triggers at odd hours in the night. Alarms work over phone lines and will trigger when power is lost on the lines (such as when an intruder cuts them). She implies that the alarm system goes over the VoIP system on the FiOS box. The FiOS box losing power or rebooting in the middle of the night can cause this. This is a symptom of hardware troubles on the FiOS box, or Verizon maintenance updating the box, not hackers.

She says that her computer made odd "Reeeeee" noises at 3:14am. That's common. For one thing, when computers crash, they'll make this sound. I woke two nights ago to my computer doing this, because the WiMax driver crashed, causing the CPU to peg at 100%, causing the computer to overheat and for the fan to whir at max speed. Other causes could be the nightly Timemachine backup system. This is a common symptom of bugs in the system, but not a symptom of hackers.

It's not that hackers can't cause these problems, it's that they usually don't. Even if hackers have thoroughly infested your electronics, these symptoms are still more likely to be caused by normal failure than by the hackers themselves. Moreover, even if a hacker caused any one of these symptoms, it's insane to think they caused them all.

Hacking is not sophisticated

There's really no such thing as a "sophisticated hack". That's a fictional trope, used by people who don't understand hacking. It's like how people who don't know crypto use phrases like "military grade encryption" -- no such thing exists, the military's encryption is usually worse than what you have on your laptop or iPhone.

Hacking is rarely sophisticated because the simplest techniques work. Once I get a virus onto your machine, even the least sophisticated one, I have full control. I can view/delete all your files, view the contents of your screen, control your mouse/keyboard, turn on your camera/microphone, and so on. Also, it's trivially easy to evade anti-virus protection. There's no need for me to do anything particularly sophisticated.

We experts are jaded and unimpressed. Sure, we have experience with what's normal hacking, and might describe something as abnormal. But here's the thing: every hack I've seen has had something abnormal about it. Something strange that I've never seen before doesn't make a hack "sophisticated".

Attkisson quotes an "expert" using the pseudonym "Jerry Patel" saying that the hack is "far beyond the abilities of even the best nongovernment hackers". Government hackers are no better than nongovernment ones -- they are usually a lot worse. Hackers can earn a lot more working outside government. Government hackers spend most of their time on paperwork, whereas nongovernment hackers spend most of their time hacking. Government hacker skills atrophy, while nongovernment hackers get better and better.

That's not to say government hackers are crap. Some are willing to forgo the larger paycheck for a more stable job. Some are willing to put up with the nonsense in government in order to be able to tackle interesting (and secret) problems. There are indeed very good hackers in government. It's just that it's foolish to assume that they are inherently better than nongovernmental ones. Anybody who says so, like "Jerry Patel", is not an expert.

Contradictory evidence

Attkisson quotes one expert as saying intrusions of this caliber are "far beyond the abilities of even the best nongovernment hackers", while at the same time quoting another expert saying the "ISP address" is a smoking gun pointing to a government computer.

Both can't be true. Hiding one's IP address is the first step in any hack. You can't simultaneously believe that these are the most expert hackers ever for deleting log files, but that they made the rookie mistake of using their own IP address rather than anonymizing it through Tor or a VPN. It's almost always the other way around: everyone (except those like the Chinese who don't care) hides their IP address first, and some forget to delete the log files.

Attkisson quotes experts saying non-expert things. Patel's claims about logfiles and government hackers are false. Don Allison's claim about IP addresses being a smoking gun is false. It may be that the people she's quoting aren't experts, or that her ignorance causes her to misquote them.


Attkisson quotes an expert as identifying an "ISP address" of a government computer. That's not a term that has any meaning. He probably meant "IP address" and she's misquoting him.

Attkisson says "Suddenly data in my computer file begins wiping at hyperspeed before my very eyes. Deleted line by line in a split second". This doesn't even make sense. She claims to have videotaped it, but if this is actually a thing, it sounds more like something kids do to scare people, not what real "sophisticated" hackers do. Update: she has released the video; the behavior is identical to a stuck delete/backspace key, and not evidence of hackers.

So far, none of the quotes I've read from the book use any technical terminology that I, as an expert, feel comfortable with.

Lack of technical details

We don't need her quoting (often unnamed) experts to support her conclusion. Instead, she could just report the technical details.

For example, instead of quoting what an expert says about the government IP address, she could simply report the IP address. If it's "75.748.86.91", then we can judge for ourselves whether it's the address of a government computer. That's important because nobody I know believes that this would be a smoking gun -- maybe if we knew more technical details she could change our minds.
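The first step in "judging for ourselves" can be sketched in a few lines of Python (an illustration using the hypothetical address quoted above, not anything from Attkisson's actual evidence): before asking whose address it is, you'd check whether the string is even a valid IPv4 address.

```python
# A hedged sketch: sanity-checking a reported address with the standard
# library. Note that the hypothetical "75.748.86.91" above is not even
# a valid IPv4 address, since 748 exceeds the 0-255 range of an octet.
import ipaddress

def looks_like_ipv4(s):
    try:
        ipaddress.IPv4Address(s)
        return True
    except ipaddress.AddressValueError:
        return False

print(looks_like_ipv4("75.748.86.91"))  # False: 748 is out of range
print(looks_like_ipv4("75.148.86.91"))  # True: well-formed address
```

From there, a whois lookup on a valid address would show who it's registered to, which is the kind of checkable detail a book could simply print.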

Maybe that's in her book, along with pictures of the offending cable attached to the FiOS ONT, or the pictures of her screen deleting at "hyperspeed". So far, though, none of those with advance copies have released these details.

Lastly, she's muzzled the one computer security "expert" that she named in the story so he can't reveal any technical details, or even defend himself against charges that he's a quack.


Attkisson's book isn't out yet. The source material for this post is from those with advance copies quoting her [1][2]. But everything quoted so far is garbled technobabble from fiction rather than hard technical facts.

Disclosure: Some might believe this post is from political bias instead of technical expertise. The opposite is true. I'm a right-winger. I believe her accusations that CBS put a left-wing slant on the news. I believe the current administration is suppressing information about the Benghazi incident. I believe journalists with details about Benghazi have been both hacked and suppressed. It's just that in her case, her technical details sound like a paranoid conspiracy theory.

The deal with the FTDI driver scandal

The FTDI driver scandal is in the news, so I thought I'd write up some background, and show what a big deal this is.

Devices are connected to your computer using a serial port. Such devices include keyboards, mice, flash drives, printers, your iPhone, and so on. The original serial port standard called RS232 was created in 1962. It got faster over the years (75-bps to 115-kbps), but ultimately, the technology became obsolete.

In 1998, the RS232 standard was replaced by the new USB standard. Not only is USB faster (a million times so), it's more complex and smarter. The initials stand for "Universal Serial Bus", and it truly is universal. Not only does your laptop have USB ports on the outside for connecting to things like flash drives, it interconnects many of the things inside your computer, such as your keyboard, Bluetooth, SD card reader, and camera.

What FTDI sells is a chip that converts between the old RS232 and the new USB. It allows old devices to be connected to modern computers. Even new devices come with RS232 instead of USB simply because RS232 is simple and reliable.

The FTDI chip is a simple device that goes for about $2. While there are competitors (such as Silicon Labs), FTDI is by far the most popular vendor of RS232-to-USB converters. This $2 may sound cheap, but it's relatively expensive for small devices which cost less than $50. That $2 is often greater than the profit margin on the entire device. Therefore, device manufacturers have a strong incentive to find cheaper alternatives.

That's where clones come in. While FTDI sells them for $2, the raw chips cost only pennies to manufacture. Clone chips are similarly cheap to manufacture, and can be sold for a fraction of FTDI's price. On Alibaba, people are advertising "real" FTDI chips for between $0.10 and $1 apiece, with the FTDI logo on the outside and everything. They are, of course, counterfeits.

FTDI is understandably upset about this. They have to sell millions of chips to make back development and support costs, which they can't do with clones undercutting them.

FTDI's strategy was to release a driver update that intentionally disabled the clone chips. Hardware devices in a computer need software drivers to operate. Clone chips use the same drivers from FTDI. Therefore, FTDI put code in their software that attacked the clones, disabling them. The latest FTDI driver through Windows Update contains this exploit. If your computer automatically updates itself, it may have downloaded this new driver.

Every USB device comes with a vendor identifier (VID) and a product identifier (PID). It's these two numbers that tell operating systems like Windows or Linux which driver to load. What FTDI did was reprogram these numbers to zero. This, in effect, ruined the devices. From that point on, they can no longer be recognized, either by FTDI's driver or any other. In theory, somebody could write software that reprogrammed them back to the original settings, but for the moment, they are bricked (meaning, the hardware is no more useful than a brick).
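The VID/PID matching described above can be sketched as a simple lookup table (a toy illustration, not real operating-system code; the driver names and table structure are stand-ins, though the VID/PID pairs themselves are the well-known genuine ones):

```python
# A hedged sketch of how an OS picks a driver by VID/PID. Drivers
# register the (vendor, product) pairs they handle; the OS looks up a
# plugged-in device's pair. A device reprogrammed to (0x0000, 0x0000)
# matches nothing, so no driver ever loads: the "bricked" state.

DRIVER_TABLE = {
    (0x0403, 0x6001): "ftdi_sio",   # genuine FTDI FT232 VID/PID
    (0x10C4, 0xEA60): "cp210x",     # Silicon Labs CP210x, for contrast
}

def pick_driver(vid, pid):
    return DRIVER_TABLE.get((vid, pid))   # None means no driver loads

print(pick_driver(0x0403, 0x6001))  # ftdi_sio: device works
print(pick_driver(0x0000, 0x0000))  # None: the zeroed device is invisible
```

This is why the damage survives rolling back the driver: the lookup key itself has been destroyed on the device, so no table, old or new, will ever match it.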

This can have a devastating effect. One place that uses RS232 heavily is industrial control systems, the sort of thing that controls the power grid. This means installing the latest Windows update on one of these computers could mean blacking out an entire city.

FTDI's actions are unprecedented. Never before has a company released a driver that deliberately damages hardware. Bad driver updates are common. Counterfeits aren't perfect clones, therefore a new driver may fail to work properly, either intentionally or unintentionally. In such cases, users can simply go back to the older, working driver. But when FTDI changes the hardware, the old drivers won't work either. Because the VID/PIDs have been reprogrammed, the operating system can no longer figure out which drivers to load for the device.

Many people have gotten upset over this, but it's a complex debate.

One might think that the evil buyers of counterfeits are getting what they deserve. After all, satellite TV providers have been known to brick counterfeit access cards. But there is a difference. Buyers of satellite cards know they are breaking the rules, whereas buyers of devices containing counterfeit chips don't. Most don't know what chips are inside a device. Indeed, many times even the manufacturers don't know the chips are counterfeit.

On the other hand, ignorance of the law is no excuse. Customers buying devices with clone chips harm FTDI whether they know it or not. They have the responsibility to buy from reputable vendors. It's not FTDI's fault that the eventual end customer chose poorly.

It rankles that FTDI would charge $2 for a chip that costs maybe $0.02 to manufacture, but it costs money to develop such chips. It likewise costs money to maintain software drivers for over 20 operating systems, ranging from Windows to Linux to VxWorks. It can easily cost $2 million for all this work, while selling only one million chips. If companies like FTDI cannot get a return on their investment in R&D, then there will be a lot less R&D -- and that will hurt all of us.

One way to protect R&D investment is draconian intellectual-property laws. Right now, such laws are a cure that's worse than the disease. The alternative to bad laws is to encourage companies like FTDI to protect themselves. What FTDI did is bad, but at least nobody held a gun to anybody's head.

Counterfeits have another problem: they are dangerous. From nuclear control systems to airplane navigation systems to medical equipment, electronics are used in places where failure costs human lives. These systems are validated using the real chips. Replacing them with counterfeits can lead to human lives lost. However, counterfeit chips have been widespread for decades with no documented loss of life, so this danger is so far purely theoretical.

Separate from the counterfeit issue is the software update issue. In the last decade we've learned that software is dynamic. It must be updated on a regular basis. You can't deploy a device and expect it to run unmodified for years. That's because hackers regularly find flaws in software, even simple drivers, so they must be patched to prevent hacker intrusions. Many industries, such as medical devices and industrial control systems, are struggling with this concept, putting lives at risk due to hackers because they are unwilling to put lives at (lesser) risk when changing software. They need more trust in the software update process. However, this action by FTDI has threatened that trust.


As a typical Libertarian, I simultaneously appreciate the value of protecting R&D investments while hating the current draconian government regime of intellectual property protection. Therefore, I support FTDI's actions. On the other hand, this isn't full support -- there are problems with their actions.

Update: As Jose Nazario points out, when Microsoft used Windows Update to disable pirated copies of WinXP, pirates stopped updating to fix security flaws. This resulted in hackers breaking into desktops all over the Internet, endangering the rest of us. Trust in updates is a big thing.

Saturday, October 25, 2014

Review: The Peripheral, by William Gibson

After four years, William Gibson is finally coming out with a new book, “The Peripheral”. Time to preorder now.

There’s not much to review. If you like Gibson’s work, you’ll like this book. (Also, if you don't like Gibson's work, then you are wrong).

What I like about Gibson’s work is his investment in the supporting characters, which are often more interesting than the main characters. Each has a complex backstory, but more importantly, each has a story that unfolds during the book. It’s as if Gibson takes each minor character and writes a short story for them, where they grow and evolve, then combines them all into the main story. It’s a little confusing at the start, because it’s sometimes hard to identify which are the main characters, but it pays off in the end. (I experienced that in this book, among the numerous characters he introduced at the start, it was the least interesting ones that turned out to be the main characters -- it's not that they were boring, it's that they took longer to develop).

One departure from his normal work is that this book is maybe a little more autobiographical. Gibson grew up in the countryside in the South, which is part of the setting in this book. He describes it in such detail that the reader feels at home there every bit as much as in the urban dystopic fantasy.

Another departure from his normal work is that it’s as much about a dystopic present as it is about a dystopic future. Frankly, the modern world has caught up with Gibson – we are the future he was writing about 30 years ago. He can’t very well dream of a “cyberspace” when it’s all around us right now.

He deals with the dystopic present with nuance. For example, there is an analogue to the Westboro Baptist Church. These guys are a bunch of bastards that are easy to hate, so the average fiction writer would come up with a horrible disfiguring plague to wipe them out, to give us readers satisfaction. Gibson doesn’t.

The book has a scifi trick that you don’t figure out until about a quarter of the way through the book. Most reviews of the book give this up as a spoiler. I won’t here, because I really enjoyed trying to figure it out for myself as I read the book. I therefore recommend that you don’t read other reviews.