Monday, November 24, 2014

That wraps it up for end-to-end

The defining feature of the Internet back in 1980 was "end-to-end", the idea that all the intelligence was on the "ends" of the network, and not in the middle. This feature is becoming increasingly obsolete.

This was a radical design at the time. Big corporations and big government still believed in the opposite model, with all the intelligence in big "mainframe" computers at the core of the network. Users would just interact with "dumb terminals" on the ends.

The reason the Internet was radical was the way it gave power to the users. Take video phones, for example. AT&T had been promising this since the 1960s, as the short segment in "2001: A Space Odyssey" showed. However, getting that feature to work meant replacing all the equipment inside the telephone network. Telephone switches would need to know the difference between a normal phone call and a video call. Moreover, there could be only one standard, worldwide, so that calling Japan or Europe would work with their video telephone systems. Users were powerless to develop video calling on their own -- they would have to wait for the big telecom monopolies to develop it, however long it took.

That changed with the Internet. The Internet carries packets without knowing their content. Video calling with Facetime or Skype or LINE is just an app on your iPhone or Android or PC. People imagine new applications for the Internet every day, and implement them, without having to change anything in the core Internet routing hardware.

I've used Facetime, Skype, and LINE to talk to people in Japan. That's because there is no real international standard for video calling. Each person I call requires me to install whichever app they are using. Traditional thinking is that government ought to create standards, so that every app would be compatible with every other app, and I could Skype from Windows to somebody's iPhone running Facetime. This tradition is nonsense. If we waited for government standards, it'd take forever. Teenagers who heavily use video today would be grown up with kids of their own before government got around to creating the right standard. Lack of standards means freedom to innovate.

Such freedom was almost not the case. You may have heard of something called the "OSI 7 Layer Model". Everything you know about that model is wrong. It was an attempt by Big Corporations and Big Government to enforce their model of core-centric networking. It demanded such things as a "connection oriented network protocol", meaning smart routers rather than the dumb ones we have today. It demanded that applications be standardized, so that there would be only one video conferencing standard, for example. Governments in the US, Japan, and Europe mandated that the computers they bought support OSI-conformant protocols. (The Internet's TCP/IP protocols do not conform to the OSI model.) Such rules stayed on the books into the late-1990s dot-com era, when many in government still believed that the TCP/IP Internet was just a brief experiment on the way to a Glorious Government OSI Internetwork.

The Internet did have standards, of course, but they were developed in the opposite manner. Individuals innovated first, on the ends of the network, developing apps. Only when such apps became popular did they finally get documented as a "standard". In other words, Internet standards were more de facto than de jure. People innovated first, on their own ends of the network, and the infrastructure and standards caught up later.

But here's the thing: the Internet ideal of end-to-end isn't perfect, either. There are reasons why not all innovation happens on the ends.

Take your home network as an example. The way your home likely works is that you have a single home router with cable/fiber/DSL on one side talking to the Internet, and WiFi on the other side talking to the devices in your home. Attached to your router you have a desktop computer, a couple of notebooks, an iPad, your phones, an Xbox/PlayStation, and your TV.

In the true end-to-end model, all these devices would be directly on the Internet, meaning they could be "pinged" from the Internet. In today's reality, though, that's not the way things work. Your home router is a firewall. It blocks incoming connections, so that devices in your home can connect outwards, but nothing on the Internet can connect inwards. This fundamentally breaks the ideal of end-to-end, as a smart device sits in the network controlling access to the ends.
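To make that "outbound yes, inbound no" behavior concrete, here is a minimal Python sketch of the stateful rule a home router applies. The Packet class, addresses, and ports are made up for illustration; this is not a real firewall API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Packet:
        src: str       # "ip:port" of the sender
        dst: str       # "ip:port" of the receiver
        inbound: bool  # True if arriving from the Internet side

    established = set()  # connections opened from inside the home

    def allowed(pkt: Packet) -> bool:
        if not pkt.inbound:
            # Outbound traffic is always permitted, and the reply path is remembered.
            established.add((pkt.dst, pkt.src))   # (remote endpoint, local endpoint)
            return True
        # Inbound traffic is only permitted if it answers something we started.
        return (pkt.src, pkt.dst) in established

    # A laptop browsing a website works; an unsolicited inbound probe is dropped.
    print(allowed(Packet("192.168.1.10:50000", "93.184.216.34:443", inbound=False)))  # True
    print(allowed(Packet("93.184.216.34:443", "192.168.1.10:50000", inbound=True)))   # True (reply)
    print(allowed(Packet("203.0.113.7:1234", "192.168.1.10:22", inbound=True)))       # False (blocked)

The point of the sketch is that the router, not the end device, decides which conversations are allowed to exist.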

This is done for two reasons. The first is security, so that hackers can't hack the devices in your home. Blocking inbound traffic blocks 99% of hacker attacks against devices.

The second reason for smart home routers is the well-known limitation on Internet addresses: there are only 4 billion of them. However, there are more than 4 billion devices connected to the Internet. To fix this, your home router does address translation. Your router has only a single public Internet address. All the devices in your home have private addresses that wouldn't work on the Internet. As packets flow in and out of your home, your router transparently rewrites the private addresses in the packets into the single public address.
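Here is roughly what that rewriting looks like, as a toy Python sketch of a port-based translation table. The public address 198.51.100.7 and the port numbers are made up for illustration; real routers also track protocol state, timeouts, and so on.

    PUBLIC_IP = "198.51.100.7"   # the one address your ISP assigns (illustrative)
    next_port = 40000
    nat_table = {}               # public port -> (private ip, private port)
    reverse = {}                 # (private ip, private port) -> public port

    def translate_outbound(private_ip, private_port):
        """Rewrite a packet leaving the home: private source -> public source."""
        global next_port
        key = (private_ip, private_port)
        if key not in reverse:
            reverse[key] = next_port
            nat_table[next_port] = key
            next_port += 1
        return PUBLIC_IP, reverse[key]

    def translate_inbound(public_port):
        """Rewrite a reply arriving from the Internet: public dest -> private dest."""
        return nat_table.get(public_port)  # None if nothing inside asked for it

    print(translate_outbound("192.168.1.10", 51515))  # ('198.51.100.7', 40000)
    print(translate_outbound("192.168.1.11", 51515))  # ('198.51.100.7', 40001)
    print(translate_inbound(40001))                   # ('192.168.1.11', 51515)

Many private devices share the one public address; the table is what lets replies find their way back inside.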

Thus, when you google "what's my IP address", you'll get a different address than the one your local machine reports. Your machine will have a private address like 10.x.x.x or 192.168.x.x, but servers on the Internet won't see that; they'll see the public address you've been assigned by your ISP.
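You can see the difference yourself with a few lines of Python. The api.ipify.org endpoint is just one example of a "what's my IP" service; any similar service would do.

    import socket
    import urllib.request

    # The address your own machine thinks it has (often 10.x.x.x or 192.168.x.x).
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))          # no traffic is sent; this just picks a route
    print("local address:", s.getsockname()[0])
    s.close()

    # The address the rest of the Internet sees for you.
    print("public address:", urllib.request.urlopen("https://api.ipify.org").read().decode())

Behind a home router or a carrier translator, those two addresses will differ.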

According to Gartner, nearly a billion smartphones were sold in 2013. These are all on the Internet. That represents a quarter of the Internet address space used up in only a single year. Yet, virtually none of them are assigned real Internet addresses. Almost all of them are behind address translators -- not the small devices like you have in your home, but massive translators that can handle millions of simultaneous devices.

The consequence is this: there are more devices with private addresses, which must go through translators, than there are devices with public addresses. In other words, less than 50% of the Internet is end-to-end.

The "address space exhaustion" of tradition Internet addresses inspired an update to the protocol to use larger addresses, known as IPv6. It uses 128-bit addresses, or 4 billion times 4 billion times 4 billion times 4 billion. This is enough to assign a unique address to all the grains of sand on all the beaches on Earth. It's enough to restore end-to-end access to every device on the Internet, times billions and billlions.

My one conversation with Vint Cerf (one of the key Internet creators) was over this address space issue. Back in 1992, every Internet engineer knew for certain that the Internet would run out of addresses by around the year 2000. Every engineer knew this would cause the Internet to collapse. At the IETF meeting, I tried to argue otherwise. I used the Simon-Ehrlich Wager as an analogy. Namely, the 4 billion addresses weren't a fixed resource, because we would become increasingly efficient at using them. For example, "dynamic" addresses would use space more efficiently, and translation would reuse addresses.

Cerf's response was the tautology "but that would break the end-to-end principle".

Well, yes, but no such principle should be a straitjacket. The end-to-end principle is already broken by hackers. Even with IPv6, when all your home devices have a public rather than private address on the Internet, you still want a firewall that breaks the end-to-end principle by blocking inbound connections. Once you've decided to firewall a network, it no longer matters whether it's using IPv6 or address translation of private addresses. Indeed, address translation is better for firewalling, as it defaults to "fail closed". That means if a failure occurs, all communication is blocked. With IPv6, firewalls become "fail open", where failures allow communication to continue.

Firewalls are only the start in breaking end-to-end. It's the "cloud" where we see a radical reversion back to old principles.

Your phone is no longer a true "end" of the network. Sure, your phone has a powerful processor that's faster than supercomputers of the last decade, but that power is used primarily for display, not for computation. Your data and computation instead live in the cloud. Indeed, when you lose or destroy your phone, you simply buy a new one and "restore" it from the cloud.

Thus, we are right back to the old world of a smart core network of "mainframes" and "dumb terminals" on the ends. That your phone has supercomputer power doesn't matter; it still does just what it's told by the cloud.

But the last nail in the coffin of the "end-to-end" principle is the idea of "net neutrality". While many claim it's a technical concept, it's just a meaningless political slogan. Congestion is an inherent problem of the Internet, and no matter how objectively you try to solve it, it'll end up adversely affecting somebody -- somebody who will then lobby politicians to rule in their favor. The Comcast-Netflix dispute is a good example where the true technical details are at odds with the way this congestion issue has been politicized. Things like "fast lanes" are everywhere, from content-delivery networks to channelized cable/fiber. Rhetoric creates political distinctions among various "fast lanes" when there are no technical distinctions.

This politicization of the Internet ends the personal control over the Internet that was promised by end-to-end. Instead of being able to act first and ask for forgiveness later, you must first wait for permission from Big Government. Instead of being able to create your own services, you must wait for Big Corporations (the only ones that can afford lawyers to lobby government) to deliver those services to you.

Conclusion

We aren't going to regress completely to the days of mainframes, of course, but we've given up much of the territory of individualistic computing. In some ways, this is a good thing. I don't want to manage my own data, losing it when a hard drive crashes because I forgot to back it up. In other ways, it's a bad thing. The more we regulate the Internet to ensure good things, the more we stop innovations that don't fit within our preconceived notions. Worse, the more it's regulated, the more companies have to invest in lobbying the government for favorable regulation, rather than developing new technology.

1 comment:

Anonymous said...

I really do not get this blog's hate towards standards. Sure, from a certain point of view they might limit creativity and "freedom to innovate", but it is thanks to standards that you can actually talk to people in Japan, since you use audio and video technologies that everybody agreed upon.