I have a very basic computer networking question: when sending a TCP packet, is the packet ACK'ed at every node in the route between the sender and the recipient, or just by the final recipient?

This isn't just a basic question, it is the basic question, the defining aspect of TCP/IP that makes the Internet different from the telephone network that predated it.
Remember that the telephone network was already a cyberspace before the Internet came around. It allowed anybody to create a connection to anybody else. Most circuits/connections were 56 kilobits per second. Using the "T" system, these could be aggregated into faster circuits/connections. The "T1" line, carrying 1.544 Mbps, was an important standard back in the day.
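In case you're curious where that odd 1.544 number comes from, here's the arithmetic. This is standard T-carrier bookkeeping, not anything specific to the question:

```python
# Back-of-the-envelope: where T1's 1.544 Mbps comes from.
channels = 24          # a T1/DS1 multiplexes 24 DS0 channels
ds0_bps = 64_000       # each DS0 is 64 kbps
framing_bps = 8_000    # plus 8 kbps of framing overhead
print(channels * ds0_bps + framing_bps)  # 1544000 -> 1.544 Mbps
# With "robbed-bit" signaling, only 56 kbps of each channel was clean
# data, hence the 56-kbps circuits mentioned above.
```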
In the phone system, when a connection is established, resources must be allocated in every switch along the path between the source and destination. When the phone system is overloaded, such as when you call loved ones after there's been an earthquake or tornado in their area, you'll sometimes get the message "No circuits are available". Due to congestion, the network can't reserve the necessary resources in one of the switches along the route, so the call can't be established.
"Congestion" is important. Keep that in mind. We'll get to it a bit further down.
The idea that each router needs to ACK a TCP packet means that the router needs to know about the TCP connection, that it needs to reserve resources for it.
This was actually the original design of the OSI Network Layer.
Let's rewind a bit and discuss "OSI". Back in the 1970s, the major computer companies of the time all had their own proprietary network stacks. IBM computers couldn't talk to DEC computers, and neither could talk to Xerox computers. They all worked differently. The need for a standard protocol stack was obvious.
To do this, the "Open Systems Interconnection" or "OSI" group was established under the auspices of the ISO, the International Organization for Standardization.
The first thing the OSI group did was create a model for how protocol stacks would work. That's because different parts of the stack need to be independent of each other.
For example, consider the local/physical link between two nodes, such as between your computer and the local router, or your router to the next router. You use Ethernet or WiFi to talk to your router. You may use 802.11n WiFi in the 2.4GHz band, or 802.11ac in the 5GHz band. However you do this, it doesn't matter as far as the TCP/IP packets are concerned. This is just between you and your router, and all this link-layer information is stripped out of the packets before they are forwarded across the Internet.
Likewise, your ISP may use cable modems (DOCSIS) to connect your router to their routers, or they may use xDSL. This information is likewise stripped off before packets go further into the Internet. When your packets reach the other end, such as at Google's servers, they contain no traces of it.
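To make this concrete, here's a toy sketch of what happens to the link-layer framing at each hop. This is not real router code; the fixed 14-byte Ethernet header and the dummy next-hop framing are just for illustration:

```python
ETH_HEADER_LEN = 14  # dest MAC (6) + src MAC (6) + EtherType (2)

def next_hop_framing(ip_packet: bytes) -> bytes:
    # Stand-in for whatever the outgoing link uses: Ethernet, DOCSIS, xDSL...
    return b"\x00" * ETH_HEADER_LEN

def forward(frame: bytes) -> bytes:
    """The link-layer header never survives the hop."""
    ip_packet = frame[ETH_HEADER_LEN:]   # strip this link's framing
    # ...the routing decision looks only at the IP header inside...
    return next_hop_framing(ip_packet) + ip_packet  # re-frame for the next link
```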
There are 7 layers to the OSI model. The one we are most interested in is layer 3, the "Network Layer". This is the layer at which IPv4 and IPv6 operate. TCP is layer 4, the "Transport Layer".
The original idea for the network layer was that it would be connection oriented, modeled after the phone system. The phone system was already offering such a service, called X.25, which the OSI model was built around. X.25 was important in the pre-Internet era for creating long-distance computer connections, allowing cheaper connections than renting a full T1 circuit from the phone company. Normal telephone circuits are designed for a continuous flow of data, whereas computer communication is bursty. X.25 was especially popular for terminals, because it only needed to send packets from the terminal when users were typing.
Layer 3 also included the possibility of a connectionless network protocol, like IPv4 and IPv6, but it was assumed that connection-oriented protocols would be more popular, because that's how the phone system worked, which meant that was just how things were done.
The designers of the early Internet, like Bob Kahn (pbuh) and Vint Cerf (pbuh), debated this. They looked at Cyclades, a French network, which had a philosophical point of view called the end-to-end principle, by which I mean the End-To-End Principle. This principle distinguishes the Internet from the older phone system. The Internet is an independent network from the phone system, rather than an extension of the phone system like X.25.
The phone system was defined as a smart network with dumb terminals. Your home phone was a simple circuit with a few resistors, a speaker, and a microphone. It had no intelligence. All the intelligence was within the network. Unix was developed in the 1970s to run on phone switches, because it was the switches inside the network that were intelligent, not the terminals on the ends. That you are now running Unix on your iPhone is the opposite of what they intended.
Even mainframe computing was designed this way. Terminals were dumb devices with just enough power to display text. All the smart processing of databases happened in huge rooms containing the mainframe.
The end-to-end principle changes this. It instead puts all the intelligence on the ends of the network, with smart terminals and smart phones. It dumbs down the switches/routers to their minimum functionality, which is to route packets individually with no knowledge about what connection they might be a part of. A router receives a packet on a link, looks at its destination IP address, and forwards it out the appropriate link in the necessary direction. Whether it eventually reaches its destination is of no concern to the router.
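In code, that per-packet decision looks something like the following sketch, using Python's ipaddress module and an invented three-entry table. Real routers use specialized structures (tries, TCAMs), but the logic is the same longest-prefix match:

```python
import ipaddress

# A toy forwarding table: prefix -> outgoing interface.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def route(dst: str) -> str:
    """Pick the most specific (longest) prefix containing the destination."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(route("10.1.2.3"))   # eth2 -- the /16 beats the /8
print(route("8.8.8.8"))    # eth0 -- only the default route matches
```

Note what's missing: no connection state, no ACKs, no memory of the packet once it's forwarded.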
In the view of the telephone network, new applications meant upgrading the telephone switches and providing the user a dumb terminal. Movies of the time, like 2001: A Space Odyssey and Blade Runner, would show video phone calls offered by AT&T, with the Bell logo. That's because such applications were always something the phone company would provide in the future.
With the end-to-end principle, the phone company simply routes the packets, and the apps are something the user chooses separately. You make video phone calls today, but you use FaceTime, Skype, WhatsApp, Signal, and so on. My wireless carrier is AT&T, but it's absurd to think I would ever make a video phone call using an app provided to me by AT&T, as the sci-fi movies of my youth imagined.
So now let's talk about congestion or other errors that cause packets to be lost.
It seems obvious that the best way to deal with lost packets is at the point where it happens, to retransmit packets locally instead of all the way from the remote ends of the network.
This turns out not to be the case. Consider streaming video from Netflix. When congestion happens, Netflix wants to change the encoding of the video to a lower bit rate. You see this when watching Netflix during prime time (6pm to 11pm), when videos are of poorer quality than during other times of the day: they are being streamed at a lower bit rate because the system is overloaded.
If routers try to handle dropped packets locally, then they give limited feedback about the congestion. It would require some sort of complex signaling back to the ends of the network informing them about congestion in the middle.
With the end-to-end principle, when congestion happens, when a router can't forward a packet, it silently drops it, performing no other processing or signaling about the event. It's up to the ends to notice this. The sender doesn't receive an ACK, and after a certain period of time, resends the data. This in turn allows the app to discover congestion is happening, and to change its behavior accordingly, such as lowering the bit rate at which it's sending video.
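Here's a minimal sketch of that end-to-end logic, a stop-and-wait sender over UDP. The address, port, and one-second timeout are invented for illustration; real TCP does this with sliding windows and adaptive timers:

```python
import socket

def send_reliably(data: bytes, dest=("192.0.2.1", 9000), timeout=1.0):
    """End-to-end reliability: the *sender* notices loss and retransmits.
    Routers in the middle never participate; a dropped packet just vanishes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    while True:
        sock.sendto(data, dest)
        try:
            ack, _ = sock.recvfrom(1024)   # wait for the far end's ACK
            if ack == b"ACK":
                return                     # delivered
        except socket.timeout:
            # No ACK within the timeout: assume the packet (or its ACK) was
            # dropped, likely due to congestion, and send it again. The
            # missing ACK is itself the congestion signal.
            pass
```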
Consider what happens with a large file download, such as your latest iOS update, which can be a gigabyte in size. How fast can the download happen?
Well, TCP uses what's known as the slow start algorithm. It starts downloading the file slowly, but keeps increasing the speed of transmission until a packet is dropped, at which point it backs off.
You can see this behavior when visiting a website like speedtest.net. You see it slowly increase the speed until it reaches its maximum level. This isn't a property of the SpeedTest app, but a property of how TCP works.
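A sketch of that window logic, much simplified (this is roughly TCP Reno's behavior; the loss threshold is invented so the trace shows the sawtooth):

```python
def congestion_window_trace(rounds=12, loss_at=32, ssthresh=64):
    """Simplified TCP Reno: 'slow start' doubles the window each round trip
    until a loss (or ssthresh), then grows linearly, halving on each drop."""
    cwnd = 1  # congestion window, in packets per round trip
    for rnd in range(rounds):
        print(f"round {rnd:2d}: window = {cwnd} packets")
        if cwnd >= loss_at:        # pretend the network drops a packet here
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh        # back off: halve the sending rate
        elif cwnd < ssthresh:
            cwnd *= 2              # slow start: exponential ramp-up
        else:
            cwnd += 1              # congestion avoidance: gentle probing

congestion_window_trace()
```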
TCP also tracks the round trip time (RTT), the time it takes for a packet to be acknowledged. If the two ends are close, RTT should be small, and the amount of time waiting to resend a lost packet should be shorter, which means it can respond to congestion faster, and more carefully tune the proper transmit rate.
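The smoothing math looks roughly like this; the constants are the classic Jacobson/Karels values used in RFC 6298, and the starting state and samples are made up:

```python
def update_rto(srtt, rttvar, sample, alpha=1/8, beta=1/4):
    """Classic TCP RTT smoothing: keep a running average (srtt) and a
    running deviation (rttvar), and derive the retransmission timeout
    from both, so nearby peers get short timeouts."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

srtt, rttvar = 0.100, 0.025           # seconds; invented starting state
for sample in (0.090, 0.110, 0.300):  # a congestion spike in the last sample
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.3f}  srtt={srtt:.3f}  rto={rto:.3f}")
```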
This is why buffer bloat is a problem. When a router gets overloaded, instead of dropping a packet immediately, it can instead decide to buffer the packet for a while. If the congestion is transitory, then it'll be able to send the packet a tiny bit later. Only if the congestion endures, and the buffer fills up, will it start dropping packets.
This sounds like a good idea, improving reliability, but it messes up TCP's end-to-end behavior. TCP can no longer reliably measure RTT, and it can no longer detect congestion quickly and back off on how fast it's transmitting, which makes congestion problems worse. It means that buffering in the router doesn't work: when congestion happens, instead of backing off quickly, TCP stacks on the ends continue to transmit at the wrong speed, filling the buffer. In many situations, buffering increases dropped packets instead of decreasing them.
Thus, the idea of trying to fix congestion in routers by adding buffers is a bad idea.
Routers will still do a little bit of buffering. Even on lightly loaded networks, two packets will sometimes arrive at precisely the same time, so one needs to be sent before the other. It's insane to drop the second packet when there's plenty of bandwidth available, so routers will buffer a few packets. The solution is to reduce buffering to the minimum, but not below the minimum.
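In other words, a router's output queue should look something like this toy sketch, where the capacity of four packets is arbitrary:

```python
from collections import deque

class OutputQueue:
    """Minimal router buffering: absorb momentary collisions, but drop
    immediately once the small buffer fills, so the ends find out fast."""
    def __init__(self, capacity=4):    # a few packets, not thousands
        self.q = deque()
        self.capacity = capacity

    def enqueue(self, packet) -> bool:
        if len(self.q) >= self.capacity:
            return False               # tail drop: silent, no signaling
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```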
Consider Google's HTTP/3 protocol and how it moves to UDP instead of TCP. There are various reasons for doing this, which I won't go into here. Notice that if routers insisted on being involved in the transport layer, retransmitting TCP packets locally, the HTTP/3 upgrade on the ends within the browser wouldn't work. HTTP/3 relies on information that is encrypted within the protocol, something routers don't have access to.
This end-to-end decision was made back in the early 1970s, and the Internet has wildly evolved since. Our experience 45 years later is that this decision was a good one.
Now let's discuss IPv6 and NAT.
As you know, IPv4 uses 32-bit network addresses, which have only 4 billion combinations, allowing only 4 billion devices on the Internet. However, there are more than 10 billion devices on the network currently, more than 20 billion by some estimates.
The way this is handled is network address translation or NAT. Your home router has one public IPv4 address, like 50.73.69.230. Then, internal to your home or business, you get a local private IPv4 address, likely in the range 10.x.x.x or 192.168.x.x. When you transmit packets, your local router changes the source address from the private one to the public one, and on incoming packets, changes the public address back to your private address.
It does this by tracking each TCP connection, specifically the source and destination TCP port numbers. It's really a TCP/IP translator rather than just an IP translator.
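A toy sketch of that translation table follows; the port numbering scheme and the destination address are invented for illustration:

```python
# Toy NAT: map outbound private (ip, port) pairs onto ports of the one
# public address, and map replies back.
PUBLIC_IP = "50.73.69.230"   # the router's one public address (from above)
nat_table = {}               # public_port -> (private_ip, private_port)
next_port = 40000

def outbound(src_ip, src_port, dst):
    """Rewrite a private source to the shared public address."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (src_ip, src_port)
    return (PUBLIC_IP, public_port, dst)   # what the Internet actually sees

def inbound(dst_port):
    """Map a reply arriving on a public port back to the private host."""
    return nat_table[dst_port]

print(outbound("192.168.1.20", 51515, ("142.250.72.14", 443)))
print(inbound(40000))   # ('192.168.1.20', 51515)
```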
This violates the end-to-end principle, but only a little bit. While the NAT is translating addresses, it's still not doing things like acknowledging TCP packets. That's still the job of the ends.
As we all know, IPv6 was created to expand the size of addresses from 32 bits to 128 bits, making a gazillion addresses available. It's often described in terms of the Internet running out of addresses and needing more, but that's not the case. With NAT, the IPv4 Internet will never run out of addresses.
Instead, what IPv6 does is preserve the end-to-end principle, by keeping routers dumb.
I mention this because I find discussions of IPv6 a bit tedious. The standard litany is that we need IPv6 so that we can have more than 4 billion devices on the Internet, and people keep repeating this despite there being more than 10 billion devices on the IPv4 Internet.
Conclusion
As I stated above, this isn't just a basic question, but the basic question. It's at the center of a whole web of interlocking decisions that define the nature of cyberspace itself.
From the time the phone system was created in the 1800s up until the 2007 release of the iPhone, phone companies wanted to control the applications that users ran on their network. The OSI Model that you learn as the basis of networking isn't what you think it is: it was designed with the AT&T phone network and IBM mainframes in control of your applications.
The creation of TCP/IP and the Internet changed this, putting all the power in the hands of the ends of the network. The version of the OSI Model you end up learning is a retconned model, with all the original important stuff stripped out, and only the bits that apply to TCP/IP left remaining.
Of course, now we live in a world monopolized by the Google, the Amazon, and the Facebook, so we live in some sort of dystopic future. But it's not a future dominated by AT&T.
Heywood Floyd phones home in 2001: A Space Odyssey