The (wrong) theory
Worm propagation looks like the following graph, an S-curve that starts out from zero with exponential growth:
However, Witty looked different. In the first few minutes, it looked like the following, before then following the normal S-curve:
[Figure: measurement of Witty worm propagation from a CAIDA network sensor]
Witty used a random number generator that could not generate all possible IP addresses. It missed 10% of all possible addresses – meaning some systems were safe from infection. Yet some of those initially infected systems were among the “safe” systems.
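Published analyses of Witty traced this gap to its pseudo-random number generator, which built each target address from pieces of consecutive generator states – a construction that cannot reach every 32-bit value. The following is a toy, scaled-down analog (an 8-bit generator, not Witty’s actual one) just to illustrate the principle that such a construction leaves addresses unreachable:

```python
# Toy analog (NOT Witty's actual RNG): an 8-bit linear congruential
# generator with full period, deriving 8-bit "addresses" from the top
# 4 bits of two consecutive states. Even though the generator itself
# visits every state, the derived addresses miss part of the space.

M = 256  # toy state space (Witty's was 2**32)

def lcg_next(s):
    # Full period by the Hull-Dobell theorem: c=3 is coprime to 256,
    # and a-1=4 is divisible by 2 and by 4.
    return (5 * s + 3) % M

def covered_addresses():
    seen = set()
    s = 0
    for _ in range(M):                      # walk the entire period once
        t = lcg_next(s)
        addr = ((s >> 4) << 4) | (t >> 4)   # top 4 bits of two states
        seen.add(addr)
        s = t
    return seen

addrs = covered_addresses()
print(f"{len(addrs)} of {M} addresses reachable")
```

Running the full period shows strictly fewer than 256 distinct addresses come out, even though the generator itself never repeats a state within its period – the same kind of blind spot that left 10% of IPv4 “safe” from Witty.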
Most of those early systems were in the range 184.108.40.206/16, a range assigned to Fort Huachuca, an Army base in Arizona.
These three facts – the initial bump, the infected “safe” systems, and the group of systems at an Army base – all point to the same conclusion: the hacker launched the worm by targeting those systems. Somehow the hacker learned of vulnerable systems at the base, and when s/he launched the worm, s/he supplied the initial launch program with those IP addresses.
The (right) theory
There is another explanation for the above oddities: promiscuous mode.
Witty didn’t just infect the version of BlackICE that ran on desktops, but also the version that ran as a network intrusion detection system (IDS). An IDS runs in promiscuous mode, monitoring not just its own IP address but all the traffic on the monitored link, regardless of IP address.
A promiscuous mode device monitoring a Class A network is 16-million times more likely to be infected than a device with a single IP address. What we saw wasn’t a bump of initial traffic, but the infection of two different populations: a population of promiscuous devices infected in a few seconds, and a population of non-promiscuous devices infected in a few hours.
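The 16-million figure is just address-space arithmetic, which a few lines make explicit:

```python
# A Class A (/8) network contains 2**(32-8) addresses; a normal host
# answers to exactly one. A promiscuous sensor watching the whole /8
# is therefore hit by a random packet 2**24 times as often.
class_a_addresses = 2 ** (32 - 8)     # 16,777,216
single_host = 1

ratio = class_a_addresses // single_host
print(ratio)                          # 16777216, i.e. ~16.7 million

# Per random packet, the chance of landing in the monitored space:
p_class_a = class_a_addresses / 2 ** 32   # 1/256
p_host    = single_host / 2 ** 32         # 1 in ~4.3 billion
```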
In other words, the infection graph is the combination of two S-curves, as I crudely draw below:
This explains the first two problems (the bump and non-infectable IP addresses getting infected), but it doesn’t explain why those systems were at a single Fort.
The answer to that question is that Fort Huachuca is the part of the Army responsible for monitoring the rest of the Army with IDS systems. That network address range isn’t located at the fort – it’s broken up into smaller segments that are spread around the Army’s network. When they put an IDS at another Army base, instead of using that local base’s IP address range, they manage the IDS through their own network range. It’s a common “out-of-band” management policy used in other networks besides the military.
The Army owned vast chunks of IPv4 address space, including many Class A-sized blocks. That guaranteed that its IDSs monitoring this space would be infected almost immediately.
This tweet helps confirm this, by a former Army cybersec engineer:
Proving the theory: slack
We have two competing theories; how can we tell which one is right?
The answer is to create a “who-infected-whom” graph.
The way I do this is with the “slack” area of the worm. The worm generates packets between 768 and 1280 bytes long, the length chosen randomly. Packets sent to infect new systems are just copies of the original packet that infected the current system. When the worm transmits a packet longer than the packet it was infected by, it copies uninitialized bytes that trail the packet in memory. For a given infected system, the contents of the packets the worm sends will always be the same; only the length will differ.
Those researchers who wrote the paper identified the hacker as using IP address 220.127.116.11, the “patient-zero” address. The hacker’s program transmitted packets that were only 647 bytes long – the size of the worm’s code. The minimum size packet the worm generates is 768 bytes. That means that when the first-generation systems transmit packets, the remaining 121 bytes come from their slack area. Furthermore, every subsequent generation copies at least those 121 bytes, since they cannot transmit shorter packets either.
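This slack-copy mechanic is easy to model. The following is a simplified simulation (the host memory contents and packet captures are made up; only the lengths – 647-byte launch packets, 768–1280 byte worm packets – come from the analysis above):

```python
import random

SEED_LEN = 647          # hacker's launch packets: worm code only
MIN_LEN, MAX_LEN = 768, 1280

def infect(incoming, memory_junk):
    """Model a newly infected host's transmit buffer: the incoming
    packet's bytes, followed by whatever happened to be in memory
    (the "slack")."""
    buf = bytearray(MAX_LEN)
    buf[:len(incoming)] = incoming
    buf[len(incoming):] = memory_junk[:MAX_LEN - len(incoming)]
    return bytes(buf)

def send(buf, rng):
    """Transmit a random-length prefix of the buffer."""
    return buf[:rng.randint(MIN_LEN, MAX_LEN)]

rng = random.Random(1)
seed = bytes(SEED_LEN)                  # stand-in for 647 bytes of worm code
papa = infect(seed, b"P" * MAX_LEN)     # a first-generation host
pkt  = send(papa, rng)

# Every packet this host sends carries at least 768-647 = 121 slack
# bytes, and every downstream generation copies them in turn:
assert pkt[SEED_LEN:MIN_LEN] == b"P" * (MIN_LEN - SEED_LEN)

child = infect(pkt, b"C" * MAX_LEN)
pkt2  = send(child, rng)
assert pkt2[SEED_LEN:MIN_LEN] == b"P" * 121  # fingerprint survives
```

The key property the assertions check is exactly the one used in the analysis: bytes 647–767 of every packet, in every generation, are inherited from the first-generation host’s memory.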
If there were a hundred systems in the seed list, we should see a hundred different variations of those 121 slack bytes – either directly from those first generation systems, or from later generations.
However, we saw only two variations – and two is a plausible number for a hacker to have infected accidentally, without special seeding (considering that they were promiscuous-mode systems monitoring large address spaces). Those two systems are “18.104.22.168”, a system I call “Papa”, and “22.214.171.124”, a system I call “Mama”. The following is the full 1280-byte packet that “Papa” would send:
I say “random”, but you’ll notice that it isn’t quite random. In fact, you know what that is: BASE64 encoded data of some sort, possibly some fragment of an email attachment.
The following diagram shows how slack can trace the path of infection through four generations, starting with “Papa”, showing how it infects “Junior”, then “Big Abe”, then “Neorg”. As long as one infected system infects the next system in the chain with a slightly longer packet, we can trace this:
Notice that I name the systems with hints from the contents of their slack. “Big Abe” starts its slack area with “BgAb”, so it’s a good name. It’s also a good name because 80% of infected systems are downstream from this node in the chain. Also notice that these four generations are all in the first third of a second from the start of the worm.
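The parent-finding logic behind this tracing can be sketched as follows. This is a simplified illustration, not my actual tooling, and the toy payloads are invented; the core rule is the one described above: a child copies its infecting packet verbatim, so its payload must match its parent’s for at least one minimum-size packet (768 bytes):

```python
def common_prefix_len(a, b):
    """Length of the longest common prefix of two byte strings."""
    n = min(len(a), len(b))
    i = 0
    while i < n and a[i] == b[i]:
        i += 1
    return i

def likely_parent(child_payload, candidates, min_match=768):
    """Pick the candidate whose payload shares the longest common
    prefix with the child's. A true parent must match for at least
    min_match bytes, since no infecting packet is shorter than that."""
    best, best_len = None, 0
    for name, payload in candidates.items():
        k = common_prefix_len(child_payload, payload)
        if k >= min_match and k > best_len:
            best, best_len = name, k
    return best

# Toy payloads (made up): 647 bytes of worm code, then each host's
# distinctive slack bytes.
papa   = bytes(647) + b"P" * 633          # 1280 bytes total
mama   = bytes(647) + b"M" * 633
junior = papa[:900] + b"J" * 380          # infected by a 900-byte Papa packet

print(likely_parent(junior, {"Papa": papa, "Mama": mama}))
```

Here Junior matches Papa for 900 bytes but Mama for only 647 (just the shared worm code), so the heuristic correctly attributes the infection to Papa.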
Now that we’ve developed a system for creating a who-infected-whom graph, let’s create one. For the first half second of the infection, it looks like this:
The dotted line there is because I believe the system “Venice” derives from “Debby”, based on timing information and on reverse engineering the random number generator. However, the infectious packet for Venice is shorter than the one for Debby, so all the slack information can show by itself is that the infection derives from Junior.
Over at SANS, they have an incident note on this, showing a packet. Note that it comes from the BigAbe branch, like almost all packets. I analyzed it, comparing it to all the other systems. Over 10 generations are countable in its slack area, though there are probably additional ones hidden by short packets.
This slack information is pretty conclusive proof, but still it’s hard to believe, because it happened so darn fast, often less than a millisecond between systems in the chain of infection. However, it’s a simple matter of running the numbers. The DoD owns 20% of all IPv4 address space, and it’s likely around 10% was monitored by these systems. That means, by pure chance, one of the first random packets generated by the hacker would infect the first DoD system. These DoD computers were connected by high-speed links, with more promiscuous mode systems, enabling the worm to spread very fast indeed. The only slow part in the spread is the 30 millisecond hop from one side of the continent to the other.
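Running those numbers is straightforward. Under the assumption stated above – roughly 10% of IPv4 space feeding into promiscuous-mode sensors – the first hit comes almost immediately:

```python
# Assumption from the text: ~10% of IPv4 space was monitored by
# vulnerable promiscuous-mode sensors.
p_monitored = 0.10

# Expected packets until the first hit (geometric distribution):
expected_packets = 1 / p_monitored
print(expected_packets)   # 10.0

# Probability that at least one of the first n random packets lands
# on a monitored address:
def hit_within(n, p=p_monitored):
    return 1 - (1 - p) ** n

print(round(hit_within(50), 3))
```

With a one-in-ten chance per packet, the launcher needed only about ten random packets on average to seed a sensor, and fifty packets make a hit all but certain – consistent with infections appearing within the first fraction of a second.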
Researchers used the CAIDA network telescope to record the incoming packets from the Witty worm. The first packet they received was already several generations old. That’s because CAIDA only monitored 1/256 of the address space (126.96.36.199/8) – which is a fraction of the address space monitored by vulnerable promiscuous mode IDS sensors. Thus, while CAIDA would be an “early” sensor for all other worms, it was a “late” sensor in the case of Witty.
One might claim that the hacker is trying to trick us, forging slack to throw us off the scent. If we had the right data, we could prove this not to be the case. Let’s assume a population of Class B sensors – IDSs monitoring /16 address space, which we know the DoD had many of. According to CAIDA, 90% of all Class B sensors would be infected within the first 1.2 seconds of the worm propagation. Here is a chart showing, for each size of monitored address space, how long it took for 90% of those sensors to become infected:
/12 in 0.29 seconds
/13 in 0.39 seconds
/14 in 0.56 seconds
/15 in 0.88 seconds
/16 in 1.2 seconds
/17 in 1.8 seconds
/18 in 2.9 seconds
/19 in 5.0 seconds
/20 in 8.5 seconds
/21 in 16 seconds
/22 in 28 seconds
/23 in 53 seconds
/24 in 97 seconds
/25 in 160 seconds
/26 in 250 seconds
/27 in 370 seconds
We would have to know the number of sensors the Army had deployed and the size of address space each was monitoring, but if those numbers matched the predictions in this chart of CAIDA data, then we could prove the hacker could not have forged the slack information.
Analyzing the “slack” space of variable length packets allows us to create a who-infected-whom graph, with the only inaccuracy being that some systems appear closer to the root due to truncated packets. This graph conclusively proves that there were two first generation systems infected directly by the hacker. Moreover, the graph is lopsided, showing that 95% of second and later generations came through only one of the first generation systems. Even more, 80% of later infections came from a third generation system. These results can easily be verified by anybody who has a complete (including payload) packet log of the event.
Note: All content on this page, including images (except for the CAIDA image, which I don't control), is Creative Commons if you want to fix the erroneous Wikipedia entry.