Friday, July 08, 2011
Space Shuttle: good riddance
The problem with the Shuttle has always been that it’s a moral argument. Everyone knows that “reusable” is morally superior to “disposable”. Therefore, a reusable spacecraft has to be better than a disposable one. The moral superiority of this argument has blinded people for 40 years to the fact that it just doesn't work.
The flaw in the program can be seen in the two Shuttle disasters, when the Challenger exploded after liftoff, and the Columbia burned up on re-entry. The cause of both disasters was the complexity of the Shuttle. Disposable spacecraft are simple, and harder to mess up. A reusable space plane is horribly complex, and impossible to get right. If we’d ever achieved the thousands of launches originally promised (rather than the mere 135 actually flown), we would have had many more disasters.
As a risk expert, I was horrified by the finger pointing after the Columbia tragedy. What everyone points to as the “cause” was foam falling off the tank and hitting the heat-resistant tiles. The enormous heat during re-entry burned through the broken tiles, and destroyed the spacecraft. But the real “cause” was the complexity of the tiles themselves. There were over 20,000 tiles, no two alike, that had to be individually inspected, removed, repaired/replaced, and glued back on after every flight.
The tiles alone made the Shuttle too expensive and too risky to operate, and they were just a small part of the Shuttle’s complexity.
But worse than the tiles themselves was the blame game following the Columbia disaster. As the news reported, there were people within NASA who had warned management of the risk, but management covered up the problem. Of course this was the case. By any rational risk analysis, the Shuttle was too risky to fly, but NASA was told by Congress and the American people to make it fly. Therefore, NASA management had to decide which risks they were willing to live with. If the Shuttle were to keep flying after today, there would be another disaster soon, and its cause would be one of the many other risks management knows about but “ignores”.
This is a useful lesson for cybersecurity. Today’s networks are too complex to secure. Getting hacked is as inevitable as a Shuttle blowing up. Despite this, corporations are convinced that they can solve the complexity. They believe their firewalls will not have holes hackers can get through. They believe that they can control their website code to prevent all SQL injection and cross-site scripting. They believe enough anti-virus will prevent users from infecting themselves with viruses. They believe they can keep all patches up-to-date all the time. They believe they can isolate critical bits from the Internet so that hackers can’t reach them. When they get hacked, they can always point backwards at the patch they failed to apply, or the Web 2.0 code they failed to inspect, or the virus their AV failed to catch. They believe that was the only problem that allowed the hack, and once fixed, they will be secure from now on. They believe if they “just take security seriously enough”, then such problems won’t happen. But they are wrong, as wrong as Shuttle engineers who thought “if we just take safety seriously enough, disasters won’t happen”.
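To make the SQL-injection class of bug mentioned above concrete, here is a minimal, hypothetical sketch (the table and login functions are invented for illustration, not taken from any real product). A single login check written with string concatenation is exploitable; the same check written with placeholders is not, which illustrates why one overlooked query among thousands is all a hacker needs:

```python
import sqlite3

# Toy database for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(name, password):
    # Builds the query by string concatenation: attacker-controlled
    # input becomes part of the SQL itself.
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return db.execute(query).fetchone()[0] > 0

def login_parameterized(name, password):
    # Placeholders keep user input as data, never as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone()[0] > 0

# Classic injection payload: the quote breaks out of the string
# literal and the OR '1'='1' makes the WHERE clause always true.
attack = "' OR '1'='1"
print(login_vulnerable("alice", attack))      # True: login bypassed
print(login_parameterized("alice", attack))   # False: input treated as data
```

The point of the sketch is not that parameterized queries are hard; it is that a large website has thousands of such queries, and security requires getting every single one right, every time, forever.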