Tuesday, April 12, 2011

So what's wrong with optimism?

In response to my post "Transactive Memory Systems," Rob Graham was uncharacteristically gracious when he called my theory "optimistic." He went on to disagree with me, describing the cybersecurity industry as full of false memes, echo chambers, and groupthink.

Being Optimistic

So what's wrong with being optimistic? By definition, these criticisms imply that the industry believes in ideas that are untrue. But not all generally agreed-upon ideas are inaccurate. Most of these ideas are believed by the people who first voice them, and can be backed up by their own research. I believe that the majority of the ideas discussed in the community have merit, and that it's practical to be optimistic. When Rob says, "The market doesn't care about cybersecurity," that is merely a different kind of groupthink. Remember that in groupthink there is only one correct answer, and the self-appointed 'mind guards' are the ones who have it. If the symptoms of groupthink are protecting the group, rejecting alternatives, and silencing opposition, then an optimistic belief in the likelihood of accurate ideas is the ultimate rejection of groupthink. Said differently, I believe in the abilities of the best and brightest scientists in our industry because there is a reasonable likelihood that they are correct.

Transactive Memory

To answer Rob's criticisms of the transactive memory of the security community: I didn't say that transactive memory was a good thing, just an efficient way of making decisions. Its success or failure depends on people being able to communicate their skills to one another. In this I think we are very successful. The theory doesn't claim that the ideas of the community are True, but merely describes people's motivation for believing them.

Rob said that the three components of transactive memory were not consistent with his experience of the security community.
"Specialization: People don't actually specialize. Certainly, there are people that talk a lot about something, but that doesn't make them specialists."
In the first post, I made the point that for the purposes of "metamemory," the person who speaks frequently about a single topic is labeled a specialist by the community, not by the person themselves or by any board of certification. This is a result of our human tendency to simplify things to a level we can process.
"Coordination: Marisa points to conferences as an example of "transactive memory", but the reverse is true. It is the ability to act without a lot of formal meetings that is the hallmark of this "transactive" model."
The theory doesn't deny that there is a time when people get to know each other's strengths. In fact, the benefits of teamwork with transactive memory depend on this period of learning about one another.
"Credibility is totally misplaced. People get credibility in our industry by pimping themselves. Vendors market themselves. Market analysts (like Gartner) also market themselves. People with little ability nonetheless get "certifications". Hackers, using tools built by their betters, are able to gain notoriety despite being little more than "script kiddies". There are those with technical ability (e.g. Schneier) that really deserve respect, but they are in the minority."
Credibility is the crux of our debate. Who should we believe? I submit that we have to believe *someone.* As an industry, the fact is we're only as good as our "experts." People like Schneier and Rob are good examples of those who make excellent experts but lousy community members. They rarely take the ideas of their fellow experts on faith; they constantly have to double-check. This is inefficient and doesn't work for the broader community. But I agree that we need a better way to sort the experts from the marketing whizzes, or a better understanding of the consequences of being wrong about our ideas.


Saying the community merely suffers from groupthink is problematic because it requires that the commonly held beliefs of the security community be more often wrong than right, when in reality they are more often right than wrong. I don't have a source for this observation, but if the top scientific minds in our field can't get their theories right more than half the time, we have bigger problems on our hands than who believes them and why. Call me optimistic, but I've met a lot of smart people in my time in the community, and if they say they've reached conclusions, I believe them. I believe them not because I am pressured by mind control or subliminal catchphrases, but because it is the healthy human reaction to respect the ideas of experts in a field where I am not an expert. (Because really, what choice do I have?) In the same vein of optimism, I believe it is my duty to produce excellent research in the field where I may be an expert, so that those left in a similar predicament of inexperience can trust my expertise. This arrangement is infinitely more efficient than having to learn *everything* on your own, and it is often the reason we have seen such successful collaborations across organizations in the security community.
