From shrdlu at Layer 8:
"A vendor—or analyst firm, whatever—produces a paper touting the conventional wisdom that it’s a lot cheaper to fix software vulnerabilities early in the SDLC than just before or after deployment. And I can get behind that idea, certainly. But the reasoning being produced to support it often ends up to be circular. [...]When it comes to counting the investment against the cost to fix actual breaches, the whitepapers mostly get vague. They list all the vulnerabilities found, describe how bad they were—but don’t actually show that they led to specific breaches that incurred real costs. They’re assuming that a vulnerability is bad and needs to be fixed, regardless of whether the vulnerability is EVER exploited."
"Just saying that it’s a problem because it says right here that it’s a problem is what we’re doing too much of today."
Is this a call for better attack trees? For instance, instead of just saying "SQLi is bad," we say "This line of code will lead to this SQL database being deleted because it is not sanitized. It ranks a 5 on the 'Oh Sh*t' scale."
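To make that concrete, here's a minimal sketch of the kind of finding such an attack tree would point at. The table and function names are invented for illustration; nothing here comes from a real codebase.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the hypothetical orders database
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("INSERT INTO orders (customer) VALUES ('alice')")

def find_orders_vulnerable(customer):
    # The line of code the attack tree would flag: user input is concatenated
    # straight into the statement, so it can change the query's meaning.
    query = "SELECT id FROM orders WHERE customer = '%s'" % customer
    # executescript() runs multiple statements, which is what lets the
    # injected DROP TABLE below actually execute.
    return conn.executescript(query)

def find_orders_parameterized(customer):
    # The fix: the driver binds the value, so it is never parsed as SQL.
    return conn.execute(
        "SELECT id FROM orders WHERE customer = ?", (customer,)
    ).fetchall()

# find_orders_vulnerable("x'; DROP TABLE orders; --") deletes the table;
# the parameterized version simply returns no rows for the same input.
```

That's the level of specificity the "Oh Sh*t scale" framing asks for: not "SQLi is bad," but "this input, through this line, deletes this table."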
Do we really need to see people burned to know the stove is hot? We make preventative business decisions all the time, many of which require an even bigger leap of faith than security spending. Managers understand the logic of that kind of spending, but I think the reluctance comes down to risk calculation. The organization doesn't need the security pro to tell them what the financial effects of a breach would be; they know, and we can't. When they choose not to use a secure coding program, they are accepting that risk.
"We need to trace a discovered vulnerability from its creation, through the SDLC, into deployment, and then connect it to an actual breach, leading to a real monetary loss suffered by the business. THERE’S your ROI"
I agree that nothing motivates management like a breach on the news. The greatest security programs operate with the goal of never letting a serious bug happen AGAIN. But there are companies that can survive that kind of gamble and companies that can't. Companies putting lives at risk have a different prevention obligation than companies that make video games. Also, remember that a breach has intangible costs, like damage to brand image and company culture, but also intangible benefits, like learning from the process and justifying change, that can't be measured with ROI equations.
This is why “fixing it later vs. fixing it sooner” is a flawed argument. The premise is that you MUST fix, and that’s what executives aren’t buying. We have to make the logic work better.
The call for logic and evidence of breaches feeds into this premise. You're saying that if we can just get more data, then we can justify the fix to management. If you're using historical evidence to justify fixing a bug, you need only look hard enough; somewhere there is a scary example of the bug in an exploit that burned someone else. The premise of the argument is not a judgment that everything must be fixed. It's not the job of a security pro to tell the executive what must be fixed, only the ways the software can be broken. The conventional wisdom of fixing sooner presupposes that the bugs being fixed are worth fixing, and therefore would have been cheaper to fix sooner.
Anyway, I think it's a hard situation to manage because the bugs *are* technical: a manager most likely won't be able to see the vulnerability and will have to take someone's word for how real it is. This is where the conversation usually diverges toward scanning vs. pentesting to prove criticality. The way I see it, any day you don't end up on the news is a day your security program is working, because you can't be invisible on the 'net. If there's a hole, people know about it, and if they can't or won't exploit it, that's a success.
Adrian Lane, over at the Securosis Blog, had a great comment on shrdlu's post as well. He said, "Failing in order to justify action is still failure."
2 comments:
Hi Marisa,
I'm sorry our CAPTCHA caused a DoC (denial of comment). Thanks for taking the time to post your comment over here.
I think our big disconnect here is that convincing the business to fix a security flaw is not just a matter of demonstrating the possible technical impact; it's a matter of showing the PROBABILITY of an exploit being used against that exact vulnerability. Just saying, "It happened to a friend of a friend of a cousin of mine named Heartland" is not enough to convince someone that it could happen to them too. We need historical data that is relevant to that executive, and we have to be very clear about how we arrived at our estimation of the PROBABILITY of the risk.
When I've done this, some of the factors I've discussed with executives have included what skill level it would take to exploit the flaw, what kind of attacker would look for it and use it, and how often I've seen it happen in very similar enterprises. I might even go to their IPS logs and say, "They've already been using automated scanning to look for something like this in our other applications." All of these data points address probability in a more credible way. That gets me a lot farther than saying, "This flaw is on the OWASP Top Ten, so you need to stop what you're doing and fix it."
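For what it's worth, those factors can be rolled up into a rough likelihood rating before the conversation. The sketch below is entirely hypothetical; the weights, thresholds, and labels are invented for illustration and aren't any standard scoring model.

```python
def likelihood_rating(skill_required, attacker_interest, seen_in_peers, scanning_observed):
    """Rough, invented roll-up of the factors discussed above.

    skill_required:    1 (script kiddie) to 3 (expert); harder exploits are less likely
    attacker_interest: 1 (opportunistic) to 3 (targeted)
    seen_in_peers:     1 to 3, how often similar enterprises have been hit
    scanning_observed: True if the IPS logs already show probing for this class of flaw
    """
    score = (4 - skill_required) + attacker_interest + seen_in_peers
    if scanning_observed:
        score += 2  # active reconnaissance bumps the estimate
    if score >= 9:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Easy to exploit, somewhat targeted, often seen at peers, scanners already knocking:
print(likelihood_rating(1, 2, 3, True))  # -> high
```

The point isn't the arithmetic; it's that each input is something you can defend with data the executive recognizes.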
"I think our big disconnect here is that convincing the business to fix a security flaw is not just a matter of demonstrating the possible technical impact..."
It's not our job to convince a business to fix a security flaw.