This post at Securosis describes why Microsoft's SDL only works for Microsoft. Microsoft agrees in their own post. Both Securosis and Microsoft make fundamental errors about secure development.
Securosis makes the implicit assumption that "you need to be secure", that it's some sort of higher moral duty for any organization, and that you are morally weak if you aren't pursuing secure development.
This is wrong. There is no such thing as perfect security. No matter what you do, you cannot be secure. The first step in a secure development process is to figure out what level of risk you are willing to accept, and what level of security you actually need. For many organizations, the correct answer is to ignore security altogether.
As a disinterested expert, I don't care whether you are secure. However, I would advise you that certain types of development are prone to high-risk errors. For example, if you do "website development", then you are at high risk of "SQL injection", a problem that causes a lot of grief for a lot of organizations.
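To make that concrete, here is a minimal sketch of the vulnerable pattern and its fix. This is illustrative Python using the standard-library sqlite3 module; the database, table, and column names are hypothetical.

    import sqlite3

    conn = sqlite3.connect("app.db")  # hypothetical application database

    def find_user_vulnerable(username):
        # VULNERABLE: user input is pasted directly into the SQL string.
        # Input such as  ' OR '1'='1  rewrites the query itself.
        query = "SELECT * FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(username):
        # SAFE: a parameterized query. The driver treats the input as
        # data, never as SQL, no matter what characters it contains.
        return conn.execute("SELECT * FROM users WHERE name = ?",
                            (username,)).fetchall()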
On the other hand, if you are building internal applications, such as a utility for extracting data from one database, transforming it, and sticking it in another database, I don't have any particular recommendation. Sure, I'll be annoyed by the fact that you are probably embedding administrator passwords in the script and sending the data unencrypted over the wire. However, most organizations ignore security on this sort of thing and never suffer adverse consequences.
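For the record, the kind of script I mean looks roughly like this. It's a hypothetical sketch: the host names, credentials, and tables are made up, and I'm assuming MySQL databases reached through the pymysql driver.

    import pymysql  # assumption: the internal databases are MySQL

    # The administrator password sits right in the source, and nothing
    # requests an encrypted connection, yet scripts like this run
    # inside most organizations for years without incident.
    src = pymysql.connect(host="db1.internal", user="admin",
                          password="admin123", database="sales")
    dst = pymysql.connect(host="db2.internal", user="admin",
                          password="admin123", database="reporting")

    with src.cursor() as cur:
        cur.execute("SELECT id, amount_cents FROM orders")
        rows = cur.fetchall()

    with dst.cursor() as cur:
        # the "transform" step: convert cents to dollars
        cur.executemany("INSERT INTO orders_clean (id, amount) VALUES (%s, %s)",
                        [(i, cents / 100) for i, cents in rows])
    dst.commit()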
Microsoft makes a similar error, claiming that secure development saves money. This is a fallacy. It's as if you have a budget of $200 for clothes, but your wife comes home having spent $400. She claims "but they were 75% off, so in reality, I've saved money".
Most development organizations cannot afford "secure development"; the costs would exceed their entire development budget. No amount of "savings" will make up the difference.
You could counter that a data breach could cost $100 million, and that surely an "ounce of prevention" is worth it in that case. Maybe, maybe not. The size of the problem is largely irrelevant. You can always construct a scenario where the entire company is at risk of going out of business. The question isn't whether such a scenario "could" happen, but the "likelihood" that it will.
While developers tend to underestimate such risks, security types tend to overestimate them. How much money you can "save" in development depends on correctly estimating the risks.
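To show what that estimate does to the math, here is the standard back-of-the-envelope calculation (annualized loss expectancy: probability times impact). Every number below is made up purely for illustration.

    breach_cost = 100_000_000    # the "$100-million breach" scenario
    annual_probability = 0.0001  # honest estimate: 1-in-10,000 per year
    expected_loss = breach_cost * annual_probability  # $10,000 per year

    sdl_cost = 500_000           # hypothetical yearly cost of secure development

    # Prevention costs 50x the expected loss, so ignoring the risk is the
    # rational choice. Raise the probability estimate by a factor of 100
    # and the conclusion flips; everything hinges on getting it right.
    print(expected_loss, sdl_cost)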
CONCLUSION
Securosis closes their post with "Do what Microsoft did, not what they do", meaning that you should come up with your own path to secure development. I disagree. I don't think most organizations need secure development, and for those that do, it should be targeted at specific risks rather than be comprehensive.
2 comments:
Sure, I agree, decisions about how much to spend on secure software development need to be made on the basis of cost and risk. And I agree that secure software development certainly doesn’t come for free: I have a difficult time agreeing with the arguments that I am somehow saving money by investing in training and technology and taking on extra steps in development. Let’s face it: much of the work demanded by a secure SDLC like Microsoft’s SDL adds time and adds cost and takes away from delivering features to customers. We have to make some tough decisions on what and how much to do, based on risk.
Building software is risky work, and as developers we are always confronting, and trying to manage, risk. So how do we go about understanding and assessing security-related risks: making sure that the people who build software understand the problems well enough to make good trade-offs, and invest enough in training, time, people, tools?
You make the point that the security community overestimates security risks, and the development community underestimates them. This is partly because most developers don't understand security requirements and problems well enough, and because most managers and sponsors, the people who make decisions about priorities and how much to spend and on what, definitely don't - I know I didn't until a couple of years ago, and I am still learning.
The software development and security communities need to work together with our customers to fill in this gap. If we are going to be making risk-based decisions, the people involved in building software need to have a better understanding of the problems, what extra costs are necessary to take on upfront and why, so that we can make good decisions. So how do we do that?
The question isn't whether such a scenario "could" happen, but the "likelihood" that it will.
BP thought that same thing about deep-water drilling, and now look where they are...