This post at Securosis describes why Microsoft's SDL only works for Microsoft. Microsoft agrees in their own post. Both Securosis and Microsoft make fundamental errors about secure development.
Securosis makes the implicit assumption that "you need to be secure", that it's some sort of higher moral duty for any organization, and that you are morally weak if you aren't pursuing some sort of secure development.
This is wrong. There is no such thing as perfect security; no matter what you do, you cannot be secure. The first step in a secure development process is to figure out what level of risk you are willing to accept, and what level of security you need. For many organizations, the correct answer is to ignore security altogether.
As a disinterested expert, I don't care if you are secure. However, I would advise you that certain types of development are prone to some high-risk errors. For example, if you do "website development", then you are at high risk for "SQL injection", a problem that causes a lot of grief for a lot of organizations.
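To make the risk concrete, here is a minimal sketch of SQL injection using an in-memory SQLite database; the table, column, and input values are hypothetical, not from the posts discussed above.

```python
import sqlite3

# Hypothetical table with one row of sensitive data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the attacker's input is pasted directly into the SQL string,
# so the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safer: a parameterized query treats the input as a literal value,
# so the same string matches no rows.
parameterized = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)     # the attacker reads the secret
print(parameterized)  # empty result
```

The difference is the entire fix: the query text never changes, only whether user input is spliced into it as code or passed as data.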
On the other hand, if you are building internal applications, such as a utility for extracting data from one database, transforming it, and sticking it in another database, I don't have any particular recommendation. Sure, I'll be annoyed by the fact that you are probably embedding administrator passwords in the script and sending the data unencrypted over the wire. However, most organizations ignore security on this sort of thing and never suffer adverse consequences.
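The embedded-password habit looks something like the sketch below; the variable name `DB_ADMIN_PASSWORD` and the fallback value are assumptions for illustration, not from the original post.

```python
import os

# The pattern that annoys security reviewers: the administrator credential
# lives in the script source, visible to anyone who can read the file.
EMBEDDED_PASSWORD = "admin123"

def admin_password():
    # The lighter-weight alternative: prefer a credential supplied at run
    # time via the environment, falling back to the embedded one.
    return os.environ.get("DB_ADMIN_PASSWORD", EMBEDDED_PASSWORD)

print(admin_password())
```

Whether that swap is worth anyone's time is exactly the risk judgment this post is about.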
Microsoft makes a similar error, claiming that secure development saves money. This is a fallacy. It's like having a $200 budget for clothes when your wife comes home having spent $400, claiming, "But they were 75% off, so in reality, I saved money."
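The arithmetic behind the analogy, for concreteness:

```python
budget = 200
spent = 400
discount = 0.75

# A 75% discount on $400 of purchases implies a $1,600 list price,
# so $1,200 was "saved" -- yet the budget is still overspent by $200.
list_price = spent / (1 - discount)   # 1600.0
claimed_savings = list_price - spent  # 1200.0
overspend = spent - budget            # 200

print(list_price, claimed_savings, overspend)
```

The "savings" are real relative to the list price, but irrelevant to the budget, which is the only number that constrains you.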
Most development organizations cannot afford "secure development"; the costs would exceed their entire development budget. No amount of "savings" will make up the difference.
You could claim that a data breach could cost $100 million, and that an "ounce of prevention" is surely worth it in that case. Maybe, maybe not. The size of the potential loss is largely irrelevant. You can always construct a scenario in which the entire company is at risk of going out of business. The question isn't whether such a scenario "could" happen, but the "likelihood" that it will.
While developers tend to underestimate such risks, security types tend to overestimate them. How much money you can "save" in development depends on correctly estimating the risks.
Securosis closes their post with "Do what Microsoft did, not what they do", meaning that you should come up with your own path to secure development. I disagree. I don't think most organizations need secure development, and for those that do, it should be targeted at specific risks rather than comprehensive.