Showing posts with label SDLC.

Friday, August 09, 2013

The Rob Test: 12 Steps to Safer Code

Joel Spolsky has a famous list of "12 Steps to Better Code". I thought I'd create a similar list for safer, more secure code that's resilient against hackers.

The Rob Test
1. Do you use source control, bug tracking, and planning (i.e. GitHub basics)?
2. Do you have automated (one step, daily) builds?
3. Do you have automated regression/unit testing? Can you fix/release in 24 hours?
4. Do you reward testers for breaking things? (like fuzz testing)
5. Do your coders know basic vulns? (buffer-overflows, OWASP Top 10) Do you train them? Do you test new hires?
6. Do you know your attack surface? threat model?
7. Do you sniff the wire to see what's going on? (including sslstrip)
8. Do you have detailed security specifications as part of requirements/design?
9. Do you ban unsafe practices? (strcpy, SQL pasting, clear-text; see the sketch after this list)
10. Do you perform regular static/dynamic analysis on code?
11. Do you have, and practice, an incident response plan? (secure@, bounties, advisories, notification)
12. Are your processes lightweight and used, or heavyweight and ignored?
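As a concrete illustration of item 9, here is a minimal C sketch of what a banned-function rule looks like in practice. The function name and buffer are hypothetical, not taken from any real codebase; the point is simply that unbounded copies like strcpy are replaced with length-checked alternatives.

#include <stdio.h>

/* Hypothetical example: storing attacker-influenced input in a fixed buffer. */
void save_username(const char *input)
{
    char buf[32];

    /* Banned: strcpy() writes past buf if input is longer than 31 characters. */
    /* strcpy(buf, input); */

    /* Allowed: snprintf() truncates to the buffer size and always terminates. */
    snprintf(buf, sizeof(buf), "%s", input);

    printf("stored: %s\n", buf);
}

int main(void)
{
    save_username("a-deliberately-long-attacker-controlled-string-aaaaaaaaaaaaaaaa");
    return 0;
}

The same banning principle applies to "SQL pasting" (concatenating user input into query strings) and to clear-text protocols: the check is mechanical, so it is cheap to enforce in code review or with static analysis.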

Thursday, January 13, 2011

Comment on "Layer 8: Connecting the risk dots."

(This post is a response to the blog post "Layer 8: Connecting the risk dots," mostly because I typed the whole thing out on the site and then couldn't figure out the captcha to submit it.)

From shrdlu at Layer 8:
"A vendor—or analyst firm, whatever—produces a paper touting the conventional wisdom that it’s a lot cheaper to fix software vulnerabilities early in the SDLC than just before or after deployment. And I can get behind that idea, certainly. But the reasoning being produced to support it often ends up to be circular. [...]

When it comes to counting the investment against the cost to fix actual breaches, the whitepapers mostly get vague. They list all the vulnerabilities found, describe how bad they were—but don’t actually show that they led to specific breaches that incurred real costs. They’re assuming that a vulnerability is bad and needs to be fixed, regardless of whether the vulnerability is EVER exploited."
"Just saying that it’s a problem because it says right here that it’s a problem is what we’re doing too much of today."

Is this a call for better attack trees? For instance, instead of just saying "SQLi is bad" we say "This line of code will lead to this SQL database being deleted because it is not sanitized. It ranks a 5 on the 'Oh Sh*t' scale."
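To make that concrete, here is a hypothetical sketch of the kind of specific finding such a report would contain (the table, function names, and choice of SQLite are mine, not from the post): one line that pastes user input into SQL, next to the parameterized version the fix would require.

#include <stdio.h>
#include <sqlite3.h>

/* Vulnerable: user_id comes straight from the request. Passing
 * "1; DROP TABLE users" makes sqlite3_exec() run the second statement too. */
void lookup_unsafe(sqlite3 *db, const char *user_id)
{
    char sql[256];
    snprintf(sql, sizeof(sql), "SELECT name FROM users WHERE id = %s", user_id);
    sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Fixed: the value is bound as a parameter and is never parsed as SQL. */
void lookup_safe(sqlite3 *db, const char *user_id)
{
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT name FROM users WHERE id = ?", -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, user_id, -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }
}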

Do we really need to see people burned to know the stove is hot? We make preventative business decisions all the time, many of which require an even bigger leap of faith than security spending. Managers understand the logic of that kind of spending, so I think the reluctance is really a risk calculation. The organization doesn't need the security pro to tell them what the financial effects of a breach would be; they know, and we can't. When they choose not to use a secure coding program, they are accepting that risk.

"We need to trace a discovered vulnerability from its creation, through the SDLC, into deployment, and then connect it to an actual breach, leading to a real monetary loss suffered by the business. THERE’S your ROI"

I agree that nothing motivates management like a breach on the news. The greatest security programs operate with the goal of never letting a serious bug happen AGAIN. But there are companies that can survive this kind of gamble, and companies that can't. Companies putting lives at risk have a different prevention obligation than companies that make video games. Remember, too, that a breach has intangible costs, like brand image and company culture, but also intangible benefits, like learning from the process and justifying change, that can't be measured with ROI equations.

This is why “fixing it sooner vs. fixing it later” is a flawed argument. The premise is that you MUST fix, and that’s what executives aren’t buying. We have to make the logic work better.

The call for logic and evidence of breaches feeds into this premise. You're saying that if we can just get more data, then we can justify the fix to management. If you're using historical evidence to justify fixing a bug, you need only look hard enough: somewhere there is a scary example of the same bug in an exploit that burned someone else. The premise of the argument is not a judgment that everything must be fixed. It's not the job of a security pro to tell the executive what must be fixed, only the ways the software can be broken. The conventional wisdom of fixing sooner presupposes that the bugs being fixed are worth fixing, and therefore would have been cheaper to fix sooner.

Anyway, I think it's a hard situation to manage because the bugs *are* technical, and a manager most likely won't be able to see the vulnerability, and will have to take someone's word for how real it is. This is where the conversation usually diverges toward scanning vs. pentesting to prove criticality. The way I see it, any day you don't end up on the news is a day your security program is working, because you can't be invisible on the 'net. If there's a hole, people know about it, and if they can't or won't exploit it, then that's a success.


Adrian Lane, over at the Securosis Blog, had a great comment on shrdlu's post as well. He said, "Failing in order to justify action is still failure."

Tuesday, November 02, 2010

A discussion at SecTor on Rogue Secure Development

Last week I presented a new methodology for developing secure code called Rogue Secure Development (pdf). The talk was at SecTor in Toronto, and afterwards a lively discussion took place concerning the adoption of such a methodology. RSD is a five-phase process that integrates with the traditional Waterfall SDLC and focuses on bare-bones resource requirements for SMBs. The question I put to the audience was:

If there is a process that requires minimal amounts of resources, saves money, and creates robust code, what will it take to increase adoption?

There were many answers, but they were all summed up succinctly in 4 options.

1. People are killed, and a lack of a secure coding methodology is directly to blame.
2. Companies go bankrupt, and a lack of a secure coding methodology is directly to blame.
3. A nuclear power plant has a catastrophic meltdown, and a lack of a secure coding methodology is directly to blame.
4. Compliance forces adoption.

I found these dramatic and macabre options disturbing, so I asked, "Is there no business case for secure coding? No cost-saving analysis? No risk management prescription?" The consensus in the room was that my suggestions, while plausible, weren't going to persuade anybody to break from the status quo. Interestingly, the only factor that seemed to carry real persuasive power was Compliance. In this particular audience, the threat of fines was a stronger motivator than I have ever seen before.

In March 2010, Errata conducted a study asking people why they were not using a secure development lifecycle. By far the most popular answer was resource constraints. The four options above would imply that, at least according to security folks, the reason people do not adopt secure coding is some black-and-white risk assessment telling them they are not in danger. So does this mean that the people in the study aren't being honest with themselves, or that security professionals are out of touch with the motives of development shops?

Thursday, September 16, 2010

Adobe misses low hanging fruit in Reader


One of the most common features of "secure development" is avoiding functions that are known to be dangerous, functions which have caused major vulnerabilities (such as Internet worms) in the past. These are functions developed in the 1970s, before their risks were understood. Now that we have suffered from these functions and understand the risks, we have safer alternatives. Using these alternatives is cheap and easy, and it can save a development house endless embarrassment and remediation time. More importantly, while verifying that your code is "secure" is an essentially impossible task, verifying that your code contains no banned functions is easy. We call this the "low-hanging fruit" of secure development.

One such bad function is "strcat." It appends one string onto the end of another. However, it does not check that the destination buffer is big enough. Strcat keeps copying beyond the bounds of the destination, overwriting other parts of memory. Hackers can manipulate the overwritten areas in just the right way to break into the machine. With 48,000 hits on Google for strcat vulnerabilities, some dating back more than a decade, this is a well-known security issue.
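A minimal sketch of the failure mode (an assumed example, not Adobe's code): the destination buffer is never checked, so a long input simply keeps writing past its end.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char dest[16] = "font=";
    const char *name = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"; /* longer than dest can hold */

    /* Unsafe: strcat() would keep appending past the 16 bytes of dest,
     * smashing whatever lives next to it on the stack. */
    /* strcat(dest, name); */

    /* Safer: append only as much as actually fits. */
    strncat(dest, name, sizeof(dest) - strlen(dest) - 1);

    printf("%s\n", dest);
    return 0;
}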

The most recent exploit in Adobe Reader, the "SING Table Parsing Vulnerability" (CVE-2010-2883), involves exactly this function. First found exploited in the wild by Mila Parkour, this vulnerability has seen weeks of front-page coverage. Metasploit's Joshua Drake did a great writeup of the exploit, here. Chester Wisniewski of Sophos posted a video that clearly demonstrates what the attack looks like, here. While this particular version of the exploit does use JavaScript, disabling JavaScript will not fix the problem (unlike the fix for the recent Adobe Reader Flash attack).

So why doesn't Adobe fix its low-hanging fruit? Why does it continue to use these toxic functions? It's strange: hardware vendors are removing hazardous substances (RoHS) from devices, but software vendors aren't being similarly diligent about cleaning hazardous functions out of old code. Errata Security provides a free tool called "LookingGlass" that helps people see whether their software uses these toxic functions. We ran it on Adobe Reader and found extensive use of them back in 2008. LookingGlass can easily tell you if your software contains these toxic functions, and show you quickly what danger you are exposing yourself to. As of today, the danger from Adobe's software is still quite high.

Tuesday, August 24, 2010

DLL exploit not a job for secure coding programs

The big "zero-day" exploit this week was the malicious Windows DLL payload brought to the spotlight by Rapid7's HD Moore. Two other researchers appear to have also found this bug as well. Microsoft released a security advisory on this class of vulnerabilities, and stated "This issue is caused by specific insecure programming practices that allow so-called "binary planting" or "DLL preloading attacks". These practices could allow an attacker to remotely execute arbitrary code in the context of the user running the vulnerable application when the user opens a file from an untrusted location."

So which one is it? Is this an issue caused by insecure coding practices, or by insecure desktop administration and security policy enforcement? The secure coding methodology made famous by Microsoft didn't protect them from having at least four major applications affected by this bug. Researchers say that Microsoft has known about this class of vulnerability for anywhere from six months to ten years, depending on who you read. So why didn't they catch this bug?

While it might seem to be, this is NOT an admonishment of Microsoft or their secure coding practices. The Microsoft SDL and SDL-Agile are successful, game-changing strategies to which I give a lot of credit. The reason the SDL didn't catch this DLL code-execution bug is that a bug like this is outside the scope of a successful secure coding program. In a secure software development lifecycle, the goal is to prevent, from the start, the bugs that are easy and cost-efficient to eliminate. The SDL is great at preventing SQL injections and catching bugs where sanitizing code is the fix; this bug, however, is in code that behaves exactly as it was designed. In April 2009, when Aviv Raff was dealing with a very similar disclosure, Microsoft told him that this was not a simple fix: "They said it would be very problematic to fix the whole thing, and would break a lot of third-party Windows applications."
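For illustration, here is a hypothetical sketch of the pattern Microsoft describes (the codec.dll name and paths are made up; this is not code from any affected product). The vulnerable version is ordinary-looking code that behaves exactly as designed, which is why it slips past reviews focused on classic sanitization bugs.

#include <windows.h>

void load_codec_unsafe(void)
{
    /* Unsafe: a relative name makes Windows walk its DLL search path,
     * which can include the directory of the document the user opened.
     * An attacker's codec.dll planted next to that document gets loaded
     * and runs with the user's privileges. */
    HMODULE h = LoadLibrary(TEXT("codec.dll"));
    if (h) { /* ... use the library ... */ FreeLibrary(h); }
}

void load_codec_safer(void)
{
    /* Safer: drop the current directory from the search order and load
     * only from an absolute, trusted path. */
    SetDllDirectory(TEXT(""));
    HMODULE h = LoadLibrary(TEXT("C:\\Program Files\\Example\\codec.dll"));
    if (h) { FreeLibrary(h); }
}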

Microsoft has put out a tool to help administrators mitigate the problem, and has published a lengthy description to guide developers toward constructing their code differently in the future. This is an appropriate response based on their Security Response Practices. It is possible that responding to a disclosure such as this is more cost-efficient than preventing it. Third-party companies such as Rapid7 and Errata Security are adding modules to their auditing tools to check for this attack. Actions like these may eventually turn DLL preloading attacks into "low-hanging fruit" in the development process, but for now a secure coding program should not be expected to have prevented this attack.

Sunday, April 04, 2010

Errata Security releases the results of the survey on secure coding practices

Errata Security released the results of a survey conducted over the week of Security B-Sides and the RSA Conference in San Francisco. The survey found that the Microsoft SDL was the most common security development lifecycle among companies using formal methodologies, but ad hoc approaches are still more popular overall. Small companies are more likely to be using Agile development, and the corresponding SDL-Agile. The most common reason for not using a formal methodology was resource requirements.

Of those who responded that they were choosing not to use a secure coding program, a lack of resources was by far the most common reason. No matter the size of the company, participants said it was too time-consuming, too expensive, and too draining on their resources. Another reason was that management had deemed it unnecessary. Management plays a key role in these decisions: the survey showed that developers look to management to set the security agenda and are generally not self-starters when it comes to including security in their code.

Security in the SDLC is still a relatively new practice. The 43% of respondents not integrating security at all is a high number, but it is improving at a steady pace. The adoption of SDL-Agile, which was only introduced in November 2009, by almost all of the small development shops and several large companies shows that people are ready to make the shift; they're just waiting for the right style to fit their needs.

Here are the press links covering the story, and a link to the actual paper:

Download the Survey Results (pdf): "Integrating Security Into the Software Development Lifecycle"

Dark Reading: "Survey Says: More Than Half of Software Companies Deploying Secure Coding Methods"

CSO Security and Risk: "Code Writers Finally Get Security? Maybe"

Microsoft SDL Blog: "Survey Results: Microsoft SDL awareness on the rise"

Jeff Jones Blog: "SDL AWARENESS AND ADOPTION HIGH AMONG SECURITY PROFESSIONALS"

Help Net Security: "Root issues causing software vulnerabilities"

Sunday, February 28, 2010

POLL - What is your experience with security in the Software Development LifeCycle?



Errata Security is conducting a survey on the real-world usage of software development methodologies such as the Microsoft SDL, OWASP's SAMM, and BSIMM. We are interested in learning which organizations are successfully implementing these methods, and also the reasons other companies are abstaining from them. The survey went live over the weekend, and we are already collecting some very interesting experiences. The most noteworthy observation is how varied the responses have been; there appears to be no single solution that fits any two organizations. We will keep the survey up through the RSA Conference and the following week and see whether any patterns emerge.

To participate in this short survey, go to http://bit.ly/ErrataSurvey. If you would like a copy of the results of this survey, there is a request button at the end of the survey where you can enter your email address.

In order to encourage participation in this survey, and to explain the reasons behind it, I will be giving a lightning talk at Security B-Sides in San Francisco on March 3 at 12:00 PST.

Please share the survey link with software developers, security experts, product managers, or anyone involved in product development. Thanks!