Every few months or so someone points me to an essay written by Marcus Ranum entitled "The Six Dumbest Ideas in Computer Security". This essay is held up as the gold standard of why vulnerability auditing or penetration testing is bad, and its ideas are parroted, most notably by Pete Lindstrom, as reasons why vulnerability auditing is a bad idea.
Let me pause for a second and discuss marketing spin and positioning. Since we are in the midst of primary elections in the United States, voters are constantly overwhelmed with a wide range of marketing messages, stories, testimonials, and other forms of persuasion meant to sway them to a political party's or candidate's viewpoint. One of the staples of these efforts is vilifying your opponent. For instance, if you think that the 2nd amendment should be repealed and individual gun ownership should be banned, an argument might start with something like this:
"The NRA wants to arm everyman woman and child with fully automatic weapons and a legal precedent to shoot anybody on site they want. That will undoubtedly return this country to a state of constant fear and bloodshed." Of course, this is not an accurate representation of pro 2nd amendment advocates but a fair and balanced view can't be presented or you may actually convert people to your opponents cause.
That is a no-no. Dismissing this type of misinformation should be simple with a little extra research, and that is the beauty of the tactic: the person misrepresenting the information relies on human laziness, counting on us to accept an authoritative statement without question.
You may be asking what this has to do with Ranum and why you should care.
Ranum is attempting to convince you he is right by painting the ideas he argues against in the worst possible light, even misrepresenting some facts along the way. I have thoughts on all of his points, but the one I want to address most is the "penetrate and patch" idea.
Ranum wants to paint penetration testing, and vulnerability hunting in general, as a bad idea. He lists several reasons and even gives a cutesy outline of a basic program that demonstrates his perceived flaws in the idea. The problem is that with a bit of fact checking, mixed with a bit of tree shaking, his reasoning starts to fall apart.
To start with, the logic is flawed. His argument is literally the equivalent of saying "now that there is medicine, no one should get sick anymore." He then continues with an anecdote about a programmer friend who refers to the "penetrate and patch" model as "turd polishing". From that description the process does sound pretty worthless: hire some people who do not know how your application works to look for bugs; they find some, claim success, justify the paycheck, and then do it all over again. If security research actually worked the way Ranum portrays it, I would have to agree that the model is worthless.
Luckily, Ranum's description is not how it actually works. I offer an explanation of how these tests actually work, a courtesy Ranum did not extend to his readers. Penetration testing is a marketing name that covers a variety of different activities, each with its own requirements, goals, and results.
Penetration Test:
Who needs it: Anybody who wants to verify that their current security posture is working. Penetration tests exist to enlighten C-level executives and to verify or dispel assumptions about current policy, spending, and staffing needs.
How does it work: A penetration test is an activity that assesses a network's survivability against a hacker attack. This is done by using the tools and techniques a hacker would use to search for flaws in a network's design, architecture, and policy. The goal is not the break-in itself, as any network can be compromised with enough money and time. The goal, much as in game theory, is to look for things that might not have been considered when the network was designed.
Goals: Although vulnerabilities are unearthed, they are interpreted from a larger point of view.
Examples: Discovering that a user account has an easily guessed or brute-forced password does not mean the resolution ends with changing the password. It means implementing or strengthening the organization's password policy, along with continuous password auditing and user notification, to resolve the problem. Being able to exploit a buffer overflow on a server for access does not mean that patching the server will fix the problem. It means implementing a policy that servers have built-in survivability against attacks, with non-executable memory, address space layout randomization, heap and stack protection, and server hardening techniques.
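To make the buffer overflow example concrete, here is a minimal sketch of my own in C; the program and its names are hypothetical, not taken from any audited product. It shows the classic unchecked copy a test might unearth:

/* vuln.c - a hypothetical example of the classic flaw a test might find. */
#include <stdio.h>
#include <string.h>

void greet(const char *name)
{
    char buf[16];
    /* Bug: strcpy() does no bounds checking, so a name longer than
     * 15 bytes overruns buf and corrupts the stack. */
    strcpy(buf, name);
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet(argv[1]);
    return 0;
}

Patching fixes this one strcpy(); the policy-level fix is compiling every server binary with hardening, for instance gcc's -fstack-protector-strong, -D_FORTIFY_SOURCE=2, and -fPIE -pie, so the next unknown overflow aborts the process instead of handing an attacker a shell.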
Application Assessment:
Who needs it: People who have large development shops for custom applications or products. Assessments provide a method of accountability for a product's development while educating the developers about different potential threats and even unintended consequences.
How does it work: An application, an operating system, or even a device like a mobile phone is analyzed, and potential weaknesses are discovered and mitigated. This is more than just looking for buffer overflows; it is understanding how the application actually works as opposed to how the documentation says it works. This type of understanding can pinpoint design errors and unintended consequences.
One of the advantages an audit like this brings is coverage across teams: during the development of a large and complex product, different teams normally work on different parts. For instance, an anti-virus product may have a scan engine team, a GUI team, an update team, and a signature team. The problem is that these teams often do not communicate, and when they do it is through things like developer documentation or API references, which means one team may not understand the ramifications of its changes on another team's work. In an application assessment, a single person, or for a large enough application a team of people, analyzes and understands all the operations while looking for potential missteps. An application assessment is not just QA on steroids.
Examples: A security product stops threats by running them in a virtual machine and attempting to determine if the software is malicious. A lack of proper checking on the part of the VM allows an attacker who is aware of its presence to use the environment to hide malware and to make untraceable changes to the host operating system. In a different example, a test application left behind for exercising a mobile device's GSM connection exposes undocumented API functions and hands whoever finds it a list of all the ARFCN numbers in use in the area. In a third, outsourced code is audited before being integrated into a main project to ensure that basic security standards have been met.
Vulnerability Assessment:
Who needs it: Anybody who has to prove that they are following a set of guidelines on how security should be implemented on a network.
How it works: A vulnerability assessment is what most people are actually thinking of when the phrase penetration test is mentioned. In some fashion, an auditor looks for a known set of problems to ascertain the security posture of a network. This may be done for compliance reasons or to certify that a network is secure enough to handle certain types of information or transactions. These assessments are not supposed to be insightful or explore new areas; they verify that a network meets a certain security baseline. This is important to companies like credit card processors, health care providers, and anybody else that handles personal and confidential information.
Examples: A credit card processor needs to see that a potential merchant has taken the proper steps to ensure that credit card information is stored safely and securely and that the network is undergoing a measure of security due diligence.
As you can see, it is a bit disingenuous to lump them all under one heading and describe them with a wash, rinse, repeat description of finding vulnerabilities. One of the main arguments Ranum makes is that if the "penetrate and patch" philosophy works, then we should be running out of bugs to find. On the surface this is a good argument, and under certain conditions I would agree with it. Those conditions would be that no further development is done on operating systems while banning any new development of commercial code. In order for Ranum's 10-year window to be accurate, we would have had to spend 10 years auditing Windows 98, Redhat 3, and MacOS 7.6. If we, as an industry, were able to stop development and focus on fixing bugs in just those versions over a 10-year period, there might be no vulnerabilities left.

However, this dark period of code development is not possible. The demand for new products and functionality means that every day new code containing new vulnerabilities and new problems goes into wide use. At the rate the IT industry is growing, it will be almost impossible to catch up, let alone stay current, with the tide of vulnerabilities being unwittingly introduced. It is arguable that security effort should focus on proofing new products rather than combing through old ones, and I agree that investing in anti-exploitation technology is wise. If you look at new OS versions like Redhat, OSX, and Vista, there is a lot more attention on these types of technologies. These investments are offset by the slow adoption rate of the new operating systems and compounded by the fact that old, troubled software is not going away quickly. During a recent penetration test I actually ran across a Windows 95 machine; talk about a blast from the past.
Ranum gives an example about the futility of running a "penetration test" for Apache vulnerabilities against a locked-down server running custom C code. I would agree. Security is no different from anything else: knowing the right tool to use is the key to success, while using the wrong tool yields useless results. A penetration test is the wrong tool for that example; an application assessment is the right one.
When doing penetration tests, we specifically ask the client to identify all custom applications before starting the test so we can examine them in a different way; otherwise it would be a waste of both parties' time and effort.
Now for the entertainment portion of this post. In order to better explain this position, let’s apply Ranum’s theory to unicycle riding. Let us say that in an alternate universe Ranum owns a unicycle store, Ranum’s One Wheel. If you are used to dealing with experienced unicycle riders, glib comments like “do it right the first time” could be warranted, but what happens when a novice rider comes in?
Mr. Smith (the new unicycle rider): Excuse me sir, I have committed to riding a unicycle in a charity fundraiser and I bought one here last week. I seem to be having a problem with its proper operation.
Ranum: What’s your problem?
Mr. Smith: I keep falling. I can’t seem to get the hang of it, is there any advice you can offer me?
Ranum: Of course. Don’t fall down, that is counterproductive.
Mr. Smith: Yes, I understand that part, but how do I not fall down?
Ranum: If you are wearing elbow pads, knee pads, and a helmet then you shouldn’t worry, just do it right the first time.
Mr. Smith: That’s kind of the problem, I don’t know what right is, or how to do it. Although I am wearing the required safety gear I am failing to achieve my goal of riding the unicycle.
Ranum: Then you do not have a problem.
Mr. Smith: I was thinking of hiring an instructor to look at what I am doing and tell me how to avoid falling, can you suggest someone?
Ranum: That’s what we call the “try and fall” method of unicycle riding. You will not learn anything and you will find that you are vulnerable to all types of problems like new environments and wind. It’s best if you just do it right the first time.
Mr. Smith: But I don’t understand what right is, what it looks like, or how I accomplish it. At least with a coach I could get guidance.
Ranum: That is the kind of guidance that will leave you taking lessons forever. I am telling you just do it right the first time and do not fall.
Mr. Smith: Also, I was wearing all the safety gear you told me to, but I fell in a bush, got poked in the eye, and had to go to the hospital.
Ranum: Never heard that one before, it must not be dangerous.
The largest problem I see with Ranum’s aversion to “penetrate and patch” is that in its purest form he is advocating ignorance. The essay boils down to saying that although there are tools and methods to better understand your application, your development process, and ways to keep you from ending up on Bugtraq, they should all be ignored in favor of “do it right the first time.” As in the unicycle example, it is hard for most people to do it right the first time because they don’t know what right is. There are also flaws in the logic. For instance, how is a company supposed to know if they are vulnerable to the “exploit of the week” when most software developers don’t even know what that is? I still run into programmers who don’t think stack and heap overflows are exploitable. How do you combat that? You show them, of course. That’s not a problem for me, but it’s a no-no according to the anti “penetrate and patch” crowd. A glib comment could be “you should fire that programmer and hire one that can do it right the first time”. If everything is that simple, why bother doing any QA? You can just hire programmers that write bug-free code every time.
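When words fail, a demonstration works. Here is a contrived sketch of my own (nothing from Ranum's essay, and the names are hypothetical) of the kind of thing I might show such a programmer: overflowing a 16-byte field redirects execution into an attacker-chosen function. It deliberately relies on undefined behavior and a typical flat struct layout, so compile it without hardening flags and treat it as a teaching prop, not portable code.

/* overflow_demo.c - contrived demo for the skeptical programmer. */
#include <stdio.h>
#include <string.h>

void normal(void)   { puts("expected code path"); }
void hijacked(void) { puts("attacker-chosen code path"); }

struct session {
    char name[16];
    void (*on_close)(void);   /* lives in memory right after name */
};

int main(void)
{
    struct session s;
    s.on_close = normal;

    /* Craft a "username" longer than the field: 16 filler bytes
     * followed by the address of hijacked(). */
    unsigned char input[16 + sizeof s.on_close];
    memset(input, 'A', 16);
    void (*target)(void) = hijacked;
    memcpy(input + 16, &target, sizeof target);

    /* The bug: copying attacker-sized input into a fixed-size field.
     * The overflow rewrites the adjacent function pointer. */
    memcpy(s.name, input, sizeof input);

    s.on_close();   /* prints "attacker-chosen code path" */
    return 0;
}

One run of that on the programmer's own machine usually ends the argument about whether overflows are exploitable.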
I am sure Ranum is a very smart and capable individual. I have used a number of his products in the past and am a fan of their design and usefulness. My problem with his essay is that he seems to talk about a subject he hasn't thoroughly researched or participated in, which leads to an uninformed opinion. In addition, I find it odd that Ranum is the CSO of a company, Tenable, that produces tools that fall under the "penetrate and patch" heading.
No no! The real problem is that there isn't enough objectivity!
http://books.slashdot.org/article.pl?sid=08/04/21/1323233
Seems like the introduction of the "Six Dumbest Ideas in Computer Security" list explains the _context_ in which it needs to be seen: Where do anti-good ideas come from? They come from misguided attempts to do the impossible - which is another way of saying "trying to ignore reality." [emphasis from me]
As such, I am sure that nobody in his right mind would ever say "never do anything at any time that is in this list", but it most certainly means that relying on it is dumb, big time. That would be like reading "Action is Better Than Inaction" as advice to roll over and die.
Context is key.
So Mr Ranum is wrong. Why such a long-winded argument to point this out? :-)
There are a lot of points to both your arguments here, as well as Ranum's outdated rants. I'm not sure where Lindstrom comes into the picture... he's just "that old crazy guy that nobody listens to".
I like your definitions for VA, PT, and AA. I'm stuck in AA for awhile -- at least until I get clean. I heard that a year or two is usually long enough.
So allow me to speak for the AA community as I respond to some of the points that we would disagree with.
I still run into programmers that don’t think stack and heap overflows are exploitable. How do you combat that? You show them of course
That's the Exploitability Trap. Instead of demonstrating how an application is obviously insecure (or ambiguous), have the programmers demonstrate how the application is obviously secure. If they can't, then it's your word against theirs -- and it's my opinion that you should be in a position of power to overrule any of their decisions on this sort of matter.
In any given organization, a CSO/CISO or VP/Director of Security can and should reserve the right to force a development manager/team to meet security requirements. This is called Governance. One of the requirements can be about the existence of known buffer overflows. Having a written policy to this end is a great idea. You can just point the programmers to the written policy and say "hahaha. Governance!".
I additionally recommend tying security requirements to Key Performance Indicators, which in turn are tied to bonuses, performance plans, and/or salary improvements. Yes, this does mean that organizations may end up firing programmers over buggy code, especially the same developer who makes the same mistakes over and over again. As a consultant, I feel that firing programmers is not only my job and a measure of my success -- it's my lifeblood -- a duty to my guild and my country.
That’s not a problem for me but it’s a no-no according to the anti “penetrate and patch” crowd. A glib comment could be “you should fire that programmer and hire one that can do it right the first time”. If everything is that simple why bother doing any QA, you can just hire programmers that write bug free code every time
A better question:
Why bother doing any QA when you can just hire developer-testers to test the code in a test environment so that it remains as CWE-free as possible?
"QA" usually only refers to functional or acceptance testing (to include regression testing) AFTER integration (i.e. after the application is built). While this allows organizations to measure QA processes separately from development -- it does not allow the same types of measurements to be made between each phase of the SDLC. If there are separate developer-testers with a testing environment that is separate from the development environment -- the same types of measurements can be taken as between QA and Dev, but additionally control granularity in measurements at the requirements, high-level design, design specification/modeling, construction, and integration phases of a lifecycle.
This could also be used to gather measurements such as tester and test-method reputations. Wouldn't you like to know which is more successful at finding stack-based buffer overflows in any given project/organization -- automated fuzz testing or manual DFA? Rob or Dave? Now you can.
Additionally, with the growing popularity of Refactoring, Dependency Injection, and Aspect-Oriented Programming -- don't you think it's about time that the outdated "separate but equal" Dev+QA model be replaced with "every phase" development-testing?
Yeah, I see these 6 points trumpeted every now and then as well; roughly every 6 months someone posts it as news. :)
I think the spirit of what Ranum is saying is to bake security into what we make. Teach developers security and stop creating turds.
BUT!
He takes an extreme approach. So I'd respond by taking an equally extreme approach in saying P&P is useful for two reasons:
1) We can ditch P&P only if we know everything there is to know about making x secure the first time.
2) We can ditch P&P only if we know there will never be ongoing mistakes. This is just not possible to accept as long as humans are part of the equation.
@marisa
Think the world of Adam, haven't read the work, but if you're going to wait for objectivity and empiricism in risk, you'll be waiting a long time.