Monday, June 27, 2011

Take a bow everybody, the security industry really failed this time

I haven’t said anything about LulzSec publicly yet, and I don’t really have a good reason for the lack of comment. I have been watching their activities with great amusement. On Saturday I saw they released a large list of router IP addresses along with usernames and passwords, and the passwords looked like they were set to default values. This actually made me laugh out loud, and I had two thoughts. First and foremost, how was this allowed to happen if regular security checks were being done? The second thought was who at the offending company would take the blame for this.


First off, I've heard a lot of people say that LulzSec did security a favor by really showing the need for security. I disagree completely. I think LulzSec has shown how ineffective the security community and marketplace really are. These were not mom-and-pop targets that got hit; these were several mega-corporations that spend more money on security than most people will make in a lifetime. The spending did not stop the compromise and posting of their sensitive data, so what good is it?



Putting your security in the hands of tools will fail you every time.


A tool is a device that helps you accomplish a goal, not a magic device that will accomplish the goal by itself. A hammer does not build a house for a carpenter, nor will a vuln scanner make a network secure.


How did all those routers end up with easy-to-guess usernames and passwords without anybody in the company noticing? I have no inside knowledge, but I can take an educated guess: the belief that security tools will work and that security policies will be followed. I am sure somebody somewhere is explaining to their boss that the security policy was followed to the letter, vulnerability scans were completed regularly, and these were not detected. As a pentester I run into tests all the time that are supposed to be a “gloves off, no limits” test, and the first thing I am handed is a list of systems that are off limits. So although the networks may have been scanned, maybe the routers were considered “mission critical” with no attack surface and were excluded from vulnerability testing.
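To put the default-credential problem in perspective, checking for it is close to trivial. What follows is a minimal sketch of such a check, not anything I ran against these networks: the addresses, credential pairs, and prompt strings are placeholders, real gear varies enough by vendor that any real sweep needs per-device tuning, and it leans on Python's telnetlib, which ships with older Python versions but has since been dropped from the standard library.

# Minimal sketch of a default-credential sweep over telnet-exposed devices.
# Addresses, credential pairs, and prompt strings are illustrative placeholders.
import telnetlib  # standard library in older Pythons; removed in 3.13+

HOSTS = ["192.0.2.1", "192.0.2.2"]  # placeholder addresses (TEST-NET-1 range)
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def try_login(host, user, password, timeout=5):
    """Return True if the device appears to accept the given credentials."""
    try:
        tn = telnetlib.Telnet(host, 23, timeout)
        tn.read_until(b"login: ", timeout)
        tn.write(user.encode("ascii") + b"\n")
        tn.read_until(b"Password: ", timeout)
        tn.write(password.encode("ascii") + b"\n")
        banner = tn.read_until(b"#", timeout)  # many shells prompt with '#'
        tn.close()
        # Crude success check: failed logins usually re-prompt or say "incorrect".
        lowered = banner.lower()
        return b"incorrect" not in lowered and b"login:" not in lowered
    except (OSError, EOFError):
        return False

for host in HOSTS:
    for user, password in DEFAULT_CREDS:
        if try_login(host, user, password):
            print("DEFAULT CREDENTIALS on %s: %s/%s" % (host, user, password))

If a half-page script can find these, so can anyone who pulls the device list off a compromised box.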


Tools like vuln scanners, IPSes, and WAFs will fail you when you need them most. I spend most of my time looking at how to get attacks past security tools, and it is pretty easy. I try to explain that to clients, but oftentimes tools are easier to find than good people, so they go with tools.


If you exclude anything from vulnerability testing you will fail.


I know that there are some systems that really are important, and it will be an operational problem if they go down. Ask yourself this: if that is true, why aren’t these systems the target of more testing, so that you find the faults instead of a hacker group finding them? Anybody who thinks that LulzSec or any other hacker will respect your no-scan list is crazy.


As a former network admin I know that complex networks are actually a hodgepodge of crossed fingers and jerry-rigging just to get them to work. Once these Frankenstein networks are working, nobody wants to touch them for fear of breaking something that may take until the wee hours of the night to fix. That is no excuse for keeping some systems off limits for testing.


The second thought is who to blame. In reality I think everyone in security is to blame, and I include myself in this. We don’t really prepare customers for real-world risks and often focus on things that sell, like compliance. Having worked for and with a lot of security product companies, I have seen a security product’s ability to protect compromised in the name of customer requests more times than I can count. We in security cater more to the check writers than to actual security. Normally the check writers don’t want security; they want a checkbox filled in a way that has minimal impact on operations.


Security is the first business I have seen where the customer is not always right.


I will admit I have changed testing strategies to appease customers. The wide-eyed “you are gonna do what?!?!” response to a testing plan has made me worry about losing a client, so although I will ruffle my feathers and puff out my chest about the importance of the testing, in most cases I will acquiesce to please the client. This is my fault and I should not do it.


Setting client expectations…for real.


I have not seen a company that is actually secure. It doesn’t matter if the threat is simple password guessing or holding a Glock 21 to the head of your network admin; I can get access. Oftentimes security testing is used to verify security to a certain point, a point of tradeoffs for the company between cost, security, and feasibility of attack. While the Glock approach may not be as feasible as other attacks, it will work every time. At this point you should not be judging the feasibility of the attack but the determination of the attacker. As a company, ask if you really have something worth stealing, and if so, what lengths somebody would go to in order to steal it.


This might not be the best example but it is the first anecdote I thought of while writing this post:

I once did a pentest for a company that had a WEP-encrypted wifi network. The network manager wanted to spend his budget on things other than security, so it was never upgraded. The reason: “we have guys with guns at the gates, so no one can really get within range of our network to attack it.” In my plan to the executives I mentioned two possibilities for carrying out the “no holds barred” testing. One idea was skydiving into the facility with my computer; the other was just having a helicopter circle close by. The executives immediately said no for various reasons. They were later forced to admit that either idea would work. Now, if I had been a real attacker I would not have cleared my plans with them first, and I would have been able to compromise their network and do dirty deeds ranging from theft of IP to maintaining access for a cohort off site. I failed my client because I let their fear of success take over the testing.


I am not alone in this failure. If you show me a person who says they have never dialed back testing to please a client, I can show you a person reading a prepared statement from their marketing department. Make no mistake: often the hurdles thrown up in front of security come from people worrying that you will succeed, or at least make their lives more difficult. And the fear of success, or of simply being annoyed, will often motivate clients to veto an attack vector they know will work. If this led to a fix I would be happy, but often, if there is nothing in the report, the client won’t fix anything.


Because of failures like these, the security community does not prepare clients for real attacks by determined attackers like LulzSec. The clients of the security industry are systematically compromised and exposed for all to see, like a cadaver during an autopsy.


In the end, while I see some sales guys rubbing their hands together in glee at the thought that LulzSec will drive security spending up, I am absolutely convinced that the last thing this problem needs is more money.


Until there is a mindset change by the executives of these companies, no amount of security spending will keep them safe…and that’s our failure as an industry.

10 comments:

dre said...

Yes, if more money is going to be spent -- it's going to be on the new-whiz-bang product that "prevents all LulzSec and Anonymous hacks 100 percent of the time". In reality, these magic silver bullets will actually create new problems that reduce security -- the effects of which will probably be recognized anywhere from 3-12 years from now.

Also -- a free (especially illegal) penetration-test is always a bad penetration-test for both parties involved. It's especially bad for the penetration-tester when he or she gets entangled in the legal system related to criminal charges. It's also bad for the target when the target's general counsel gets entangled in the legal system related to data breach exposure. Even if neither party has any legal issues directly to attend to, they certainly will be spending time preparing to NOT have legal problems, which basically amounts to legal problems.

Bleenq said...

I would disagree slightly ... I think the fault starts with IT organizations' inability to understand how significant the network is and how important it is to staff competent network engineers. 99.9% of the corporations I've dealt with did not staff a network engineer ... they relied on their standard IT folks to deal with network issues and had leadership whose own lack of networking knowledge led to the disgraceful state of their networks.

ReverendTurner said...

I agree with you 100% Dave. It's about time someone spoke up about the inadequate Information Security within A LOT of organizations.

I've seen Information Security improve 10-fold since I first used my 300-baud modem to call 1-800 numbers and simply type: operator / operator or root / root

But even with such improvements (policies, technology, tools, etc.), there are still many organizations with networks that carry TOO much accepted risk.

Why? Various reasons... Lack of understanding their true risk acceptance; lack of money; lack of expertise, etc. The list goes on..

OziWan said...

It is not that I disagree with what you say; it is only the context in which you write: the latest script-kiddie group intent on changing the world.

Sites that had done the basics correctly would have had nothing to fear from LulzSec. They have mostly used SQL injection attacks (via the infamous Persian carrot) that any decent penetration tester should have found.

Sure, they own a few routers for cloaking and bouncing purposes, but in the end what they did was simple stuff, and that only makes it worse.

Even if you widen the context and look at CitiBank and Sony, once again - simple attacks, requiring little skill. It is frankly amazing that people in the security arena (and I do not mean you), continue to act as if something surprising or earth shattering has happened.

Do the simple things right and you can start concentrating on the nasty stuff - which is (with or without tooling) difficult to defend against.
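To make the “simple stuff” concrete, here is a minimal sketch of the kind of flaw being described, with an invented table and made-up data purely for illustration: string-building a query out of user input is the classic injectable pattern, and a bound-parameter query closes it.

# Classic SQL injection in miniature: the same lookup done unsafely and safely.
# Uses an in-memory sqlite3 database; table, column, and row are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"  # the always-true payload automated SQLi tools try first

# Vulnerable: user input is glued straight into the SQL string, so the
# attacker's quote characters rewrite the query's logic.
vulnerable = "SELECT * FROM users WHERE username = '%s'" % attacker_input
print(conn.execute(vulnerable).fetchall())  # dumps every row

# Safer: a bound parameter keeps the input as data, never as SQL.
safe = "SELECT * FROM users WHERE username = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # returns nothing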

Dale said...

Another slightly less showy solution for physical penetration: put a smartphone and a spare battery in a cargo envelope. FedEx it to someone on vacation inside the facility. Track it using GPS; when it's there, connect to it over the cell network, turn on the wifi, and fire up a VPN. Poof, and not nearly as obvious as a parachute!


mharrison said...

One important thing to remember is that the bad guys only have to be right once and they have the keys to the kingdom. Internal corporate security has to be good enough to be right a large enough majority of the time to offset that.

Unknown said...

Dude you really need an editor/proofreader. There's a boatload of crap there that didn't make sense.

Pamela Reese said...

You’re absolutely right, any security system that relies on users to make good security decisions is bound to fail. Regularly educating employees on their company's security policies and best practices is just as important as having the right technology in place. Throwing money at the problem won't work. My company, Symantec, works with our customers’ CISOs, internal IT staff and channel partners to help them do a better job communicating the need to foster a culture of security with everyone in the company.

From a technology perspective, enabling organizations to make and enforce security policies based on real data is critical to helping employees avoid making innocent – yet costly – mistakes. For example, establishing a policy that prevents users in finance with sensitive data from installing software unless it has a good security rating and is known to be used by at least 10,000 people for at least 3 months. Providing application control, malware detection and network access control based on the experience of hundreds of millions of systems is more effective than relying on blacklisting and whitelisting technologies alone.

Anonymous said...

Great post David, but I respectfully disagree with your conclusion. shortened link with my rationale: http://wp.me/p1pasm-I

Anonymous said...

Would it be reasonable for IT security professionals to form a professional society with (at the minimum) a code of ethics?

Could you get (or form) something like the Underwriters Laboratories to vet security products/procedures?

Should IT security be completely re-thought from the ground up?