Saturday, September 12, 2020

Cliché: Security through obscurity (yet again)

Infosec is a largely non-technical field. People learn a topic only as far as they need to regurgitate the right answer on a certification test. Over time, they start to believe misconceptions about the parts of that topic they never actually learned. Eventually, these misconceptions displace the original concept in the community.

A good demonstration is this discussion of the "security through obscurity fallacy". The top-rated comment claims the fallacy means "if your only security is obscurity, it's bad". Wikipedia substantiates this, claiming experts advise that "obscurity should never be the only security mechanism".

Nope, nope, nope, nope, nope. It's the very opposite of what you're supposed to understand. Obscurity has problems, always, even if it's just an additional layer in your "defense in depth". The entire point of the fallacy is to counteract people's instinct to suppress information. That effort has failed. Instead, people have persevered in believing that obscurity is good, and that this entire conversation is only about specific types of obscurity being bad.


Hypothetical: non-standard SSH

The above discussion mentions running SSH on a non-standard port, such as 7837 instead of 22, as a hypothetical example.

Let's continue this hypothetical. You do this. Then an 0day is discovered, and a worm infecting SSH spreads throughout the Internet. This is exactly the sort of thing you were protecting against with your obscurity.

Yet, the outcome isn't what you expect. Instead, you find that all your systems running SSH on the standard port of 22 remain uninfected, and that the only infections were of systems running SSH on port 7837. How could this happen?

The (hypothetical) reason is that your organization immediately put a filter for port 22 on the firewalls, scanned the network for all SSH servers, and patched the ones they found. At the same time, the worm ran automated Shodan scripts and masscan, and was thus able to discover the non-standard ports almost instantaneously.
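
To make the "almost instantaneously" part concrete, here's a minimal sketch of the kind of banner grab that tools like masscan and Shodan perform at Internet scale. This is my own illustration, not anything from the discussion; the host address and port list are made up. The point is that SSH announces itself the moment you connect, no matter what port it listens on.

    # Sketch: find SSH servers by banner, regardless of port. The target
    # address (a TEST-NET documentation address) and the port list are
    # hypothetical.
    import socket

    def find_ssh(host, ports, timeout=2.0):
        """Return the ports on `host` whose banner identifies an SSH server."""
        hits = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    banner = s.recv(64)          # e.g. b"SSH-2.0-OpenSSH_8.2..."
                    if banner.startswith(b"SSH-"):
                        hits.append(port)
            except OSError:
                continue                         # closed, filtered, or no banner
        return hits

    print(find_ssh("192.0.2.10", [22, 80, 443, 2222, 7837]))

Scanning 65,535 ports instead of one is more work, but not the kind of work that slows down an automated worm, or a service like Shodan that has already done the scanning for it.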

Thus, your cleverness made things worse, not better.


Other phrases

This fallacy has become such a cliché that we should no longer use it. Let's use other phrases to communicate the concept. These phrases would be:

  • attackers can discover obscured details far better than you think, meaning, obscurity is not as beneficial as you think
  • defenders are hindered by obscured details, meaning, there's a greater cost to obscurity than you think
  • we can build secure things that don't depend upon obscurity
  • it's bad to suppress information that you think would help attackers
  • just because there's "obscurity" involved doesn't mean this principle can be invoked

Obscurity less beneficial, more harmful than you think

My hypothetical SSH example demonstrates the first two points. Your instinct is to believe that adding obscurity made life harder for the attackers, and that it had no impact on defenders. The reality is that hackers were far better than you anticipated at finding unusual ports. And at the same time, you underestimated how this would impact defenders.

It's true that hiding SSH ports might help. I'm just showing an overly negative hypothetical result to counteract your overly positive expectations. A robust cost-vs-benefit analysis might show that there is in fact a benefit. But in this case, no such robust argument exists -- people are just in love with obscurity. Maybe hiding SSH on non-standard ports is actually good; it's just that nobody has made an adequate argument for it. Lots of people love the idea, however.


We can secure things

The first two points are themselves based upon a more important idea: we can build secure things. SSH is a secure thing.

The reason people love obscurity is because they have no faith in security. They believe that all security can be broken, and therefore, every little extra bit you can layer on top will help.

In our hypothetical above, SSH is seen as something that will eventually fail due to misconfiguration or an exploitable vulnerability. Thus, adding obscurity helps.

There may be some truth to this, but your solution should be to address this problem specifically. For example, every CISO needs to have an automated script that will cause all the alarms in their home (and mobile) to go off when an SSH CVE happens. Sensitive servers need to have canary accounts that will trigger alarms if they ever get compromised. Planning for an SSH failure is good planning.
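
As a rough sketch of what that alarm script might look like -- my own illustration, not a specific recommendation -- you could poll the public NVD CVE API for new OpenSSH entries and page someone whenever one shows up. The alert_everyone() hook and the hourly polling interval are placeholders for whatever alerting you already have.

    # Sketch: wake people up when a new OpenSSH CVE is published. The endpoint
    # and response shape reflect my understanding of the public NVD 2.0 API.
    import time
    import requests  # third-party: pip install requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def new_openssh_cves(seen):
        """Return NVD records for OpenSSH CVEs not seen before."""
        resp = requests.get(NVD_API, params={"keywordSearch": "openssh"}, timeout=30)
        resp.raise_for_status()
        fresh = []
        for vuln in resp.json().get("vulnerabilities", []):
            cve_id = vuln["cve"]["id"]
            if cve_id not in seen:
                seen.add(cve_id)
                fresh.append(vuln)
        return fresh

    def alert_everyone(vuln):
        print("WAKE UP:", vuln["cve"]["id"])   # placeholder: wire into your paging system

    seen = set()
    new_openssh_cves(seen)                     # prime with what's already published
    while True:
        time.sleep(3600)                       # poll hourly
        for vuln in new_openssh_cves(seen):
            alert_everyone(vuln)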

But not planning for SSH failure, and instead just doing a bunch of handwavy obscurity, is a bad strategy.

The fact is that we can rely upon SSH and should rely upon SSH. Yes, an 0day might happen, but that, too, should be addressed with known effective solutions, such as tracking CVEs and vulnerability management, not vague things like adding obscurity.


Transparency good, suppression bad

The real point of this discussion isn't "obscurity" at all, but "transparency". Transparency is good. And it's good for security for exactly the same reason it's good in other areas, such as transparency in government so we can hold politicians accountable. Only through transparency can we improve security.

That has been the point of Kerckhoffs's principle from the 1880s until today: the only trustworthy crypto algorithms are open, public algorithms. Private algorithms are insecure.
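
To make that concrete, here's what Kerckhoffs's principle looks like in code -- a sketch using the third-party Python `cryptography` package, not anyone's production system. The algorithm (AES-256-GCM) is completely public and has been analyzed to death; the only thing kept secret is the key.

    # Sketch: a public, well-studied algorithm where the key is the only secret.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # the secret: the key, nothing else
    nonce = os.urandom(12)                      # public, but must never repeat per key

    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"

Publishing this code costs nothing. The security lives entirely in the key.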

It's the point behind the full-disclosure debate. Companies like Google who fully disclose in 90 days are trustworthy, companies like Oracle who work hard to suppress vuln information are untrustworthy. Companies who give bounties to vuln researchers to publish bugs are trustworthy, those who sue or arrest researchers are untrustworthy.

It's where security snake oil comes from. Our industry is rife with those who say "trust us ... but we can't reveal details because that would help hackers". We know this statement to be categorically false. If their system were secure, then transparency would not help hackers. QED: hiding details means the system is fundamentally insecure.

It's like when an organization claims to store passwords securely, but refuses to tell you the algorithm, because that would reveal information hackers could use. We know this to be false, because if passwords were actually stored securely, knowing the algorithm wouldn't help hackers.
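
Here's roughly what "actually stored securely" means -- a minimal sketch using Python's standard library, not any particular organization's implementation. The algorithm (PBKDF2-HMAC-SHA256 with a per-user random salt and a high iteration count) is completely public, and telling hackers about it doesn't help them at all.

    # Sketch: password storage whose algorithm can be published without harm.
    import hashlib, hmac, os

    ITERATIONS = 600_000                        # illustrative work factor

    def hash_password(password):
        salt = os.urandom(16)                   # per-user random salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("hunter2")
    assert verify_password("hunter2", salt, digest)
    assert not verify_password("wrong guess", salt, digest)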

Instead of saying the "security through obscurity fallacy" we should instead talk about the "security through suppression fallacy", or simply say "security comes from transparency".


This doesn't apply to all obscurity

This leads to my last point: that just because "obscurity" is happening doesn't mean we can automatically apply this concept.

Closed-source code is a good example. Why won't the company share its source code? If they say "because it helps hackers", then that's a clear violation of this principle. If they say "because trade secrets", then it's not a violation of this principle. They aren't saying obscurity is needed for security, they are saying obscurity is needed because they don't want people copying their ideas.

We can still say that the security of closed-source is worse than open-source, because it usually is. The issues are clearly related. It's simply that the vendor, in this hypothetical, isn't committing the fallacy by claiming that closed source makes their code more secure.

The same is true in the blogpost above of adding decoy cars to a presidential motorcade. I guess you could use the word "obscurity" here, but it has nothing to do with our principle under discussion. For one thing, we aren't talking about "suppressing" information. For another, presidential motorcades are inherently insecure -- this isn't a crypto algorithm or a service like SSH that can be trusted, it's a crap system that is inherently insecure. Maybe handwaving with half-assed solutions, like varying travel routes, using cellphone jammers to block IEDs, and adding decoy cars, is on the whole the best compromise for a bad situation.

Thus, stop invoking this principle every time "obscurity" happens. This just wears out the principle and breeds misunderstanding for the times when we really do need it.


Conclusion

The point of this blogpost is unwinding misconceptions. A couple years from now, I'm likely to write yet another blogpost on this subject, as I discover yet new misconceptions people have developed. I'm rather shocked at this new notion that everyone suddenly believes, that "obscurity" is bad as the only control, but good when added as a layer in a defense-in-depth situation. No, no, no, no ... just no.

These misconceptions happen for good reasons. One of which is that we sometimes forget our underlying assumptions, and that people might not share these assumptions.

For example, when we look at Kerckhoffs's principle from the 1880s, the underlying assumption is that we can have a crypto algorithm that works, like AES or Salsa20, that can't be broken. Therefore, adding obscurity on top of this adds no security. But when that assumption fails, such as a presidential motorcade that's always inherently insecure (just lob a missile at them), then the argument no longer applies.

When teaching this principle, the problem we have is that a lot of people, especially students new to the field, are working from the assumption that everything is broken and that no security can be relied upon. Thus, adding layers of obscurity always seems like a good idea.

Thus, when I say that "security through obscurity is bad", I'm really using this cliché to express some underlying idea. Am I talking about my political ideas of full-disclosure or open-source? Am I talking about vendor snake oil? Am I talking about dealing with newbies who prefer unnecessary and ineffective solutions over ones proven to work? It's hard to tell.

The original discussion linked on Hacker News, though, discussed none of these things. Going through the top-ranked responses, it seemed like a list of people who just heard about the thing yesterday and wanted to give their uninformed hot take on what they think these words mean.


Case Study: ASLR (Address Space Layout Randomization) (Update)

After posting, some have discussed on Twitter whether ASLR is just "security through obscurity". Let's discuss this.

The entire point of this post is to raise the level of discussion beyond glibly repeating a cliché. If you have an argument to be made about ASLR, then make that argument without resorting to the cliché. If you think the cost-vs-benefit analysis means ASLR is not worth it, then argue the cost-vs-benefit tradeoff.

The original cliché (from Kerckhoffs's principle) wasn't about whether the algorithm adds obscurity, but whether the algorithm itself is obscure.

In other words, if Microsoft were claiming Windows is secure because of ASLR, but that they couldn't divulge the details of how it worked because this would help hackers, then you'd have a "security through obscurity" argument. Only in that instance can you invoke the cliché and be assured you are doing so correctly.

I suppose you could argue that ASLR is only "obscurity", that it provides no "security". That's certainly true sometimes. But it's false other times. ASLR completely blocks certain classes of attacks on well-randomized 64-bit systems. It's such a compelling advantage that it's now a standard part of all 64-bit operating systems. Whatever ASLR does involving "obscurity", it clearly adds "security".
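
If you want to watch ASLR's "obscurity" do its thing, here's a tiny Linux-only sketch (mine, and deliberately naive): print where libc got mapped into the current process, run it twice, and compare. With ASLR on, the base address changes every run; with it off (randomize_va_space set to 0), it stays put.

    # Sketch: show the libc base address of this process (Linux only).
    def libc_base():
        with open("/proc/self/maps") as maps:
            for line in maps:
                if "libc" in line:
                    return line.split("-")[0]   # start of the first libc mapping
        return "no libc mapping found"

    print(libc_base())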

In short, just because there's "obscurity" involved doesn't mean the cliché "security through obscurity" can be invoked.

