Wednesday, October 19, 2016

Cliché: Security through obscurity (again)

This post keeps popping up in my timeline. It's wrong. The phrase "security through/by obscurity" has become such a cliché that it's lost all meaning. When somebody says it, they are almost certainly saying a dumb thing, regardless of whether they support it or are trying to debunk it.

Let's go back to first principles, namely Kerckhoffs's Principle from the 1800s, which states that a cryptosystem should be secure even if everything about it is known except the key. In other words, there exists no double-secret military-grade encryption with secret algorithms. Today's military crypto is public crypto.
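As a rough illustration of the principle in practice (a sketch only, assuming the third-party Python cryptography package is installed): the algorithm below is completely public and extensively analyzed, and the only secret is the key.

```python
# Kerckhoffs's Principle in practice: the cipher (AES inside Fernet) is public,
# documented, and widely analyzed. The security rests entirely on the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # the one thing that must stay secret
f = Fernet(key)

token = f.encrypt(b"attack at dawn")  # safe to transmit over a public channel
assert f.decrypt(token) == b"attack at dawn"
```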

Let's apply this to port knocking. This is not a layer of obscurity, as proposed by the above post, but a layer of security. Applying Kerckhoffs's Principle, it should work even if everything is known about the port knocking algorithm except the sequence of ports being knocked.
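From the client's point of view, a knock is nothing more than a series of connection attempts in a specific, secret order. Here's a minimal sketch in Python; the knock sequence, target address, and service port are all made-up placeholders.

```python
import socket
import time

KNOCK_SEQUENCE = [1000, 5000, 2000, 4000]  # placeholder secret knock sequence
TARGET = "192.0.2.10"                      # placeholder target address
SERVICE_PORT = 22                          # the service hidden behind the knock

def knock(host, ports, delay=0.5):
    """Send one short-lived connection attempt per port, in order.
    The knocks themselves are expected to be dropped or refused;
    only their arrival order matters to the firewall."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))
        except OSError:
            pass              # timeouts/refusals are the normal case
        finally:
            s.close()
        time.sleep(delay)

knock(TARGET, KNOCK_SEQUENCE)
# If the knock was correct, the firewall should now allow connections
# to the real service for this source address.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(3)
client.connect((TARGET, SERVICE_PORT))
```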

Kerckhoffs's Principle is based on a few simple observations. Two relevant ones today are:
* things are not nearly as obscure as you think
* obscurity often impacts your friends more than your enemies
I (as an attacker) know that many sites use port knocking. Therefore, if I get no response from an IP address (which I have reason to know exists), then I'll assume port knocking is hiding it. I know which port knocking techniques are popular. Or, sniffing at the local Starbucks, I might observe outgoing port knocking behavior, and know which sensitive systems I can attack later using the technique. Thus, though port knocking makes it look like a system doesn't exist, this doesn't fully hide a system from me. The security of the system should not rest on this obscurity.

Instead of an obscurity layer, port knocking is a security layer. The security it provides is that it drives up the amount of effort an attacker needs to hack the system. Some use the opposite approach, whereby the firewall in front of a subnet responds with a SYN-ACK to every SYN. This likewise increases the costs of those doing port scans (like myself, who masscans the entire Internet), by making it look as if all IP addresses and ports exist, not by hiding systems behind a layer of obscurity.
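A rough sketch of that SYN-ACK-to-everything trick is below, assuming the third-party scapy package and root privileges. A real deployment would do this in the firewall or a dedicated tool rather than a userspace script, but the idea is the same: answer every new connection attempt so that scanners see every port as open.

```python
# Sketch only: reply to every incoming SYN with a forged SYN-ACK so port
# scans report every port as open. Requires scapy and root privileges.
from scapy.all import IP, TCP, send, sniff

def fake_synack(pkt):
    reply = IP(src=pkt[IP].dst, dst=pkt[IP].src) / TCP(
        sport=pkt[TCP].dport,
        dport=pkt[TCP].sport,
        flags="SA",               # SYN-ACK
        seq=0x1000,               # arbitrary initial sequence number
        ack=pkt[TCP].seq + 1,
    )
    send(reply, verbose=False)

# BPF filter: SYN set, ACK clear, i.e. only new connection attempts
sniff(filter="tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn",
      prn=fake_synack, store=False)
```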

One plausible way of defeating a port knocking implementation is to simply scan all 64k ports many times. If the implementation is waiting for the sequence of TCP ports 1000, 5000, 2000, 4000, then such a scan will eventually contain that sequence. It will contain every sequence.

If the code for your implementation is open, then it's easy for others to see this plausible flaw and point it out to you. You could then fix it by forcing the sequence to reset every time the first port is seen, or by also listening for bad ports (ones not part of the sequence) that likewise reset the sequence.
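Here's a rough sketch of the difference, using the same made-up knock sequence as above: a naive matcher that ignores out-of-sequence ports, and a stricter one that resets on them.

```python
SEQUENCE = [1000, 5000, 2000, 4000]   # placeholder secret knock sequence

class NaiveTracker:
    """Flawed: ports outside the sequence are ignored, so a scan that
    sweeps all 64k ports a few times will eventually complete the knock."""
    def __init__(self):
        self.pos = 0
    def observe(self, port):
        if port == SEQUENCE[self.pos]:
            self.pos += 1
        if self.pos == len(SEQUENCE):
            self.pos = 0
            return True               # knock complete, open the firewall
        return False

class StrictTracker:
    """Fixed: any unexpected port resets the state, and the first knock
    port always restarts the sequence."""
    def __init__(self):
        self.pos = 0
    def observe(self, port):
        if port == SEQUENCE[self.pos]:
            self.pos += 1
        elif port == SEQUENCE[0]:
            self.pos = 1               # restart on the first knock port
        else:
            self.pos = 0               # reset on any bad port
        if self.pos == len(SEQUENCE):
            self.pos = 0
            return True
        return False

# A few sequential sweeps of every TCP port defeat the naive tracker
# but not the strict one.
sweep = [p for _ in range(4) for p in range(1, 65536)]

naive, strict = NaiveTracker(), StrictTracker()
print("naive opened: ", any(naive.observe(p) for p in sweep))    # True
print("strict opened:", any(strict.observe(p) for p in sweep))   # False
```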

If your code is closed, then your friends can't see this problem. But your enemies are still highly motivated. They might find your code, find the compiled implementation, or simply guess ways around your possible implementation. The chances that you, some random defender, are better at this than the combined effort of all your attackers are very small. Opening things up to your friends gives you a greater edge to combat your enemies.

Thus, applying Kerckhoffs's Principle to this problem means you shouldn't rely upon the secrecy of your port knocking algorithm, or upon the fact that you are using port knocking in the first place.

The above post also discusses ssh on alternate ports. It points out that if an 0day is found in ssh, those who run the service on the default port of 22 will get hacked first, while those who run it on odd ports, like 7837, will have time to patch their services before getting owned.

But this is just repeating the fallacy. It's focusing only on the increase in difficulty to attackers, while ignoring the increase in difficulty to friends. Let's say some new ssh 0day is announced. Everybody is going to rush to patch their servers. They are going to run tools like my masscan to quickly find everything listening on port 22, or a vuln scanner like Nessus. Everything on port 22 will quickly get patched. SSH servers running on port 7837, however, will not get patched. On the other hand, Internet-wide scans like Shodan or the 2012 Internet Census may have already found that you are running ssh on port 7837. That means the attackers can quickly attack it with the latest 0day even while you, the defender, are slow to patch it.

Running ssh on alternate ports is certainly useful because, as the article points out, it dramatically cuts down on the noise that defenders have to deal with. If somebody is brute forcing passwords on port 7837, then that's a threat worth paying more attention to than somebody doing the same at port 22. But this benefit is a separate discussion from obscurity. Hiding an ssh server on an obscure port may thus be a good idea, but not because there is value to obscurity.


Thus, both port knocking and putting ssh on alternate ports are valid security strategies. However, once you mention the cliché "security by/through obscurity", you add nothing useful to the mix.



Update: Response here.

4 comments:

James said...

typo: security through/by security

Daniel Miessler said...

Hi Robert.

Here's my response.

https://danielmiessler.com/blog/disambiguation-of-security-obscurity/#gs.woDBjP4

Taylor Hornby said...

This all boils down to the fact that "security by obscurity" is an idiom. It means something more precise than you can find out by looking up the words "security", "by", and "obscurity" in the dictionary.

Daniel is using "obscurity" in a broad common sense where some technique that "hides" (like the camouflage on a tank) counts as "obscurity." This is indeed what the regular English word "obscurity" means, and indeed it's possible to build useful security tools on the principle of hiding.

However, when a cryptographer says "no security by obscurity" what they are actually saying is that the system should meet its design goals even under the assumption that the attacker knows exactly how the whole system works. They are saying *the way the system works* should not be obscure, not that the system should not involve obscuring things! It's sort of the equivalent of "scientific theories must be falsifiable." If the security of the system is allowed to depend on the attacker not knowing how it works, then I can prove anything is secure by assuming the attacker is stupid!

By the latter definition, something that is "secure by obscurity" is a security system which becomes insecure once the attacker learns how it works. That is not the case for port knocking, etc. as long as it's made clear what guarantees port knocking actually provides. If the claim is "port knocking makes a regular port scan fail to identify the service" then port knocking is security. If the claim is "port knocking makes it totally impossible to identify the services running on a host" then port knocking is "security by obscurity."

This is why "security by obscurity" is such a useless term. When we're presented with a security solution, it's not at all useful to our end-goals to ask "is this obscurity?" It IS useful to ask "What guarantees does this system provably provide, under what assumptions?" The second question is all that matters.

Daniel Miessler said...

Hi Taylor,

I've already agreed with Robert on the semantic issue. It's not worth discussing.

My point was on the actual merits of hiding information. As I say in my follow-up, OPSEC is all about hiding knowledge of what you're doing. Camouflage is about hiding yourself from an enemy.

These reduce the chances of a successful attack. That's one part of the risk equation (with the other being impact).

That's security, plain and simple.

Do we agree?

If so, then we're agreeing that making yourself harder to target is good for security, but that that three-word phrase has baggage that might make it worth avoiding.