Showing posts with label cliché. Show all posts

Wednesday, October 19, 2016

Cliché: Security through obscurity (again)

This post keeps popping up in my timeline. It's wrong. The phrase "security through/by obscurity" has become such a cliché that it's lost all meaning. When somebody says it, they are almost certainly saying a dumb thing, regardless of whether they support it or are trying to debunk it.

Wednesday, February 10, 2016

Hackers aren't smart -- people are stupid

The cliché is that hackers are geniuses. That's not true; hackers are generally stupid.

The top three hacking problems for the last 10 years are "phishing", "password reuse", and "SQL injection". These problems are extremely simple, as measured by the fact that teenagers are able to exploit them. Yet they persist because people who aren't interested in hacking never truly learn them. They ignore important details. They fail at grasping the core concept.


Phishing

Phishing happens because the hacker forges email from someone you know and trust, such as your bank. It appears nearly indistinguishable from real email that your bank might send. To be fair, good phishing attacks can fool even the experts.
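The reason forgery works is that nothing in basic email verifies the From header; it's just text the sender writes. A minimal sketch in Python, using hypothetical addresses throughout:

```python
# The From header of an email is attacker-supplied text. Nothing in
# plain SMTP checks it, which is why a phishing mail can claim to come
# from your bank. (All addresses here are hypothetical.)
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "security@yourbank.example"   # forged: anyone can write this
msg["To"] = "victim@example.com"
msg["Subject"] = "Urgent: verify your account"
msg.set_content("Confirm your password at http://evil.example/login")

# The message is syntactically identical to a legitimate one.
print(msg["From"])
```

Defenses like SPF/DKIM/DMARC try to close this gap, but they are unevenly deployed, which is why good phishing still gets through.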

But when you read advice from "experts", it's often phrased as "Don't open emails from people you don't know". No, no, no. The problem is that emails appear to come from people you do trust. This advice demonstrates a lack of understanding of the core concept.

What's going on here is human instinct. We naturally distrust strangers, and we teach our children to distrust strangers. Therefore, this advice is wired into our brains. Whatever advice we hear from experts, we are likely to translate it into "don't trust strangers" anyway.

We have a second instinct of giving advice. We want to tell people "just do this one thing", wrapping up the problem in one nice package.

But these instincts war with the core concept, "phishing emails appear to come from those you trust". Thus, average users continue to open emails with reckless abandon, because the core concept never gets through.


Password reuse

Similarly, there is today's gem from the Sydney Morning Herald:


When you create accounts on major websites, they frequently require you to "choose 8 letters with upper case, number, and symbol". Therefore, you assume this is some sort of general security advice to protect your account. It's not, not really. Instead, it's a technical detail related to a second layer of defense. In the unlikely event that hackers break into the website, they'll be able to get the encrypted version of everyone's password. They use password crackers to guess passwords at a rate of a billion-per-second. Easily guessed passwords will get cracked in a fraction of a second, but hard-to-guess passwords are essentially uncrackable. But it's a detail that only matters once the website has already been hacked.
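A toy sketch of that second layer of defense: once the hashed passwords leak, a cracker just hashes guesses until one matches. (This uses unsalted SHA-256 as a deliberate simplification; real sites should use a slow, salted scheme like bcrypt, and real crackers like hashcat try billions of guesses per second against fast hashes.)

```python
# Toy dictionary attack: why guessable passwords fall instantly once a
# site's password hashes leak. Unsalted SHA-256 is a simplification.
import hashlib

leaked_hash = hashlib.sha256(b"password123").hexdigest()  # from a breach

wordlist = ["letmein", "qwerty", "password123", "hunter2"]
cracked = None
for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        cracked = guess
        break

print(cracked)  # -> password123
```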

The real problem with passwords is password reuse. People use the same password for unimportant websites, like http://flyfishing.com, as they use for important sites, like http://chase.com or their email. Simple hobbyist sites are easily hacked, allowing hackers to download all the email addresses and passwords. Hackers then run tools to automate trying out that combination on sites like Amazon, Gmail, and banks, hoping for a match.
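The automation is trivial, which is the point. A toy simulation of this "credential stuffing" (the site names and accounts are hypothetical stand-ins):

```python
# Toy credential-stuffing simulation: one reused password unlocks every
# account that shares it. All data here is hypothetical.
leaked = {"susie@example.com": "hunter2"}  # dumped from the hobbyist site

important_sites = {
    "chase.com": {"susie@example.com": "hunter2"},       # reused: falls
    "gmail.com": {"susie@example.com": "correct-horse"}, # unique: safe
}

compromised = []
for site, accounts in important_sites.items():
    for email, password in leaked.items():
        if accounts.get(email) == password:
            compromised.append((email, site))

print(compromised)  # -> [('susie@example.com', 'chase.com')]
```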

Therefore, the correct advice is "don't reuse passwords on important accounts", such as your business accounts and email account (remember: your email account can reset any other password). In other words, the correct advice is the very opposite of what the Sydney Morning Herald suggested.

The problem here is human nature. We see this requirement ("upper-case and number/symbol") a lot, so we gravitate toward that. It also appeals to our sense of justice, as if people deserve to get hacked for the moral weakness of choosing simple passwords. Thus, we gravitate toward this issue. At the same time, we ignore password reuse, because it's more subtle.

Thus we get bad advice from "experts" like the Sydney Morning Herald, advising people to do the very opposite of what they should be doing. This article was passed around a lot today in the cybersec community. We all had a good laugh.


SQL injection

SQL injection is not an issue for users, but for programmers. However, it shares the same problem: it's extremely simple, yet human nature prevents it from being solved.

Most websites are built the same way, with a web server front-end, and a database back-end. The web server takes user interactions with the site and converts them into a database query. What you do with a website is data, but the database query is code. Normally, data and code are unrelated and never get mixed up. However, since the website generates code based on data, it's easy to confuse the two.


What SQL injection means is that the user (the hacker) sends data to a website's frontend that actually contains code, causing the backend to do something. That something can be to dump all the credit card numbers, or to create an account that allows the hacker to break in.

In other words, SQL injection is when websites fail to understand the differences between these two sentences:


  • Susie said "you owe me $10".
  • Susie said you owe me $10.


It's best illustrated in the following comic:


The core concept is rather easy: don't mix code with data, or as the comic phrases it "sanitize your database inputs". Yet the problem persists because programmers fail to grasp the core concept.
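A minimal sketch of the concept, using Python's built-in sqlite3 module: the vulnerable version splices user data into the query string, so the data rewrites the code; the parameterized version keeps the two apart.

```python
# Sketch: SQL injection vs. a parameterized query, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"  # "data" that secretly contains code

# VULNERABLE: splicing data into the SQL string lets the quotes in
# `evil` rewrite the query itself -- the injected OR matches every row.
vulnerable_rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % evil).fetchall()

# SAFE: a parameterized query treats `evil` purely as data.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()

print(vulnerable_rows)  # every row leaks: [('alice',)]
print(safe_rows)        # []: nobody is literally named "x' OR '1'='1"
```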

The reason is largely that professors fail to understand the core concept. SQL injection has been the most popular hacker attack for more than a decade, but most professors are even older than that. Thus, they continue to teach website design ignoring this problem. The textbooks they use don't even mention it.


Conclusion

These are the three most common hacker exploits on the Internet. Teenagers interested in hacking learn how to exploit them within a few hours. Yet they continue to be unsolved, because if you aren't interested in the issues, you fail to grasp the core concept. The core concept "phishing appears to come from people you trust" becomes "don't trust emails from strangers". The core concept of hackers exploiting password reuse becomes "choose strong passwords". The core concept of mixing code with data simply gets ignored by programmers.

And the problem here isn't just the average person unwilling or unable to grasp the core concept. Instead, confusion is aided by people who are supposed to be trustworthy, like the Sydney Morning Herald, or your college professor.

I know it's condescending and rude to point out that "hacking happens because people are stupid", but that's really the problem. I don't know how to point this out in a less rude manner. That's why most hacking persists.

Wednesday, March 04, 2015

Cliché: Safen Up!

RSA Conference is often a mockery of itself. Yesterday, they posted this tweet:



This is similar to the Simpsons episode where Germans buy the power plant. In fear for his job, Homer (the plant's Safety Inspector) starts going around telling people to "Stop being so unsafe!".



Security is not a platitude; insecurity is not a moral weakness. It's a complex set of tradeoffs. Going around telling people to "safen up" will not improve the situation, but will instead breed resentment. Infosec people are widely disliked because of their moralizing.

The only way to be perfectly secure is to cut the cables, turn off the machines, thermite the drives, and drop the remnants in a deep ocean trench. Anything less and you are insecure. Learn to deal with insecurity instead of blaming people for their moral weaknesses.

Monday, July 28, 2014

Cliché: open-source is secure

Some in cybersec keep claiming that open-source is inherently more secure or trustworthy than closed-source. This is demonstrably false.

Firstly, there is the problem of usability. Unusable crypto isn't a valid option for most users. Most would rather not communicate at all, or risk going to jail, than deal with the typical dependency hell of trying to get open-source to compile. Moreover, open-source apps are notoriously user-hostile, which is why the Linux desktop still hasn't made headway against Windows or Macintosh. The reason is that developers blame users for being stupid for not appreciating how easy their apps are, whereas Microsoft and Apple spend $billions on usability studies actually listening to users. Desktops like Ubuntu are pretty good -- but only when they exactly copy Windows/Macintosh. Ubuntu still doesn't invest in the usability studies that Microsoft/Apple do.

The second problem is deterministic builds. If I want to install an app on my iPhone or Android, the only usable way is through their app stores. This means downloading the binary, not the source. Without deterministic builds, there is no way to verify the downloaded binary matches the public source. The binary may, in fact, be compiled from different source containing a backdoor. This means a malicious company (or an FBI NSL letter) can backdoor open-source binaries as easily as closed-source binaries.
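The verification that deterministic builds make possible is just a hash comparison: build the published source yourself, hash both binaries, and they must match byte-for-byte. A sketch, simulating the byte-identical output with hypothetical file names:

```python
# Sketch: with deterministic builds, compiling the published source
# reproduces the distributed binary exactly, so their hashes match.
# The build output and file names here are simulated stand-ins.
import hashlib
import os
import tempfile

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmp = tempfile.mkdtemp()
official = os.path.join(tmp, "app-from-store.bin")
rebuilt = os.path.join(tmp, "app-built-from-source.bin")
for p in (official, rebuilt):
    with open(p, "wb") as f:
        f.write(b"\x7fELF identical deterministic build output")

match = sha256_of(official) == sha256_of(rebuilt)
print("binary matches public source:", match)
```

Without determinism, the two hashes differ on every rebuild (timestamps, paths, compiler nondeterminism), and this check tells you nothing.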

The third problem is code-review. People trust open-source because they can see for themselves if it has any bugs. Or, if not themselves, they have faith that others are looking at the code ("many eyes make bugs shallow"). Yet, this rarely happens. We repeatedly see bugs giving backdoor access ('vulns') that remain undetected in open-source projects for years, such as the OpenSSL Heartbleed bug. The simple fact is that people aren't looking at open-source. Those qualified to review code would rather be writing their own code. The opposite is true for closed-source, where they pay people to review code. While engineers won't review code for fame/glory, they will for money. Given two products, one open and the other closed, it's impossible to guess which has had more "eyes" looking at the source -- in many cases, it's the closed-source that has been better reviewed.


What's funny about this open-source bigotry is that it leads to very bad solutions. A lot of people I know use the libpurple open-source library and the jabber.ccc.de server (run by the CCC hacking club). People have reviewed the libpurple source and found it extremely buggy, and the chat apps built on it don't pin SSL certificates, meaning any SSL encryption to the CCC server can easily be intercepted. In other words, the open-source alternative is known to be incredibly insecure, yet people still use it, because "everyone knows" that open-source is more secure than closed-source.
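Certificate pinning is the missing defense here: instead of trusting any CA-signed certificate, the client ships with the fingerprint of the one certificate it expects and refuses anything else. A minimal sketch (the certificate bytes are fake placeholders):

```python
# Sketch of certificate pinning. The "certificates" are fake byte
# strings standing in for DER-encoded certs.
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

# The app ships with the fingerprint of the server's real certificate.
real_cert = b"---fake DER bytes of the real server cert---"
PINNED = fingerprint(real_cert)

def check_pin(presented_cert: bytes) -> bool:
    # Refuse the connection unless the presented certificate is the
    # exact one pinned -- even a CA-signed interception cert fails.
    return fingerprint(presented_cert) == PINNED

mitm_cert = b"---attacker's CA-signed interception cert---"
print(check_pin(real_cert))   # True
print(check_pin(mitm_cert))   # False
```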

Wickr and SilentCircle are two secure messaging/phone apps that I use, for the simple fact that they work both on Android and iPhone, and both are easy to use. I've read their crypto algorithms, so I have some assurance that they are doing things right. SilentCircle has open-sourced part of their code, which looks horrible, so it's probable they have some 0day lurking in there somewhere, but it's really no worse than equivalent code. I do know that both companies have spent considerable resources on code review, so I know at least as many "eyes" have reviewed their code as open-source. Even if they showed me their source, I'm not going to read it all -- I've got more important things to do, like write my own source.

Thus, I see no benefit to open-source in this case. Except for Cryptocat, all the open-source messaging apps I've used have been buggy and hard to use. But, you can easily change my mind: just demonstrate an open-source app where more eyes have reviewed the code, or a project that has deterministic builds, or a project that is easier to use, or some other measurable benefit.


Of course, I write this as if the argument was about the benefits of open-source. We all know this doesn't matter. As the EFF teaches us, it's not about benefits, but which is ideologically pure; that open-source is inherently more ethical than closed-source.

Wednesday, January 09, 2013

Cybersec-cliché: process

Among other things, Bruce Schneier is famous for saying that "security is a process". His point wasn't about process so much as products. Customers buy security products (like anti-virus, firewalls, and IPS) thinking they are a magic pill that will stop hackers. They aren't a magic pill, of course; their efficacy depends a lot on how these products are used: the "process".

But, "process" isn't a magic pill, either. Process cannot make up for product deficiencies. Process cannot make up for the lack of skills and education in IT organizations. Indeed, the under-skilled use process to mask their own inadequacies. Process often becomes its own worst enemy, sucking up resources to feed itself rather than making forward progress toward a goal.

Process has become a cliché: what value the idea once had has been destroyed by its overuse.

I mention this because recently I've seen a bunch of articles/posts attacking "process" and I wanted to jump on the bandwagon. The new phrase is now "security is not a process". Though of course, once we finally convince people of the value of this idea, it, too, will have become a useless cliché.



Update: Note the excellent comment below from @JPGoldberg on why he uses this cliché. I think the point is that even though something has become a cliché doesn't mean it's lost all value when used correctly.