ME: "Hi, you left your headlights on."
NEIGHBOR: "WHO SENT YOU? DID MY EX-WIFE SEND YOU? ARE YOU SLEEPING WITH HER?"

The intent of this is to show how badly people react to "friendly" disclosure of vulns. However, vuln disclosure isn't friendly. It is an inherently rude act. It's more like writing "WASH ME" in the dirt covering a person's car. Your standard of cleanliness may differ from your neighbor's. Pointing this out will always get a hostile reaction.
What vuln researchers expect is something like the following scenario:
ME: "Your hairstyle is ugly."
NEIGHBOR: "Thank you for telling me. I'll go to the hairdresser and fix it immediately."

That's not going to happen. I mean, try it with your neighbor. I'd bet that you'll get a hostile reaction rather than the desired one above.
The reason vuln disclosure is rude is because most people don't care about fixing security problems. This includes not only the vendor, but also the customer. Sure, the customers care when vulns cause a problem, but frankly, most vulns don't cause problems. For the average small software vendor, no hacker will ever exploit even obvious vulns in their products. I know this because people hire us all the time to find vulns. We find the most shockingly obvious things that have been in their products for years, yet going back through logs we fail to find any evidence that malicious hackers have found and exploited those problems.
Security is a tradeoff. When a small company fixes a vuln, it comes at a huge cost. It means fixing something that their customers largely do not care about (and would not pay for) at the expense of adding new features that customers would want (and pay for). When a researcher publishes a vuln in a product, a previously harmless vuln becomes harmful, forcing a tradeoff to be made that neither the vendor nor the customer wanted.
It's hard for vuln researchers to grasp the enormity of the tradeoff. For us, security is easy. We can pentest a product and come up with an obvious vuln within seconds. Here's a real scenario: we were asked to look at a product, so we put a packet-sniffer on the wire and within seconds found a secret password being sent in the clear that would allow us to own all their customers. Yet, for the small software vendor, this is black magic. Nobody in the company knows how to use a packet-sniffer, nor is there any budget to hire somebody who can. Nobody really understands how to fix the problem in a way that isn't equally insecure. Their development process is so crappy that they can't track this bug and make sure it's fixed.
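To illustrate why a cleartext password is game-over the moment someone puts a sniffer on the wire, here is a minimal sketch. The anecdote doesn't say which protocol was involved, so this assumes HTTP Basic Auth as a stand-in; the `capture` bytes are simulated traffic, not a real pcap. The point is only that "recovering" the credential takes a few lines, no black magic required.

```python
import base64
import re

def find_basic_auth(payload: bytes):
    """Scan captured bytes for HTTP Basic Auth credentials sent in the clear."""
    creds = []
    for m in re.finditer(rb"Authorization: Basic ([A-Za-z0-9+/=]+)", payload):
        # Basic auth is just base64("user:password") -- encoding, not encryption.
        user, _, pw = base64.b64decode(m.group(1)).partition(b":")
        creds.append((user.decode(), pw.decode()))
    return creds

# Simulated sniffed traffic; in practice this would come from tcpdump/Wireshark.
capture = (b"GET /admin HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"Authorization: Basic YWRtaW46aHVudGVyMg==\r\n\r\n")

print(find_basic_auth(capture))  # -> [('admin', 'hunter2')]
```

That asymmetry is the whole problem: trivial for an attacker with a sniffer, invisible to a vendor who has never used one.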
Vuln disclosers believe that such problems wouldn't exist "if only people took security seriously". That's not the issue. I could hold a gun to your head and tell you to solve a third-order differential equation. No matter how seriously you took my threat, you still would not have the ability to solve it. Sure, it'd be straightforward for a mathematician to solve, but essentially impossible for everyone else. Vulns are straightforward for us, but still impossibly difficult for most vendors.
The Attrition blogpost linked above focused on the idea of "responsible disclosure". Vuln researchers can be extremely rude while cloaking themselves in "responsibility". For example, you often see vuln reports where the researcher claimed that the vendor ignored their vuln. That's not precisely true. What really happens is that researchers act in passive-aggressive ways to make sure that the disclosure process goes awry. Sure, the immediate response of any vendor is to deny a problem (especially small vendors that have never dealt with a vuln before). Polite researchers try to overcome this hurdle in a positive fashion; passive-aggressive researchers exploit it to show why the vendor is evil or stupid. The Attrition article is a good example of the passive-aggressive attitude: when disclosure goes awry, it's always the vendor's fault and never the researcher's.
While disclosure is always "rude", it is never "irresponsible". Take Internet worms, for example. Regardless of how responsibly the underlying vulns were disclosed, they resulted in devastating Internet worms. The result was that vendors learned a lesson and stopped shipping products with such a large attack surface. While it's impossible to justify the value of disclosing a single vuln that caused a billion dollars in damage, overall, the disclosure of such bugs clearly improved cybersecurity.
It's like how there is no such thing as "irresponsible speech". It's hard to say why particular speech is valuable, such as burning a Koran, but overall, any laws that restrict "irresponsible" speech end up restricting good speech. That's why laws banning "insulting" speech end up banning legitimate criticism of political leaders.
Therefore, we should not delude ourselves that we are helping the vendor (or their customers) with a disclosed vuln: we usually aren't. We should not justify our actions by calling it "responsible disclosure". We should understand the inherent rudeness of our actions and behave in a humble manner. Yet, while the vendor will hate us, we should still disclose such vulns, because unfettered security research serves the greater good.
UPDATE by David Maynor (5:08pm EST): The more people involved, the more likely it'll go awry. Vulnerability Disclosure is like sex: it should be between 2 people, 3 max. Any more participants will result in a big mess of hurt feelings, bruised egos, NDAs and a weird feeling in the pit of your stomach anytime it is mentioned.