The correct concept is simply "risk analysis". Here's how it works.
List out all the risks. For each risk, calculate:
- How often it occurs.
- How much damage it does.
- How to mitigate it.
- How effective the mitigation is (reduces chance and/or cost).
- How much the mitigation costs.
If you have a risk that occurs once per day on average, costing $1000 each time, then a mitigation costing $500/day that reduces the likelihood to once per week is a clear win for investment.
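The arithmetic behind that "clear win" can be made explicit. A minimal sketch, using only the hypothetical figures from the text (once-per-day incidents, $1000 per incident, a $500/day mitigation reducing frequency to once per week):

```python
# Worked example of the expected-loss arithmetic above.
# All figures are the hypothetical ones from the text.

incident_cost = 1000          # dollars per incident
baseline_rate = 1.0           # incidents per day (once per day)
mitigated_rate = 1.0 / 7      # incidents per day (once per week)
mitigation_cost = 500         # dollars per day

baseline_loss = baseline_rate * incident_cost      # $1000/day expected loss
mitigated_loss = mitigated_rate * incident_cost    # ~$143/day expected loss
net_benefit = baseline_loss - (mitigated_loss + mitigation_cost)

print(f"Expected daily loss, unmitigated:      ${baseline_loss:.0f}")
print(f"Expected daily loss + mitigation cost: ${mitigated_loss + mitigation_cost:.0f}")
print(f"Net daily benefit of mitigation:       ${net_benefit:.0f}")
```

The mitigation pays for itself because $500/day spent eliminates roughly $857/day of expected loss.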
Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will ...
- ...reduce the remaining risk to once-per-month for an additional $10/day.
- ...replace that $500/day mitigation with a $400/day mitigation.
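Both hypothetical products plug into the same expected-loss model, so their value can be compared directly. A sketch, again using only the made-up numbers from the list above:

```python
# Comparing the two hypothetical products against the current mitigation,
# using the same expected-loss model. All figures are the text's hypotheticals.

incident_cost = 1000  # dollars per incident

def daily_cost(rate_per_day, mitigation_cost_per_day):
    """Total expected daily cost: residual losses plus mitigation spend."""
    return rate_per_day * incident_cost + mitigation_cost_per_day

current   = daily_cost(1 / 7,  500)       # once/week residual, $500/day
product_a = daily_cost(1 / 30, 500 + 10)  # once/month residual, +$10/day
product_b = daily_cost(1 / 7,  400)       # same residual risk, cheaper mitigation

print(f"Current:   ${current:.0f}/day")
print(f"Product A: ${product_a:.0f}/day")  # wins by cutting residual risk
print(f"Product B: ${product_b:.0f}/day")  # wins by cutting mitigation cost
```

This is the calculation the post argues companies can't do: without the baseline risk matrix, there is nothing for a vendor's ROI number to plug into.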
But this is never done. Companies don't have a sophisticated enough risk matrix to plug ROI numbers into for reducing cost/risk. Instead, ROI is a standalone calculation done by a vendor pimping a product, or by a security engineer building an empire within the company.
If you haven't done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.
But there are further problems. This is risk analysis as done in industries like oil and gas, which have inanimate risk. Almost all their risks are due to accidental failures, as in the Deepwater Horizon incident. In our industry, cybersecurity, risks are animate -- caused by hackers. Our risk models are based on trying to guess what hackers might do.
An example of this problem: when our drug company jacks up the price of an HIV drug, Anonymous hackers break in and dump all our financial data, and our CFO goes to jail. A lot of our risks now come not from the technical side, but from the whims and fads of the hacker community.
Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked through it three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.
Finally, there is this problem with cybersecurity that we really can't distinguish between pesky and existential threats. Take ransomware. A lot of large organizations have gotten accustomed to wiping a few workers' machines every day and restoring from backups. It's a small, pesky problem of little consequence. Then one day ransomware gets domain admin privileges and takes down the entire business for several weeks, as happened after #notPetya. Inevitably our risk models come down on the high side of estimates, with us claiming that all threats are existential, when in fact most companies continue to survive major breaches.
These difficulties with risk analysis lead us to punt on the problem altogether, but that's not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.
One model of how to do this calculation is architecture. We know we need a certain number of toilets per building, even without doing ROI on the value of those toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without doing a specific calculation. Passwords and session cookies need to go across SSL. That's the baseline from which we start analyzing risks and mitigations -- what we need beyond SSL, for example.
So stop using "ROI", or worse, the abomination "ROSI". Start doing risk analysis.
So I mean, I totally agree with you (and not just in theory but in practice). In fact, I love the authoritative literature on this topic, the book How To Measure Anything in Cybersecurity Risk, which says exactly what you say. I love everything all the way up to Advisen and their actuarial approach to cyber.
However, people like to talk about money in ways that they understand. Every vendor wants to show the Global 2000 (or whoever their client base is) not just an increase in security (or privacy) but also in productivity. How much more productive and efficient can an org be by outsourcing SIEM to a SaaS SIEM, or by crowdsourcing pen tests to bug-hunter programs? Is that ROI for productivity purposes? We are in agreement that for security purposes it does not produce a return, but is there not a savings, i.e., a return on productivity?
I agree with dre.
To add, ROI is what business understands. Vendors push ROI because the people they need to persuade to part with their money are the business. If security wants a few bar to fund their projects, business wants to know what return they will get (it's a warped view of the whole situation, but they want to know anyway).
Even with a risk-based approach, they will still ask you to quantify the risk in $currency value. This is near impossible in most cases, because it is typically based on estimates of what damage will be done in a hypothetical situation. Businesses don't like to work with hypothetical situations.
Perhaps if there is no measurable benefit, business' risk appetite has not been adequately assessed, and the matching of security spend to business needs has not been addressed.
Love them or hate them, business will want to know ROI. It is best you find a way to speak their language in order to address the risks, and how these risks affect business' bottom line.
@ Scag: It's not impossible. I wrote about how it is possible through both analytical and statistical methods in this recent answer on StackExchange -- https://security.stackexchange.com/questions/154965/information-security-risk-analytics
You'll see me rant about foresight tools and exceedance-probability curves. I hope you like it and find it useful.
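A loss-exceedance curve is the standard way to present quantified cyber risk: simulate many possible years, then report the probability that annual losses exceed each threshold. A minimal sketch of the idea, with made-up distribution parameters purely for illustration (real analysis would calibrate these from incident data):

```python
# Minimal sketch of a loss-exceedance curve via Monte Carlo simulation.
# All distribution parameters are illustrative assumptions, not real data.
import math
import random

random.seed(0)

def simulate_annual_loss(freq=2.0, median_loss=50_000, sigma=1.0):
    """One simulated year: Poisson-distributed event count (via exponential
    inter-arrival times), lognormal loss severity per event."""
    n, t = 0, random.expovariate(freq)
    while t < 1.0:
        n += 1
        t += random.expovariate(freq)
    return sum(random.lognormvariate(math.log(median_loss), sigma)
               for _ in range(n))

years = [simulate_annual_loss() for _ in range(10_000)]

def prob_exceeding(threshold):
    """Fraction of simulated years whose total loss meets the threshold."""
    return sum(loss >= threshold for loss in years) / len(years)

for x in (10_000, 100_000, 1_000_000):
    print(f"P(annual loss >= ${x:>9,}) = {prob_exceeding(x):.2f}")
```

Plotting `prob_exceeding` over a range of thresholds gives the exceedance-probability curve the comment refers to, which is also the presentation favored in How To Measure Anything in Cybersecurity Risk.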
First, we have yet to solve the problems documented in Willis H. Ware's April 1967 paper, "Security and Privacy in Computer Systems" (https://www.rand.org/pubs/authors/w/ware_willis_h.html). It is an early paper which describes attacks against network-based systems via hardware, software, data, the network, and people. Later papers address how to model risk, based on engineering.
Second, over the past 19 years, we moved away from a security engineering process built on the System Security Engineering CMM and the Information Assurance CMM, which quantified the maturity of systems and processes. That allowed an ROI model to be applied based on the maturity of the system.
We moved to a blinky-box, vendor-driven architecture, based on unprovable assumptions from the sales guy, with no corporate liability for products and services to motivate a great product or solution versus one that does the minimum and makes money. The current measure of maturity is based on recommendations from training, technology research, and security conference companies.
Lastly, engineering standards at all levels are influenced by companies wishing to make money and to support law enforcement and nation-states. This injects short- and long-term vulnerabilities which can be exploited for months or years before a vendor patch appears.
In short, “When the situation was manageable it was neglected, and now that it is thoroughly out of hand we apply too late the remedies which then might have effected a cure.” (Winston Churchill, “Air Parity Lost”, May 2, 1935, to House of Commons).
The ability to create an ROI now requires rethinking security, and going against the "Group Think" that exists in the industry.