Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.
It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security system provides the best return on investment.
It's a good idea in theory, but it's mostly bunk in practice.
Before I get into the details, there's one point I have to make. ROI as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, we hope, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in the investment context.
But as anyone who has lived through a company's vicious end-of-year budget-slashing exercises knows, when you're trying to make your numbers, cutting costs is the same as increasing revenues. So while security can't produce ROI, loss prevention most certainly affects a company's bottom line.
And a company should implement only security countermeasures that affect its bottom line positively. It shouldn't spend more on a security problem than the problem is worth. Conversely, it shouldn't ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.
The classic methodology is called annualized loss expectancy (ALE), and it's straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10% chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you're wasting money. Spend less than that, and you're also wasting money.
Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40% -- to 6% a year -- then you should spend no more than $400 on it. If another security measure reduces it by 80%, that's worth $800. And if two security measures each reduce the chance of being robbed by 50% and one costs $300 and the other $700, the first one is worth it and the second isn't.
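The arithmetic above can be sketched in a few lines of Python. The function names and the store's numbers are just illustrations of the ALE formula described in the text, not part of any standard library:

```python
def ale(incident_cost, annual_probability):
    """Annualized loss expectancy: expected yearly loss from an incident."""
    return incident_cost * annual_probability

def max_countermeasure_spend(incident_cost, annual_probability, risk_reduction):
    """The most a countermeasure is worth, given the fraction of the
    risk it eliminates. Spending more than this wastes money."""
    return ale(incident_cost, annual_probability) * risk_reduction

# The store example: $10,000 loss, 10% annual chance of robbery.
baseline = ale(10_000, 0.10)                                  # $1,000/year
cuts_40_pct = max_countermeasure_spend(10_000, 0.10, 0.40)    # $400
cuts_80_pct = max_countermeasure_spend(10_000, 0.10, 0.80)    # $800
```

A measure that halves the risk is worth $500 a year by this logic, which is why the $300 measure in the example pays off and the $700 one does not.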
The data imperative
The key to making this work is good data; the term of art is "actuarial tail." If you're doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store's neighborhood and have some idea of how much cameras improve the odds of persuading criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale -- maybe you're having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night -- assuming also that the closed store won't get robbed. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn't enough good data. There aren't good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures -- or specific configurations of countermeasures -- mitigate risks. We don't even have data on incident costs.
One problem is that the threat moves too quickly. The characteristics of the things we're trying to prevent change so quickly that we can't accumulate data fast enough. By the time we get some data, there's a new threat model for which we don't have enough data. So we can't create ALE models.
But there's another problem, and it's that the math quickly falls apart when it comes to rare and expensive events. Imagine that you calculate the cost -- reputational costs, loss of customers, etc. -- of having your company's name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.
So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can't argue, since we're just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.
It gets worse when you deal with even more rare and expensive events. Imagine you're in charge of terrorism mitigation at a chlorine plant. What's the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand? 1 in a million? 1 in 10 million? Depending on how you answer those two questions -- and any answer is really just a guess -- you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
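To see how wildly the answer swings, it's enough to multiply out every combination of the guesses above. This is just the chlorine-plant thought experiment from the text, run over its own grid of estimates:

```python
# Each cost and each probability is a guess from the example, nothing more.
costs = [100e6, 1e9, 10e9]        # $100M, $1B, $10B
probs = [1e-5, 1e-6, 1e-7]        # 1 in 100,000 / 1 million / 10 million per year

ales = sorted(cost * prob for cost in costs for prob in probs)
print(f"justifiable annual spend: ${ales[0]:,.0f} to ${ales[-1]:,.0f}")
```

Two unknowable inputs, each uncertain by a couple of orders of magnitude, produce a recommendation spanning four orders of magnitude -- which is the essay's point.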
Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by -- and I'm making this up -- 30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has "killed" 620 people per year -- 930 if you calculate the numbers based on 16 hours of awake time per day. So the question is this: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
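The back-of-the-envelope numbers above can be checked directly. The 30-minute delay is the essay's admittedly made-up figure; the rest follows from it:

```python
boardings = 760_000_000        # US passenger boardings, 2007
delay_hours = 0.5              # assumed extra wait per passenger

total_hours = boardings * delay_hours              # 380 million hours
years_waiting = total_hours / (24 * 365)           # ~43,000 years of waiting
lives_lost = years_waiting / 70                    # ~620 "lives" at a 70-year lifespan
awake_lives_lost = total_hours / (16 * 365) / 70   # ~930 counting only waking hours
```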
Caveat emptor
This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They've jiggered the numbers so that it does.

This doesn't mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only.
So when you get an ROI model from your vendor, take its framework and plug in your own numbers. (Don't even show the vendor your improvements; it won't consider any changes that make its product or service less cost-effective to be an "improvement.") And use those results as a general guide, along with risk management and compliance analyses, when you're deciding what security products and services to buy.
Bruce Schneier is the chief security technology officer