Allan Alford has written about the security metrics that he uses over at Mitel, and even dropped in a kind compliment to CISOs who focus on dwell time.

His post on metrics is interesting. I know many CISOs, but have not been one; I never made it higher in the security food chain than Head of IT Security, a post I held for less than a year before handing it over to the very capable David Calder and going on to found ECS Security, where David is now Managing Director. CISOs have a difficult job, in part because they have to live in a shadow state between two worlds, one of which understands the realities of security and one of which lives in ignorance and is by turns fearful or overconfident. Trying to explain the actions of one to the other looks very difficult.

Corporate boards want to know they are safe.

That’s a shame, because they’re much better off accepting that they aren’t. For this reason, one of my favourite metrics is Annualised Loss Expectancy: the amount your organisation estimates it will lose each year from security breaches. It is the sum, across all breach types, of the expected annual frequency of each breach multiplied by the expected loss from a single occurrence, and multiple methods exist for estimating those two figures. The CISSP certification teaches several of them, so a very large number of security practitioners will know the metric in theory, and many will have practical experience with it. It’s a great way to set a ceiling on your security spend, since the ALE is the best estimate of what the business will lose if you do nothing. But most importantly, it is a constant reminder at Board level, expressed in monetary terms, that they are not safe, cannot be safe and will never, ever be safe.
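To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The breach categories, frequencies and loss figures are invented purely for illustration; a real ALE exercise would draw them from your own incident history and whichever estimation method you favour.

```python
# Annualised Loss Expectancy: the sum, over breach types, of
#   ARO (expected occurrences per year) x SLE (expected loss per occurrence).
# All categories and figures below are invented for illustration.

breach_estimates = {
    # breach type: (ARO, SLE in GBP)
    "phishing-led credential theft": (4.0, 25_000),
    "ransomware outbreak": (0.2, 1_500_000),
    "lost or stolen laptop": (6.0, 8_000),
}

ale = sum(aro * sle for aro, sle in breach_estimates.values())

for breach, (aro, sle) in breach_estimates.items():
    print(f"{breach}: {aro}/year x £{sle:,} = £{aro * sle:,.0f} expected per year")

print(f"Annualised Loss Expectancy: £{ale:,.0f}")
```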

Allan goes on to mention eight other metrics, and I’ll talk about each one in turn. As a general principle, I like to monetise metrics so that they can be compared to costs and benefits. This means that the output of every calculation should be in currency.

  1. Security control audits: A useful exercise, where the output should be the projected spend and time required to reach the maturity level appropriate to the organisation’s risk appetite.
  2. Raw incident/breach counts: I prefer projected loss figures, and corrected actual loss figures for previous projections, because both are expressed in monetary terms. Corrections are important because downstream legal ramifications can take years to resolve, and sometimes what looks like a nasty cleanup is, in fact, pretty cheap after the dust has settled.
  3. Successful preventions: Allan doesn’t really like this measure, and neither do I. No one cares how much you managed to stop; they only care how much the things you didn’t stop are going to cost to clean up. The increasing level of trash washing up against enterprise perimeters isn’t usually an issue.
  4. Security awareness training completion: I favour engineering solutions over management solutions, but in those situations where technical controls are not appropriate and you need to rely on people to do the right thing, it’s best to know whether they are at least aware of what the right thing is. I’d probably want to drive this home by pointing out to employees that failure to complete mandatory training after several reminders counts as an aggravating factor towards wilful negligence in the event that the company suffers losses as a result of their negligent action, such as clicking on a phishing link in an email.
  5. Anti-phishing training click rates: A good way of measuring training effectiveness, although protecting an organisation from phishing attacks is probably easier to do at the proxy by blocking uncategorised sites.
  6. Pen-test remediation and/or source-code remediation rates: This metric alone caused a chill to run down my spine. I wouldn’t take a client that didn’t have a mandatory, non-movable release gate requiring a full pen-test and code review at every release, especially in agile environments. Allan says: “If you can’t afford a pen-test…” and I would finish that sentence with “…then your business model isn’t profitable enough for you to continue being in business.”
  7. Recovery time objective and recovery point objective: Necessary for BC/DR, and useful.
  8. Measured risk management: I’ve had consistently poor experiences with Archer, but consistently good ones with ServiceNow. I think it’s important to use as much automation here as you can – don’t send a single survey to a single person. Look for ways to instrument using an endpoint tool such as Tachyon, or the compliance checking function in something like Tenable. My experience is that even the most well-intentioned human beings are considerably less accurate than asking the machines. Humans can’t even determine risk appetite correctly; they are far more risk-hungry before lunch than after lunch, for instance.
  9. Not on Allan’s list, but another key metric for me is per-system profitability. The business should be able to state how much profit a given line of business generates over a set period of time. This can sometimes just be a ballpark, but it should be roughly accurate. It is vital for determining system criticality, which flows through into nearly every other risk metric there is; a rough sketch of the idea follows below.
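To show what I mean by letting criticality follow the money, here is an equally rough sketch. The system names and profit figures are made up for the example; in practice you would tie each system to the profit of the business lines it supports, then let that ranking drive criticality and spend.

```python
# Rank systems by the annual profit they support, so that criticality
# (and the security attention each system receives) follows the money.
# System names and figures are invented for the example.

annual_profit_by_system = {
    "order-processing platform": 12_000_000,
    "customer portal": 3_500_000,
    "internal wiki": 50_000,
}

total_profit = sum(annual_profit_by_system.values())

ranked = sorted(annual_profit_by_system.items(), key=lambda kv: kv[1], reverse=True)
for system, profit in ranked:
    print(f"{system}: £{profit:,}/year ({profit / total_profit:.0%} of profit at stake)")
```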
