In last month’s post,
I proposed an approach to identifying risks to your organization strategically. By
aligning your organization’s values and key assets with the threat vectors and risk
sources that threaten them, we can define a multi-dimensional attack surface that
helps us gain a deeper initial understanding of risk and risk motivation than a
traditional risk register.
Identifying your risks and aligning them with your strategic
assets is a great start, but to really help your management make good,
impactful decisions, some risk analysis is needed.
There are two primary schools of thought in risk analysis: the
qualitative approach and the quantitative approach. The qualitative approach focuses
on narrative descriptions of risks, their likelihood, and their impact. These risks
are then rated on a scale, such as 1 through 9 or red, yellow, green, to help
prioritize them. This approach enjoys widespread use due to its
perceived simplicity and speed. However, these ordinal scales are rarely well
defined or consistent, and they do not lend themselves well to mathematical analysis.
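To see why, consider a small sketch in Python (the two risks and all figures below are invented for illustration): two risks can earn exactly the same matrix score while implying wildly different real-world exposure.

```python
# Two risks that score identically on a 3x3 ordinal matrix but differ
# enormously in real terms. All figures are invented for illustration.
risks = [
    {"name": "laptop theft", "likelihood": 3, "impact": 3,
     "annual_events": 12, "loss_per_event": 2_000},
    {"name": "data breach", "likelihood": 3, "impact": 3,
     "annual_events": 0.1, "loss_per_event": 4_000_000},
]

for r in risks:
    ordinal_score = r["likelihood"] * r["impact"]             # what the matrix reports
    expected_loss = r["annual_events"] * r["loss_per_event"]  # what it hides
    print(f"{r['name']:12} score={ordinal_score} "
          f"expected annual loss=${expected_loss:,.0f}")
```

Both risks report a score of 9, yet one represents roughly $24,000 a year in expected loss and the other roughly $400,000. Multiplying ordinal labels discards exactly the information management needs.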
The quantitative approach relies on measurement and metrics
to provide a “value” for risks. Many people feel that for these value
measurements to be valid, they must have high precision and be backed
by vast amounts of data and statistical information. This belief has prevented
many organizations from implementing quantitative risk analysis in any widespread
manner.
In their book “How to Measure
Anything in Cybersecurity Risk,” Douglas Hubbard and Richard Seiersen propose
that the purpose of quantitative measurement is to reduce uncertainty, and that
even a coarse measurement with low precision can significantly reduce
uncertainty. The key to a useful measurement is knowing what question
the measurement is trying to help answer: “What are we trying to protect, and from
whom?” This is what the FAIR risk
analysis standard refers to as scoping. Our method of identifying risks by
assets and vectors has already provided us with a very good understanding of
this scope.
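As a sketch of how even a coarse estimate carries information, here is one common way, along the lines of what Hubbard and Seiersen describe, to turn a calibrated 90% confidence interval into a working distribution. The dollar bounds below are invented:

```python
import math

def lognormal_params(lower, upper, z=1.645):
    """Convert a calibrated 90% confidence interval (lower, upper)
    into (mu, sigma) of a lognormal distribution. Assumes the
    quantity is positive and roughly lognormal."""
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * z)
    return mu, sigma

# An expert is 90% confident a single breach would cost $50k to $2M.
mu, sigma = lognormal_params(50_000, 2_000_000)
print(f"median loss estimate: ${math.exp(mu):,.0f}")  # ~$316,228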
But what do you need to measure, and how do you get it
done? The first step is to go back to your qualitative analysis and break it
down into discrete items that can be estimated and measured. To be accurate,
you need to go to your technical experts and stakeholders and get their input, their
story.
In their book “Scrum: The Art of Doing
Twice the Work in Half the Time,” Jeff and JJ Sutherland point out that
“people think in narratives, in stories. That’s how we understand the world. We
have an intimate grasp of characters, desires, and motivations. Where we get
into trouble is when we try to abstract out of the main through-line discrete
parts and deal with them out of context…. You need to think of motivation. Why
does this character want the thing?”
“Those stories are ones that a team can wrap its head
around. A discussion can actually ensue about how to implement them. They’re
specific enough to be actionable.”
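To make a risk story actionable in that same sense, you can decompose it into discrete fields an expert can actually estimate. A minimal sketch follows; the field names are my own illustration, not FAIR terminology, and the values are invented:

```python
from dataclasses import dataclass

@dataclass
class RiskStory:
    """One decomposed, estimable piece of a qualitative risk
    narrative. Field names are illustrative, not a standard."""
    asset: str                             # what we are trying to protect
    threat_actor: str                      # from whom
    events_per_year: tuple[float, float]   # 90% CI on frequency
    loss_per_event: tuple[float, float]    # 90% CI on impact, in dollars

story = RiskStory(
    asset="customer PII database",
    threat_actor="external criminal group",
    events_per_year=(0.05, 0.5),           # once in 20 years to once in 2
    loss_per_event=(50_000, 2_000_000),
)
```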
Once you have decomposed the story line into actionable pieces,
project your estimates into ranges expressed over a period of time. In this way, quantitative
measurement becomes another quality in your qualitative analysis. You don’t need
to begin with Monte Carlo analysis or forgo your legacy qualitative information; just
continue to extend it until you reach a level that can be accurately measured
and expressed. Start small, with low-hanging fruit: risks that are better
understood and more easily measured.
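You don’t need Monte Carlo to get started, but once your estimates are ranges, the mechanics of simulating them are modest. Here is a minimal sketch under assumed distributions (Poisson event frequency, lognormal loss per event; all figures invented, and the model is illustrative rather than FAIR-exact):

```python
import math, random

random.seed(1)

def simulate_annual_loss(freq_ci, loss_ci, trials=10_000, z=1.645):
    """Simulate one risk's annual loss: event count drawn from a
    Poisson process around the midpoint of the frequency range,
    loss per event drawn from a lognormal fit to a 90% confidence
    interval. An assumed model for illustration."""
    lam = (freq_ci[0] + freq_ci[1]) / 2
    mu = (math.log(loss_ci[0]) + math.log(loss_ci[1])) / 2
    sigma = (math.log(loss_ci[1]) - math.log(loss_ci[0])) / (2 * z)
    annual = []
    for _ in range(trials):
        events, t = 0, random.expovariate(lam)  # Poisson via arrival gaps
        while t < 1.0:
            events += 1
            t += random.expovariate(lam)
        annual.append(sum(random.lognormvariate(mu, sigma)
                          for _ in range(events)))
    annual.sort()
    return annual[len(annual) // 2], annual[int(trials * 0.9)]

median, p90 = simulate_annual_loss((0.05, 0.5), (50_000, 2_000_000))
print(f"median annual loss: ${median:,.0f}; 90th percentile: ${p90:,.0f}")
```

The percentiles this produces feed the same prioritization conversation the ordinal matrix was trying to have, but with defensible numbers behind them.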
Risk analysis and measurement are all about assigning value
to uncertain events and properties. These values are estimates of the potential impact
of threat events, combined with the frequency of those events over time. Be wary of the pressure
to put an exact dollar figure on a given risk; unknowns cannot be expressed in
exact figures. Don’t re-invent the wheel: use a framework or standard such as
FAIR, and engage your technical experts and stakeholders. Your analyses will be
more easily defensible, your management more effective, and
your teams more efficient.