Editor's note: This is the first in a series of Corelight blog posts focusing on evidence-based security strategy. Catch up on all of the posts here.
What matters most in a criminal trial? Evidence. Everything depends on the quality and depth of facts deployed to build a case for innocence or guilt. Without compelling evidence, no jury can draw accurate conclusions. Here at Corelight, we have a clear view of how the most sophisticated cyber defense teams in the world have shifted their strategies toward the collection and analysis of high-quality evidence, both to disrupt advanced attacks and to accelerate SOC operations: quite literally, treating evidence as a first-class part of their strategy. This concept strikes people as either intriguing (what does that mean?) or obvious (who by now hasn’t recognized the value of cybersecurity evidence?). Let’s start with some questions from three different angles to see if we can get ourselves into the first of those two camps.
How many major incidents did we see over the past 90 days, and of what types? When we look across those incidents, which evidence sources were either (a) the lead indicators of the problem or (b) critical to our ability to diagnose the incident? Where did a lack of evidence mean we couldn’t fully resolve an incident or couldn’t understand the scope of the attack? How broad was the resulting scope of disclosure? Those questions give us a heat map of our evidence, in the same way we might have a heat map of our MITRE tactics, techniques and procedures (TTP) coverage. Getting to this level of analysis is one key difference between “of course we value evidence” and “we have an evidence-based strategy.”
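To make that concrete, here is a minimal, purely illustrative sketch of what such a tally could look like. The incident records, evidence source names, and categories are hypothetical, not Corelight data or a prescribed schema; the point is simply that a heat map of evidence can be built from the same incident reviews most teams already run.

```python
# Illustrative only: count which evidence sources led detection, which were
# critical to diagnosis, and where evidence gaps appeared across incidents.
from collections import Counter

# Hypothetical 90-day incident records (fields and source names are invented).
incidents = [
    {"type": "ransomware", "lead": ["edr"], "diagnostic": ["network", "dns"], "gaps": []},
    {"type": "phishing", "lead": ["email_gw"], "diagnostic": ["proxy"], "gaps": ["endpoint"]},
    {"type": "lateral_movement", "lead": ["network"], "diagnostic": ["network", "auth_logs"], "gaps": []},
]

lead, diagnostic, gaps = Counter(), Counter(), Counter()
for inc in incidents:
    lead.update(inc["lead"])            # sources that first surfaced the problem
    diagnostic.update(inc["diagnostic"])  # sources critical to scoping the incident
    gaps.update(inc["gaps"])            # places where evidence was missing

print("Lead indicators:      ", dict(lead))
print("Critical to diagnosis:", dict(diagnostic))
print("Missing evidence:     ", dict(gaps))
```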
What is the coverage of our evidence across the threat surface, in quality, and over time? Is that something we can quantify, or only describe qualitatively? Driving up that coverage reduces attackers’ ability to maneuver across our environment and increases our ability to detect, hunt, or investigate their actions. It also increases our confidence in a complete understanding of the attack scope, the success of remediation, and the required scope of disclosure. Quantifying that coverage - and, as a result, quantifying the risk we carry in the environment - is another difference between “of course we value evidence” and “we have an evidence-based strategy.”
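One illustrative way to turn that into a number is to score each part of the environment along the three axes named above. The segment names, equal weighting, and 180-day retention target below are assumptions made for the sketch, not a Corelight formula - the value is in having any consistent score you can track quarter over quarter.

```python
# Illustrative only: score evidence coverage per segment across three axes:
# fraction of assets instrumented, evidence quality (0-1), and retention in days.
segments = {
    "corporate":  (0.95, 0.9, 180),
    "datacenter": (0.80, 0.8, 90),
    "cloud":      (0.60, 0.7, 30),
    "ot_network": (0.20, 0.5, 14),
}

TARGET_RETENTION_DAYS = 180  # assumed investigation look-back requirement

def coverage_score(instrumented, quality, retention_days):
    """Combine the three axes into a single 0-1 score (equal weighting assumed)."""
    time_score = min(retention_days / TARGET_RETENTION_DAYS, 1.0)
    return (instrumented + quality + time_score) / 3

for name, (inst, qual, days) in segments.items():
    print(f"{name:12s} coverage={coverage_score(inst, qual, days):.2f}")
```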
Are we measuring overall time to respond? Have we looked at the flow of our evidence through our training programs, analytics tools and operating playbooks? Have we given our teams time to do reconnaissance on our environment? Do we have operating processes that connect the insights from that reconnaissance back to our training programs and analytics coverage? Measuring time to respond, and crafting programs to improve it, is the third key difference between “of course we value evidence” and “we have an evidence-based strategy.”
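The measurement itself does not need to be elaborate. Here is a hypothetical example (the timestamps and field names are invented for illustration): compute detection-to-containment durations across recent incidents and track the median and worst case over time, so that improvements to training, analytics, and playbooks show up in the numbers.

```python
# Illustrative only: measure time to respond from incident timestamps.
from datetime import datetime
from statistics import median

# Hypothetical incident timestamps (detection and containment).
incidents = [
    {"detected": "2022-01-03T08:15", "contained": "2022-01-03T16:40"},
    {"detected": "2022-01-11T22:05", "contained": "2022-01-13T09:30"},
    {"detected": "2022-02-02T10:00", "contained": "2022-02-02T12:45"},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

durations = [hours_between(i["detected"], i["contained"]) for i in incidents]
print(f"median time to respond: {median(durations):.1f} h, worst case: {max(durations):.1f} h")
```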
So, with all of this in mind, is an evidence-based cybersecurity strategy a novel next step or a tired catchphrase? This strategy is not a replacement for solid prevention technologies or for driving detection coverage of known TTPs. However, when (not if!) attackers bypass those, evidence dictates our ability to find the attacker before they have time to do damage. In light of Log4Shell, Sunburst, stolen red team tools and many other examples, we expect these kinds of threats to continue; in fact, they represent the next frontier for defense. This is no longer a problem that only the most sophisticated organizations have to face - it is a problem for all of us.
One more thing: the trick with an evidence-based strategy is that we have to start it before we think we need it. Otherwise, we find ourselves trying to investigate a threat from, say, four months ago and unable to recreate the missing evidence. This is in stark contrast to analytics and automation technologies, which can and will evolve: they are powerful tools, but without the right evidence they are irrelevant. As a colleague once quipped, the best time to plant a tree is twenty years ago. The second best time is now. Happy planting!
By Brian Dye, CEO of Corelight