January 14, 2019 by Richard Bejtlich
In response to my previous article in this blog series, some readers asked “why monitor the network at all?” This question really struck me, as it relates to a core assumption of mine. In this post I will offer a few reasons why network owners have a responsibility to monitor, not just the option to monitor.
Please note that this is not a legal argument for monitoring. I am not a lawyer, and I can’t speak to the amazing diversity of regulations and policies across our global readership. I write from a practical standpoint. I consider how monitoring will help network owners fulfill their responsibilities as custodians of data, computational power, and organizational assets.
I learned a lot about network security monitoring when I started as a midnight shift analyst at the Air Force Computer Emergency Response Team (AFCERT). Monitoring the network was integral to our operations, but that had not always been the case. Prior to 1993, each Air Force base was responsible for its own security. There was no centralized “managed security service provider” (MSSP) offering global visibility. When the AFCERT deployed trial versions of Todd Heberlein’s Network Security Monitor (NSM) software in the early 1990s, officials were shocked to find intruders in their enterprise.
From a practical standpoint, monitoring is a way to validate the assumptions one makes about the computing environment. In the case of the Air Force in the 1990s, officials assumed that intruders weren’t active in the enterprise. The Air Force had just pummeled the world’s fourth largest army in the first Gulf War. How could intruders be present? The AFCERT’s deployment of Todd’s NSM software provided irrefutable evidence to the contrary.
The first responsibility to monitor, then, is to provide evidence to support or deny one’s assumptions. Assumptions matter because they are the basis for decision making. If leaders make decisions based on faulty assumptions, then they will likely make poor choices. Those decisions can result in harm to the organization and its constituents. Significantly, that constituency can extend well beyond the organizational boundary, to include customers and other third parties who may unknowingly depend on the decisions made by the network owner.
Beyond understanding what is happening on the network, one has a duty to know what is not happening on the network. This sort of “negative knowledge” becomes critical when one is accused of nefarious activities one did not commit, or of ignoring activity that did not occur.
Let’s address the first case. Consider instances where rogue actors flood false Border Gateway Protocol (BGP) routes into the Internet routing plane. If other service providers carry those routes, the rogue actors can perform BGP hijacking. From the perspective of downstream network users whose ISPs carry the rogue routes, the BGP hijacker is, for all intents and purposes, the owner of the hijacked Internet Protocol (IP) addresses. This means that if a victim sees an attack from another party’s hijacked IP addresses, the victim may accuse the authorized owner of those addresses of being the perpetrator.
In this BGP hijack scenario, which occurs on a daily basis, monitoring egress traffic from the hijacked IP address space can show, by omission, that no attack took place. Remember, in reality the offending traffic is generated by the party conducting the BGP hijacking; records of traffic from the legitimate network owner would not show any attack traffic. One could argue that the BGP hijack victim could have altered his or her logs to remove evidence of an attack. However, forensic examination could show that, while possible, altering the evidence would have introduced artifacts tipping a forger’s hand.
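To make this concrete, here is a minimal sketch of how one might review Zeek connection logs, the kind of NSM data Corelight produces, for egress traffic from the legitimately owned address space toward the complaining party’s network during the alleged attack window. The log path, prefixes, and time window are illustrative assumptions, not values from any real incident; an empty result over a complete capture is exactly the “negative knowledge” described above.

```python
#!/usr/bin/env python3
"""Hedged sketch: list connections recorded in a Zeek conn.log that
originate from our address space and terminate in the accuser's network
during the alleged attack window. Paths, prefixes, and the window below
are assumptions for illustration only."""
import ipaddress
from datetime import datetime, timezone

LOG_PATH = "conn.log"                                    # assumed: Zeek TSV conn.log on disk
OUR_SPACE = ipaddress.ip_network("192.0.2.0/24")         # assumed: our legitimate prefix
ACCUSER_SPACE = ipaddress.ip_network("203.0.113.0/24")   # assumed: complainant's prefix
WINDOW = (datetime(2019, 1, 12, tzinfo=timezone.utc).timestamp(),
          datetime(2019, 1, 14, tzinfo=timezone.utc).timestamp())

def records(path):
    """Yield conn.log rows as dicts, using the #fields header for column names."""
    fields = []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]
            elif line and not line.startswith("#"):
                yield dict(zip(fields, line.split("\t")))

hits = 0
for row in records(LOG_PATH):
    ts = float(row["ts"])
    orig = ipaddress.ip_address(row["id.orig_h"])
    resp = ipaddress.ip_address(row["id.resp_h"])
    if WINDOW[0] <= ts <= WINDOW[1] and orig in OUR_SPACE and resp in ACCUSER_SPACE:
        hits += 1
        print(row["ts"], row["id.orig_h"], "->", row["id.resp_h"], row.get("conn_state", "-"))

# No matches over a complete capture is the "negative knowledge":
# no connections from our space to the accuser's space in that window.
print(f"{hits} matching connection(s) found")
```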
Now imagine the second scenario: being accused of ignoring activity that did not occur. My first job after the AFCERT involved helping to create a managed security service provider in Texas. One Monday morning, one of our clients, a financial institution, called to complain that we had not caught the penetration test they had scheduled for the previous weekend. They were quite upset with me, but thanks to our deployment of NSM software and processes, I was able to review all of the activity to their IP address space over the weekend. I found a single Nmap scan on Saturday afternoon, which our analysts had reported as a reconnaissance event with no need for follow-on reporting. NSM data showed no other unusual activity to the customer that weekend.
I asked my customer if their “penetration tester” used a cable modem registered to a certain provider, and I offered the IP address. The customer confirmed that I had located the correct IP address, and I explained that the entirety of the activity my customer had paid the “penetration tester” for was an Nmap scan. I asked how much money that scan had cost, and I remember the answer being a five digit number. The customer then excused himself to make another call, to the firm that had tried to pass off an Nmap scan as a penetration test.
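For illustration, here is a hedged sketch of how that kind of review might look against a Zeek conn.log: a lone scanner typically shows up as one source address touching many host/port pairs with half-open (S0) or rejected (REJ) connection states. The log path, customer prefix, state set, and threshold are assumptions for the example, not details from the original engagement.

```python
#!/usr/bin/env python3
"""Hedged sketch: summarize scan-like activity toward a customer's address
space from a Zeek conn.log. All constants below are illustrative assumptions."""
import ipaddress
from collections import defaultdict

LOG_PATH = "conn.log"                                     # assumed path to Zeek TSV conn.log
CUSTOMER_SPACE = ipaddress.ip_network("198.51.100.0/24")  # assumed customer prefix
SCAN_STATES = {"S0", "REJ", "RSTO", "RSTOS0"}             # connection states typical of probes
PORT_THRESHOLD = 50                                       # assumed cutoff for "many" probes

def records(path):
    """Yield conn.log rows as dicts, using the #fields header for column names."""
    fields = []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]
            elif line and not line.startswith("#"):
                yield dict(zip(fields, line.split("\t")))

targets = defaultdict(set)   # scanner IP -> set of (dest host, dest port) probed
for row in records(LOG_PATH):
    resp = ipaddress.ip_address(row["id.resp_h"])
    if resp in CUSTOMER_SPACE and row.get("conn_state") in SCAN_STATES:
        targets[row["id.orig_h"]].add((row["id.resp_h"], row["id.resp_p"]))

# Report sources whose probe count suggests a scan rather than stray traffic.
for src, probed in sorted(targets.items(), key=lambda kv: -len(kv[1])):
    if len(probed) >= PORT_THRESHOLD:
        print(f"{src} probed {len(probed)} host/port pairs in the customer space")
```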
In these instances, NSM data is the best way to show not only what has happened, but what has not happened. This benefit derives from the fact that NSM is not alert-centric or alert-dependent. While one should incorporate detection methods into NSM operations, remember that NSM does not depend upon alerts alone.
I have advocated NSM for two decades because I have found that capturing network activity details in a neutral way is incredibly powerful. To understand why, consider an alternative that depends upon alert creation. If one’s operation assumes alerts will always provide information on network activity, what happens when activity does not trigger an alert? Similarly, how does one expect to address the “negative knowledge” question: by not generating an alert?
In brief, because network operators have a responsibility to make decisions based on proper assumptions, and because operators also have a responsibility to know what is, and what is not, happening on their networks, implementing NSM via Corelight and Zeek data is indispensable.
Tagged With: Zeek, Bro, Network Security Monitoring, NSM, Richard Bejtlich, BGP, MSSP