September 23, 2024 by Mark Overholser
In the Black Hat Network Operations Center (NOC), the conference’s leadership team must assemble best-in-class technologies that complement each other to build and harden an enterprise-grade network in just a few days. Then, the NOC must continuously monitor and adapt the network throughout the course of the conference before dismantling it after the conference concludes.
Monitoring the network means looking for signs of performance degradation or misconfigurations and simultaneously making sure that participants on the network are adhering to the rules of engagement. This is why Corelight is a key part of the network stack at the Black Hat conferences. The rich data we provide allows for easy incident response and threat hunting, tasks that would be much harder if we could only rely on other sources of network information, like Netflow records or Firewall logs.
Let’s imagine a member of the NOC has the task of monitoring outbound connections and looking for anomalies. Here’s an example of a Netflow record for an outbound connection:
Flags = 0x00 FLOW, Unsampled
size = 56
first = 1722727633 [2024-08-03 18:27:13]
last = 1722727633 [2024-08-03 18:27:13]
msec_first = 438
msec_last = 986
src addr = 192.168.2.94
dst addr = 140.82.112.35
src port = 54012
dst port = 443
fwd status = 0
tcp flags = 0x00 ......
proto = 6 TCP
(src)tos = 0
(in)packets = 26
(in)bytes = 3355
dst as = 36459
You can collect some useful information from this record, like the source and destination IPs, the destination port, how much data was exchanged, and when it happened. And it contains some enrichment, like the autonomous system (AS) number of the destination.
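To show how these fields fit together, here is a minimal Python sketch. The dictionary shape is my own invention; the field names and values come straight from the record above.

```python
# Toy illustration, not a real Netflow parser: the dict mirrors the
# fields shown in the record above.
record = {
    "first": 1722727633, "msec_first": 438,   # start: seconds + ms offset
    "last": 1722727633, "msec_last": 986,     # end: seconds + ms offset
    "src_addr": "192.168.2.94", "dst_addr": "140.82.112.35",
    "dst_port": 443, "in_bytes": 3355, "in_packets": 26,
}

# Combine the whole-second counters with their millisecond offsets to
# recover the session duration at millisecond precision.
start_ms = record["first"] * 1000 + record["msec_first"]
end_ms = record["last"] * 1000 + record["msec_last"]
duration_ms = end_ms - start_ms
print(duration_ms)  # 548: the session lasted just over half a second
```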
However, as an analyst, I would have several more questions, such as:

- What application protocol was actually in use on this connection?
- What name does the destination answer to, and who owns that network?
- Was the connection fully established, or was it just a connectivity test or banner grab?

For each of these questions, the analyst looking at the record has to make assumptions. For example, an analyst might see the destination port of 443 and assume that this was an SSL/TLS/HTTPS connection. I don’t like to make assumptions, though, especially when running another protocol over a well-known port is a well-known obfuscation technique.
Let’s take a look at the same session using logs from an application-aware next-generation firewall (NGFW):
There are a few main differences between the Netflow record and the firewall’s log entry. First, note that the Netflow record captured the session duration in milliseconds, while the firewall records it in seconds. So, instead of seeing that the session was just over half a second long, we’ve rounded up to one second. That’s probably not consequential here, but if you’re comparing many sessions to look for similarities and differences, those milliseconds may matter.
However, the firewall logs have a distinct advantage over the Netflow records, in that they identify the higher-order application protocol. In this case, it was the secure shell (SSH) protocol, being used over port 443.
This is loads better than just a Netflow record, to be sure. At the enterprise level, even if you’re allowing traffic based on port alone (instead of a combination of ports and application, as NGFW vendors recommend), you can at least look back to see whether the traffic you allowed matched your expectations, and adjust your policy accordingly. Beyond the fields focused on policy enforcement and network operations, such as the name of the firewall rule that matched the traffic and any network address translation (NAT) details, there are also helpful fields like is_non_std_dest_port, which describes whether the destination port is commonly associated with the protocol, and user identification fields (which require a centralized identity management system, and benefit from agents on user endpoints; we have neither for attendees at Black Hat).
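The idea behind a field like is_non_std_dest_port can be sketched in a few lines. The port mapping below is a small invented sample, not any vendor's actual table:

```python
# Hypothetical mapping of application protocols to their customary ports.
COMMON_PORTS = {"ssh": {22}, "https": {443, 8443}, "http": {80, 8080}}

def is_non_std_dest_port(app: str, dst_port: int) -> bool:
    """Flag sessions where the identified application runs on an
    unexpected port, like the SSH-over-443 session above."""
    return dst_port not in COMMON_PORTS.get(app, set())

print(is_non_std_dest_port("ssh", 443))  # True: SSH on 443 stands out
print(is_non_std_dest_port("ssh", 22))   # False: business as usual
```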
However, as an analyst, some of my earlier questions are still unanswered: What name does the destination answer to, and who owns that network? Was the connection fully established, or just a probe?
We can search for the same record in Corelight Investigator, and see what the Corelight logs have to offer.
This is what the Corelight conn (short for connection) log looks like, and it is our version of a “flow” record, collecting all the high-level network flow information about the session.
In addition to the details afforded by the firewall logs, we have a number of additional details, such as the state and history of the connection, the name the remote endpoint answered to, and the network range it belongs to.
All of these are standard fields for logs from a Corelight sensor, or can be enabled with default packages. In the Black Hat NOC, we also add packages that extend many of our logs with additional information. You can see that in fields like the network name annotations, which tell us the session originated from the Black Hat General WiFi.
With all this information at our fingertips, we can quickly put together a high-confidence assessment of the situation. The originator, acting from the Black Hat network General WiFi, is using the SSH protocol to connect to a remote IP on port 443. The remote endpoint answers to the name ssh.github.com, and is in a network range owned by GitHub. The connection was fully formed, not a banner grab or a connectivity test, and it was not interrupted by a mid-stream reset. Considering all these factors, we can quickly and easily chalk this up as likely normal behavior, such as someone interacting with a GitHub repository over SSH, and not malicious behavior for this network.
Not only do we have all of this information in the basic connection log, but we also have a protocol-specific log for the SSH traffic, including other details that can be useful for analysts. Here, I’ve pivoted to the SSH log using the UID (unique identifier) of the connection instead of searching for the 4-tuple. It’s far faster and easier, since that UID is present in every Corelight log pertaining to the connection.
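To illustrate why the UID pivot is so convenient, here is a toy example with two invented records shaped roughly like Corelight's JSON log output (the uid value is made up):

```python
# Two invented records: a conn log entry and its companion ssh log entry.
logs = [
    {"_path": "conn", "uid": "CAbc123XYZ", "id.orig_h": "192.168.2.94",
     "id.orig_p": 54012, "id.resp_h": "140.82.112.35", "id.resp_p": 443},
    {"_path": "ssh", "uid": "CAbc123XYZ", "client": "SSH-2.0-OpenSSH_9.7",
     "server": "SSH-2.0-babeld-785eccb76"},
]

# Pivoting on the 4-tuple takes four comparisons, and it only matches
# records carrying the IP/port fields, so it misses the ssh log entirely.
tuple_hits = [r for r in logs
              if r.get("id.orig_h") == "192.168.2.94"
              and r.get("id.orig_p") == 54012
              and r.get("id.resp_h") == "140.82.112.35"
              and r.get("id.resp_p") == 443]

# Pivoting on the UID is one comparison and finds every related log.
uid_hits = [r for r in logs if r["uid"] == "CAbc123XYZ"]

print(len(tuple_hits), len(uid_hits))  # 1 2
```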
In this case, the client asserted an identity of “SSH-2.0-OpenSSH_9.7” while connecting; the server responded with “SSH-2.0-babeld-785eccb76.” The client and server agreed on chacha20-poly1305 for encryption and the server’s host key fingerprint is available.
In addition, Corelight sensors using the SSH Inference Package from our Encrypted Traffic Collection offer inferences about behaviors inside the encrypted SSH session, without needing to intercept or break the encryption of the connection. The package does this by analyzing packet sizes and relative timing, which leak information about the contents of the session, and it annotates the connection with the inferences it draws.
Other inferences are possible as well, and each of them can be useful to a security analyst who is trying to make sense of network behaviors and look for signs of malicious activity.
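To give a flavor of how such inferences are possible without decryption, here is a deliberately simplified toy heuristic. The thresholds are invented, and the real SSH Inference Package is far more sophisticated than this:

```python
def looks_interactive(pkt_sizes, gaps_s, small_bytes=128, human_gap_s=0.05):
    """Guess whether an encrypted session is keystroke-interactive.

    Keystrokes tend to produce many small client packets separated by
    human-scale pauses; bulk transfers produce large packets back to
    back. Only sizes and timing are used: no plaintext is needed.
    """
    small = sum(1 for s in pkt_sizes if s <= small_bytes) / len(pkt_sizes)
    slow = sum(1 for g in gaps_s if g >= human_gap_s) / len(gaps_s)
    return small > 0.8 and slow > 0.5

# Five tiny packets with human-paced gaps vs. a back-to-back bulk burst.
print(looks_interactive([48, 52, 48, 64, 48], [0.21, 0.35, 0.12, 0.4]))   # True
print(looks_interactive([1448, 1448, 1448, 900], [0.001, 0.001, 0.002]))  # False
```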
(I promise this will make sense in a paragraph or two.)
SSH is just one protocol in a large suite of application-layer protocols in use on the internet and individual networks. Another is HTTP. Let’s take a look at an interesting HTTP transaction we found during an unstructured hunt in the Black Hat NOC:
We can tell a lot from this record. The client requested a unique-looking URI from “ocsp.rootca1.amazontrust.com”, specified a user agent of “com.apple.trustd/3.0” and received a “200 OK” response from the server, as well as some content with a MIME type of “application/ocsp-response”. So, an Apple device asked for an OCSP update from an Amazon root certificate authority. What’s so interesting about that?
I’ve collapsed a couple of larger multi-value fields in the original screenshot, specifically the client_headers and server_headers fields. The client headers are none too interesting, but the headers the server returned are worth a closer look.
Take a look at that “server” header: a literal ¯\_(ツ)_/¯. It’s not malicious, but it sure sticks out when you’re browsing server headers on the lookout for anomalies. I’d like to meet the person responsible for getting that approved, buy them a beverage of their choice, and hear the backstory.
The incredible extensibility of the Corelight sensors gives a huge boost to the Black Hat NOC’s unstructured hunts through the server headers in HTTP traffic. Since Corelight sensors are based on Zeek, which is both a DPI engine and an event-driven scripting engine, we are able to take advantage of existing packages from the Zeek Package database, and even to write our own (like the one above which adds network name information into the connection log) to create new detections or to add fields to logs. The HTTP client and server headers were added to the HTTP log by such a package. I’d like to see your switch, router, or NGFW do that.
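A hunt like that can be as simple as tallying header values and surfacing the rare ones. The header values below are invented stand-ins for what you would see on a real network:

```python
from collections import Counter

# Invented sample of `server` header values pulled from HTTP logs.
server_headers = [
    "nginx", "nginx", "Apache", "AmazonS3",
    "AmazonS3", "Apache", "nginx",
    "totally-legit-server-3000",  # the oddball worth a closer look
]

# Count occurrences and keep only the values seen exactly once.
counts = Counter(server_headers)
rare = [header for header, n in counts.items() if n == 1]
print(rare)  # ['totally-legit-server-3000']
```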
Even after all of this, we’ve still only looked at two protocols, but Corelight sensors log the details from over 30 protocols by default (and can be extended with new protocol analyzers as well). Have a look at the Zeek log cheat sheet poster (PDF) for an overview of the more popular protocols and fields.
By comparing Netflow to NGFW logs to Corelight logs, we can make the following recommendations for choosing the right tool for the job at hand.
First, let the firewall be the firewall. Firewalls are optimized to make fast, point-in-time judgments about traffic to determine whether it should be allowed or blocked, and all of their mechanisms and logs are organized around making and documenting that decision. Asking a firewall, even an NGFW, to gather insights about network traffic to the level of detail that Corelight provides would cause significant additional load and degrade its performance.
Second, use Netflow where it matters. An apt comparison of Netflow and Corelight logs is the difference between reading a book by starlight and reading it at noon on a clear day: there is so much more information in Corelight logs that it is hardly a comparison at all. Netflow was originally designed to help network administrators troubleshoot connectivity and performance issues; it was not designed with security teams in mind. The switches and routers that generate Netflow present a further limitation: they exist to switch and route, and logging will always be secondary to those functions.
Netflow does have its uses for security teams. For example, in a network that wasn’t architected for security visibility, there may not be tapping points built in to remote locations, which gets more problematic the more remote locations you have. In this case, it would be easier to enable Netflow remotely on a switch or router when the security team needs to extend visibility, with the understanding that the resulting data will be far less detailed than full network metadata, and that generating it is secondary to the device’s real job of moving packets.
It’s no wonder that MITRE’s analysis of data sources by their relevance to defense ranks “Network Traffic Content” above “Network Traffic Flow,” with nearly 10% more technique coverage.
When you want to know exactly what happened, what’s better than Corelight metadata logs? How about the packets themselves? With Corelight Smart PCAP, the Black Hat NOC analysts set conditional triggers for what to capture and what to discard. For example, in the Black Hat NOC we capture all unencrypted traffic and store it for the duration of the conference. We elect not to capture encrypted traffic because we don’t have a way to decrypt SSH or TLS traffic, so those packets would just take up valuable storage space. When it is time to dive into a session at the packet level, there is a link right in the connection log to directly download the packets for that session: no need to filter out the surrounding traffic or anything!
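The keep/discard decision can be sketched as a tiny policy function. The service labels and the policy itself are simplified stand-ins for the NOC's actual Smart PCAP trigger configuration:

```python
# Protocols whose payloads we could never read without session keys.
ENCRYPTED_SERVICES = {"ssl", "tls", "ssh", "quic"}

def keep_packets(service: str) -> bool:
    """Capture packets only for sessions we can actually inspect."""
    return service not in ENCRYPTED_SERVICES

print(keep_packets("http"))  # True: unencrypted, worth storing
print(keep_packets("ssh"))   # False: would only waste storage
```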
I hope this latest dispatch from the NOC makes it clear why organizations like the Black Hat security conference, which need deep network visibility for incident response and threat hunting, choose to add Corelight to their stack, even if they already have a best-in-class next-generation firewall.
If you want to build a network the way the Black Hat NOC does, follow the same recipe: let your firewall focus on enforcement, deploy Corelight sensors for deep network visibility, capture the packets that matter with Smart PCAP, and fall back to Netflow only where you can’t get a tap.
We thank Black Hat for continuing to partner with Corelight, and thank all of our peers at Arista, Cisco, Lumen, NetWitness, and Palo Alto Networks for their effort and collaboration in the NOC, where adventures in network monitoring never cease.
Tagged With: network security, cybersecurity, NDR, Netflow, BlackHat, threat hunting, featured