
Want better network visibility? Don't just go with the (net)flow

In the Black Hat Network Operations Center (NOC), the conference’s leadership team must assemble best-in-class technologies that complement each other to build and harden an enterprise-grade network in just a few days. Then, the NOC must continuously monitor and adapt the network throughout the course of the conference before dismantling it after the conference concludes.

Monitoring the network means looking for signs of performance degradation or misconfigurations and simultaneously making sure that participants on the network are adhering to the rules of engagement. This is why Corelight is a key part of the network stack at the Black Hat conferences. The rich data we provide allows for easy incident response and threat hunting, tasks that would be much harder if we could only rely on other sources of network information, like Netflow records or Firewall logs.

Netflow: It’s better than nothing

Let’s imagine a member of the NOC has the task of monitoring outbound connections and looking for anomalies. Here’s an example of a Netflow record for an outbound connection:

    
     

  Flags        =              0x00 FLOW, Unsampled
  size         =                56
  first        =        1722727633 [2024-08-03 18:27:13]
  last         =        1722727633 [2024-08-03 18:27:13]
  msec_first   =               438
  msec_last    =               986
  src addr     =      192.168.2.94
  dst addr     =     140.82.112.35
  src port     =             54012
  dst port     =               443
  fwd status   =                 0
  tcp flags    =              0x00 ......
  proto        =                 6 TCP
  (src)tos     =                 0
  (in)packets  =                26
  (in)bytes    =              3355
  dst as       =             36459

You can collect some useful information from this record, like the source and destination IPs, the destination port, how much data was exchanged, and when it happened. And it contains some enrichment, like the autonomous system (AS) number of the destination.

However, as an analyst, I would have several more questions, such as:

  • Are there any names associated with the remote endpoint?
  • Is there a higher-level application protocol associated with this communication?
    • If so, what protocol, and are there any other protocol details we can glean?
  • Does the order of TCP flags give any clues to the way the session transpired?

For each of these questions, the analyst looking at the record has to make assumptions. For example, an analyst might see the destination port of 443 and assume this was an SSL/TLS/HTTPS connection. However, I don’t like to make assumptions, especially since running other protocols over port 443 is a well-known obfuscation technique.

Firewall logs: a step above Netflow

Let’s take a look at the same session using logs from an application-aware next-generation firewall (NGFW):

There are a few main differences between the Netflow record and the firewall log entry. First, note that the recorded duration of the session was in milliseconds for the Netflow record, but is in seconds for the firewall record. So, instead of seeing that the session was just over half a second long, we’ve rounded up to one second. That’s probably not consequential here, but if you’re comparing many sessions for similarities and differences, those milliseconds may matter.

However, the firewall logs have a distinct advantage over the Netflow records, in that they identify the higher-order application protocol. In this case, it was the secure shell (SSH) protocol, being used over port 443.

This is loads better than a Netflow record alone, to be sure. At the enterprise level, even if you’re allowing traffic based on port (instead of a combination of port(s) and application, as NGFW vendors recommend), you can at least look back to see whether the traffic you allowed matched your expectations, and adjust your policy accordingly. Beyond the fields focused on policy enforcement and network operations, such as the name of the firewall rule that matched the traffic and any network address translation (NAT) details, there are also helpful fields like is_non_std_dest_port, which indicates whether the destination port is commonly associated with the protocol, and user identification fields (which require a centralized identity management system and benefit from agents on user endpoints, neither of which we have for attendees at Black Hat).

However, as an analyst, some of my earlier questions are still unanswered:

  • Are there any names associated with the remote endpoint?
  • Are there any other higher-order protocol details we can glean?
  • Does the order of TCP flags give any clues to the way the session transpired?

Corelight: The best thing since sliced packets.

We can search for the same record in Corelight Investigator, and see what the Corelight logs have to offer.

This is what the Corelight conn (short for connection) log looks like, and it is our version of a “flow” record, collecting all the high-level network flow information about the session.


In addition to the details afforded by the firewall logs, we have a number of additional details:

  • We have the remote endpoint name ssh.github.com, as well as a citation of where that information came from (observation of a DNS A record resolution)
  • We have the sub-second duration already calculated
  • We have the history field “ShADdaFf” showing the TCP flags in order throughout the connection lifecycle:
    • S: SYN from the originator
    • h: SYN/ACK (“handshake”) from the responder
    • AD: ACK and data from the originator
    • da: data and ACK from the responder
    • F: FIN from the originator
    • f: FIN from the responder
  • The connection lifecycle is represented by the conn_state summary “SF”, indicating the connection ran from “SYN to FIN” as a standard, well-formed connection would do.
  • We have fields local_orig and local_resp to indicate whether the originator and responder exist within our network boundary.
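Decoding the history string by hand gets tedious, so here is a minimal Python sketch of a decoder. The letter meanings below cover only the letters seen in this example; Zeek’s full history alphabet includes more (R for RST, T for retransmission, and so on).

```python
# Minimal decoder for the Zeek/Corelight conn log "history" field.
# Uppercase letters come from the originator, lowercase from the responder.
# Only the letters in this post's example ("ShADdaFf") are mapped here.
MEANINGS = {
    "s": "SYN",
    "h": "SYN/ACK handshake",
    "a": "ACK",
    "d": "payload data",
    "f": "FIN",
}

def decode_history(history: str) -> list[str]:
    """Expand a conn history string into per-event descriptions."""
    events = []
    for ch in history:
        side = "originator" if ch.isupper() else "responder"
        meaning = MEANINGS.get(ch.lower(), f"other ({ch})")
        events.append(f"{side}: {meaning}")
    return events

for event in decode_history("ShADdaFf"):
    print(event)
```

Running it against “ShADdaFf” walks through the same lifecycle described above: SYN, SYN/ACK, ACK and data both ways, then FINs from both sides.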

All of these are standard fields for logs from a Corelight sensor, or can be enabled with default packages. In the Black Hat NOC we also add packages to extend many of our logs with additional information. You can see that in fields like:

  • enrichment_orig.network_name: the name of the network segment that the originator is on. We accomplish this with a script that has an accompanying lookup table of network ranges and their names.
  • id.orig_mac_vendor: the vendor associated with the MAC address of the originator, to help identify device types.
  • remote_asn: the autonomous system number (ASN) associated with the network range the responder is within.
  • remote_organization: the organization name associated with the remote AS (that saves the analyst a trip to ARIN’s website to look it up from the ASN).
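The NOC’s actual enrichment script and lookup table aren’t shown in this post, but the idea is essentially a longest-prefix match of an IP against named ranges. A sketch of that logic in Python, with a hypothetical table (the ranges and names below are made up for illustration):

```python
import ipaddress

# Hypothetical lookup table of network ranges to names; the real
# Black Hat NOC table and its Zeek script are not public.
NETWORK_NAMES = {
    ipaddress.ip_network("192.168.0.0/22"): "General WiFi",
    ipaddress.ip_network("10.66.0.0/16"): "Training",
}

def network_name(addr: str) -> str:
    """Return the name of the most specific range containing addr."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in NETWORK_NAMES if ip in net]
    if not matches:
        return "unknown"
    # Prefer the longest prefix (most specific range).
    return NETWORK_NAMES[max(matches, key=lambda n: n.prefixlen)]

print(network_name("192.168.2.94"))  # the originator from the earlier record
```

In the sensor itself this lives in a Zeek script with an accompanying table, but the matching logic is the same.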

With all this information at our fingertips, we can quickly put together a high-confidence assessment of the situation. The originator, on the Black Hat General WiFi network, is using the SSH protocol to connect to a remote IP on port 443. The remote endpoint answers to the name ssh.github.com and sits in a network range owned by GitHub. The connection was fully formed, not a banner-grab or a connectivity test, and it was not interrupted by a mid-stream reset. Considering all these factors, we can quickly and easily chalk this up as likely normal behavior, such as someone interacting with a GitHub repository over SSH, and not malicious behavior for this network.

Not only do we have all of this information in the basic connection log, but we also have a protocol-specific log for the SSH traffic, including other details that can be useful for analysts. Here, I’ve pivoted to the SSH log using the UID (unique identifier) of the connection instead of searching for the 4-tuple. It’s far faster and easier, since that UID is present in all of the Corelight logs pertaining to the connection.
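The same pivot is easy to reproduce outside of Investigator. Assuming Zeek-style JSON log lines (and using made-up UID values for illustration), a sketch:

```python
import json

def pivot_by_uid(log_lines, uid):
    """Collect every log entry, from any Zeek-style JSON log, tied to one connection."""
    hits = []
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("uid") == uid:
            hits.append(entry)
    return hits

# Inline records standing in for conn and ssh log lines; UIDs are invented.
logs = [
    '{"uid": "CxT1Jc3vAAAAAAAAAA", "_path": "conn", "id.resp_p": 443}',
    '{"uid": "CxT1Jc3vAAAAAAAAAA", "_path": "ssh", "server": "SSH-2.0-babeld-785eccb76"}',
    '{"uid": "CZn7Qx2kBBBBBBBBBB", "_path": "conn", "id.resp_p": 80}',
]
for entry in pivot_by_uid(logs, "CxT1Jc3vAAAAAAAAAA"):
    print(entry["_path"])
```

One UID search pulls the conn entry and the ssh entry together, which is exactly why the pivot beats re-querying the 4-tuple.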

In this case, the client asserted an identity of “SSH-2.0-OpenSSH_9.7” while connecting; the server responded with “SSH-2.0-babeld-785eccb76.” The client and server agreed on chacha20-poly1305 for encryption and the server’s host key fingerprint is available.

In addition, Corelight sensors using the SSH Inference Package from our Encrypted Traffic Collection offer inferences about behaviors inside the encrypted SSH session, without needing to intercept or break the encryption of the connection. The package does this by analyzing packet sizes and relative timing, which leak information about the contents of the session. The inferences for this connection are:

  • CTS: The client already has an entry in its known_hosts file for this server, or was instructed to ignore the server host fingerprint (“Client Trusts Server”).
  • SA: The client scanned authentication methods with the server and then disconnected (“Scanned for Authentication”).

Other inferences the package can make include:

  • Whether files were transferred, and in which direction
  • Whether there were keystrokes, and whether they were automated
  • Whether the client repeatedly tried and failed to authenticate

Each of these inferences can be useful to a security analyst who is trying to make sense of network behaviors and look for signs of malicious activity.

Amazon shrugged

(I promise this will make sense in a paragraph or two.)

SSH is just one protocol in a large suite of application-layer protocols in use on the internet and individual networks. Another is HTTP. Let’s take a look at an interesting HTTP transaction we found during an unstructured hunt in the Black Hat NOC:

We can tell a lot from this record. The client requested a unique-looking URI from “ocsp.rootca1.amazontrust.com”, specified a user agent of “com.apple.trustd/3.0”, and received a “200 OK” response from the server, along with content of MIME type “application/ocsp-response”. So, an Apple device performed an OCSP check against an Amazon root certificate authority. What’s so interesting about that?

I’ve collapsed a couple of larger multi-value fields in the original screenshot, specifically the client_headers and server_headers fields. I’ll omit the client headers here—they’re none too interesting—but here are the server headers returned by the server:

Take a look at that “server” header. ¯\_(ツ)_/¯ It’s not malicious, but it sure sticks out when browsing through server headers when you’re on the lookout for anomalies. I’d like to meet the person responsible for getting that approved, buy them a beverage of their choice and hear the back-story behind it.

The incredible extensibility of the Corelight sensors gives a huge boost to the Black Hat NOC’s unstructured hunts through the server headers in HTTP traffic. Since Corelight sensors are based on Zeek, which is both a DPI engine and an event-driven scripting engine, we are able to take advantage of existing packages from the Zeek Package database, and even to write our own (like the one mentioned earlier, which adds network name information to the connection log) to create new detections or to add fields to logs. The HTTP client and server headers were added to the HTTP log by such a package. I’d like to see your switch, router, or NGFW do that.
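A hunt like the one that surfaced that shrug header can be approximated with a simple frequency analysis over server headers. The field name and header format below assume Zeek-style JSON HTTP logs with a headers package enabled, and the sample values are invented for illustration:

```python
import json
from collections import Counter

def rare_server_headers(http_log_lines, threshold=1):
    """Count Server header values across HTTP logs and surface the rare ones."""
    counts = Counter()
    for line in http_log_lines:
        entry = json.loads(line)
        # "server_headers" is the multi-value field discussed above;
        # the exact name depends on the package producing it.
        for header in entry.get("server_headers", []):
            if header.upper().startswith("SERVER:"):
                counts[header] += 1
    return [(value, n) for value, n in counts.items() if n <= threshold]

# Hypothetical log lines: two common values and one oddball.
logs = [
    '{"server_headers": ["Server: nginx"]}',
    '{"server_headers": ["Server: nginx"]}',
    '{"server_headers": ["Server: very-odd-banner/0.1"]}',
]
print(rare_server_headers(logs))
```

Anything that appears only once across a large data set is a good candidate for a closer look; that is the whole premise of hunting through long-tail header values.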

Even after all of this, we’ve still only looked at two protocols, but Corelight sensors log the details from over 30 protocols by default (and can be extended with new protocol analyzers as well). Have a look at the Zeek log cheat sheet poster (PDF) for an overview of the more popular protocols and fields.

The right tool for the job

By comparing Netflow to NGFW logs to Corelight logs, we can make the following recommendations for choosing the right tool for the job at hand.

First, let the firewall be the firewall. Firewalls are optimized to make fast, point-in-time judgments about traffic to determine whether it should be allowed or blocked, and all of their mechanisms and logs are organized around making and documenting that decision. Asking a firewall, even an NGFW, to gather insights about network traffic to the level of detail that Corelight provides would cause significant additional load and degrade its performance.

Second, use Netflow where it matters. An apt comparison of Netflow and Corelight logs is the difference between reading a book by starlight and reading it at noon on a clear day; there is so much more information in Corelight logs that it is hardly a comparison at all. Netflow was originally designed to help network administrators troubleshoot connectivity and performance issues; it was not designed with security teams in mind. The switches and routers that generate Netflow present a similar limitation: they are in place to switch and route, and logging isn’t their primary function.

Netflow does have its uses for security teams. For example, a network that wasn’t architected for security visibility may not have tapping points built into remote locations, which gets more problematic the more remote locations you have. In that case, it is easier to enable Netflow remotely on a switch or router when the security team needs to extend visibility, with the understanding that:

  • The logs will be less rich. Analysts may need to make assumptions about the traffic by reading between the lines—and they may miss things or make mistakes as a result.
  • The traffic/logs may be sampled. This is a common practice deployed to reduce load on switches and routers.
  • Log generation/arrival is not guaranteed (Netflow uses UDP for transport).

It’s no wonder that MITRE’s analysis of data sources by their relevance to defense ranks “Network Traffic Content” above “Network Traffic Flow”, with nearly 10% more coverage.

Oh, did I forget to mention PCAP? Silly me!

When you want to know exactly what happened, what’s better than Corelight metadata logs? How about the packets themselves? With Corelight Smart PCAP, the Black Hat NOC analysts set conditional triggers for what to capture and what to discard. For example, in the Black Hat NOC we capture all unencrypted traffic and store it for the duration of the conference. We elect not to capture encrypted traffic because we don’t have a way to decrypt SSH or TLS traffic, so those packets would just take up valuable storage space. When it is time to dive into a session at the packet level, there is a link right in the connection log to directly download the packets for that session: no need to filter out the surrounding traffic or anything!

Best-in-class: the Black Hat way

I hope this latest dispatch from the NOC makes it clear why organizations like the Black Hat security conference, which need deep network visibility for incident response and threat hunting, choose to add Corelight to their stack, even if they already have a best-in-class next-generation firewall.

If you want to build a network the way the Black Hat NOC does, follow the same recipe:

  1. Pick best-in-class partners to handle each of your classes of needs
  2. Architect for Zero Trust, including ample network visibility and logging (more on that strategy here)
  3. Acquire as much talent as you can and get them together in a collaborative environment

We thank Black Hat for continuing to partner with Corelight, and thank all of our peers at Arista, Cisco, Lumen, NetWitness, and Palo Alto Networks for their effort and collaboration in the NOC, where adventures in network monitoring never cease.
