March 5, 2025 by Eldon Koyle
In November 2024, I participated in SCinet with the Network Security team at SC24. My job was supporting Corelight sensors and threat hunting using the data the sensors produced.
This engagement allowed for a very constructive comparison between the networking challenges at SC and Black Hat USA, where I had the honor of working in the Network Operations Center (NOC) a few months earlier. At SC, I felt immersed in the cutting-edge world of research computing with people showcasing the fastest everything. Black Hat, on the other hand, is all about showcasing the latest security research.
Black Hat USA has an impressive amount of bandwidth for a conference, with multiple 10 GbE internet connections. In fact, its networking has a lot in common with an enterprise network, which is only appropriate: this is, after all, a technically advanced conference focused on emerging trends and research in enterprise security. Registration runs on-site with an SSL decrypt feed to our tooling, which gives us better visibility. EDR and MDM are deployed on conference-owned assets, and the conference operates its own wireless access points to allow for network segmentation. This keeps the training shenanigans safely removed from the attendee Wi-Fi. All of this infrastructure is tapped so that security tools like the Corelight sensors can monitor the traffic.
In contrast, SC focuses on advancing scientific computing. If you want to simulate how atoms interact, model anything from proteins to civilizations, analyze huge datasets, or predict how a thing will fail in the real world, these are your people. SCinet’s goal is to build one of the fastest networks in the world (for a limited time), in support of the SC conference. There were multiple 400 and 100 GbE WAN connections with an aggregate of over 8 Tbps. There was no shortage of IPv4 address space, and exhibitors could even request network drops with link speeds ranging from 1 to 400 Gbps and some routable (public) IP addresses.
After spending 15 years as a sysadmin and network engineer at a research university, SC felt like home — but with way bigger pipes.
Enterprise networks tend to be locked down as much as possible without interfering with (work-related) day-to-day tasks. Security and efficiency concerns often limit users to approved software. As an example, if you can’t run a weird application, then the network team doesn’t have to spend time figuring out why it doesn’t work right, and the security team won’t spend time chasing after any strange behavior the app exhibits.
Generally, network administrators in the enterprise have an organizational advantage: When you’re paying people to perform a task and providing the resources necessary to accomplish it, it’s easier to dictate hygiene. This is especially true within a highly centralized business structure with corporate-owned computing devices.
The education sector has very different ideas about how computing should work, as well as different needs to achieve success. There are researchers who are paid to push the envelope and try new things. There are students who are paying a significant amount of money for an opportunity to learn from the faculty/researchers, and maybe even do some research themselves. They generally bring their own devices and reasonably expect these devices to work (along with whatever software came along for the ride). There are other faculty who are vehemently and morally opposed to any form of censorship.
Furthermore, research often demands the sharing of massive data sets, which are processed by clusters of computers (some with performance measured in petaflops) with extremely fast network connections. A firewall capable of operating at those speeds and scales is probably beyond any research budget (if one even exists). Since research institutions were among the first to connect to the internet, they generally received massive IPv4 allocations back in the Internet Dark Ages, before IPv4 address exhaustion was a common concern.
The net result is that the structure of many research institutions encourages keeping things as open as possible and minimizing — or even eliminating — restrictions. It is not uncommon for a client device, like a laptop, at an educational institution to get a publicly routable IP address and be exposed to inbound connections from the internet.
This is the type of network we are monitoring at SC. It’s big, fast, and very open. Doesn’t that just sound like the kind of network that needs a little extra visibility?
Threat hunting in professional life differs in some ways from hunting at a conference. Conference networks generally haven’t been up very long, so there isn’t much to go on for a baseline. You still get used to certain things, but when the data is destroyed at the end of every conference, it takes years to get a bead on what is “normal”. In a typical environment, whether enterprise or education, you could go back months or even years to check your assumptions; when you only have access to a few days’ worth of traffic once every few months (or even once a year), you have to remember these things from conference to conference (I hope you’re good at taking notes).
In any environment, you’ll likely identify a wide variety of “weird but not malicious” things. Some are broken things, others are just broken assumptions.
For example, a common assumption is that traffic on TCP port 443 is always TLS. That holds true in a lot of places, and in those environments unencrypted traffic on this port will typically raise alarm.
However, Zscaler breaks that assumption. If your browser is about to make an unencrypted HTTP connection, Zscaler will proxy it unencrypted… over port 443. An unencrypted HTTP proxy is often cause for concern, but when Zscaler does it, it’s just because the client was going to make an unencrypted HTTP connection anyway. This may not be the P1 incident you anticipated when you started investigating. Did I see a bunch of this at SC? You bet.
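As a rough sketch, checking that assumption against Zeek conn logs might look like the following. This assumes Zeek is writing JSON-format logs and uses the standard conn log fields (`id.resp_p`, `service`); the file path is purely illustrative.

```python
import json

def plaintext_on_443(conn_records):
    """Yield conn records where the responder port is 443 but Zeek did not
    identify the service as SSL/TLS (e.g. Zscaler proxying plain HTTP)."""
    for rec in conn_records:
        if rec.get("id.resp_p") == 443 and rec.get("service") not in ("ssl", "tls"):
            yield rec

def load_json_log(path):
    """Read one JSON record per line, as Zeek's JSON writer emits them."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

# Illustrative usage; the path depends on your deployment:
# suspects = list(plaintext_on_443(load_json_log("conn.log")))
```

Remember, though, that a hit here is a lead, not a verdict — as the Zscaler case shows, the traffic may be unencrypted only because the client was going to send plain HTTP anyway.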
Seeing a client connect to a large number of remote IP addresses without resolving them in DNS first is also generally concerning behavior – normal software doesn’t do that. The most common culprit is a peer-to-peer service of some kind. However, it turns out that Private Internet Access (PIA, a VPN service) uses this to decide which server to use. Observed at SC? Check.
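That heuristic can be sketched against Zeek conn and dns logs roughly as follows. Field names follow Zeek's standard schemas (`id.orig_h`, `id.resp_h`, `answers`), and the threshold is an arbitrary illustration, not a tuned value.

```python
from collections import defaultdict

def hosts_skipping_dns(conn_records, dns_records, threshold=25):
    """Return clients that connected to at least `threshold` distinct
    destination IPs that never appeared in any observed DNS answer."""
    # Every IP handed back in a DNS answer during the capture window.
    resolved = set()
    for rec in dns_records:
        resolved.update(rec.get("answers") or [])

    # Destinations each client reached without a preceding resolution.
    unresolved = defaultdict(set)
    for rec in conn_records:
        dst = rec["id.resp_h"]
        if dst not in resolved:
            unresolved[rec["id.orig_h"]].add(dst)

    return {src: ips for src, ips in unresolved.items() if len(ips) >= threshold}
```

As the PIA example shows, a client that trips this check may simply be running a VPN with an unusual server-selection scheme, so the output is a hunting lead rather than an alert.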
VPNs with bad behavior change from show to show. Attendees come from all over the world, informed by different backgrounds and software preferences, after all. It’s not unlike students at a university. At a conference, attendees are paying to spend time learning from presenters whom the organizers somehow enticed to show up and teach something. In a corporate environment, you can make a strong argument to block unauthorized VPNs. But the conferences I participate in feel more aligned with educational institutions in this regard. We can’t dictate the way attendees behave (beyond “don’t break the law”), though we try to protect them from themselves, others, and (especially at security conferences) from each other. So a block on every unauthorized VPN might make sense, but it also might infringe on the learning experience and preferences of people who have paid to be at this conference. With different motivations and considerations, a different security calculus applies.
It is for these reasons that threat hunting at a conference like SC requires me to think differently than I would in other circumstances.
So, what does it take to have a successful network security presence for a conference, and how does it differ from traditional network security? (Spoiler alert: other than the time available, it’s not that different.)
First of all, you need a plan. What are you protecting? Are you protecting attendees from each other, or are you more concerned with protecting conference infrastructure? What do you do when (not if) you find an attendee device has malware on it? What about spyware? If someone has software I would deem undesirable, what level of effort is justified in finding and notifying them? Should we track attendees down? Should we disconnect them from the network if they are involved in an incident? Clearly, if someone is a risk to others then we should intervene expeditiously. But what should we do when attendees are only a hazard to themselves?
These questions can be hard to answer, especially during an incident. My first recommendation is to decide ahead of time what is actionable and how you expect to be able to resolve these issues.
No matter where you are, resources are finite; that’s particularly true at a conference that only lasts a few days or a week. You can spend a lot of time looking at security issues that in the end you won’t be able to take any action on; unless you are hoping to write a paper or a blog post, your return on that invested time and effort can be nil. You should also expect to change your plan based on resources available – time foremost of all.
From a purely ideological perspective, I’d like to contact everyone about their broken stuff to give them an opportunity to improve their situation. From a practical perspective, that may not serve the organization’s primary goal. There will always be a balance to strike between risk and reward, so think through your own risk tolerance, and gauge that of the conference’s organizers, ahead of time.
If you’re going to have a debate about what constitutes an “incident” and what the response actions should be, based on the severity of the situation, you’re going to need a team. Otherwise, it’s just a monologue. Also, until the promised day when AI has spread forth across the world, transforming all of our difficult problems into mere memories, without a team you’ll have to do all the work yourself. You’ll need people who understand the data, the network, and the way common applications are supposed to behave. You’ll need to give these colleagues time to look for risks; they will need skills and experience to prioritize them and the organizational support to resolve or mitigate them.
How do you hope to find and validate security risks? You can have the best security team in the world, but without data, they will never be effective. At SC and other conferences, we don’t have the ability to install endpoint software on attendee devices (even if we did, it probably would be too time-consuming to be practical). So, our next best bet is network data. You probably have no shortage of alerts already, but alerts aren’t data, at least not the kind of data I’m talking about.
I have found that alerts are actually quite difficult to act upon without additional context. The alert says this host has malware on it, but it only fired once. What did it detect? Look at the rule (you can see your rules, right?); it detects a DNS domain that somebody, somewhere, decided was related to malware. Cool, cool, cool. So was this caused by malicious software on the machine, or did we just get a questionable ad that wasn’t clicked on? To answer these kinds of questions, you’ll need data about what the client did before, during, and after the alert.
Don’t get me wrong, alerts are important. But the ability to provide context for the alert is what threat hunters need most. It’s also what really sets Corelight apart, and I believe this is what gets us invited to participate in these conference networks.
Corelight logs include Zeek® data. Zeek is focused on facts: The conn log tells us about connections made and network endpoints involved, the DNS log tells us about queries made and the answers received, the HTTP log tells a lot about unencrypted web requests, and so forth. Since these logs present everything that was seen, rather than only traffic that was deemed suspicious at the time, we get a much clearer picture of what is happening when an alert fires.
Using Zeek data, we can look at client behaviors and correlate them against alerts to determine whether a Suricata alert is a smoking gun or a nothingburger. For instance, this correlation lets you pivot from a Suricata alert for a malicious POST to the details in the HTTP log simply by querying the connection ID in your SIEM, greatly simplifying alert response.
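In code, that pivot is just a join on the shared connection ID. This sketch assumes Corelight's Suricata alerts carry the matching Zeek `uid` (as they do by default) and uses illustrative field names from the standard log schemas; in practice the same lookup would be a SIEM query rather than a Python loop.

```python
def http_context_for_alert(alert, http_records):
    """Return the Zeek http log entries that share the alerting
    connection's uid, giving the request context around the alert."""
    uid = alert["uid"]
    return [rec for rec in http_records if rec.get("uid") == uid]

# Illustrative usage with synthetic records:
# alert = {"uid": "CAbc123", "signature": "ET MALWARE suspicious POST"}
# context = http_context_for_alert(alert, http_log_records)
```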
Ultimately, the main difference between providing network security at a conference and a traditional enterprise environment is a function of available time. You have a lot less of it at a conference: less time to iterate on processes, less time to learn from setbacks. This makes planning extremely important. Your planning should include realistic expectations about what you can reasonably respond to, and what the response process will entail. After all, finding an issue has a pretty low value if you can’t do something about it.
You definitely want contextual data to help sift through alerts and decide which are actionable and which are really just data points. You need to know your environment and your audience, and the risk tolerance of the organization. Last but not least, you need a team who can understand and act appropriately.