Black Hat Europe 2025: Going Into the Fire
During Black Hat Europe 2025, I had the honor to join our team of “firefighters” at the Black Hat NOC and feel the heat for real.
Working at the Black Hat Network Operations Center (NOC) as a data scientist makes me a bit of an outlier (pun intended) among network engineers and hard-core threat hunters. Thanks to the great work of my colleagues and the other NOC partners that I’ve worked alongside over the years (Arista, Cisco, Jamf, and Palo Alto Networks), I have been able to focus on triaging alerts and developing tailored modeling approaches for identifying anomalies using Corelight logs to defend Black Hat’s well-defined yet challenging conference environment.
While focusing on these areas has been slightly less stressful than dealing with networking issues, it has not been easy either: analysis and triage are often complicated by malicious-but-controlled traffic generated by Black Hat offensive security trainings and vendor demos. Overall, my experience in the NOC has taught me about the long tail of network-related gotchas and oddities.
For example, one crucial lesson I've learned is the value of in-the-clear metadata provided by Network Detection and Response (NDR) solutions, particularly from POST bodies in HTTP logs, which often reveals hygiene issues missed by threat detection alerts. At Black Hat, identifying these leaks has allowed us to notify attendees of configuration or application issues, providing immediate value to conference goers and, in turn, boosting our morale to keep hunting in a challenging environment.
After attending several shows, my colleagues and I have developed a hunch that Large Language Models (LLMs) could be effective in analyzing POST bodies. Our intuition has been that their ability to parse messy and nested structures, as well as interpret data semantics, could help identify problematic content. Pushing things a bit, we have been especially curious whether a hunting agent could autonomously decode encoded data, as our threat hunting playbooks rarely focus on encoded data due to the effort involved.
In this blog, I’ll share how we explored this LLM hunch in the NOC at Black Hat Europe 2025.
We tested this hypothesis by developing an autonomous hunting agent using the ReAct (Reasoning and Acting) approach. As shown in the figure, ReAct agents solve tasks through a cycle of LLM reasoning and tool calls. In our case, the reasoning step examined a POST body and decided whether any of its content needed further decoding, and the acting step invoked one of a small set of decoding tools (such as Base64 and URL decoders) to do it.
A key advantage of ReAct agents is their ease of implementation: at each reasoning step, the LLM itself decides whether to call a tool and how to parameterize it based on the status and progress of the analysis. Here I’ve included the code snippets that we used for the implementation of the agent and tools; note that we leveraged LangChain’s create_react_agent function and the @tool decorator, which exposes the tools to the LLM.
Tool definition:
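A minimal sketch of what such decoding tools can look like. The helper names and the ImportError fallback are illustrative assumptions, not the exact snippet from our agent; the key idea is that each tool is a small, well-documented function exposed to the LLM via LangChain's @tool decorator, whose docstring becomes the description the model reads when deciding which tool to call.

```python
import base64
from urllib.parse import unquote

try:
    from langchain_core.tools import tool
except ImportError:
    # Fallback so the sketch runs without LangChain installed.
    def tool(fn):
        return fn

def decode_base64(data: str) -> str:
    """Decode a Base64-encoded string into readable text."""
    return base64.b64decode(data).decode("utf-8", errors="replace")

def decode_url(data: str) -> str:
    """Reverse URL (percent) encoding, e.g. '%3D' -> '='."""
    return unquote(data)

# Expose the helpers to the LLM as LangChain tools (the decorator applied
# as a plain call, so the underlying functions stay directly testable).
decode_base64_tool = tool(decode_base64)
decode_url_tool = tool(decode_url)
```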
Agent declaration:
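A sketch of the agent declaration, assuming the classic LangChain create_react_agent API. The prompt wording and the model/tool choices are placeholders; the {tools}, {tool_names}, {input}, and {agent_scratchpad} variables are the ones LangChain's ReAct prompt format requires.

```python
from typing import Any

def build_hunting_agent(llm: Any, tools: list) -> Any:
    """Wire up a ReAct agent: the LLM reasons, then picks a tool to act with.

    Imports are deferred so the sketch can be defined without LangChain
    installed; `llm` is any LangChain chat model.
    """
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template(
        "You are a threat hunter reviewing HTTP POST bodies.\n"
        "Decode any encoded content, categorize findings, and explain "
        "your reasoning.\n\n"
        "You have access to the following tools:\n{tools}\n\n"
        "Use the format:\n"
        "Thought: reason about what to do next\n"
        "Action: one of [{tool_names}]\n"
        "Action Input: the input to the action\n"
        "Observation: the action result\n"
        "... (repeat Thought/Action/Observation as needed)\n"
        "Final Answer: the finding and its category\n\n"
        "Question: {input}\n{agent_scratchpad}"
    )
    agent = create_react_agent(llm, tools, prompt)
    # AgentExecutor runs the reason -> act -> observe loop until Final Answer.
    return AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
```

The executor would then be invoked once per suspicious POST body, e.g. `executor.invoke({"input": post_body})`.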
Another cool feature of these agents we leveraged was that we could instruct them to categorize findings and explain their reasoning. This transparency helped us review the results and quickly identify the most significant detections, and was also essential to helping us generate structured outputs that could be programmatically handled to ultimately populate our hunting dashboards.
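To make that programmatic handling concrete, a finding schema can be as simple as a stdlib dataclass. The field names below are invented for illustration, not the schema we used; the point is that a structured final answer can be serialized straight into a dashboard pipeline:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Finding:
    """One hunting result emitted by the agent (illustrative fields)."""
    category: str   # e.g. "information-leak", "misconfiguration"
    severity: str   # e.g. "low" / "medium" / "high"
    summary: str    # one-line description for the dashboard
    reasoning: str  # the agent's explanation of how it reached the finding

def to_dashboard_row(finding: Finding) -> str:
    """Serialize a finding as a JSON row for the hunting dashboard."""
    return json.dumps(asdict(finding))

row = to_dashboard_row(Finding(
    category="information-leak",
    severity="high",
    summary="File sync app leaking file names in POST bodies",
    reasoning="URL-decoded body contained Base64-encoded file names.",
))
```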
And bingo! The Black Hat Europe 2025 edition was no exception, and our analysis of POST bodies once again revealed critical security issues. What’s more, our agentic hunting approach yielded a significant catch: a file synchronization application that was inadvertently revealing sensitive file names.
This detection was particularly interesting because it involved multiple encoding layers: a URL-encoded payload containing newline-separated Base64 strings. Due to the multiple steps necessary to fully decode the data, we would likely have missed it without the agent. We also would not have caught it with keyword-based hunts, which only surface unencoded data.
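To make the layering concrete, here is a synthetic reconstruction of that decode chain; the file names are invented, not the attendee's data. The encoding stacks Base64, newline-joining, and percent-encoding, and the decode chain simply reverses it:

```python
import base64
from urllib.parse import quote, unquote

# Build a synthetic payload the way the application apparently did:
# Base64-encode each file name, join with newlines, then URL-encode the lot.
file_names = ["security-policy.docx", "audit-report.pdf"]  # invented examples
inner = "\n".join(base64.b64encode(n.encode()).decode() for n in file_names)
payload = quote(inner)  # what appears on the wire in the POST body

# The decode chain, applied in reverse order of the encoding:
decoded = [
    base64.b64decode(segment).decode()
    for segment in unquote(payload).split("\n")
]
```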

The agent returned an explanation that immediately caught our eye. The decoded payload revealed not only the attendee’s domain user and device names, but also what appeared to be sensitive document names, including company policies and security posture (redacted here), which definitely should not be transmitted in the clear. Thankfully, our subsequent analysis determined that the files themselves were not shared in the clear, so they could not be intercepted. We assume the files were not transmitted because they had already been synchronized with the server on a previous run; had files been changed or added, however, their contents could have been exposed in full.

Curious to understand the inner workings of the ReAct agent, I dug up its execution traces. While we provided a toolset designed for multi-step transformations, the agent’s internal reasoning was robust enough to “look through” the encoded payload and successfully map a valid Base64 string to the decode_base64 tool in a single move. Analyzing just one segment was enough for the agent to identify the issue, stop the loop, and generate a high-confidence finding. This demonstrated the flexibility of the approach, as the agent was not confined to a single rigid path to reach a conclusion.

This successful finding validated our hypothesis that agentic reasoning AI, when provided with rich data from an NDR solution, can complement and accelerate hunting workflows. That said, this discovery is just the beginning. My colleagues and I are excited to see what can be achieved when autonomous agents interact with NDR’s rich network traffic data.
For more about the Black Hat NOC, I recommend checking out our blog. And for more on threat hunting, check out our Corelight Threat Hunting Guide.