Episode 12 - The Agentic SOC: Upleveling Analysts with AI Knowledge Multipliers
Guest Speaker: Stan Kiefer
April 9, 2026


About the episode

Richard Bejtlich sits down with Stan Kiefer, Corelight’s Senior Manager for Data Science, to discuss how AI serves as a vital "abstraction layer" and "knowledge multiplier" for security analysts. Stan explains that while AI can synthesize complex information, it remains untrustworthy without high-fidelity network data at its center to provide verifiable evidence. The episode explores the shift toward an "agentic ecosystem" and a tiered architecture where a central orchestrator manages specialized sub-agents to accelerate detection and investigation. Looking toward the future, Stan envisions a hybrid SOC environment where adaptive systems learn an analyst's specific workflows to automate routine tasks, acting as a professional companion that can cut the time needed to reach competency in half.

Episode transcript


Welcome to Corelight Defenders. I'm Richard Bejtlich, strategist and author in residence at Corelight. In each episode, we explore insights from the front lines of NDR, network detection and response. Today, I'm speaking with Stan Kiefer, Senior Manager for Data Science and AI. Welcome, Stan. Hey, glad to be here. It's sort of a meme that whenever anyone talks in a professional capacity in front of an audience, the letters AI come up. But that's what we're going to be talking about today, because we have quite an initiative around this at Corelight, and you're kind of our point guy for it. So I appreciate you being on the podcast. Previously we had Keith Jones on here talking about how he uses AI tools more on the research and development side, but you're looking at a more holistic approach for the company.

And so I think the first place I'd like to start is: how should security teams think about the use of AI in NDR platforms? It's always been really difficult inside the SOC to elevate the tier one and tier two analysts to understand network data, because network data can be very complex. In detail, Corelight data, based on Zeek, is probably one of the most complex log data sets out there, but it has its good and bad sides. The good side is that it's a direct replication of what happened on the network. The difficult side is that it can be difficult to actually consume the data.

And I think that's really where AI empowers the analyst, especially within the NDR side. Being able to say, "Hey, I have this indicator that may come from outside of NDR, maybe from an EDR or identity," and then being able to interrogate the network data without having to be an expert in the network data. So it's upleveling an analyst to consume that data, and at the same time letting them explore areas of the data they may never have been able to explore before, because of that abstraction layer the AI brings to the table.

And so it's not only a force multiplier, but also a knowledge multiplier. When I worked at the AFCERT decades ago and I had a question, there was very little out there. There were no search engines. The books we had were... there were a couple, like Unix and Internet Security and Building Internet Firewalls. So generally, you had to turn to the person next to you and say, "Do you know what this is? Have you seen this before?" And they would have to do the same thing and ask other people.

And then we got search engines, and we were able to ask questions, but at that point, you relied on finding someone else having a conversation that addresses the topic of interest. And it seems now with these, uh, AI tools, you can interrogate them, and they are sort of synthesizing what they've seen by looking at everything. And then it's sort of up to you to decide whether their synthesis is useful or accurate.

Does that sound reasonable? Oh, I think so. I mean, I think that's one of the powers of AI, or at least of well-defined and well-deployed AI: it tells you an answer, but it also gives you referenceable data that goes with that answer. So as a human, you can look at the answer the AI gave you, then look at the data it used to generate it, and go, "Okay, yes, that makes sense," or, "No, it just made something up that was not even true."

Even if I don't know the data inside and out, I can look at it and tell whether it was even close. And so I think that's really where data is at the center of the AI world. AI alone is really not trustworthy at this point, and maybe never will be without the data to reference it back to. So from an analyst perspective, it's about delivering both: the abstracted version, this is what's important and why it's important, but then, hey, if you want to dig in deeper, here's the data that backs it up. And that evidence is really the key to the complete picture AI provides.

If I'm trying to accomplish a task, it helps when there's a way to measure the state of things before I make a change suggested by AI. I'll give you a concrete example. I've been doing some performance tuning on my gaming desktop. I can run a test, do a baseline, get some data, record something with a tool like HW64, and log it to a file. Then I'll ask the AI, "What do you think?

What changes could I make?" It makes some suggestions, I implement those, then I rerun the test, go back and say, "Now, what changed? Did we make any improvement here?" But without that, I think it would be difficult to know. If there were no way to measure the results, it could be completely making stuff up, and I'd have no idea if it made any difference. So I guess with security: have you seen anything we could do in a similar way with security data? One thing I've seen AI is good at is translating that data into much simpler, easier-to-understand terms.

Whether it's translating a complex detection rule, maybe a firewall rule, or even code into something that's much easier to consume in technical but plain language, it really drives two things: education and capability.

So now, when I have the thing I'm trying to understand and then the natural-language translation, the synapses quickly start forming. So the next time I see it, maybe I don't even need the AI translation. I go, "Oh, I've seen this before. That's what this means." Yeah.

Explaining what a complex regular expression does, that's a great example. There's a YouTuber I watch named Eric Parker, who is not listening to this podcast, but he's a malware reverse engineer, and in his videos he shows just what he's doing to figure out what malware does. When he comes across something that's obfuscated, he just dumps it into some AI and says, "Can you de-obfuscate this and tell me what it means?"

Now, as I understand it, we're thinking about this in three areas: AI for detection, AI for integration with other tools, and AI for helping analysts, to help them investigate and improve their workflow. I was wondering if you could expand on any one of those, or all of them, or just talk about how we're doing that as the framework for bringing this new capability into our offering. Corelight has been doing detection using AI, machine learning models, anomaly detection, and things of that nature for many years. One trend you'll see moving forward is using these AI tools to accelerate the creation of those detections, and we have to, right? Because the attackers are using these AI tools, going from zero to exploit in minutes sometimes, and lofting these attacks at our customers.

Because now I can have this maybe really complex machine learning model broken down into very simple terms, so that when a detection fires on a certain set of data, the analyst can go, "Okay, I know how this model works," because they don't have to know all the data science terms. They just need to know: this thing looks for these aspects and these features, which means this might be occurring. On top of that, providing next steps that either the analyst or another AI tool can use for further investigation really blends those two together.

So if I have a detection, here's what it might mean, and here are some potential next steps for investigation, either inside the NDR realm or outside. It's facilitating the agentic ecosystem inside of the SOC.

Are we seeing that yet? Is that something products are doing? It seems like it might still be kind of early days. It is early days, but we're seeing some motion. Think of it as an agentic triangle. What I think we're going to see form near term, because the speed at which the AI world is moving is like nothing I've ever seen in thirty years of working in technology, is probably one kind of SOC overlord. This agent is going to oversee sub-agents, which in turn might have further sub-agents. So what we're going to see is maybe point products being these sub-agents, in which you embed the expertise of how to use the data, its applicability, how the detections work, and so on. And if every point product does that, those become the individual experts that can be steered by one central agent.

In the end, the user's just going to talk to the central agent, and the central agent's going to cascade that down. We're already seeing some early products come out that look that way. There's also been a lot of research done on that tiered agent architecture and how language models naturally do really well in it, especially because you can have very narrow-scoped, well-tuned agents that are experts at one thing, and then you can just keep multiplying them. So you end up with a ton of experts all being controlled by one central orchestrator.
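As an editorial aside, the tiered architecture Stan describes, a central orchestrator steering narrow specialist sub-agents, can be sketched in a few lines of Python. This is purely illustrative: the agent names, the keyword routing (standing in for an LLM's routing decision), and the canned answers are all assumptions, not any real product's API.

```python
# Toy sketch of a tiered agentic SOC: an orchestrator routes an
# analyst's question to the narrowest sub-agent whose scope matches.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubAgent:
    name: str
    topics: set[str]              # the narrow scope this expert handles
    handle: Callable[[str], str]  # answers questions within that scope

class Orchestrator:
    def __init__(self, agents: list[SubAgent]):
        self.agents = agents

    def route(self, question: str) -> str:
        # Naive keyword overlap stands in for an LLM routing decision.
        words = set(question.lower().split())
        for agent in self.agents:
            if agent.topics & words:
                return f"[{agent.name}] {agent.handle(question)}"
        return "[orchestrator] No specialist matched; answering directly."

agents = [
    SubAgent("ndr", {"network", "zeek", "dns"},
             lambda q: "Checked network evidence for this indicator."),
    SubAgent("edr", {"endpoint", "process", "host"},
             lambda q: "Checked endpoint telemetry for this indicator."),
]
soc = Orchestrator(agents)
print(soc.route("Was there DNS traffic to this domain?"))
```

The appeal of the pattern is exactly what Stan notes: each sub-agent stays small and well-tuned, and scaling means adding experts rather than growing one monolith.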

What are your thoughts on the end state for the way the analyst works with this data? I have this dream that one day there will no longer be dashboards, there will no longer be pie charts, and there will no longer be whatever the vendor (and of course I'm speaking as someone working at a vendor) thinks the analyst needs to see. Instead, it will be replaced by something that is either completely customized for the user by the user's interactions, or as simple as a prompt: you talk to the master agent, or whatever it is, and you say, "What do you have for me right now?" Or, "What do you have for me today when I start my workday?" And it says, "Well, I have ten events you might want to look at. Would you like to look at these?" "Yes, I would." And it just starts providing you data. If you've never used it before, it provides some sort of default, but otherwise, over time, it has learned what you like to see, how you like to see it, how you like to interact with it, and what you do next. Do you think we'll get to that point, or is that not a good way to do it? I don't know.

I'm just really interested in your thoughts on that. Yeah, I think it's some kind of hybrid approach. For sure, with the agentic coding explosion, we're seeing people write custom skills and custom agents, connect via MCP or A2A, and allow individual permutations that work well for the way the individual works.

And being able to customize the way they work. Since that has worked so well and the developer community has embraced it, I think you're going to see something very similar start to happen in other communities. I can see, inside a security operations center, where you might actually have a capability for somebody to make a skill, or even more, have the system watch what an analyst does and start adapting to it: "Hey, for the last five days, it seems like you've done this search every day and looked at this output." When you come in in the morning, it's already done for you. It goes, "Hey, just so you know, this is already done. I didn't find any anomalies. What do you want to do next?" This adaptive way of learning how somebody operates, I think, is going to be key. Now, what is unknown is how that's going to be presented in a visual format.
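As another editorial aside, the adaptive routine Stan imagines (notice a search repeated daily, then pre-run it) could look something like this toy Python. The class name, the five-day threshold, and the briefing text are illustrative assumptions, not a description of any shipping system.

```python
# Toy sketch: learn an analyst's repeated daily search and automate it.
from collections import defaultdict

STREAK_TO_AUTOMATE = 5  # "for the last five days you've done this search"

class AdaptiveAssistant:
    def __init__(self):
        self.streaks = defaultdict(int)  # search -> consecutive days observed
        self.automated = set()           # searches now pre-run each morning

    def observe_daily_search(self, search: str) -> None:
        self.streaks[search] += 1
        if self.streaks[search] >= STREAK_TO_AUTOMATE:
            self.automated.add(search)

    def morning_briefing(self) -> list[str]:
        # In a real system this would execute the saved searches;
        # here we just report which ones were learned.
        return [f"Already ran '{s}': no anomalies found."
                for s in sorted(self.automated)]

assistant = AdaptiveAssistant()
for _ in range(5):
    assistant.observe_daily_search("dns beaconing review")
print(assistant.morning_briefing())
```

A real implementation would also reset a streak when a day is skipped and let the analyst confirm before anything is automated; the point is only the observe-then-adapt loop.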

I mean, I think that's what dashboards today are great at: you absorb data visually, you can quickly assess things, I love pie charts and graphs, you can see trends, and all that makes sense. I think what we're going to see in the future is that data being driven, like I said, as a hybrid of the way that individual works. The charts they're shown will be more attuned to how they work and how they've worked in the past, trying to predict what the analyst may want to do next. So let me ask you one final question then. This is probably the biggest challenge, I would estimate, for most people with AI: how do you keep up? Do you have any tips or tricks, any patterns of life, any recommendations for how people can stay engaged with this technology?

Yeah, I think if you were to make it your full-time job, you would still run out of hours in the day. Some of the traditional capabilities work: RSS feeds and a couple of trusted sources you can rely on.

Everybody has their different sources, but it's about leaning on them to get the TL;DRs of the things that are important. At the same time, it's looking at things in your sphere of application. For me, I don't necessarily pay a ton of attention to what's going on with AI in imaging or AI in self-driving cars, but more to AI in the use of data. And I think that's what really helps: narrowing the scope down and looking at it within the realm that matters for whatever task you're trying to achieve.

The speed at which things are changing is not something I've ever seen, again, in over 30 years of experience in technology. You've seen things like the dot-com boom and the internet, but if you look at the speed at which those occurred and transitioned, it was tiny, almost like molasses compared to this. It is so hard to wrap my head around.

For example, look at technologies like MCP. MCP has just had its one-year birthday, and people are already thinking about sunsetting it for other technologies that may be better, other ways of orchestrating things, because limitations were found and new ways were found to replace it, and AI lets you pivot quickly to those new technologies. Now, I don't know what that means when we actually productize things with technologies that could get replaced that quickly, because the development of things using the tools really hasn't sped up at the same rate at which the tools have changed. So it's going to be interesting to see how all that plays out.

Keeping up is hard, but AI can help you keep up as well: making your own podcast out of documents, for example. The tools themselves can help you absorb the data they produce even quicker, which sounds strange.

Well, it's funny, on an AI podcast, I will just say to the viewers, or to the listeners: there is no AI. I don't use any to create the podcast. It's all done by me with my editing tools, and it's all done manually. So, in case you have an issue with that.

Let me ask just one final question then, Stan. Is there anything you're looking forward to or hope to see? I'll give you an example. I'm personally interested in... there's some old code that exists in... well, actually it's not even old code, it's continually maintained, but it's all written in Pascal, which is something I haven't looked at in over 35 years. I'm waiting for the point where the models can handle a project of that size, and I can just feed it the project and say, "Can you turn this into something more modern, like C++ or C# or whatever?"

But that kind of requires a bigger token window. Is there anything you're looking forward to that might come about with AI? Yeah, for me it's what I talked about earlier: the ability to have that companion.

Something that can make you better as a human, but not replace you. And I think that's an important fact we have to realize as we move forward as an industry.

You can't just displace somebody in an entry-level position, because that's really where the knowledge base gets built as they move up. Having that companion and really accelerating that growth is what I'm most excited about: being able to take somebody who normally would have taken three to five years to come up to a competent level where I can turn them loose, and doing it in half that time, or a third of that time. Well, Stan, I appreciate your insights. I'm going to have you back, hopefully, if you'll agree to come back, and I can't even imagine what the world will be like if we talk about this topic again in even six months.

So thank you for being on the Corelight Defenders podcast. I've enjoyed the time, and thanks again. Thank you for joining us on the Network Defenders podcast, sponsored by Corelight. We will see you on the network.

You've been listening to Corelight Defenders. To stay informed with expert intelligence on today's cybersecurity challenges, please subscribe to ensure you never miss an episode. We'll see you on the network.