Episode 11 - The AI Maturity Journey: Data, Agents, and the Shift from Craft to Art
Guest Speaker: Vijit Nair
March 26, 2026


About the episode

Richard Bejtlich talks with Vijit Nair, VP of Product at Corelight, about the evolving "AI Maturity Journey" for modern security teams. Vijit outlines a three-level spectrum of AI adoption, moving from basic human-driven assistance to automated swarms of agents, and eventually toward fully autonomous systems. They discuss why high-quality, unopinionated data remains the essential foundation for building trust in AI and how technologies like the Model Context Protocol (MCP) are turning human language into the primary interface for tool integration. The conversation explores the partnership between Corelight and CrowdStrike Charlotte AI as a real-world example of this connected ecosystem. Finally, Vijit and Richard reflect on how AI is "eating the craft" of security—automating away the mind-numbing manual tasks of a SOC—to allow analysts to focus on the "art" of judgment, creativity, and strategic defense.

Episode transcript


Welcome to Corelight Defenders. I'm Richard Bejtlich, strategist and author in residence at Corelight. In each episode, we explore insights from the front lines of NDR, network detection and response.

Today, I'm speaking with Vijit Nair, VP of Product at Corelight. Welcome, Vijit. Hey, Richard. Thanks for having me on. I'm glad you're here. This is a much-anticipated episode. You are leading a program at Corelight called the AI Maturity Journey. Can you explain what that means for security teams? When we think of AI capabilities,

I really lay this along a spectrum, from level one to level three of AI maturity. On the far left is level one, which we phrase as basic AI assistance.

So think of simple human-driven work with LLM assistance: AI and LLM tools providing basic explanations, rule correlation, and so on. As you move from level one to level two, you are ceding more control to the AI and allowing it to operate with more agency. The second level I call automated. You're going from assistance to automated, where the loops are more agent-driven. There is a human in the loop who is still firmly in control, but you're having swarms of agents autonomously go about their job.

But the human is the final decision-maker, applying the final judgment before making a decision, be it remediation and so on. What allows you to go from level one to level two is really the data: the power of the data and the expertise that you apply on top of it. A lot of organizations out there are applying AI, but without that foundational data, the context needed. And then the final nirvana is level three, which is fully autonomous. In the fully autonomous case, you've got autonomous agents that are orchestrated by humans but are fully independent. So think of extremely high agency, extremely low control.

What allows you to go from level two to level three is that you've got to build trust and confidence with the SOC analysts, with the security teams. They've got to trust the accuracy that the AI tools provide. It's a slow progression from level two to level three: as the users and analysts trust the systems more, they're willing to cede more control to the autonomous agents. That's the ultimate future we're building toward. Could you tell me a little bit about what you would consider good data to be? When

I think of good data, a few things matter the most. The data has to be well-structured; it's got to be designed for the purpose you intend to use it for. And it has to be as comprehensive as possible. We analyze network traffic to generate the best data from it, and our data, in some ways, is the de facto standard in the security industry. It's data that is built by security people for security people. It's unopinionated, judgment-free data, and the value of that is that you're using the tools to apply their judgment on top of it. If your data itself has bias, if your data itself has judgment embedded in it, then the tools skew toward that judgment. That's why you want comprehensive and as accurate as possible data that allows the tools to apply judgment on top of it.

The data that we have built over time is something that the entire security community has built. The open source community over the last twenty-five years has built this great form of data that becomes the foundation of a lot of our AI tools.

We have an open core system, and we like to talk about that in terms of the software that we run. But let's say you're someone who's using our data or data like it, and you decide that however that data is being used doesn't really meet your case.

You can use that data, and you could bring in your own LLM or your own analytical methods, and you can possibly come up with a way that works better in your use case.

But if you're using a product or a source that doesn't provide you the data and just provides you the judgments at the end of the day, and you don't like those judgments, you're kind of stuck. Yeah. You can extend the data that you generate in a lot of ways. It's not only open core. The data is openly available for everybody to use, and openly available for everybody to modify. So we've got customers.

We do it ourselves, and we've got customers that add a lot of context and enrichment to the data. You'll add information about hosts, users, vulnerabilities, network subnets, and so on. That makes the context extremely rich. All the foundational LLM models out there are already trained on twenty-five years of security professionals using Zeek, Suricata, and all these open source tools to investigate security incidents. So in some ways, the LLMs have that inherent advantage that we can lean on pretty effectively.
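As a concrete illustration of that kind of enrichment, here is a minimal sketch that tags a Zeek-style connection record with local subnet labels. The subnet names and ranges are hypothetical examples an analyst might maintain, not Corelight defaults.

```python
import ipaddress

# Hypothetical local subnet labels; names and ranges are illustrative only.
SUBNET_LABELS = {
    "10.0.1.0/24": "corp-workstations",
    "10.0.2.0/24": "server-vlan",
}

def enrich_conn(record):
    """Add subnet context to a Zeek-style conn log record."""
    enriched = dict(record)
    for field, label_field in (("id.orig_h", "orig_subnet"),
                               ("id.resp_h", "resp_subnet")):
        addr = ipaddress.ip_address(record[field])
        enriched[label_field] = next(
            (name for cidr, name in SUBNET_LABELS.items()
             if addr in ipaddress.ip_network(cidr)),
            "unknown")
    return enriched

conn = {"id.orig_h": "10.0.1.15", "id.resp_h": "203.0.113.9", "service": "ssl"}
print(enrich_conn(conn)["orig_subnet"])  # corp-workstations
```

The same pattern extends to joining in host inventories, user directories, or vulnerability feeds before the data ever reaches an LLM.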

Do you have any advice for people who are using LLMs with data like ours, or our data, to make sure that you're getting what you hope from it? There are a couple of things you have to keep in mind. The economics and the cost of it are absolutely important. You can go to the extreme of sending a lot of real-time data to the

LLMs, and that may not be as useful. The other thing to guard against is what people call hallucinations, but I really call it the non-determinism of LLMs. Especially in a place like cybersecurity, where the cost of a false negative is so high, it's important to guard against that non-determinism, or hallucination.

So think about how you might apply guardrails when you're interacting with these LLMs and throwing data at them. Where are the guardrails you'll put in place? What kind of verification will you do for each of the answers that the

LLM generates? For example, one of the things we do is we've got agents that will pull in the alert, pull in the context of the alert, and go through a structured investigation run by the agent. But we'll have another agent that actually checks the work of the first agent, and both of them operate independently of each other. That gives you the checks and balances, the guardrail, that you need to put in place when you're building these kinds of systems. Can you talk to me a little bit about integration?
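The agent-checks-the-agent pattern described above can be sketched as a small control-flow skeleton. Here `investigate` and `verify` stand in for independent LLM-backed agents; in a real system they would be separate models or prompts, and the alert fields and thresholds are hypothetical.

```python
def investigate(alert):
    """First agent: produce a structured finding from an alert."""
    verdict = "malicious" if alert.get("bytes_out", 0) > 1_000_000 else "benign"
    return {"alert_id": alert["id"], "verdict": verdict,
            "evidence": [f"bytes_out={alert.get('bytes_out', 0)}"]}

def verify(alert, finding):
    """Second agent: independently re-derive the verdict and compare."""
    independent = investigate(alert)  # in practice, a separate model/prompt
    return independent["verdict"] == finding["verdict"]

def triage(alert):
    finding = investigate(alert)
    if not verify(alert, finding):
        # Disagreement between independent agents escalates to a human.
        finding["needs_human_review"] = True
    return finding

print(triage({"id": "a-1", "bytes_out": 5_000_000})["verdict"])  # malicious
```

The key design choice is that the verifier never sees the first agent's reasoning, only the alert itself, so agreement is evidence rather than an echo.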

Absolutely. In a lot of ways, AI lowers the barrier of integration. In the past, great integration between tools used to be a very rigid API with a very specific interface definition that could change over time, that could drift over time, and you've got to constantly keep the tools updated to make sure they talk to each other. In the AI era, that barrier has been lowered quite a bit. Now the interface between AI tools is just human language. MCP, for example, has come up as a really good example of how different AI systems can use the MCP server, and the tools exposed by the MCP server, to access other

AI systems. And that's how we are approaching it as well. We've got our MCP server product in preview right now, where we allow our customers that are building their own AI tool stack to come access our data, using simple human language. So for example, if you have an EDR system where you're investigating a certain IP address, you might want to know everything there is to know from a network standpoint about that IP address.

And you can use our MCP server to query our data repo: "Hey, tell me everything you know about this IP address. Tell me all the other hosts it talked to, all the services it talked to, all the data it exchanged." You can ask a human language question in the course of your investigation and get a human language response from our product. That becomes pretty powerful.
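Under the hood, MCP carries these requests as JSON-RPC 2.0 messages; the client invokes a server-exposed tool via the `tools/call` method. A minimal sketch of building such a request follows; the tool name `query_network_context` and its arguments are hypothetical, not Corelight's published interface.

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    """Build the JSON-RPC 2.0 message an MCP client sends to invoke a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical natural-language query against a network-data tool.
req = mcp_tool_call(1, "query_network_context",
                    {"question": "Tell me everything about 203.0.113.9"})
print(json.dumps(req, indent=2))
```

The transport (stdio or HTTP) and the tool schema come from the MCP server's advertised capabilities; the point is that the argument payload is plain language rather than a rigid field-by-field contract.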

One of the ways we are showing that integration, showing the capability, is with our latest integration with CrowdStrike's Charlotte AI within their Fusion marketplace. Essentially, Charlotte's AI tool, just in the way I described, can in the course of an investigation reach out to our AI system and get access to the data it needs for its investigation. And I believe this is going to be the future. Not just AI agents talking to other AI agents; in general, customers are building their own AI stack, and they will want similar interfaces with all the tools in their ecosystem, for their AI to be able to talk to the

AI in the vendor tool ecosystem. So when you look ahead, is it difficult to create a product roadmap when the technology seems to be evolving so rapidly? This is a constant challenge. I see it both as an opportunity and a threat. The threat, as I articulated earlier, is that when attackers are moving faster than defenders, that's a threat for all of us as a community that we need to step up to.

But I myself see the coming of AI as a huge opportunity for us and the cybersecurity industry. If you think of what the industry has been struggling with for as long as I've been here, it's been a constant increase of surface area. Your surface area continues to increase with cloud and now

AI. Then there's the lack of skills in the industry. Not that you don't have highly skilled people, but you don't have enough, so that's a constant challenge. And AI, in a lot of ways, is a great leveler. It lowers the barrier to entry for new professionals to come into this space and understand it. You don't need decades of experience. Just like you were saying, you could use AI as an assistant, a tool, a copilot that helps you ramp up a lot faster than if you did not have it. So it lowers the barrier to entry for a lot of new people coming into the industry.

I used to think that there would be a threat to the entry-level work. But I think about the way so many teams are structured (I never structured my teams this way, but I've seen plenty of other places that do), where the entry-level work was so mind-numbing and so boring, I'm not sure who would want to do it. And if they did it and stuck around, I'm not sure that's even the right kind of person I'd want to stick around, because they're just mind-numbingly clicking buttons and doing work that is not fulfilling or interesting.

And that should be automated away, because it's not even really security in my opinion. It's just low-signal management-type work. So to think about someone who could go in as an entry-level person and yet do something more complex and meaningful, and be equipped to do it because they essentially have an assistant who can help them make sense of it, I think that's a way more fulfilling career track for an entry-level person.

So I think we just have to think in different terms of what entry-level people do versus not do. Yeah, 100%. And this is true across the industry. We are seeing this play out in the software development industry right now, where entry-level folks are now armed with tools like Claude Code and so on, with capabilities that far exceed what a typical entry-level person would have. And the value I see in the software development industry is that you're just going to see a lot of amazing software getting built, developed, and shipped, way more than in the past. In the past, you could have a great idea, but your barrier was finding the people with the right technical skills to go build the right products. Now that that barrier has dramatically gone down, I expect a lot more great ideas will see the light of day, because the barrier for innovation has dropped. It's a similar analogy in the cybersecurity industry as well. Yeah. I think the work is going to shift, and in some cases is shifting, away from the bare coding into project management, architecture, testing, validation, vulnerability discovery, all these other areas. The actual delivery of the code is just one part that I think is getting eaten by AI. But there's an AI developer who uses Claude who makes my favorite

Windows de-bloating software. And it's not him just telling the AI, "Oh, I want this." He has to do so much architectural work to make sure it's doing what he wants it to do, and the project management. It's like he has a whole team that has to do all this work, and he's the supervisor for it. That, I think, is what's really interesting. It's a lot of people who have a vision, but it's not simply, "Oh, AI, go build me this." There's so much other work that needs to be done that is part of traditional software development, but maybe wasn't as glamorous, or maybe wasn't even seen as that relevant.

When I think of a typical software engineer or SOC analyst, or even product managers such as myself, I typically think of our roles as a combination of craft and art. Craft is the grunt work that you had to do in the past. In code development, you have to manage your repos, write your requirements, write your functional spec, communicate with people, build the code, debug, test. All of that undifferentiated stuff. And it's the same in the cybersecurity space, like you were saying: the tier one analysts have to do a lot of the grunt work, going through a lot of chaff to find the threats and the alerts that really matter.

But if I think of the art piece of it, that's where your expertise, your judgment, your creativity all come in. And unfortunately, in the past, most of us spent seventy, eighty percent of our time on craft, because the data we need is all isolated in different silos. It's not easy to access the data, so you're working across a bunch of databases, joining and stitching stuff, when the playbooks are all over the place and you don't have enough context about your environment. So we used to spend a lot more time on the craft and a lot less time on the satisfying work, which is the art. Again, I think we're saying the same thing:

I think AI comes in to eat the craft so that people such as us can focus a lot more time on the art. And that's, I think, the real distinguishing piece of

AI. Yeah. And if I could give a shout-out to one of my favorite YouTubers, Bash Bunny. She did a good video recently exactly about that differentiation, the craft and the art. And she reminded viewers that for some people, the craft is the thing that they're there for. You think back to these legendary programmers like John Carmack, who devised these amazing new ways to do things for video games that no one had ever done before.

That was the accomplishment right there: the fact that he was able to figure out how to do all these different algorithms and all that sort of stuff. So

I think that might be where some of the friction comes from. There are people who really do prefer the craft over the art. But it is nice, as someone who is more on the art side, to be able to solve some problems that I've had thanks to using these tools.

Vijit, I was so glad that you could join us today. I know you're on the way to the annual security show that everyone knows as RSA. So thank you for joining me. Excellent. Thanks so much for having me, Richard. Really appreciate the time, and appreciate the chance to reach out to your audience as well.

Thank you for joining us on the Corelight Defenders podcast, sponsored by Corelight. You've been listening to Corelight Defenders. To stay informed with expert intelligence on today's cybersecurity challenges, please subscribe to ensure you never miss an episode. We'll see you on the network.