Episode 13 - Battle-Hardened Research: Navigating the Intersection of AI and Open Source
Welcome to Corelight Defenders. I'm Richard Bejtlich, strategist and author in residence at Corelight. In each episode, we explore insights from the front lines of NDR, network detection and response. Today, I'm speaking with Ali Islam, VP of Corelight Labs. Welcome, Ali.

Thanks, Richard. Glad to be here.

What does a research team do at a security company?

Essentially, Labs delivers on all kinds of detection. The team sits at the intersection of threat research, data science, detection engineering, and product innovation.
They do all kinds of things, starting with really studying the threat actors: their techniques, the different campaigns they run, their motivations. From there, they look at the product and its existing capabilities. Do we need to enhance a capability, or can we use the existing ones to figure out the most generic, behavior-based way to detect those TTPs, those behaviors?

Would it be fair to say that this is something you get in a larger company? I would imagine that if you're a very small vendor, you're trying to do all of that within the people who are writing the software. Maybe the founder was the person who started the company, built the initial product with a certain vision for what it should do, and put all of that together into their MVP.
But at some point you're trying to scale, and you have developers who probably aren't even security people; they're more software people. So you need that research team to provide guidance on what should be coming out in the product.

That's a great question, and you're absolutely right. Depending on the size of the company, the lines are blurred initially; you're wearing multiple hats and doing many different things. For example, even today within Corelight's research team, Labs, we do not only the detections but also the detection engines, and that's a lot of software engineering.
From Labs we also own open source Zeek, which is a huge contribution to the security community. Many of the top companies use it within their environments.
The other point I want to mention is that research teams within cybersecurity companies are not ivory towers or typical academic labs. Every research initiative is directly linked to a customer pain point or a missing detection, and we close the loop with product engineering to make sure that all of our research efforts eventually get operationalized and used by our customers.
It's interesting you mention the academic side, because I've tried to keep my feet in different areas over the years. For example, I might go to the Black Hat conference, which draws a more offensive-minded group, and then to USENIX, which is a bit more academic, with a more practitioner side as well.
It always seemed like the more purely academic conferences, and I've been to some of those as well, would focus on topics like NetFlow. There's a whole FloCon that is nothing but NetFlow.
I feel like that's stuck back at the very beginning of my career, when that's all you had in many cases, so that's what you used. It ignores the fact that people in the field have, or at least have the opportunity to have, much richer sources of data. In other parts of academia, they are in fact pushing things forward, in medicine or science. But in security, unless you're in school, for the most part you have to be under fire, or at least talking to people who are under fire every day, to keep up with what's going on.

I must say I love those conferences, USENIX and all of them. They're definitely very valuable. But you're right. The term I use for that is battle-hardened. That naturally pushes you to push the boundaries, explore the edges, and work from necessity. As you see threat actors evolving, you don't have a choice: you have to think about it, experiment, and figure out the best way to evolve your overall product and capabilities. Whereas in academia, you ask certain questions and then go on your exploration, which is not really driven by the latest threat actor techniques or evasions.

I agree. Those are two different worlds, with definite overlap, and both exist for a reason. Both have value, but in cybersecurity, a lot of the time the people who are actually battling the bad guys are on the cutting edge. Within your group are the Zeek developers that Corelight funds.
So Corelight employees whose main job is to develop Zeek as an open source project. I think that's interesting because while there's probably not a company out there that doesn't use open source software, and some companies do fund dedicated positions, it's usually not the entire team that works on a project. No single company employs everyone building the Linux kernel, for example. Yet here at Corelight, we do that. What is that experience like, having the engine for our product be developed by an open source team that we also fund?
It's great, because we sit at the intersection of thinking about what the open source user needs and, at the same time, what the Corelight customer needs, and we cater to both sets of requirements.
We get the best of both worlds from that perspective. I must say it's also very satisfying to know that there is a huge number of users out there using open source Zeek. As they become fans of Zeek and of really enriched logs and data, they hit a point where they struggle to scale. That's where Corelight comes in, with a scale-tested, stable product that scales nicely as your network and your traffic grow. So they have both options: the open source, to play with it and see how valuable Zeek is, and then, as they scale and want to implement it throughout their organization, or as their traffic grows, they can come buy Corelight.
Before I ask my next question, I also wanted to point out that we follow that other model as well: we employ a developer for the Suricata project, because we use it in our own software, we believe in it, and it provides a whole set of capabilities that is useful for our customers. So we support that. I think we even have some contributions we don't officially promote; a lot of people inside the company help with other open source projects too. We talked about security research. How does a person become a security researcher?

Security research has changed from being more reactive in the past to more proactive nowadays.
I think security research teams should really focus on behavior-based detections and ML and AI detections, and less on signature-based detections, because those can be easily evaded. My story is simple. I didn't plan for it. I just followed the interesting problems.
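To make the signature-versus-behavior contrast concrete, here is a minimal sketch of the two approaches. The indicator string, the beaconing heuristic, and the threshold are all invented for illustration; this is not Corelight detection logic.

```python
SIGNATURE = b"evil-c2-beacon-v1"  # invented indicator; trivial for an attacker to change

def signature_match(payload: bytes) -> bool:
    """Signature-based: fires only on an exact known pattern."""
    return SIGNATURE in payload

def beaconing_score(timestamps: list[float]) -> float:
    """Behavior-based: score how regular a host's outbound connections are.

    Highly regular intervals suggest automated C2 check-ins,
    regardless of what the payload looks like.
    """
    if len(timestamps) < 3:
        return 0.0
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(intervals) / len(intervals)
    if mean <= 0:
        return 0.0
    variance = sum((i - mean) ** 2 for i in intervals) / len(intervals)
    # Low coefficient of variation -> very regular -> higher score.
    cv = (variance ** 0.5) / mean
    return max(0.0, 1.0 - cv)

# Rotating one byte of the string evades the signature...
assert not signature_match(b"evil-c2-beacon-v2")
# ...but perfectly regular 60-second check-ins still score high behaviorally.
assert beaconing_score([0, 60, 120, 180, 240]) > 0.9
```

The point of the sketch is the evasion asymmetry: the attacker controls the payload bytes, so a signature dies with one edit, while the timing behavior is a side effect of how the malware operates.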
When I graduated, I just had strong CS fundamentals. There was a startup called FireEye, in early 2005, and because I was following interesting problems, I started working there, and from then on I never left cybersecurity. It was so interesting. I remember the initial product we built at FireEye was for server-side exploits. Not many people know that the product did not work out the way we wanted; it was not a success.
We quickly pivoted to the changing threat landscape, which was attacks moving to the client side. There were a lot of IRC botnets in those days, Rbot, SDBot, one interesting problem after another, and I kept learning and exploring new things. Later in my career, I also explored securing autonomous cars. That's my story; that's how I got into the cybersecurity field.
Now, when I think about what's required for somebody who's just starting to get into cybersecurity, there's no single right background, but maybe a combination of things: strong computer science fundamentals, software engineering, networking, assembly and CPU architecture, especially if you want to get into the reverse engineering side of things, and the ability to navigate ambiguity. And just a curious mind. It does not matter if you have all those skills; if you're not curious, those skills do not matter.

That makes a lot of sense. Navigating ambiguity, that should be on a T-shirt for all security people.
Speaking of not knowing what's going to happen, you mentioned artificial intelligence a little earlier. How do you see that affecting security research, or even how we're creating solutions for our customers?

It definitely helps accelerate things, from the defender's perspective as well as the attacker's. On the defense side,
I can see AI really helping to reduce analyst fatigue. For example, previously a SOC analyst would take a look at all of their SIEM alerts and try to triage and figure out what's going on. A good AI, trained with good data, can come in and really help with that triage, and an investigation that used to take seven or ten days can be done in one.
In the limited exposure I've had to using AI for investigating events, it's been pretty impressive. It can cut through a lot of material quickly and give me more context than I would get with, say, just a search.

It's definitely going to be very helpful.
It can really help reduce the noise. As you're finding the needle in the haystack, it can remove a lot of the haystack for you, so the needle becomes easier to find. And there are definitely challenges too, in terms of trust and false confidence, but it's getting smarter every month, and there are far more positives for defenders. That's what I'm really excited about.
The balanced takeaway would be that AI won't just magically solve security, but the teams that don't use it will fall behind for sure. We need to embrace it, experiment, play with it, and make sure we're finding the right use cases to really get all the help we can from AI.

I think that makes sense.
I'm personally looking forward to the day when some intruder is vibe coding their command and control infrastructure and, because they're focused on the intrusion part, doesn't focus on the quality of the code they're writing for their back end. They introduce some kind of vulnerability, and that allows more offensive-minded security researchers to get access to their back end. We've seen that over the years, but I have a feeling it might become a little more prevalent if people are using AI and aren't as security-minded about their own offensive tools. So we'll see.

It's interesting that you mention that, because we actually did that when I was at FireEye.
We exploited a vulnerability and got into the attacker's machine. It was an interesting experience. Even today, we see attacker infrastructure: there are sometimes open directories, and there are other ways, like following an attacker, watching what kinds of domains they're registering, and then predicting what we internally call the minus-one-day attack, figuring out new attacks before day zero. We're seeing those things, but you're absolutely right that AI will really help accelerate them, and maybe we can do it in a much better way, because there is no doubt that on the other side, the attackers are definitely embracing AI. There are tools out there like
WormGPT, operating at mass scale. Attackers are using it and taking advantage of AI: there are better phishing and social engineering campaigns now, and faster malware development.
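As a loose illustration of the domain-watching idea behind "minus-one-day" hunting, a hunter might flag newly registered domains that are near-misses of watched brand names. The watch list, the distance threshold, and both helper functions below are hypothetical, not an actual Corelight capability.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# Hypothetical watch list of brands an attacker might typosquat.
BRANDS = ["corelight", "paypal", "microsoft"]

def suspicious_registration(domain: str, max_dist: int = 2) -> bool:
    """Flag a new registration whose first label nearly matches a watched brand.

    Exact matches (distance 0) are the brand itself and are not flagged.
    """
    label = domain.lower().split(".")[0]
    return any(0 < edit_distance(label, b) <= max_dist for b in BRANDS)
```

A real pipeline would feed this from zone-file or certificate-transparency streams and combine the string score with registrar and infrastructure features, but the "spot the lookalike before it's used" idea is the same.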
All of that is happening. The other camp, the attacker camp, is already using it, so it definitely makes sense from our perspective, the defender's perspective, to fully embrace it and use it wherever possible.

Well, Ali, I'm glad that you're here to lead our Labs effort, because we have a lot of capability there, and we need somebody good in place to make sure we're directing it toward the best outcome for our customers and for the Zeek community, as well as our open source users. So thank you for joining me today on the Corelight Defenders podcast.

Thanks for having me. It was great talking to you.

You've been listening to Corelight Defenders, sponsored by Corelight. To stay informed with expert intelligence on today's cybersecurity challenges, please subscribe to ensure you never miss an episode. We'll see you on the network.