
I have trust issues and so does my CISO

 

Trust is hard to earn but necessary for any successful relationship. As organizations build the systems to support Zero Trust, they find themselves balancing security and functionality across their operations. Incident Response and Network Operations in particular can be full of traumatic experiences, and as we sink into those moments, the typical responses are freeze, flight, or fight.

Network architecture design didn’t start with many, or any, barriers in place. When ARPANET (1969) birthed the Internet, it was a feat just to send a message and get a response, so barriers were counterproductive. Yet as more things were connected and more trauma was experienced, whether from misconfigured networks wreaking havoc or genuinely malicious actions, the need for barriers grew. Organizations went from blind trust to breach prevention by building bigger and better castle walls and deeper and wider moats; this was the castle-and-moat design era. It served its purpose, but once an adversary was on the interior, they had free movement. As breaching the walls became more common, we moved from breach prevention to breach detection, informing security operations of interior movements that ran counter to business operations. Increasing the visibility of interior movements exposed the flaws of monolithic prevention strategies. This is where, and why, Zero Trust designs came into being. They were informed by past designs but leveraged modern capabilities and computational efficiencies to add more dimensions to trust decisions.

Where initial prevention methods were built around IPs, subnets, or ports, modern methods can include multiple facets of data such as users, roles, and privileges. Two foundational Zero Trust models exist to help organizations conceptualize and plan the implementation of a modern Zero Trust Network Architecture (ZTNA): CISA's Zero Trust Maturity Model v2.0, built around NIST (National Institute of Standards and Technology) SP 800-207, and the DoD's (Department of Defense) Zero Trust Strategy. The fundamental drivers behind both designs are the same: 1) Never Trust, Always Verify; 2) Assume Breach; and 3) Verify Explicitly. There are arguments to be made as to whether network visibility is foundational or structural in the design, but its importance shouldn't be ignored.
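To make that contrast concrete, here is a minimal, hypothetical sketch (not drawn from any specific product) of the difference between a legacy IP/port allow rule and a Zero Trust style policy check that re-verifies identity, role, and device posture on every request. All names and signals below are illustrative assumptions.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Legacy perimeter rule: trust is decided once, by network location alone.
TRUSTED_NET = ip_network("10.0.0.0/8")

def legacy_allow(src_ip: str, dst_port: int) -> bool:
    return ip_address(src_ip) in TRUSTED_NET and dst_port in (443, 22)

# Zero Trust style check: every request re-verifies multiple facets.
@dataclass
class Request:
    src_ip: str
    dst_port: int
    user: str               # authenticated identity (e.g., from SSO)
    role: str               # authorization attribute
    device_compliant: bool  # posture signal (patched, EDR healthy, ...)

def zero_trust_allow(req: Request, required_role: str) -> bool:
    # Network location is just one signal, never sufficient on its own.
    return (
        req.role == required_role
        and req.device_compliant
        and req.dst_port == 443
    )

print(legacy_allow("10.1.2.3", 22))  # True: an interior IP gets free movement
print(zero_trust_allow(
    Request("10.1.2.3", 443, "jsmith", "finance", device_compliant=False),
    required_role="finance",
))  # False: same interior IP, but the posture check fails
```

The point isn't the specific checks; it's that the decision surface gains dimensions beyond where the packet came from.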

Whichever way you lean on the design of network visibility, the plumbing behind it hasn't really changed. Private cloud, public cloud, containerization: that's the wave of the present. And while the future may swing back to colocation facilities and on-prem, for now it's rented space on someone else's hardware. In the infancy of the transition to this modern design pattern, there was a huge tradeoff in network visibility. You went blind to the network because there was no reliable or clean way to instrument that visibility. Those plumbing bits are mostly solved in the major cloud environments, but an allure of "free is good enough" still pervades the compliance mentality. NetFlow is great, but it's hardly good enough against modern threats. You really need richer, deeper analytics with longevity built into the dataset.
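As a rough illustration of that fidelity gap, compare what a flow record carries against the kind of enriched session log a Zeek-based sensor emits. The field sets below are representative rather than exhaustive, and the parsing sketch assumes a simplified, headerless TSV export of a conn.log at a hypothetical path.

```python
import csv

# A NetFlow-style record: little more than the 5-tuple plus counters.
netflow_record = {
    "src_ip": "10.1.2.3", "dst_ip": "203.0.113.7",
    "src_port": 52144, "dst_port": 443, "proto": "tcp",
    "bytes": 18342, "packets": 27,
}

# A Zeek conn.log entry adds context that holds its value over time:
# a stable uid for pivoting into other logs, the identified service,
# duration, per-direction byte counts, and connection state/history.
ZEEK_CONN_FIELDS = [
    "ts", "uid", "id.orig_h", "id.orig_p", "id.resp_h", "id.resp_p",
    "proto", "service", "duration", "orig_bytes", "resp_bytes",
    "conn_state", "history",
]

def long_lived_tls(path: str):
    """Find TLS sessions longer than an hour in a (hypothetical) TSV conn log."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f, fieldnames=ZEEK_CONN_FIELDS, delimiter="\t")
        return [
            row for row in reader
            if row["service"] == "ssl" and float(row["duration"] or 0) > 3600
        ]
```

A question like "show me long-lived outbound TLS sessions" is a one-liner against a session log; against bare flow counters, the service and duration context simply isn't there to ask it.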

 


 

The concept of a strategic data reserve comes up when you have a dataset of sufficient richness and depth. When the data is properly enriched and of high enough fidelity, it retains value that is easily lost as time passes with low-fidelity data. Federal mandates are beginning to require the building of strategic data reserves. If you look at OMB (Office of Management and Budget) Memorandum M-21-31, you'll see that network data and packet data have requirements for longer-term storage. Bolstering that mandate, NARA (National Archives and Records Administration) General Records Schedule 3.2, Transmittal Number 33, which absolutely rolls off the tongue, further codifies what M-21-31 set out to do: give defenders longevity in their knowledge base and increase their maneuver space. These mandates set the requirement for archiving 30 months of cybersecurity event logs (i.e., network flow/session logs) and 72 hours of packet capture data. This goes a long way toward building that strategic data reserve and creating a predictable maneuver space for defenders to work in.
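As one hypothetical way to encode those retention windows, here is a sketch that applies M-21-31-style lifecycle rules to an S3 bucket holding session logs and packet captures. The bucket name, prefixes, transition tier, and the 30-months-as-913-days conversion are all illustrative assumptions, not prescribed by the memo.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative bucket/prefix layout; adjust to your own environment.
BUCKET = "example-strategic-data-reserve"

lifecycle = {
    "Rules": [
        {
            # ~30 months (913 days) for flow/session logs, per M-21-31.
            "ID": "session-logs-30-months",
            "Filter": {"Prefix": "zeek-logs/"},
            "Status": "Enabled",
            "Transitions": [
                # Move to cheaper storage after the active hunting window.
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 913},
        },
        {
            # 72 hours of full packet capture; S3 expiration is whole days,
            # so 3 days is the closest enforceable window here.
            "ID": "pcap-72-hours",
            "Filter": {"Prefix": "pcap/"},
            "Status": "Enabled",
            "Expiration": {"Days": 3},
        },
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET, LifecycleConfiguration=lifecycle
)
```

However you implement it, the design goal is the same: make the retention window predictable, so defenders know exactly how far back they can maneuver.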

With anywhere from five to seven pillars to focus on when designing toward ZTNA, it's easy to lose sight of the network visibility portion of the Visibility and Analytics pillar. However, its ability to create a long-term, correlated dataset from passively collected data shouldn't be discounted, whether for Zero Trust decisions or for general Incident Response. For a more in-depth conversation on the past and present of ZTNA and how data can resolve trust issues, I invite you to watch a recent webinar I presented to SANS with our Federal CTO Jean Schaffer.
