NETWORK FORENSICS: KEY STEPS FOR DATA COLLECTION AND PRESERVATION
The cybersecurity discipline of network forensics involves collecting and analyzing network data to support legal and security actions. This article covers the essentials of the network forensics process, its challenges, the tools to use, and the data collection and preservation steps that help you maintain data integrity.
What is network forensics?
Network forensics is the art of collecting, protecting, analyzing, and presenting network traffic to support remediation or prosecution. Whereas incident detection and response are security processes, the practice of network forensics tends to support legal processes. Investment in a network forensics process will improve detection and response while making prosecution an option for cases that warrant legal attention.
Forensics standards
When security or network professionals hear the term "network forensics," they often think in terms of collecting packets from the wire and writing them to disk. They then inspect the packets for signs of suspicious or malicious activity to support incident detection and response. While these steps are part of a network forensics process, they do not on their own qualify as network forensics.
Forensic activities are usually tied to legal proceedings or, within the walls of an organization, to human resources investigations. Because of the standards required to support a prosecution or human resources action, security professionals must adhere to requirements more stringent than those found in normal security processes.
Network forensics relies on evidence collected from a network source. The term "evidence" in this context is best understood with respect to three broad sources:
- Federal Rules of Evidence (FRE)
- Daubert v. Merrell Dow Pharmaceuticals, Inc., 113 S. Ct. 2786 (1993)
- Kumho Tire Company, Ltd. v. Patrick Carmichael, 119 S. Ct. 1167 (March 23, 1999)
Daubert and Kumho are the most specific and are the basis for the recommendations that follow.
Daubert emphasizes the following:
- “[W]hether it [a scientific theory or technique] can be (and has been) tested”
- “[W]hether the theory or technique has been subjected to peer review and publication”
- “[C]onsider the known or potential rate of error... and the existence and maintenance of standards controlling the technique's operation”
- “The technique is ‘generally accepted’ as reliable in the relevant scientific community”
Kumho emphasizes the following:
- Required the Court “to decide how Daubert applies to the testimony of engineers and other experts who are not scientists.”
- “Daubert's general holding -- setting forth the trial judge's general ‘gatekeeping’ obligation -- applies not only to testimony based on ‘scientific’ knowledge, but also to testimony based on ‘technical’ and ‘other specialized’ knowledge.”
- “[A] trial court may consider one or more of the more specific factors that Daubert mentioned when doing so will help determine that testimony's reliability.”
- Introduced a level of “flexibility” and discretion into the process of accepting expert witness testimony.
- “Daubert's list of specific factors neither necessarily nor exclusively applies to all experts or in every case. Rather, the law grants a district court the same broad latitude when it decides how to determine reliability as it enjoys in respect to its ultimate reliability determination.”
Why MTTD is an important metric to monitor
Mean time to detect (MTTD) is a key metric for determining whether an organization is improving or degrading its ability to respond to attacks. It provides a baseline for measuring an organization's effectiveness and shows whether it is moving in the right direction. Changes in MTTD also create an opportunity to proactively evaluate aspects of the security operations center's (SOC's) methods and toolsets. A degradation in MTTD is a prompt to identify the cause and take action to address it.
For example, when an incident increases the MTTD, security leaders can pursue several lines of inquiry:
- Was the increase due to adversaries changing their tactics, techniques, and procedures (TTPs) in response to existing defenses?
- Does it indicate shortcomings of existing detection tools and techniques?
- Is additional training needed to help staff recognize this change?
- Has something changed in the organization’s architecture that has created a new blind spot in current detection capabilities?
To maximize the metric’s utility, MTTD should be documented at regular intervals so that changes can be tracked over time.
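As a simple illustration of how that tracking might work, the sketch below computes MTTD for a reporting period from a set of incident records; the records, field names, and timestamp format are all hypothetical.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the activity began (or first became
# observable) and when the SOC detected it. Field names are illustrative.
incidents = [
    {"started": "2024-09-02 08:15", "detected": "2024-09-03 14:40"},
    {"started": "2024-09-18 22:05", "detected": "2024-09-19 03:30"},
    {"started": "2024-10-01 11:00", "detected": "2024-10-01 19:45"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# MTTD for the period is the average detection delay across its incidents.
mttd_hours = mean(hours_between(i["started"], i["detected"]) for i in incidents)
print(f"MTTD for the period: {mttd_hours:.1f} hours")
```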
As organizations build their security strategy, capabilities, and procedures, they can monitor the impact that they have in reducing the MTTD by continuing to ask questions about actual performance, such as:
- How did a new threat hunting effort help improve the ability to detect an adversary?
- What information will be needed to demonstrate the value of an improvement in network monitoring?
Along with other key metrics, MTTD gives security leaders an accessible measurement that helps explain to business leaders how well their security defenses are performing and what return they are delivering on investment.
Additionally, tracking MTTD over time allows for easier communication with auditors and other stakeholders about how well the security program is working. Many regulatory frameworks require timely detection of incidents, so tracking MTTD allows organizations to show how their efforts have improved their detection capabilities.
Incident detection and response vs network forensics
Incident detection and response
A common strategy for handling computer intrusions consists of the following four activities. The goal of this process is to determine the scope and impact of a compromise and return compromised assets to a trustworthy state.
Collection: Gathering the data we need to decide whether activity is normal, suspicious, or malicious
Analysis: Validating what we suspect about the nature of an event
Escalation: Notifying a constituent about the status of a compromised asset
Resolution: Reducing the risk of loss by acting to contain and remediate the asset
With this security process, a mistake at any one step does not necessarily result in ultimate failure. Security processes can be relatively forgiving, especially when teams investigate multiple indicators of compromise during the course of an intrusion.
Network forensics
The network forensics strategy is similar but offers key differences. The goal of this process is to support a prosecution or administrative action.
Collection: Gathering evidence to decide whether activity is normal, suspicious, or malicious
Preservation: Maintaining the integrity and chain of custody of the evidence
Analysis: Validating what we suspect about the nature of an event
Presentation: Making a case to a constituent on the nature of the forensic findings
Unlike security processes, a failure at any step of the network forensics practice will likely doom the whole investigation. Opposing counsel will point out any flaws in the forensic process, either casting doubt on the findings or prompting a judge to exclude the evidence and derail the entire proceeding.
The network forensics process
Collection
The foundation for the network forensic process is collection. If collection is not trustworthy, none of the evidence is reliable. The following are key to the collection process:
- Secure the sensor
- Limit access to the sensor
- Position the sensor properly
- Verify the sensor collects traffic as expected
- Determine sensor failure modes
- Recognize and compensate for collection weaknesses
- Use trusted tools and techniques
- Document and automate the collection process
The sensor must absolutely not be compromised. Any hint of compromise undermines the reliability of the evidence and severely weakens the position of a forensic analyst serving as an expert witness. A compromised sensor demonstrates a lack of professional capability.
Only those with an explicit need to connect to the sensor should do so. Each person logging in to the sensor should employ their own username and password. If possible, replace password-based authentication with public/private keys (and strong passphrases) or use two-factor authentication with a hardware token. Direct login using the root account should be disabled. Only secure communications channels like SSH or SCP are acceptable ways to perform remote maintenance. Only those with a real need for root access should possess root access. Analysts who simply need to review network traffic can perform their duties with user-level privileges.
Be sure the sensor is located in the part of the network that sees the traffic of interest. Be prepared for opposing counsel to argue that "exculpatory activities were not seen by your sensor." For example, in the case of discovering inappropriate material on a computer, opposing counsel often argues that an intruder placed the offending content on the defendant's computer. Opposing counsel then argues that the network sensor didn't see the intruder's activity. Be prepared to show that the sensor would have captured that activity, had it actually occurred.
It is good to know that a sensor is working properly. It is better to understand the conditions that will cause it to fail. When forensic investigators deploy packet collection devices, they need to know what could degrade or deny their systems. The condition most likely to harm packet collection is high bandwidth usage. Size the sensor appropriately and tune the collection software to minimize dropped traffic.
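One way to watch for that failure mode is to monitor the capture interface's drop counters. The sketch below reads the Linux kernel's per-interface receive-drop count from /proc/net/dev; the interface name is an assumption, and most capture tools also report their own drop statistics, which should be recorded as well.

```python
# Minimal sketch: check the kernel's receive-drop counter for the capture
# interface on a Linux sensor. The interface name is an assumption; capture
# tools such as tcpdump also report their own drop statistics on exit.
CAPTURE_INTERFACE = "eth1"  # hypothetical monitoring port

def rx_drops(interface: str) -> int:
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            name, stats = line.split(":", 1)
            if name.strip() == interface:
                # Receive columns: bytes, packets, errs, drop, ...
                return int(stats.split()[3])
    raise ValueError(f"interface {interface!r} not found")

print(f"{CAPTURE_INTERFACE} receive drops: {rx_drops(CAPTURE_INTERFACE)}")
```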
Packet loss is only one difficulty to be expected when performing network forensics. Be prepared to deal with issues like lack of complete visibility, encrypted payloads, and limited sensor storage capacity. Visibility is a function of the network architecture, as defined by the routing protocols, switching design, and ingress and egress paths. Achieving completely pervasive network awareness may be too expensive or time-consuming, so investigators should assess and document ways their collection strategy could fail and implement countermeasures on the endpoint and elsewhere as appropriate.
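Storage capacity in particular can be estimated up front with back-of-the-envelope arithmetic, as in the sketch below; the link speed, utilization, and disk figures are purely illustrative.

```python
# Back-of-the-envelope capture sizing; every number here is illustrative.
link_speed_gbps = 1.0      # speed of the monitored link
avg_utilization = 0.30     # average fraction of the link actually in use
sensor_storage_tb = 20.0   # usable disk available for packet storage

bytes_per_day = link_speed_gbps * 1e9 / 8 * avg_utilization * 86_400
days_of_retention = sensor_storage_tb * 1e12 / bytes_per_day
print(f"~{bytes_per_day / 1e12:.1f} TB captured per day, "
      f"~{days_of_retention:.1f} days of retention")
```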
Avoid tools that are not accepted by the general security community. Open source tools are frequently better suited for forensic roles because their code can be independently audited. When analyzing network traffic, be prepared to defend how you look at packets and why you make certain judgments. Relying on published sources (like solid books on network security monitoring) enables third parties to verify your adherence to industry best practices.
Whatever process you choose to follow, document it thoroughly. Justify why you take the steps you have selected and how they meet the criteria for a sound forensic workflow. Automate the process where possible to ensure reliability and repeatability. The ideal situation involves providing evidence to a neutral third party who follows your process and produces the same results, thanks to the reliability of your approach.
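As an illustration of what documented, automated collection might look like, here is a minimal sketch that writes a manifest (sensor hostname, start time, exact command, operator) before launching the capture. The use of tcpdump, the interface name, the rotation interval, and the file paths are all assumptions; substitute whatever your documented procedure actually specifies.

```python
import json
import platform
import subprocess
from datetime import datetime, timezone

# Minimal sketch: record what is being run, where, and when, then start the
# capture with fixed, documented arguments. Interface, paths, and rotation
# settings are illustrative, not a recommendation.
CAPTURE_CMD = [
    "tcpdump", "-i", "eth1", "-n", "-s", "0",
    "-G", "3600",                                  # rotate the output file hourly
    "-w", "/nsm/pcap/trace-%Y%m%d-%H%M%S.pcap",    # timestamped file names
]

manifest = {
    "sensor": platform.node(),
    "started_utc": datetime.now(timezone.utc).isoformat(),
    "command": " ".join(CAPTURE_CMD),
    "operator": "analyst-on-duty",                 # hypothetical identifier
}
with open("/nsm/pcap/collection-manifest.jsonl", "a") as f:
    f.write(json.dumps(manifest) + "\n")

subprocess.run(CAPTURE_CMD, check=True)            # runs until stopped
```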
Preservation
Preserving network evidence is a requirement for network forensics that differs from normal security processes. Most incident detection and response work is "best effort." Network forensics may fail if opposing counsel can show that the evidence presented to a judge or jury is unreliable due to faulty preservation. Consider the following after collecting network evidence:
- Hash stored network traces and other data after collection and safely store the hashes elsewhere
- Understand the "forms of evidence"
- Copy evidence to read-only media when possible
- Create derivative evidence
- Follow chains of evidence
Hashing network traces and other data means using a hash function, such as SHA-256, to create a one-way cryptographic representation of the target data. Hashing provides a way to show that the original evidence is the same as any evidence reviewed at a later date. A change in a cryptographic hash shows that the underlying data has also changed, and is therefore potentially unreliable.
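A minimal sketch of that practice, with illustrative file paths: hash the trace at collection time, store the digest somewhere other than the sensor, and later re-hash any working copy to confirm it still matches the original.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large traces need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# At collection time: hash the trace and store the digest somewhere other
# than the sensor (separate host, write-once media). Paths are illustrative.
original = Path("/nsm/pcap/trace-20241009-180000.pcap")
recorded_hash = sha256_of(original)
Path("/evidence/hashes/trace-20241009-180000.sha256").write_text(recorded_hash + "\n")

# Later: re-hash a working copy and confirm it still matches the record.
working_copy = Path("/cases/TS-0041/trace-20241009-180000.pcap")
assert sha256_of(working_copy) == recorded_hash, "working copy differs from best evidence"
```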
"Best evidence" is the original form of network based evidence (NBE) available to an investigator. If security analysts provide an investigator an attachment in an email as part of one case, that email and its attachment is the investigator’s best evidence. If security analysts provide an investigator a network trace in pcap format as part of another case, that file is the investigator's best evidence. In the case of network based forensics, network traffic saved on a sensor is the best evidence available.
To the extent practically possible, investigators should never work directly with the best evidence. The risk of corruption or other change is too great. Rather, investigators should make working copies of the best evidence and scrutinize those copies. Copies of any traffic transferred to a central storage location become working copies suitable for investigation.
Transferring large trace files from a remote site to a central location can occupy a large amount of bandwidth. In some cases it may be preferable to enlist the help of staff co-located with the network sensor. Deploy the appliance with a Blu-Ray, DVD, or CD burner, and ask the person at the other end of the phone to help with the media creation process. The staffer can then ship the optical media via certified mail to the investigator's location. This process has the benefit of creating an archive of the best evidence. If two copies are made, one can be safely stored while the second is scrutinized directly by analysts. Any subsets of the data used by investigators to solve a case become "derivative evidence." Be sure to document the chain of custody for this evidence at every step in its lifecycle.
A chain of custody form should accompany any physical evidence and can be as simple as the following example:
Chain of Custody Form

Case Number: TS-0041
Evidence Tag: ET-8905
Evidence Description: Blu-Ray of network traffic collected 9 Oct 2024

| Location | Time / Date | Person Receiving Evidence | Signature |
|---|---|---|---|
| Boston, MA | 1815 | Richard Bejtlich | // SIGNED // |
| Alexandria, VA | 1715 | Keith Jones | // SIGNED // |
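The same information can also be kept as a machine-readable log alongside the signed paper form, which remains the authoritative record. The sketch below appends one entry to a CSV; the log location and the "signature on file" field are assumptions.

```python
import csv
from pathlib import Path

# Minimal sketch: append a chain-of-custody entry to a CSV that mirrors the
# paper form above. Values come from the example; the log path and the
# "signature_on_file" field are assumptions.
LOG = Path("/evidence/TS-0041/chain-of-custody.csv")
FIELDS = ["case_number", "evidence_tag", "location", "time_date",
          "person_receiving", "signature_on_file"]

entry = {
    "case_number": "TS-0041",
    "evidence_tag": "ET-8905",
    "location": "Alexandria, VA",
    "time_date": "1715",
    "person_receiving": "Keith Jones",
    "signature_on_file": "yes",
}

write_header = not LOG.exists()
with LOG.open("a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:
        writer.writeheader()
    writer.writerow(entry)
```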
Network evidence preservation is one area not well appreciated by most tool creators and vendors. Preservation is just as important as, if not more important than, traffic collection. If the integrity of data cannot be established, it will be heavily scrutinized in the boardroom or courtroom. Critics will attack any part of the forensic process to introduce doubt in the minds of directors or jurors. Remember to treat network traffic as evidence, not just a file containing packets. Documenting collection and preservation procedures, and following them, will contribute to a successful prosecution or administrative action.
Analysis and presentation
Analysis of network forensic data varies greatly depending on the tools involved and the format of the data being scrutinized. This topic is best addressed in a subsequent article.
Presentation, however, is as crucial as the previous steps. When making a case, think like a defense attorney!
What aspects of your network evidence most concern you? You should imagine ways to explain your findings before being put in the hot seat. Ask yourself the questions you are most afraid to hear, then work with colleagues to formulate honest and believable explanations for shortcomings in your evidence. Always be sure your evidence leads to your conclusions, not the other way around. You must absolutely work with an attorney to be sure you are on solid legal ground.
To learn more, visit our Cloud Security Solutions page, learn about our Cloud Sensors, or schedule a demo today.
Richard is a strategist and author in residence at Corelight. He was previously chief security strategist at FireEye, and Mandiant's CSO when FireEye acquired Mandiant in 2013. At General Electric, as director of incident response, he built and led the 40-member GE Computer Incident Response Team (GE-CIRT). Richard began his digital security career as a military intelligence officer in 1997 at the Air Force Computer Emergency Response Team (AFCERT), Air Force Information Warfare Center (AFIWC), and Air Intelligence Agency (AIA). Richard is a graduate of Harvard University and the United States Air Force Academy. His fourth book is 'The Practice of Network Security Monitoring'. He also writes for his blog and Mastodon.