September 2017 was a busy month. Three major breach notifications from Deloitte, the SEC, and Equifax… and my first Wave dropped, coincidentally on Digital Forensics & Incident Response Service Providers. Following all this commotion, I had a client reach out and ask: How are investigators able to reconstruct digital crime scenes to identify how attackers got in, and what information could have been leaked? The simple answer: available logs and the element of surprise.
Let’s start by talking about what an attacker can control, because this is important. For all practical purposes, once an attacker is on a system, they can control everything that happens on that system, including the flow of information to and from it. The endpoint becomes untrusted, but the defender doesn’t know this yet. This leads to what Richard Bejtlich has described as the intruder’s dilemma. In short, an intruder only needs to be detected once for the defender to initiate a response and remove the intruder from the network. If an attacker wants to persist, they have to play nice-ish and not create too large a disturbance in the force.
Now, **if** you have good logging and have detected an attack, you can start to reconstruct timelines and paint a picture of what the attacker was up to. But remember: local logs are untrusted because they are controlled by the intruder. Centralized logging provides the defender two important advantages: it makes it a lot more difficult for an attacker to clean up after themselves (read: sanitize logs), and a system that suddenly stops logging is going to attract the kind of attention the attacker doesn’t want. This is why the better Endpoint Detection & Response (EDR) tools don’t store their telemetry data on the endpoint, and it’s also an important argument for investing in Security Information & Event Management (SIEM) technology.
This brings us to the element of surprise. Local logs and filesystems generate a lot of forensic information — if you manage to get there before the attacker starts cleaning up after themselves. File creation and modification dates have long been a method of reconstructing an attack. During an investigation, you will find clues about the tactics, techniques, and procedures of the attacker, possibly including getting access to artifacts such as their toolkit. Note that the window during which this information can be collected is much wider than one might think if the attacker doesn’t know they have been detected or is outright making mistakes.
Finally, in trying to understand what has been stolen, an attacker’s choice of targets can help you understand what they were looking for and what they may have had access to. Logs may provide transactional information that allows you to see what data was accessed. If you’re lucky, there may be a staging server the attacker was using to collect stolen data before exfiltration, allowing you to see exactly what data was accessed. Sometimes it’s hard to tell. In the end, computer forensics is about using available information to perform an analysis of what may have transpired and assign a confidence level to that assessment.
Josh Zelonis is a senior analyst at Forrester Research.