The 3 stages of 3ages

November 27, 2014 in Compromise Detection, Malware Analysis, Preaching

Quick Update

Just to clarify: this is a critique of IR processes that rely on a single way of doing things – one which in certain circumstances is not the best; it may slow down your response time & give you a lot of unnecessary work. In other words: the Alexiou principle (see below) is a good way of doing things. Doing full or raw forensics on the example 400 hosts mentioned below would be very inefficient. This mainly applies to daily IR/SOC work, not consulting gigs.

Original Post

The Digital Forensics world is subject to trends the same way fashion is.

A long time ago everyone would just do bit-by-bit & offline forensics and that would be enough. Then the ‘do not pull the plug’ idea came along and now no one dares to shut down the box until at least the volatile data is acquired. On the shoulders of the importance of volatile data came the notion of the importance of memory forensics, which culminated (as of 2014) in the phenomenal work of the Volatility team – an excellent tool and the best book about ‘all of it’, presented in the most digestible way ever.

Somewhere in the background a lot of research into the internals of NTFS was also done; it then got digested by many and finally converted into popular, often free, applications. It’s actually a good time to be in DFIR, because the tools are out there and you can crack cases in no time. On top of that, both memory and NTFS/$MFT forensics are probably the most attractive technical aspects of the DFIR world in general, due to the technical difficulties associated with their ever-changing complexity &, simply speaking, ‘it really takes time getting to understand how it all works, but it is very rewarding’ (c).

What could possibly go wrong?

The everlasting and omnipresent part of DFIR work is the word ‘context’, a.k.a. scope (if you come from the consulting or compliance world).

One thing I have observed over the last few years is a very strange trend, which can be formulated as:

  • triage is now equal to memory & $MFT forensics.

If you can do it quickly, have the proper tools, and know what you are doing – it may actually work.

BUT

I believe that it’s often an over-engineered solution to a much simpler problem.

Context is REALLY important. And it dictates what you do and how you do it. And I believe that the context is always driven by the character of the investigation.

Let’s make an attempt to describe the various ‘levels’ of depth one can reach while doing DFIR work. It all depends on… yes, you guessed it right – context (or scope).

  • Law Enforcement engaged / Criminal case
    • Full-blown forensics with a major stress on accountability/logs/notes and chain of custody, applied to every possible device you can find at the crime scene
    • Almost always goes to court, or the possibility is pretty high
    • You are SUPERCAREFUL, because everything you do is going to be shown to law interpreters [a.k.a. lawyers :)]
    • You use a very specific, self-protective language to describe your findings
  • Confirmed compromise with more than one aspect of the C.I.A. triad affected (e.g. PCI space, hacking cases)
    • Almost identical to the above case, with one extra bit – full forensics for the scoped systems + raw or light forensics in the ‘close neighborhood’
    • Surprisingly, it does not go to court that often, but sometimes it does. Whatever you do – do it with the assumption it WILL go to court one day. So you are still VERY CAREFUL, and take care of the chain of custody and statements
    • You also use a very specific, self-protective language to describe your findings
  • Day-to-day work on the IR/SOC team
    • Your role is to keep the company secure and, literally speaking, to find & close incidents
    • Usually you do light forensics for all systems
    • Raw/full forensics are used if, and only if, a deeper intrusion is confirmed

Same as in school, this is all about grades.

Just to be precise here: I have used some terms above which require further explanation:

  • light forensics – focus on data that is ‘easy’ to acquire with OS-only tools and with minimal impact on the system (minimal contamination) – this is not your memory forensics/$MFT analysis yet; it is AV logs, “dir /a/s c: > evidence.txt”, “powershell gong foo”, “netstat”, “wmic /xyz/” variations, maybe later on autoruns and Sysinternals tools, etc. + copying it all over to your box for further analysis (see the collection sketch after this list)
  • raw forensics – maybe there is a better name; if your light forensics didn’t detect anything and you suspect you need more, this is the time to go deeper; the natural progression is to look at the $MFT and memory
  • full forensics – nothing to add, cuz there is nothing to remove; you go de Saint-Exupérian a.k.a. ballistic on this one & analyze everything & analyze it twice
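
To make ‘light forensics’ concrete, here is a minimal collection sketch built only from the stock Windows commands quoted above. The output folder name and the exact artifact selection are my assumptions, not a fixed recipe – adjust to your environment:

    @echo off
    rem Light-forensics collection sketch: OS-only tools, minimal contamination.
    rem The output folder name is a hypothetical choice.
    set OUT=C:\triage_%COMPUTERNAME%
    mkdir %OUT%
    rem Full file listing, as quoted above
    dir /a/s c:\ > %OUT%\filelist.txt
    rem Network connections with owning PIDs
    netstat -ano > %OUT%\netstat.txt
    rem Running processes, verbose, plus parent/child & command lines via wmic
    tasklist /v > %OUT%\tasklist.txt
    wmic process get processid,parentprocessid,executablepath,commandline > %OUT%\processes.txt
    rem Simple autostart locations
    wmic startup get caption,command,user > %OUT%\startup.txt
    rem ...then copy %OUT% over to your box for further analysis

From there, Autoruns and the other Sysinternals tools are the natural next step if anything looks off.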

The conclusion is this:

  • In a typical IR scenario, utilizing tools that are adequate for your task/role is very important
  • You do a MINIMUM first
  • Only, and only if, it doesn’t deliver and you suspect you need to go deeper – then you go deeper; $MFT and memory can wait (notably: if you have tools at hand to retrieve a $MFT file list w/o much hassle – by all means, do so; it’s fast and better than a file list retrieved via the Windows API); see the escalation sketch after this list
  • In C.I.A. breaches you better do EVERYTHING you can think of
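
And when the minimum doesn’t deliver, the ‘go deeper’ step can stay cheap too. Below is a sketch of the raw-forensics escalation assuming 2014-era tooling (Volatility 2.x for memory, analyzeMFT for the $MFT); the image and file names are hypothetical:

    rem Memory first: identify the profile, then triage processes (Volatility 2.x syntax)
    vol.py -f memdump.raw imageinfo
    vol.py -f memdump.raw --profile=Win7SP1x64 pslist
    vol.py -f memdump.raw --profile=Win7SP1x64 psscan
    vol.py -f memdump.raw --profile=Win7SP1x64 malfind
    rem $MFT file list: faster and more complete than a Windows API directory walk
    rem (flag names per analyzeMFT as of ~2014 - check your version's options)
    analyzeMFT.py -f extracted_MFT -o mft_timeline.csv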

And to add a real-world scenario here: when I worked for a bank, we would sometimes have 400 infections in one go.

Employing full, or even raw, forensics doesn’t make sense ALL THE TIME. Often all you have to do is get a process list and a file list, kill the bad process, remove the drive-by exploit payload, reboot the system, and verify all is good after the reboot.
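
A minimal sketch of that routine for one host, assuming the bad process and its dropped file are already identified; the PID, file path, autorun value name and C2 address below are all hypothetical placeholders:

    rem Kill the confirmed-bad process (the PID is a placeholder)
    taskkill /pid 1234 /f
    rem Remove the dropped payload and its autorun entry (names are placeholders)
    del /f /q "%APPDATA%\evil.exe"
    reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v EvilEntry /f
    shutdown /r /t 0
    rem Run the two checks below once the box is back up - both should come back empty
    tasklist | findstr /i evil.exe
    netstat -ano | findstr 203.0.113.10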

No $MFT, no memory analysis. No full forensics.

Think of the Alexiou principle (worked through below):
1. What question are you trying to answer?
2. What data do you need to answer that question?
3. How do you extract that data?
4. What does that data tell you?
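
As a hypothetical walk-through of those four questions against the drive-by scenario above (the C2 address and PID are placeholders):

    rem 1. Question: is this host talking to the known-bad address?
    netstat -ano | findstr 203.0.113.10
    rem 2./3. Data needed & how to extract it: the owning process and its on-disk image
    tasklist /v /fi "pid eq 1234"
    wmic process where "processid=1234" get executablepath
    rem 4. What the data tells you: an odd, user-writable image path means you have
    rem    found the infection - kill it, clean up, reboot, verify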

