
The 3 stages of 3ages

November 27, 2014 in Compromise Detection, Malware Analysis, Preaching

Quick Update

Just to clarify: this is a critique of IR processes that rely on a single way of doing things, which in certain circumstances is not the best; it may slow down your response time & give you a lot of unnecessary work. In other words: the Alexiou Principle (see below) is a good way of doing things. Doing full or raw forensics on the example 400 hosts would be very inefficient. This mainly applies to daily IR/SOC work, not consulting gigs.

Original Post

The Digital Forensics world is subject to trends the same way fashion is.

A long time ago everyone would just do the bit-by-bit & offline forensics and that would be enough. Then the ‘do not pull the plug’ idea came along and now no one dares to shut down the box until at least the volatile data is acquired. On the shoulders of volatile data came the notion that memory forensics is important, which culminated (as of 2014) in the phenomenal work of the Volatility team: an excellent tool and the best book about ‘all of it’, presented in the most digestible way ever.

Somewhere in the background a lot of research into the internals of NTFS was also done; it then got digested by many and finally converted into popular, often free applications. It’s actually a good time to be in DFIR, because the tools are out there and you can crack cases in no time. On top of that, memory and NTFS/$MFT forensics are probably the most attractive technical aspects of the DFIR world in general, due to their ever-changing complexity & simply speaking ‘it really takes time getting to understand how it all works, but it is very rewarding’ (c).

What could possibly go wrong?

The everlasting and omnipresent part of DFIR work is the word ‘context’, a.k.a. scope (if you are from the consulting or compliance world).

One thing I have observed over the last few years is a very strange trend which can be formulated as:

  • triage is now equal to memory & $MFT forensics.

If you can do it quickly, have the proper tools and know what you are doing – it may actually work.

BUT

I believe that it’s often an over-engineered solution to a much simpler problem.

Context is REALLY important. And it dictates what you do and how you do it. And I believe that the context is always driven by the character of the investigation.

Let’s make an attempt to describe the various ‘levels’ of depth one can reach while doing DFIR work. It all depends on… yes, you guessed it right – context (or scope).

  • Law Enforcement engaged / criminal case
    • Full-blown forensics with a major stress on accountability/logs/notes and chain of custody, applied to every possible device you can find at the crime scene
    • Almost always goes to court, or the possibility is pretty high
    • You are SUPERCAREFUL, because everything you do is going to be shown to law interpreters [a.k.a. lawyers :)]
    • You use a very specific, self-protective language to describe your findings
  • Confirmed compromise with more than one aspect of the C.I.A. triad affected (e.g. PCI space, hacking cases)
    • Almost identical to the above case, with one extra bit – full forensics for the scoped systems + raw or light forensics in the ‘close neighborhood’
    • Surprisingly, it does not go to court that often, but sometimes it does. Whatever you do – do it with the assumption that it WILL go to court one day. So you are still VERY CAREFUL and take care of the chain of custody and statements
    • You also use a very specific, self-protective language to describe your findings
  • Day-to-day work on the IR/SOC team
    • Your role is to keep the company secure and, literally speaking, to find & close incidents
    • Usually you do light forensics for all systems
    • Only, and only if, a deeper intrusion is confirmed are raw/full forensics used

Same as in school, this is all about grades.

Just to be precise here: I have used some terms above which require further explanation:

  • light forensics – focus on data that is ‘easy’ to acquire with OS-only tools and with minimal impact on the system (minimal contamination) – this is not your memory forensics / $MFT analysis yet; it is AV logs, “dir /a/s c: > evidence.txt”, “powershell” gong foo, “netstat”, “wmic /xyz/” variations, maybe later on autoruns and Sysinternals tools, etc., plus copying it all over to your box for further analysis (a minimal collection sketch follows this list)
  • raw forensics – maybe there is a better name; if your light forensics didn’t detect anything and you suspect you need more – this is the time when you need to go deeper; the natural progression is to look at the $MFT and memory
  • full forensics – nothing to add, cuz there is nothing to remove; you go de Saint-Exupérian a.k.a. ballistic on this one & analyze everything & analyze it twice
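
For illustration, below is a minimal sketch of what such a ‘light forensics’ collection pass might look like on a Windows host using only built-in tools; the output directory, the file naming and the analysis share are placeholders, not a prescribed layout.

  # light-triage.ps1 - an illustrative 'light forensics' collection pass (sketch only).
  # Built-in tools only; C:\triage and the analysis share below are placeholders.
  $EvidenceDir = 'C:\triage'
  New-Item -ItemType Directory -Path $EvidenceDir -Force | Out-Null
  $Out = Join-Path $EvidenceDir ("{0}_{1:yyyyMMdd_HHmm}.txt" -f $env:COMPUTERNAME, (Get-Date))

  '=== Processes ==='           | Out-File $Out
  tasklist /v                   | Out-File $Out -Append
  '=== Network connections ===' | Out-File $Out -Append
  netstat -ano                  | Out-File $Out -Append
  '=== Autostart entries ==='   | Out-File $Out -Append
  wmic startup list full        | Out-File $Out -Append
  '=== Full file listing ==='   | Out-File $Out -Append
  cmd /c dir /a /s C:\          | Out-File $Out -Append

  # copy the evidence over to your box for further analysis (placeholder share)
  # Copy-Item $Out '\\analysis-box\triage\'

The exact command set will vary; the point is that everything here runs with what the OS already ships and keeps the contamination of the box to a minimum.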

The conclusion is this:

  • In a typical IR scenario, utilizing tools that are adequate for your task/role is very important
  • You do the MINIMUM first
  • Only, and only if, it doesn’t deliver and you suspect you need to go deeper – then you go deeper; $MFT and memory can wait (notably: if you have tools at hand to retrieve the $MFT file list w/o much hassle – by all means, do so – it’s fast and it’s better than a file list retrieved via the Windows API)
  • In C.I.A. breaches you’d better do EVERYTHING you can think of

And to add a real-world scenario here: when I worked for a bank, we would sometimes have 400 infections in one go.

Employing full, or even raw, forensics doesn’t make sense ALL THE TIME. All you have to do is get a process list and a file list, kill the bad process, remove whatever the drive-by exploit dropped, reboot the system, and verify all is good after the reboot.

No $MFT, no memory analysis. No full forensics.
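
To make that concrete, a per-host response of that kind can be as small as the sketch below; the process name and dropped-file path are hypothetical stand-ins for whatever the campaign at hand actually drops, and a real run would of course use the indicators from your own triage.

  # Sketch only - $BadProcess and $DroppedFile are hypothetical placeholders.
  $BadProcess  = 'badupdater'                    # hypothetical malicious process name
  $DroppedFile = "$env:APPDATA\badupdater.exe"   # hypothetical dropped binary

  # 1. grab a process list and a file list first, as lightweight evidence
  New-Item -ItemType Directory -Path 'C:\triage' -Force | Out-Null
  Get-Process | Sort-Object Name | Out-File C:\triage\processes.txt
  cmd /c dir /a /s $env:APPDATA  | Out-File C:\triage\appdata_files.txt

  # 2. kill the bad process and remove the dropped file
  Get-Process -Name $BadProcess -ErrorAction SilentlyContinue | Stop-Process -Force
  Remove-Item $DroppedFile -Force -ErrorAction SilentlyContinue

  # 3. reboot, then re-run step 1 and confirm the process and the file are gone
  Restart-Computer -Force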

Think of the Alexiou Principle:
1. What question are you trying to answer?
2. What data do you need to answer that question?
3. How do you extract that data?
4. What does that data tell you?

Why breaches happen under IR teams’ noses

October 12, 2014 in Compromise Detection, Preaching

Having an IR team is not a guarantee of a breach-free life for the organization. In this short post I am trying to list very specific reasons why breaches happen despite IR teams being present and active. Instead of writing yet another smart-ass-who-knows-it-all post about ‘events being ignored’, ‘teams being underfunded’, etc., I am trying to list concrete issues that negatively affect the IR team’s work & contribute to breaches actually happening (note: ‘events are ignored’ is not a root cause; it is a result of problems that are rooted much deeper).

So, here it is – it’s obviously subjective so use at your own risk:

  • I think the fundamental problem is that IR teams don’t hunt, i.e. they don’t look at the data their organization generates: AV alerts, proxy traffic, DNS queries, etc.
    • IMHO, apart from looking at the alerts generated by security controls, this should be the major activity of any IR team
    • Triage should be a regular activity on every system; it sounds very difficult logistically, but at the end of the day all you need is something basic, e.g. did any Run key change, did any application appear in %APPDATA% – this can surely be automated company-wide with a few lines of a scripting language (see the sketch after this list); introducing such a control requires the power of influence, though [last point on the list below]
  • Instead:
    • They receive tons of emails daily – a few hundred emails a day is not uncommon, with 90%+ being absolutely useless.
    • They receive a lot of ‘threat intel’ feeds which they often have to parse manually and incorporate into their own security controls – these are important, but will never be more important than analysis of the internal data generated by the org.
    • They spend too much time evaluating ‘new’ software from vendors; eventually they end up being beta testers of the software instead of looking at the data.
    • They are often bound by the same rules as all other employees: ironically, a hacker who can download and use any tool imaginable is being fought with dir, netstat and wmic; sometimes Sysinternals tools (if they are allowed)
    • They are asked to socialize, network and participate in many corporate activities. The number of man-hours wasted in endless meetings is incredible.
    • They are often managed by people w/o the credentials to do the job – understanding IR requires skills from a large number of disciplines – and unfortunately it is not uncommon for the managers to be typical corporate climbers who have no passion for the job. They will also exercise their little authority to bring you down if you happen to step on their toes.
    • They are not allowed to work from home (some organizations do allow it, which is a huge benefit to the organization: working from home lets you use your home lab to analyze malware, do research, access resources banned by corporate policies, and freely network with others in the industry; it also allows you to really focus on analysis – this is probably the most important bit).
    • They work in an environment full of legacy applications. Old Java and enabled VBA are a major reason why infections happen – upgrading the environment should be really high on the C-level folks’ agenda.
    • They are often trained to overestimate the remediation capabilities of security controls e.g. antivirus software (see my post).
    • They often end up doing project management work, deploying solutions instead of actually using them. I would argue that a successful IR team needs separate roles for tool builders and tool users.
    • They rarely have the power to influence at the C-level. They end up whining to their peers in their cubicles and… nothing changes.
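
On the ‘few lines of scripting language’ point from the triage bullet above, a minimal sketch could look like the one below. The host list, the central share and the one-day lookback are assumptions; in practice this would ride on whatever remoting, GPO or EDR mechanism the org already has, and per-user registry hives would need extra handling.

  # Sketch only: dump Run keys and recently created executables under %APPDATA% per host.
  # 'hosts.txt' and the '\\soc-share\triage' output share are placeholders.
  $Hosts = Get-Content 'C:\triage\hosts.txt'

  Invoke-Command -ComputerName $Hosts -ScriptBlock {
      # classic autostart locations
      $runKeys = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run',
                 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run' |
          ForEach-Object { Get-ItemProperty -Path $_ -ErrorAction SilentlyContinue }

      # executables that appeared under %APPDATA% within the last day
      $newExe = Get-ChildItem $env:APPDATA -Recurse -Filter *.exe -ErrorAction SilentlyContinue |
          Where-Object { $_.CreationTime -gt (Get-Date).AddDays(-1) } |
          Select-Object FullName, CreationTime

      [pscustomobject]@{ Host = $env:COMPUTERNAME; RunKeys = $runKeys; NewAppDataExe = $newExe }
  } | Export-Clixml "\\soc-share\triage\runkey_appdata_$(Get-Date -Format yyyyMMdd).xml"

The value is not in the script itself but in diffing its output day over day across the fleet – new Run key entries or fresh executables in %APPDATA% stand out immediately.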

I don’t think breaches can be prevented 100% – as I argue in my other post, every single infection detected by AV is a compromise. The same goes for network alerts. Giving the IR team the tools and time to deal with all of them is incredibly important, so that these small fires can be extinguished quickly. And then give them even more time to hunt.

So… if you want to establish a successful IR program in your org, give your IR team the power to shrug off all the useless activities, kill useless emails at the source, train these guys like hell, give them monitored access to all security controls and, most importantly, let them be totally antisocial – but ensure their voice is heard at the C-level.