
DefineDosDevice symbolic link trick

June 21, 2019 in Anti-Forensics, Archaeology, Malware Analysis

I don’t know who the original author of this trick is – I saw it being used by some malware a few years ago, and it was also discussed on the KernelMode forum and on StackOverflow. Reading McAfee’s paper about Process Reimaging, I suddenly remembered it.

How does it work?

With the DefineDosDevice API (the same API that is used by the subst command) we can create a new MS-DOS device name and map it to a new, non-existing file path. The main executable can then be moved through that device name to the path it is mapped to.

This little trick makes the original file ‘disappear’ from the system. Most process listing tools continue to map the running process to its original path, yet any attempt to access properties of the file itself ends up with nothing. This is because the process is running, but the file it was launched from is ‘not there’ anymore.

Let’s examine it step by step:

  • Create a foobar device name using DefineDosDevice and point it to \??\c:\test\test_hidden.exe.
  • Move the current process’ file, e.g. c:\test\test.exe, to \\.\foobar.

That’s it.
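The two steps above can be sketched with a couple of ctypes calls (Windows only – the foobar name and the c:\test paths are just the test values from this post; this is a sketch, not a hardened implementation):

```python
import ctypes
import sys

# DefineDosDevice flag: treat the target path as a raw NT-style path.
DDD_RAW_TARGET_PATH = 0x00000001

def hide_running_image(device_name, target_nt_path, image_path):
    """Sketch: map a new DOS device name to a non-existing path,
    then move the (running) image file through that device name."""
    kernel32 = ctypes.windll.kernel32
    # Step 1: foobar -> \??\c:\test\test_hidden.exe
    if not kernel32.DefineDosDeviceW(DDD_RAW_TARGET_PATH,
                                     device_name, target_nt_path):
        raise ctypes.WinError()
    # Step 2: move c:\test\test.exe to \\.\foobar,
    # i.e. to the path the device is mapped to.
    if not kernel32.MoveFileW(image_path, "\\\\.\\" + device_name):
        raise ctypes.WinError()

if sys.platform == "win32":
    hide_running_image("foobar",
                       r"\??\c:\test\test_hidden.exe",
                       r"c:\test\test.exe")
```

Note that DDD_RAW_TARGET_PATH is what makes the \??\ target path legal here – without it, DefineDosDevice would mangle the path the way subst does.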

In my test case I just renamed test.exe to test_hidden.exe, still inside c:\test. It could be any location really, including deeply nested directories that may be harder to inspect without forensic tools.

To find such a mapping, one has to use tools like WinObj – it shows the DOS device called foobar pointing to the .exe.

One can also launch the hidden file via \\.\foobar (a dedicated tool is needed, though).

And if you are wondering what Sysmon will see when we launch such a hidden file – luckily, it will link to the proper image path on the drive.

Last, but not least – we can create a device name that maps to an Alternate Data Stream too 🙂 e.g. \??\c:\test\test.exe:hidden. In such a case, the copy command can be used to copy files to the newly created location, e.g.:

  • copy test.exe \\.\foobar
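The ADS variant can be sketched the same way (again Windows only, with the example paths from above) – CopyFileW to the device name is the API equivalent of the copy command:

```python
import ctypes
import sys

# DefineDosDevice flag: treat the target path as a raw NT-style path.
DDD_RAW_TARGET_PATH = 0x00000001

if sys.platform == "win32":
    kernel32 = ctypes.windll.kernel32
    # Map foobar to an Alternate Data Stream of test.exe.
    if not kernel32.DefineDosDeviceW(DDD_RAW_TARGET_PATH, "foobar",
                                     r"\??\c:\test\test.exe:hidden"):
        raise ctypes.WinError()
    # Equivalent of: copy test.exe \\.\foobar
    # (third argument False = overwrite if the stream already exists)
    if not kernel32.CopyFileW(r"c:\test\test.exe", "\\\\.\\foobar", False):
        raise ctypes.WinError()
```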

Toying with inheritance

June 15, 2019 in Anti-Forensics, Malware Analysis

When you create a system object, e.g. a file, you can specify whether its handle can be inherited by child processes. We then just need to tell CreateProcess to duplicate the inheritable handles, and the child process can access them as well. It’s a very well-known and well-documented piece of functionality.

BUT

With that in mind, a simple idea was born: what if we create a file (open a handle) with one process, then write to the very same file from a different process (or processes) – using that inherited handle? The child process then doesn’t need to formally open the file, because the handle is… already there.

This is how it looks in practice:

We have got two test.exe processes here – 3340 creates the file and opens it for writing, then spawns 3584, which writes to and closes the file.
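A POSIX-flavoured sketch of the same split – the original post uses CreateProcess and Windows handles, but Python’s inheritable descriptors show the identical idea (file name and payload are made up for illustration):

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), "dropped.txt")

# Parent: create/open the file for writing, but never write to it.
fd = os.open(path, os.O_CREAT | os.O_WRONLY)
os.set_inheritable(fd, True)  # the equivalent of an inheritable handle

# Child: inherits the already-open descriptor and does the actual
# writing -- it never issues its own create/open request for the path.
child_src = f"import os; os.write({fd}, b'written by child'); os.close({fd})"
subprocess.run([sys.executable, "-c", child_src],
               close_fds=False, check=True)

os.close(fd)
print(open(path).read())  # prints: written by child
```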

I have not tested it, but I am wondering if such split handle handling (pun intended) could confuse any of the existing security solutions.

If a solution relies on any sort of dynamic per-process lookup tables that map handles to file names in real time (tables built by intercepting file creation requests), it would not be able to resolve the mapping for a spawned child process – the handle is already there, and the file creation operation never happened in that child. This is probably a rare case, but still…

Also, I believe most security solutions expect one process to manage the whole lifecycle of each created file. The typical pattern goes like this: malware is downloaded, malware runs, malware drops a file, malware executes it, and so on and so forth.

BUT

Could we, for example, write every N-th byte or N-th line of the dropped file using a different process, or a number of them? What if these writes happened at different times (avoiding temporal proximity analysis)? Could such operations be coherently put together and presented on a single timeline?
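That splitting idea can be sketched like this (a POSIX-flavoured illustration – os.pwrite is Unix-only, and the payload/offsets are made up): two children each write every other byte of the final file at its target offset, through the inherited descriptor, so neither writer ever creates or opens the file itself.

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), "payload.bin")
data = b"helloworld"

# Parent only creates the file and marks the descriptor inheritable.
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.set_inheritable(fd, True)

# Child 0 writes offsets 0, 2, 4...; child 1 writes offsets 1, 3, 5...
# Each write lands at its final position via the inherited descriptor.
for start in (0, 1):
    writes = "\n".join(
        f"os.pwrite({fd}, {data[i:i + 1]!r}, {i})"
        for i in range(start, len(data), 2))
    subprocess.run([sys.executable, "-c", "import os\n" + writes],
                   close_fds=False, check=True)

os.close(fd)
print(open(path, "rb").read())  # prints: b'helloworld'
```

In a real abuse scenario the children could of course run minutes or hours apart, which is exactly what makes the per-process timeline view misleading.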

On the UI level, EDRs and sandboxes are very process-tree oriented, and their timelines follow this UI paradigm. Browsing through the timeline of one process would surely not be enough to see the whole context of all the operations on a file managed by multiple processes. What’s more, the main process could be writing benign data to the file, knowing that these red herring operations are the very first thing analysts look at, while only the spawned children write the real juice to the target file – e.g. 1 hour later, long after the main parent process has already died.

There are probably other concurrency angles to explore here as well.

Obviously, forensics will always reveal the actual content of the file, but… over the last few years we have moved many of our processes away from heavy-duty forensics (hard drives), towards lighter forensics (volatile data), and then towards timeline analysis (EDR/threat hunting). Attacking the assumptions that these security solutions rely on is probably one of the first steps towards the more robust anti-timeline techniques we will see in the future.