Toying with inheritance

June 15, 2019 in Anti-Forensics, Malware Analysis

When you create a system object, e.g. a file, you can specify whether its handle can be inherited by child processes. We then just need to tell CreateProcess (via bInheritHandles) to pass these inheritable handles on, and the child process can access them as well. It's very well-known and very well-documented functionality.


With that in mind, a simple idea was born: what if we create a file (open a handle) with one process, then write to the very same file with a different process (or processes) – using that inherited handle? The child process doesn't then need to formally open the file, because… the handle is already there.

This is how it looks in practice:

We have two test.exe processes here – 3340 creates the file/opens it for writing, then spawns 3584, which writes to and closes the file.
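The same split can be sketched cross-platform in Python – descriptor inheritance at the OS level mirrors the Windows flags (os.set_inheritable plays the role of SECURITY_ATTRIBUTES.bInheritHandle, and close_fds=False the role of bInheritHandles=TRUE in CreateProcess). The file name and the payload below are invented for illustration:

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.gettempdir(), "inherit_demo.txt")

# Parent creates/opens the file and marks the descriptor inheritable
# (the analogue of SECURITY_ATTRIBUTES.bInheritHandle on Windows).
fd = os.open(path, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o644)
os.set_inheritable(fd, True)

# The child never opens the file itself - it writes through the
# inherited descriptor it receives from the parent.
child = (
    "import os, sys\n"
    "fd = int(sys.argv[1])\n"
    "os.write(fd, b'written by child')\n"
    "os.close(fd)\n"
)
subprocess.run([sys.executable, "-c", child, str(fd)], close_fds=False)

os.close(fd)
print(open(path).read())  # -> written by child
```

Note that the file creation event is attributed to the parent, while the write happens entirely inside the child – exactly the split described above.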

I have not tested it, but I wonder whether such split handle handling (pun intended) could confuse any existing security solutions.

If a solution relies on some sort of dynamic, per-process lookup table that maps handles to file names in real time (a mapping built by intercepting file creation requests), it would not be able to resolve the handle for a spawned child process – the handle is already there, and the child never issued a file creation operation. This is probably a rare case, but still…

Also, I believe most security solutions expect a single process to manage the whole lifecycle of each created file. The typical pattern goes like this: malware is downloaded, malware runs, malware drops a file, malware executes it, and so on and so forth.


Could we, for example, write every N-th byte or N-th line of code inside the dropped file using a different process, or a number of them? What if these writes happened at different times (avoiding temporal proximity analysis)? Could such operations be coherently put together and presented on a single timeline?
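A minimal sketch of that N-th-byte splitting, again using inherited descriptors (the file name and payload are invented; on Windows each child would seek and write over the inherited handle with SetFilePointer/WriteFile). Here two children each write every 2nd byte of the payload, and neither ever opens the file:

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.gettempdir(), "split_write_demo.bin")

# Parent creates the file; only the children write to it.
fd = os.open(path, os.O_CREAT | os.O_TRUNC | os.O_RDWR, 0o644)
os.set_inheritable(fd, True)

payload = b"ABCDEFGH"  # invented payload for illustration

# Each child writes every 2nd byte, starting at its own offset,
# through the inherited descriptor.
child = (
    "import os, sys\n"
    "fd, start = int(sys.argv[1]), int(sys.argv[2])\n"
    "for i, b in enumerate(sys.argv[3].encode()):\n"
    "    os.lseek(fd, start + 2 * i, os.SEEK_SET)\n"
    "    os.write(fd, bytes([b]))\n"
)
for start in (0, 1):
    subprocess.run(
        [sys.executable, "-c", child, str(fd), str(start),
         payload[start::2].decode()],
        close_fds=False,
    )

os.close(fd)
print(open(path, "rb").read())  # -> b'ABCDEFGH'
```

The two children run back-to-back here for brevity, but nothing stops them from being scheduled minutes or hours apart – which is precisely what would defeat temporal proximity analysis.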

At the UI level, EDRs and sandboxes are very process-tree oriented, and their timelines follow this paradigm. Browsing through the timeline of a single process would surely not be enough to see the full context of all operations on a file managed by multiple processes. What's more, the main process could write benign data to the file, knowing that these red-herring operations are the very first thing analysts look at, while only the spawned children write the real juice to the target file – e.g. an hour later, long after the main parent process has already died.

There are probably other concurrency angles to explore here, too.

Obviously, forensics will always reveal the actual content of the file, but… over the last few years we have moved many of our processes away from heavy-duty forensics (hard drives), toward lighter forensics (volatile data), and on to timeline analysis (EDR/threat hunting). Attacking the assumptions these security solutions rely on is probably one of the first steps toward the more robust anti-timeline techniques we will see in the future.
