Memory buffers for… initiated, part 3 – Frida(y) edition

Okay, we can dump heap buffers. What’s next?

What about a sandbox-like IOC generator & payload dumper? In its most basic version, we will run a sample and our handlers will spit out the names of all files being opened by the analyzed program. They will also dump the file buffers being read from and written to these files. And for good measure, we will try to translate some of the file creation flags/arguments passed to the APIs, so we get a more readable log.

To dump a list of files being opened, I will focus on handling the CreateFileA and CreateFileW APIs. I chose these APIs for a couple of reasons:

  • They are very commonly used and are easy to test
  • CreateFileA & CreateFileW exist inside kernel32.dll
  • CreateFileA & CreateFileW exist inside kernelbase.dll
  • you may hook them all, or you may want to pick just one set; of course, too many hooks are not good, so this duplication introduces its own challenges (see the sketch below)
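
To illustrate that last point, here is a minimal standalone Frida script (my own sketch, not part of the handlers, using the classic Module.findExportByName API) that attaches to both copies of CreateFileW. Note that a call entering via the kernel32 stub may fall through to kernelbase and get logged twice – which is exactly the duplication challenge mentioned above:

    // Sketch: hook both copies of CreateFileW (classic Frida API)
    ['kernel32.dll', 'kernelbase.dll'].forEach(function (mod) {
        var addr = Module.findExportByName(mod, 'CreateFileW');
        if (addr !== null) {
            Interceptor.attach(addr, {
                onEnter: function (args) {
                    // args[0] = lpFileName (UTF-16)
                    console.log(mod + '!CreateFileW("' + args[0].readUtf16String() + '")');
                }
            });
        }
    });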

To test the handlers (a copy is provided at the end of this post), just run this:

frida-trace -i RtlFreeHeap -i RtlAllocateHeap -i KERNEL32.DLL!CreateFileA -i KERNEL32.DLL!CreateFileW -i KERNEL32.DLL!WriteFile -i KERNEL32.DLL!ReadFile -f <exe> > log
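
A quick note on the mechanics: on the first run, frida-trace auto-generates a stub handler file for every traced function under the __handlers__ subdirectory (e.g. __handlers__\KERNEL32.DLL\CreateFileW.js), and these stubs are what you overwrite with the custom handlers. Depending on your Frida version, a stub is either a bare object literal or wrapped in defineHandler, roughly like this:

    // __handlers__/KERNEL32.DLL/CreateFileW.js - an approximation of the auto-generated stub
    defineHandler({
        onEnter(log, args, state) {
            log('CreateFileW()');
        },

        onLeave(log, retval, state) {
        }
    });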

Same as with the heap buffers, we will store file handles in a table at the time a file is created/opened. We will then look up these handles at the time of file reading and writing, so we can log actual file names in our logs, as opposed to just file handles. In my old sandbox I used a code injection that relied on NtQueryObject executed in the context of the target process at the time the Read/Write APIs were called, but then again – I had to inject my code into that process and hook the APIs before the malicious implant took over. Pretty complicated.
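
The actual handlers are in the download linked at the end of this post; as a minimal sketch of the idea (names like fileHandles are mine), a CreateFileW handler can grab the name in onEnter, check the result in onLeave, and park the mapping in a global table. Since frida-trace runs all handlers inside a single agent script, a property on globalThis is visible to the ReadFile/WriteFile handlers too:

    // CreateFileW.js - a sketch only, not the actual handler from the download
    defineHandler({
        onEnter(log, args, state) {
            // args[0] = lpFileName; remember it for onLeave
            state.fileName = args[0].readUtf16String();
        },

        onLeave(log, retval, state) {
            log('CreateFileW("' + state.fileName + '") => ' + retval);
            // -1 is INVALID_HANDLE_VALUE, i.e. the open failed;
            // failed opens are still worth logging as IOCs
            if (retval.toInt32() !== -1) {
                // the handle -> file name table shared with the other handlers
                globalThis.fileHandles = globalThis.fileHandles || {};
                globalThis.fileHandles[retval.toString()] = state.fileName;
            }
        }
    });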

Anyway… since we can map file handles to file names, we can now output the content of buffers/arguments to appropriate files (one file will store the list of files/objects, and the other one – the actual file buffers). And just for the fun of it, we will dump the file buffers in hex + will include the PID and TID, and of course the file name, in our log:
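
A ReadFile handler along these lines (again, a sketch under the same assumptions, not the shipped handler) resolves the handle back to a name and hex-dumps the buffer, tagging each record with Process.id and Process.getCurrentThreadId():

    // ReadFile.js - a sketch; BOOL ReadFile(hFile, lpBuffer,
    //   nNumberOfBytesToRead, lpNumberOfBytesRead, lpOverlapped)
    defineHandler({
        onEnter(log, args, state) {
            var table = globalThis.fileHandles || {};
            state.fileName = table[args[0].toString()] || '<unknown handle ' + args[0] + '>';
            state.buf = args[1];      // lpBuffer
            state.pBytes = args[3];   // lpNumberOfBytesRead
        },

        onLeave(log, retval, state) {
            var n = state.pBytes.isNull() ? 0 : state.pBytes.readU32();
            if (retval.toInt32() !== 0 && n > 0) {
                log('[' + Process.id + ':' + Process.getCurrentThreadId() + '] ' +
                    'ReadFile ' + state.fileName + ', ' + n + ' bytes');
                // cap the dump so huge reads do not flood the log
                log(hexdump(state.buf, { length: Math.min(n, 256), ansi: false }));
            }
        }
    });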

The list of objects (and the file handle to file name mapping) created using the CreateFile APIs will be stored inside the objects_list.txt file:

You may notice that some of the handles are 0xFFFFFFFF (INVALID_HANDLE_VALUE) – these files failed to open. It’s an interesting result – you will not only see existing files being accessed, but also those that don’t exist. Let me reiterate – these are calls to the CreateFile APIs to access _some_ files or directories that may not be present on the system. Pretty much like Procmon, but a bit easier to read and far easier to mod the output to our needs. Such a log’s value in security research cannot be overstated – it can help find references to non-existent files, phantom libraries, anti-debugging strings (e.g., device names), etc.

Finally, our attribute/flags resolution code works as well:

The screenshot below shows how this works in practice – the dwDesiredAccess value of 0x80000000 is translated to ‘GENERIC_READ’:
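
The resolution itself boils down to a mask-to-name lookup. A hypothetical, trimmed-down version of such a decoder (the table in the real handlers covers far more flags) could look like this:

    // Sketch of a dwDesiredAccess decoder; only the generic rights are shown
    var GENERIC_ACCESS = [
        [0x80000000, 'GENERIC_READ'],
        [0x40000000, 'GENERIC_WRITE'],
        [0x20000000, 'GENERIC_EXECUTE'],
        [0x10000000, 'GENERIC_ALL']
    ];

    function decodeAccess(value) {
        var names = GENERIC_ACCESS
            .filter(function (f) { return ((value & f[0]) >>> 0) !== 0; })
            .map(function (f) { return f[1]; });
        return names.length > 0 ? names.join(' | ')
                                : '0x' + (value >>> 0).toString(16);
    }

    // in the CreateFileW onEnter, args[1] is dwDesiredAccess:
    // decodeAccess(args[1].toInt32() >>> 0) => 'GENERIC_READ' for 0x80000000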

Now, before we get too excited about our ‘building our own sandbox’ experience… let me mention that there are caveats. One of them is that Frida doesn’t work all the time. For the benefit of this article, I tried to run my handlers over the pafish.exe executable and… it just got stuck:

I wanted to test pafish because it references a number of device names associated with guest OSes that help to detect virtualization:

and

– so I thought I could output all these referenced device names and show how cool the handlers can be. Then the main pafish.exe process got stuck, and that was about it. So, you have been warned.

Still, I have never worked with such a rapid prototyping & hooking engine in one. It’s amazing what you can do with a few lines of JavaScript.

You can download my test handlers from here.

Where do all the Cyber Tooth Fairies go?

One of my favorite TV series is Dexter. The early seasons were so-so, focused on cheap thrills – lame TV that you can see all over the place. As the series progresses though, we observe a shift in the narrative and we witness the true character of the main protagonist developing in front of our eyes. Dexter’s inner thoughts are full of curiosity and inquisitive reflections on life, and it’s hard not to relate. We all try to fit in and be a part of it, whatever that ‘IT’ is.

So far I have watched the series twice, and I know I will come back to it.

One of my fav parts of the series is the story of the Tooth Fairy Killer. Walter Kenney is in his 70s when he is introduced to the audience, and due to his serial killing activities he becomes one of Dexter’s targets. The Tooth Fairy Killer’s character is very interesting because… he is way past his prime, he never got caught, and… he is a somewhat lonely, old, yet still arrogant individual.

When we swap ‘killer’ with ‘cyber’ we bring this post back to our infosec world.

What happens or will happen to us, aging ‘serial cybers’?

I don’t know. We don’t hear much from cyber people who have already retired and are either enjoying their autumn years, or became wealthy quickly enough that working is no longer necessary and they can focus on hobbies, angel investing, whatever. Then there are the not-so-happily-ever-after retirees – those we end up hearing about on the news or through the grapevine. And it is not surprising to find out that many of those we hear of commit suicide, end up imprisoned, or live larger-than-life lives.

How many of us will end up there?

Putting the difficult, and somewhat inevitable, mental health and medical issues associated with aging aside, what is it that we want to do at the age of 70? Will we still work, thinking we are saving the world from cybercrime? What if futuristic laws and protocols make cybercrime almost obsolete? And if not, will we still care? Will we still hold true and honest the ideals from our 20s? Or, worse, will we become victims of some sophisticated future social engineering tricks targeting us – the elderly? Again, I don’t know the answer. I am not that old yet, yet questions like these start popping up in my head as I get older.

Our industry has expanded so quickly that it’s impossible to keep up. It’s now mandatory to specialize. The good ol’ corporate world entered the game, and we are being institutionalized like any other company department. Is the anniversary watch we get when we retire the only prize for all the effort, the all-nighters, and the opinions we so eagerly shared with others over those early cyber years?

Maybe it is the price of being in an industry that goes through the stages of maturity very quickly. From random and opportunistic to systematic and managed. Very rapidly. The final stage of the cyber process is already emerging today. I expect that in the next few years most of the ‘really’ technical jobs in cyber will gravitate toward specialized vendors – those providing classification, automation, orchestration, or whatever you call it, and whose value derives from frameworks like MITRE ATT&CK.

Forget manually crafted super-timelines, inspections of systems, bit-to-bit imaging, and file format analysis. Forget manual malware analysis. Not only will OS/cloud telemetry and forensic/sandboxing capabilities be provided out of the box, but they will be easy to use and already built in, and DFIR/RCE hacking as we know it will be over. Plus, more and more zerotrust-ish, docker-ish stuff, logs that can actually be used, and finally, more and more reliable MFA.

So, where do we land? Working for vendors is an easy answer. Client-side IT security effort coordinator, aka security vendor manager, is another. Security advisors? Security consultants? Tabletop exercise coordinators? Teachers at uni?

Or… perhaps cyber is here to stay for another 100 years? And maybe, hopefully… Cyber Tooth Fairies are only a problem for the bad guys? Because… there is always something ‘for the benefit of good’ to do?