Reusigned Binaries – Living off the signed land, Part 3

August 4, 2018 in Anti-*, Anti-Forensics, Compromise Detection, Forensic Analysis, Living off the land, LOLBins, Reusigned Binaries

When I wrote the first two parts of this series I used the title ‘reusigned binaries’. The title is of course very cheesy because it’s a portmanteau of ‘reuse’ and ‘signed’. How predictable…

Now… a lot has been going on in this space in the meantime – f.ex. the LOLBIN/LOLBAS ‘movement’ took off, culminating in many cool Twitter posts and an excellent repo of LOLBin ideas and recipes collected by @Oddvarmoe – but there is always… more.

Seasoned pentesters have been using reusigned binaries for a long time, so it’s not a new concept.

The most obvious case scenarios are:

  • use native OS programs and scripts that are signed and make them do something they are not supposed to do (–> ‘traditional’ LOLBins helping to break detection rules focusing on process trees, offering various ways of side-loading, persistence, bypassing filters, etc.)
  • use popular, legitimate (and often signed) dual purpose tools and abuse them (e.g. nmap, netcat, psexec, etc.)
  • use common, legitimate non-OS software components (that are NOT dual purpose, and are very often signed) to do something they are not supposed to (–>LOLBins/Other sections e.g. NVidia’s nvuhda6.exe, PlugX abusing lots of legitimate apps for sideloading purposes; vulnerable /unpatched/ drivers that are signed, load beautifully and can be exploited, etc.)

The last item is very interesting. I believe this is potentially a part of the arsenal (and should be a subject of some serious research) of any futuristic offensive security team.

Let me give you an example…

While looking at a large corpus of driver installers (note: not drivers in the sense of kernel-mode drivers, but more like drivers for peripherals like printers, scanners, etc.) I noticed the following:

  • many of them reuse the same libraries/executables across many products of the same vendor
  • some of them like to use many small modules, with functionality often split across many files/libraries (atomic operations per library)
  • there are often no security checks in place – programmers write the modules in the context of their projects w/o thinking too much about security implications (why should they…, right?)
  • code is often ‘funny’ and includes debugging/verbose flags, test code, offensive, or anti-tampering code, etc. (if you need specific examples, just google ‘Conexant Keylogger’, ‘Sony rootkit’, ‘Denuvo protection’, etc.)
  • the code is SIGNED – let me repeat that!!!

So… when I first started thinking about this I did some poking around, combing through libraries and lots of code, and I quickly found a number of signed DLL libraries that can be very easily abused to intercept keystrokes (i.e. build the foundation of a proper keylogger!). Typically, one has to create a shared memory section named in a specific way, create a window receiving messages, sometimes create an event/mutex, sometimes prepare some structure in memory, call an API named something along the lines of ‘InstallHook’, and… a keylogger is born.
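To make that more concrete, below is a minimal sketch of what such abuse could look like. Everything vendor-specific in it is a hypothetical placeholder – vendor_hook.dll, the VendorSharedSection section name, and the InstallHook export (with its guessed signature) all stand in for values that would have to be reverse engineered from the actual signed DLL:

/* Minimal sketch of the abuse pattern described above; all
   vendor-specific names below are hypothetical placeholders. */
#include <windows.h>

typedef BOOL (WINAPI *INSTALLHOOK)(HWND hwndReceiver); /* guessed signature */

int main(void)
{
    /* 1. Shared memory section named the way the (hypothetical) DLL
          expects it; the DLL maps it by name to pass data around */
    HANDLE hSection = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                         PAGE_READWRITE, 0, 4096,
                                         L"VendorSharedSection");
    if (!hSection) return 1;

    /* 2. Message-only window that will receive the hook's messages */
    HWND hwnd = CreateWindowExW(0, L"STATIC", L"receiver", 0, 0, 0, 0, 0,
                                HWND_MESSAGE, NULL, NULL, NULL);
    if (!hwnd) return 1;

    /* 3. Load the signed library and call its 'InstallHook'-style export */
    HMODULE hLib = LoadLibraryW(L"vendor_hook.dll");
    if (!hLib) return 1;
    INSTALLHOOK InstallHook = (INSTALLHOOK)GetProcAddress(hLib, "InstallHook");
    if (!InstallHook || !InstallHook(hwnd)) return 1;

    /* 4. Pump messages – keystrokes now arrive as window messages */
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}

Note where the heavy lifting happens: the hook itself lives inside the vendor’s signed code; the part the attacker has to write is just plumbing.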

You know where it is heading…

I believe that further analysis of ‘clean’ software – no matter how popular, as long as it is signed – will result in toolkits being developed that reduce the amount of ‘own’ offensive code by leveraging existing signed binaries to deliver the ‘bad’ part of the offensive work – the one that modern blue teams try so hard to find. Using signed binaries to do that is a real step forward as it may potentially fool many security products that so heavily rely on signing, reputation, whitelisting, and virustotaling. There is obviously a need to develop binder code that makes it all work, but I guess this can be achieved either via VBS, VBA, mshta, msbuild, powershell, c# snippets, or even instrumented execution of code that can be driven using signed binaries relying on scripts (e.g. windbg scripts, debugging via powershell, etc.). And of course there are the opportunities of using unsigned plug-ins, documented and undocumented command line switches, etc.

It’s perhaps not a very practical post – I’ve actually been thinking a lot about whether I should post a PoC of a keylogger based on one of the legitimate signed DLLs I found – but in the end I decided not to enter a dubious legal area I don’t have much experience with (I don’t want vendors to come after me, basically); still, I hope the post provides enough food for thought to carry on with your own experiments. All you need is a lot of clean signed samples (drivers, software), and some creative research ideas…

Win16 and Win32 API bad old habits call back…

July 1, 2018 in Anti-*, Living off the land, Random ideas, Reusigned Binaries

One of the taboo secrets of the Windows programming world is that everyone trusts the Windows APIs. They are old, reliable, and really very well tested. They behave themselves quite well. And even if a program crashes, the system is smart enough to clean up all those handles and memory allocations that were not properly freed, and deal with any other booboo in a graceful way. All in all, the system does the housekeeping for the naughty coders and returns itself to a proper state. This trust in system clean-up created an ecosystem where many programs just rely on the OS to fix programmers’ errors.

I still remember reading somewhere (and not challenging it back then!) that when certain types of Windows API calls fail it means that you have a far more serious problem on your hands – the system is probably already very unstable. And the same article was literally suggesting not to bother checking errors for APIs that are 100% trusted to work. So… over a few decades, many programming books, forums, etc. used example code snippets that work, but no one bothered to check the actual result of many API calls. The assumption is that they don’t fail. It’s quite wishful thinking, isn’t it? That is, many programs written over the last 30 years include API calls which by default are trusted 100% to work as expected, and no one checks the errors or anticipates… slightly different user input or intervention.

Times have changed: crashing the system is not that easy anymore, and living off the land is a thing. Yet the old habits remain, and the crazy amount of code based on all these old snippets is enormous and still present in many programs.

There are some really good examples of bad code practices still used by many software developers – f.ex. relying on environment variables to determine the system root or other paths is actually a very bad idea, despite being advocated as a standard industry practice on many forums; the thing is that the user can control these variables by manipulating them prior to running the application and force the latter to behave in ways that were not anticipated. That can make these apps lolbin-friendly.
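A minimal sketch of the problem below, assuming a hypothetical app that builds paths from %SystemRoot% – the variable is inherited from the parent process, so whoever launches the app decides what it contains:

/* Sketch of the fragile pattern: building a 'trusted' path from
   %SystemRoot%, an attacker-settable environment variable. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    wchar_t root[MAX_PATH];
    wchar_t dll[2 * MAX_PATH] = L"";

    /* Fragile: a parent process can set this variable to anything
       (SetEnvironmentVariable + CreateProcess) before launching us */
    if (GetEnvironmentVariableW(L"SystemRoot", root, MAX_PATH) == 0)
        return 1;
    wcscpy_s(dll, 2 * MAX_PATH, root);
    wcscat_s(dll, 2 * MAX_PATH, L"\\System32\\version.dll");
    wprintf(L"Would load: %s\n", dll);   /* may not be the real System32! */

    /* Safer: ask the OS directly instead of trusting the environment */
    wchar_t sys32[MAX_PATH];
    if (GetSystemDirectoryW(sys32, MAX_PATH))
        wprintf(L"Actual system directory: %s\n", sys32);
    return 0;
}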

Another example is the set of file system and file handling enhancements. The other day I mentioned that there are changes introduced in Windows 10 that allow APIs to go beyond the Maximum Path Length Limitation when working with file names. This is a significant problem for older apps that may stop seeing files that (ab)use this new feature. This affects day-to-day work, reversing, sandboxes, and everything else. Such cases may not be handled well proactively, as such changes are rarely ‘forward’-compatible.
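Planting such a file is trivial, by the way – the sketch below (assuming c:\test exists and you run a recent Windows 10) uses the \\?\ prefix to create a file whose fully qualified name goes well past the classic 260-character limit:

/* Sketch: create a file beyond MAX_PATH using the \\?\ prefix;
   older code with MAX_PATH-sized buffers may not even see it. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Each loop iteration adds one ~49-character directory level
       until the full path passes the 260-character mark */
    wchar_t path[1024] = L"\\\\?\\C:\\test";
    CreateDirectoryW(path, NULL);
    for (int i = 0; i < 6; i++) {
        wcscat_s(path, 1024,
                 L"\\AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
        if (!CreateDirectoryW(path, NULL) &&
            GetLastError() != ERROR_ALREADY_EXISTS) return 1;
    }
    wcscat_s(path, 1024, L"\\hidden_from_old_apps.txt");
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    CloseHandle(h);
    wprintf(L"Created a %zu-character path\n", wcslen(path));
    return 0;
}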

Funnily enough, long file names were already a problem anyway, even before that change was introduced.

If you want to run a simple test, go ahead and try to rename your fav. app’s .exe to the below file name (remove the new lines):

0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789
0123456789.exe

When you try to run it you may either see ‘Access denied’ (the system or your shell doesn’t like it) or, if it actually runs, you may often witness a crash. Here is a simple example of sysmon crashing when renamed to such a long filename and then executed:

Even debuggers and programs like Total Commander have a problem locating such files:

So… lots of subtleties here that I believe are not fully explored yet.

And then there are many more interesting APIs and ‘code patterns’ to look for.

In my last post I mentioned that sometimes coders don’t adhere to the exact API specification (as per MSDN) and as a result may be introducing limitations in their own programs that can then be abused once these discrepancies are discovered. The case I covered was very specific, but such subtleties are where many future problems may arise.

Enter GetModuleFileName (and its sibling GetModuleFileNameEx).

This innocent function is used by a lot of software, including security tools. When we read the MSDN description we can learn that if the buffer for the path is too small, the function will truncate the path so that it fills the buffer fully, will terminate it with a null character, and will set the last error to ERROR_INSUFFICIENT_BUFFER.

Let me say it again – the OS will truncate the path to fit it into the buffer. Why would they do that instead of insisting on returning an error? I really don’t know…

And since many coders assume their buffers are large enough – a typical allocated buffer can only squeeze in ~260 characters, and it’s very often allocated on the stack, not dynamically – they rarely check the error… and yes, checking the source code of many programs you will notice that such an approach (not checking if the buffer is too small, and using a locally allocated buffer) is actually very common, including examples on MSDN. This discussion on Stackoverflow is far better.

If you are wondering why programmers allocate only ~260 characters, the devil is in the API name. It refers to the module file name, yet the returned data is actually a fully qualified path. While the length of a file name is obviously limited, the full path can be much longer than ~260 characters.
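A quick sketch below (nothing vendor-specific, just documented Win32 behavior) demonstrates both the silent truncation and a robust pattern that grows the buffer until the full path fits:

/* GetModuleFileName trap: silent truncation with a small buffer,
   and the grow-until-it-fits pattern that handles long paths. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Deliberately tiny buffer: we get a truncated but valid-looking
       path; only the last-error value reveals the truncation */
    wchar_t small[16];
    DWORD n = GetModuleFileNameW(NULL, small, 16);
    if (n == 16 && GetLastError() == ERROR_INSUFFICIENT_BUFFER)
        wprintf(L"Truncated to: %s\n", small);

    /* Robust pattern: retry with a bigger buffer until the result fits */
    DWORD size = MAX_PATH;
    wchar_t *buf = NULL;
    for (;;) {
        wchar_t *tmp = realloc(buf, size * sizeof(wchar_t));
        if (!tmp) { free(buf); return 1; }
        buf = tmp;
        n = GetModuleFileNameW(NULL, buf, size);
        if (n == 0) { free(buf); return 1; }  /* hard failure */
        if (n < size) break;                  /* fits; n excludes the null */
        size *= 2;                            /* truncated; retry larger */
    }
    wprintf(L"Full path: %s\n", buf);
    free(buf);
    return 0;
}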

Consider a simple scenario where a trusted program is using GetModuleFileName and expects the path to be, say, max. 128 characters. The malicious user knows about it and renames the program to 140 characters, knowing that the API will truncate the module file path to one that points to another file, which could be malicious. If such a program uses the obtained information to e.g. copy its own binary to a destination folder (many apps do it), the copying function will refer to the other (malicious) file due to that truncation. Yup, once such an illogical path is retrieved, the legitimate program may actually copy a file based on its (assumed to be correct) file name to the destination directory. By manipulating the file name and the path’s length the attacker can force the trusted program to copy a malicious file instead!

In other words… by exploiting the fact that the API return value is not tested/handled properly, one can modify the logic of the program & potentially make it behave in a way not intended/anticipated by the author.

Using a shorter path as an example:

Consider c:\folder\good.exe being that ‘hacked’ long file name; the program, when executed, copies its own file to a destination directory, but since it allocates a much smaller buffer (15 characters instead of 19) to retrieve its full path, the result is a truncation that points to a different file name:

  • c:\folder\good.exe –> the actual path assumed by the program, but it will be truncated since the buffer is too small (short by 4 characters)
  • c:\folder\good –> the path actually seen by the program’s code due to truncation, which may make the program copy the ‘good’ file instead of ‘good.exe’

The below screenshot from Olly shows how it works:

The first call copies the full path into the 19-character buffer. The second copies the file name only partially and sets the error, which is ignored by many apps.
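If you don’t have a debugger handy, the same two calls can be reproduced with a few lines of C (a sketch, assuming the binary really sits at c:\folder\good.exe – 18 characters, 19 with the terminating null):

/* Replicates the two calls from the screenshot above. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char full[19], trunc[15];

    /* 19-character buffer: the complete path fits */
    GetModuleFileNameA(NULL, full, sizeof(full));
    printf("full:  %s\n", full);              /* c:\folder\good.exe */

    /* 15-character buffer, 4 short: silently yields c:\folder\good --
       a name that may belong to a completely different file */
    GetModuleFileNameA(NULL, trunc, sizeof(trunc));
    printf("trunc: %s (GetLastError=%lu)\n",
           trunc, GetLastError());            /* ERROR_INSUFFICIENT_BUFFER */
    return 0;
}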

I kinda focused on file copying, but the buggy code could be abused in many other ways. For example, if an application uses the returned path as an input to other API functions, these functions may end up reading the wrong files, presenting incorrect information in the GUI, writing incorrect information to logs, etc.

There are many APIs that can be abused in this or a similar way; I am not the first one to point it out – many vulnerability researchers have focused on this area for at least 20 years, and some do it in a very systematic way. They have found lots of programs (including the OS itself) that e.g. don’t check the registry data type when they use the Registry APIs; this often leads to type confusion vulnerabilities that can be abused and, in some cases, exploited.
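The registry case deserves a snippet of its own. Below is a minimal sketch (the key and value names are hypothetical) of both the buggy and the careful variant – note that RegQueryValueEx does NOT guarantee that string data is null-terminated:

/* Registry type-confusion sketch; key/value names are hypothetical. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    char buf[64];
    DWORD type = 0, size = sizeof(buf) - 1;

    if (RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\Vendor\\App",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;

    if (RegQueryValueExA(hKey, "InstallPath", NULL, &type,
                         (BYTE *)buf, &size) == ERROR_SUCCESS) {
        /* Buggy pattern: data used as a string regardless of its type;
           REG_BINARY data with no null byte sends printf past 'buf' */
        printf("Install path: %s\n", buf);

        /* Careful pattern: verify the type and terminate explicitly */
        if (type == REG_SZ) {
            buf[size] = '\0';   /* REG_SZ data may lack the terminator */
            printf("Verified path: %s\n", buf);
        }
    }
    RegCloseKey(hKey);
    return 0;
}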

The bottom double line is this:

  • if you can change the path/file name to a long enough one, you can at least crash some apps.
  • if you can change the path/file name to a long enough one, and the app is coded poorly, using the obtained name to make some decisions, you can influence these decisions.