Not installing the installers

Looking at installers of goodware is quite boring. They do the right thing, at least most of the time, and there is not much to see there. However, if you add some scale and automation, you may actually find some value there, for both the Red and Blue sides of the fence.

The most popular installers for Windows are Nullsoft (NSIS) and Inno Setup (apart from MSI). Luckily, we have good decompilers available for both of them (innounp and 7z), so anyone wanting to explore the possibilities just needs to run these on a bunch of clean samples.
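If you want to do this at scale, a minimal sketch could look like the Python below. It assumes innounp and 7z are on the PATH, tells the two formats apart with a crude string search, and dumps everything into a per-sample folder; depending on the 7-Zip version you may or may not get the decompiled [NSIS].nsi script back.

```python
# Sketch only: assumes innounp and 7z are on PATH, and that a crude string
# search is enough to tell Inno Setup from NSIS apart.
import pathlib
import subprocess

def detect(sample: pathlib.Path) -> str:
    data = sample.read_bytes()
    if b"Inno Setup" in data:
        return "inno"
    if b"Nullsoft" in data:
        return "nsis"
    return "unknown"

def decompile(sample: pathlib.Path, outdir: pathlib.Path) -> None:
    outdir.mkdir(parents=True, exist_ok=True)
    kind = detect(sample)
    if kind == "inno":
        # innounp -x extracts the embedded files, including install_script.iss
        subprocess.run(["innounp", "-x", f"-d{outdir}", str(sample)], check=False)
    elif kind == "nsis":
        # 7z x extracts the embedded files (and, on some versions, the .nsi script)
        subprocess.run(["7z", "x", str(sample), f"-o{outdir}", "-y"], check=False)

if __name__ == "__main__":
    for sample in pathlib.Path("samples").glob("*.exe"):
        decompile(sample, pathlib.Path("decompiled") / sample.stem)
```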

The decompilation results are interesting for many reasons.

If the installer is signed, it may execute its installation script and bypass EDRs. I obviously have no idea if that is always the case, but if VT says it’s signed and ‘green’ by all AVs, the chances are high that whatever the sample does, it will be permitted to do so.

The opportunity this fact brings to RT is that some of the installers’ actions may deliver functionality that RT can abuse.

Many installers add a Run key. It’s a lame use case, but one could run such an installer, get all the settings in place via a trusted, signed binary, and then swap the executable referenced by the Run key with a payload of choice.
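For the Blue side of the same coin, a minimal Windows-only sketch could enumerate Run key entries and flag targets the current user could overwrite, i.e. the ones that could be swapped like that. The quoted-path parsing below is deliberately naive.

```python
# Windows-only sketch: list Run key entries and flag user-writable targets.
import os
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def run_entries():
    for hive, subkey in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, command, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            yield name, command
            index += 1

def target_path(command: str) -> str:
    # first quoted token, or first whitespace-delimited token, is the executable
    command = command.strip()
    exe = command[1:].split('"', 1)[0] if command.startswith('"') else command.split(" ", 1)[0]
    return os.path.expandvars(exe)

if __name__ == "__main__":
    for name, command in run_entries():
        exe = target_path(command)
        # os.access is a rough check on Windows (it ignores ACLs), good enough for a sketch
        if os.path.isfile(exe) and os.access(exe, os.W_OK):
            print(f"[swappable?] {name} -> {exe}")
```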

Another opportunity for RT is domain recycling. Many older installers refer to domains that no longer exist. By combing through the decompiled installation scripts you may find domains that you could re-use. It is quite possible that an old, now-defunct software developer’s domain got all the green marks from web proxy/IDS/IPS, even e-mail security vendors and VT, and that this verdict has never been updated. By recycling such a domain you may get a nice way to create a ‘clean’ C2 channel, or to deliver phish/malspam. And if you are very, very lucky, some people may still be using that old software. What if the software has an auto-update mechanism? These could form potential big bounty wins, using a legacy auto-update mechanism as a supply-chain attack.
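A quick-and-dirty sketch of that hunt: pull hostnames out of the decompiled scripts and flag the ones that no longer resolve. Keep in mind that NXDOMAIN alone does not mean the domain is free to register; that still needs a WHOIS/RDAP check. The directory name and file extensions below are just my assumptions.

```python
# Sketch: extract hostnames from decompiled installation scripts and flag
# the ones that no longer resolve via DNS.
import pathlib
import re
import socket

URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)", re.I)

def hostnames(script_dir: str):
    hosts = set()
    for path in pathlib.Path(script_dir).rglob("*"):
        if path.suffix.lower() not in (".iss", ".nsi", ".txt"):
            continue
        text = path.read_text(encoding="utf-8", errors="replace")
        hosts.update(h.lower() for h in URL_RE.findall(text))
    return hosts

def resolves(host: str) -> bool:
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for host in sorted(hostnames("decompiled")):
        if not resolves(host):
            print("[dead?]", host)
```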

DLL sideloading or LOLBin executable spawning via installers is also possible, either via a clever race condition, one-off opportunities, or by leveraging a GUI prompt that pauses the installer for a moment (enough time to swap files in a tmp folder). It really depends on the scenario, and you may not find a lot of such installers, but hey… it’s possible.
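If you want to look for that window, a simple polling watcher over %TEMP% while an installer runs is enough to see which binaries appear and when. A rough sketch (directory, extensions and timings are just my assumptions):

```python
# Sketch: poll a temp directory during an installation and report newly
# created .exe/.dll files, i.e. candidate windows for a file swap.
import os
import time

def snapshot(root: str) -> set:
    seen = set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith((".exe", ".dll")):
                seen.add(os.path.join(dirpath, name))
    return seen

def watch(root: str, interval: float = 0.5, duration: int = 120) -> None:
    baseline = snapshot(root)
    deadline = time.time() + duration
    while time.time() < deadline:
        current = snapshot(root)
        for new in sorted(current - baseline):
            print("[+] new binary:", new)
        baseline |= current
        time.sleep(interval)

if __name__ == "__main__":
    watch(os.environ.get("TEMP", "/tmp"))
```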

From a forensic perspective, decompilation of installation scripts gives us yet another way to discover clusters of ‘clean’ paths and file names. These can form a nice exclusion list for analysis. There is also a great opportunity to create an exclusion list for process parent-child relationships: many installers are ‘told’ to run some executable at the end of the installation, or to simply open a site in the default browser. Most sandboxes and EDRs are blind to this, and their analysis results often include lots of unnecessary artifacts that could potentially be excluded from such reports. For example, if the decompiled installation script tells us the installer opens the browser, the whole chain of events that follows could be excluded from the final report.
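A sketch of the path-harvesting part, based on the [Files] section of decompiled Inno Setup scripts (DestDir/DestName entries, with constants such as {app} left unresolved):

```python
# Sketch: harvest DestDir/DestName entries from the [Files] section of
# decompiled Inno Setup scripts to seed a 'clean path' exclusion list.
import pathlib
import re

DEST_RE = re.compile(r'(?:DestDir|DestName)\s*:\s*"([^"]+)"', re.I)

def clean_paths(script_dir: str) -> set:
    paths = set()
    for iss in pathlib.Path(script_dir).rglob("*.iss"):
        in_files = False
        for line in iss.read_text(encoding="utf-8", errors="replace").splitlines():
            stripped = line.strip()
            if stripped.startswith("["):
                in_files = stripped.lower() == "[files]"
                continue
            if in_files:
                paths.update(DEST_RE.findall(line))
    return paths

if __name__ == "__main__":
    for path in sorted(clean_paths("decompiled")):
        print(path)
```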

Ever wondered what the source of some process, service, or task running on a system is? Combing through decompiled installation scripts brings a lot of answers to this question. Even more, it provides an explanation for many command-line switches we see in process parent-child relationships. We may not know their meaning, but we may learn they are preprogrammed inside the installation scripts! In other words, we can build a nice list of ‘good command-line switches’ for specific processes.
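Same idea in code form, this time against the [Run] and [UninstallRun] sections, which carry Filename and Parameters pairs that map nicely onto such an allowlist:

```python
# Sketch: pull Filename/Parameters pairs from [Run]/[UninstallRun] sections
# of decompiled Inno Setup scripts to build a 'good command lines' list.
import pathlib
import re

FILENAME_RE = re.compile(r'Filename\s*:\s*"([^"]+)"', re.I)
PARAMS_RE = re.compile(r'Parameters\s*:\s*"([^"]*)"', re.I)

def run_entries(script_dir: str):
    for iss in pathlib.Path(script_dir).rglob("*.iss"):
        section = None
        for line in iss.read_text(encoding="utf-8", errors="replace").splitlines():
            stripped = line.strip()
            if stripped.startswith("["):
                section = stripped.lower()
                continue
            if section in ("[run]", "[uninstallrun]"):
                filename = FILENAME_RE.search(line)
                if filename:
                    params = PARAMS_RE.search(line)
                    yield filename.group(1), params.group(1) if params else ""

if __name__ == "__main__":
    for filename, params in run_entries("decompiled"):
        print(f"{filename} {params}".strip())
```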

The ‘open a browser at the end of the installation or uninstall’ scenarios are very useful for us too. We can use them to detect very specific events of users installing software that is outside of the acceptable use policy. Yes, we can use EDR or asset inventory tools for that too, but what if the software is portable? Any clue pointing to an install event is important.
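One possible, very naive way to do it: take the post-install/uninstall URLs harvested from the decompiled scripts and grep them out of proxy logs. The file names and the one-URL-per-line log format below are made up for the sake of the sketch.

```python
# Sketch: match hostnames harvested from decompiled scripts against proxy
# log lines (hypothetical file names and log format).
from urllib.parse import urlparse

def load_hosts(path: str) -> set:
    hosts = set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            host = urlparse(line.strip()).hostname
            if host:
                hosts.add(host.lower())
    return hosts

def find_install_hints(proxy_log: str, hosts: set):
    with open(proxy_log, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for host in hosts:
                if host in line.lower():
                    yield host, line.rstrip()

if __name__ == "__main__":
    hosts = load_hosts("post_install_urls.txt")
    for host, line in find_install_hints("proxy.log", hosts):
        print(f"[{host}] {line}")
```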

Finally, you could possibly write signatures/yara definitions for installation scripts that could help to detect different versions of the same software without the need to sandbox them.
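As a toy example, you could auto-generate such a signature from the [Setup] section of a decompiled Inno Setup script. The sketch below is purely illustrative (no escaping, no tuning) and simply prints a trivial string-based rule.

```python
# Toy sketch: derive a trivial string-based YARA rule from the [Setup]
# section of a decompiled Inno Setup script. Purely illustrative.
import re
import sys

KEYS = ("AppName", "AppVerName", "AppVersion", "AppPublisher")

def make_rule(iss_path: str, rule_name: str = "installer_version") -> str:
    text = open(iss_path, encoding="utf-8", errors="replace").read()
    strings = []
    for key in KEYS:
        match = re.search(rf"^{key}\s*=\s*(.+)$", text, re.M)
        if match:
            strings.append(match.group(1).strip())
    if not strings:
        raise SystemExit("no [Setup] strings found")
    lines = ["rule " + rule_name + " {", "  strings:"]
    for i, value in enumerate(strings):
        lines.append(f'    $s{i} = "{value}"')
    lines += ["  condition:", "    all of them", "}"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(make_rule(sys.argv[1]))
```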

I am sure there are more ideas out there.

Hijacking HijackThis

Long before endpoint event logging became the norm, it was incredibly difficult to collect information about popular processes, services, paths, CLSIDs, etc. Antivirus companies, and later sandbox companies, had tons of such metadata, but an average Joe could only dream about it.

This is where HijackThis came into play. At a certain point in history, lots of people were using it and posting its logs on forums – for hobbyist malware analysts to review. And since a HijackThis log has a very specific ‘look and feel’, it was pretty easy to parse. And to find.

In order to collect as many logs as possible, I wrote a simple crawler that would google around for very specific keywords, collect the results, then visit the pages, download them, and parse the results. Each session would end up with a file like this:

[Processes - Full Path names]
[Processes - Names]
[Directories]
[All URLs]
[Registry - Full Path names]
[Registry - Names]
[Registry - Values]
[BAD URLs]
[CLSIDs]
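For the curious, a rough approximation of that per-session parsing could look like the sketch below, assuming one plain-text HijackThis log per input file. It only covers a few of the buckets above, the mapping is loose, and it leans on naive regexes.

```python
# Sketch: bucket artifacts from plain-text HijackThis logs into sections
# roughly matching the output layout shown above.
import pathlib
import re
import sys

CLSID_RE = re.compile(r"\{[0-9A-Fa-f]{8}(?:-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}\}")
URL_RE = re.compile(r"https?://[^\s\"']+", re.I)
PATH_RE = re.compile(r"[A-Za-z]:\\[^\s\"']+")

def parse_log(text: str, buckets: dict) -> None:
    for line in text.splitlines():
        buckets["[CLSIDs]"].update(CLSID_RE.findall(line))
        buckets["[All URLs]"].update(URL_RE.findall(line))
        for path in PATH_RE.findall(line):
            buckets["[Processes - Full Path names]"].add(path)
            buckets["[Processes - Names]"].add(path.rsplit("\\", 1)[-1])
            buckets["[Directories]"].add(path.rsplit("\\", 1)[0])
        if re.match(r"O4\s*-\s*", line):  # autorun entries, loosely mapped
            buckets["[Registry - Values]"].add(line.strip())

if __name__ == "__main__":
    buckets = {name: set() for name in (
        "[Processes - Full Path names]", "[Processes - Names]", "[Directories]",
        "[All URLs]", "[Registry - Values]", "[CLSIDs]")}
    for log in pathlib.Path(sys.argv[1]).rglob("*.txt"):
        parse_log(log.read_text(encoding="utf-8", errors="replace"), buckets)
    for name, items in buckets.items():
        print(name)
        for item in sorted(items):
            print(item)
        print()
```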

There are plenty of uses for the collected data. One of the handy ones back then was a comprehensive list of CLSIDs: knowing these, you could incorporate them into a simple binary/string signature and search for them inside analyzed samples. If a given, specific CLSID was found, it was quite easy to ID the sample’s association or at least some of its features. Another interesting list of artifacts is rundll32.exe invocations. There are many legitimate ones, and it’s nice to be able to query them all and put them together on a ‘clean’ list. Of course, URLs are always a good source for downloads, and directories and paths, as well as registry entries and process/service lists, are handy for generating statistics on which paths are normal and which are not: a list of ‘known clean’ that could be a foundation for a more advanced version of Least Frequency Occurrence (LFO) analysis. Even browsing the file paths is an interesting exercise in itself; for example, it allowed me to collect information about many possible file names of interest (f.ex. those that could be used in anti-* tricks).
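The CLSID-as-signature idea fits in a few lines: scan samples for known CLSID strings in both ASCII and UTF-16LE form (registry-style GUID strings are often embedded as wide strings in PE files). The known_clsids.txt file name is just a placeholder.

```python
# Sketch: look for known CLSID strings inside samples, in ASCII and UTF-16LE.
import pathlib
import sys

def load_clsids(path: str) -> list:
    return [line.strip().upper() for line in open(path, encoding="utf-8") if line.strip()]

def scan(sample: pathlib.Path, clsids: list) -> list:
    data = sample.read_bytes().upper()  # bytes.upper() uppercases ASCII letters only
    hits = []
    for clsid in clsids:
        if clsid.encode("ascii") in data or clsid.encode("utf-16-le") in data:
            hits.append(clsid)
    return hits

if __name__ == "__main__":
    clsids = load_clsids("known_clsids.txt")
    for sample in pathlib.Path(sys.argv[1]).rglob("*"):
        if sample.is_file():
            for clsid in scan(sample, clsids):
                print(f"{sample}: {clsid}")
```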

I had a lot of ideas around that time on incorporating this research into my forensic analysis workflow. For instance, if we know certain paths are very prevalent, it kinda makes sense to exclude them from analysis. Same goes for other artifacts. And a twin idea from around that time was filelighting: it’s common for files in a software directory to be referenced by at least one of the other files. That is, if I find a file foo.bar inside a program directory, there is a high possibility that at least one of the other files – be it an executable or a configuration file – will reference that foo.bar file! It actually works quite well. The main deliverable of this idea was that if we can find orphaned files, they are suspicious. And, from a different angle, if we know which clusters belong to which software package, we can use that tree of self-referencing file names to eliminate them from analysis.
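The filelighting idea is trivial to prototype: for every file in a program directory, check whether its name shows up (as ASCII/UTF-8 or UTF-16LE bytes) inside any of the other files; whatever nobody references is an ‘orphan’. A memory-hungry but workable sketch:

```python
# Sketch: flag files whose names are not referenced by any other file in the
# same program directory. Loads everything into memory, so small dirs only.
import pathlib
import sys

def orphans(program_dir: str):
    files = [p for p in pathlib.Path(program_dir).rglob("*") if p.is_file()]
    blobs = {p: p.read_bytes().lower() for p in files}
    for candidate in files:
        name = candidate.name.lower()
        needles = (name.encode("utf-8"), name.encode("utf-16-le"))
        referenced = any(
            any(needle in blob for needle in needles)
            for other, blob in blobs.items() if other != candidate
        )
        if not referenced:
            yield candidate

if __name__ == "__main__":
    for path in orphans(sys.argv[1]):
        print("[orphan]", path)
```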

Times have changed, of course, and while these ideas may still have some value, the reality is that we live in a completely different world today.

In the end, I cannot say the database helped me a lot, but it was an interesting exercise, and since the data is quite obsolete by now, I decided to drop its content online. It’s not a very clean data set, mind you. You will find parsing errors, some HJT logs were truncated, some contained non-English characters, etc. Still, maybe you will find some use for it. Good luck!

Download it here.