The story of an underTAG that tried to wear a mitre…

Today we tag everything with Mitre Techniques. I like it, but I would also like a bit more flexibility. So, I like to mix the ‘proper’ Mitre tags with my own tags (not only subtechnique tags, but also my own, more specific tags).

Why?

Say… you detect that System.Management.Automation.dll or System.Management.Automation.ni.dll is loaded into a process. Does it mean it is T1086: PowerShell? Or does it mean that the process is using a DLL that offers automation capabilities? And may not even be doing anything dodgy? I suspect (s/suspect/know/) that calling it T1086: PowerShell w/o providing that extra info loses a lot of context.

Why not call it what it is? <some magic prefix>: Powershell Automation DLL?

Is loading WinSCard.dll always an instance of T1098: Account Manipulation, or T1003: Credential Dumping? Or is it just a DLL that _may_ be used for nefarious purposes, but most of the time is not (lots of legitimate processes use it; analysis of sysmon logs is a nice eye-opener here)?

Why not call it <some magic prefix>: Smart Card API DLL?
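To make it concrete, here is a minimal sketch in Python of what such double tagging could look like (the ‘HX:’ prefix, the mapping table, and the field names are all made up for illustration; this is not how any particular product does it):

```python
# Minimal sketch: attach both the 'official' Mitre tag and a more descriptive
# custom tag to a sysmon ImageLoad (Event ID 7) record. The 'HX:' prefix and
# the mapping below are invented for illustration only.
DLL_TAGS = {
    "system.management.automation.dll":    ("T1086: PowerShell", "HX: PowerShell Automation DLL"),
    "system.management.automation.ni.dll": ("T1086: PowerShell", "HX: PowerShell Automation DLL"),
    "winscard.dll":                        ("T1098: Account Manipulation", "HX: Smart Card API DLL"),
}

def tag_image_load(event: dict) -> dict:
    """Add a 'Tags' field combining the Mitre technique and the custom context tag."""
    dll = event.get("ImageLoaded", "").rsplit("\\", 1)[-1].lower()
    mitre_tag, custom_tag = DLL_TAGS.get(dll, (None, None))
    event["Tags"] = [t for t in (mitre_tag, custom_tag) if t]
    return event

# A benign-looking process loading the automation DLL still gets the extra
# context, so the analyst sees more than a bare 'T1086: PowerShell'.
print(tag_image_load({
    "Image": r"C:\Windows\System32\svchost.exe",
    "ImageLoaded": r"C:\Windows\Microsoft.NET\assembly\GAC_MSIL"
                   r"\System.Management.Automation\System.Management.Automation.dll",
}))
```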

As usual, at the receiving end of this tagged event, or cluster of events, there is a poor soul, the underdog of this story, who is tasked with analysing it. If our tagging is as conservative as the mindset of politicians who studied at Eton… then the quality of analysis, statistics, and actual response will be just as conservative.

And it is easy to imagine the confusion of analysts seeing events tagged with a vague name. For example, a net.exe command that accesses user/account data, plus the loading of WinSCard.dll, may make them assume that there is a cluster of ‘account manipulation’ events indicating an attack. Triaging such a vague cluster of events is time wasted… There is a response cost, and there is an opportunity cost at play here. The example is over-simplistic of course, but the devil is in the details. Especially for the analysts.

I’d say… given the way most events are logged today, often w/o much correlation and data enrichment at source, the event collection process should make every possible attempt to contextualize every single event as much as possible.

We can argue that combining data at its destination, in aggregation systems, SIEMs, collectors, or even ticketing systems, or on the way to them, is possible and, actually, desirable… but today’s reality is that we need that single event to provide as many answers as possible…

This is because we know that data enrichment at the destination is a serious pain in the neck and relies heavily on a lot of dependencies (up-to-date asset inventories, lists of employees, DHCP mappings; then there is the lack of, or poor support for, nested queries, their poor performance, and all of this forces us to use a lot of lookup tables that need to be kept up to date and require a lot of maintenance). And if we need to go back to the system that generated the event to fetch additional artifacts, or enrich data manually, then we are probably still stuck in a triage process A.D. 2010…
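To illustrate that dependency chain, here is a tiny, hypothetical sketch in Python of what destination-side enrichment tends to look like; every field we want to add hangs off yet another table that has to exist and be current (all names and values below are made up):

```python
# Hypothetical destination-side enrichment: each added field depends on yet
# another lookup table being present and current. All data here is made up.
ASSET_INVENTORY = {"WKS-0042": {"owner": "jdoe", "site": "London"}}   # often stale
DHCP_LEASES     = {"10.1.2.33": "WKS-0042"}                           # a snapshot only

def enrich(event: dict) -> dict:
    host  = DHCP_LEASES.get(event.get("SourceIp", ""))    # breaks on expired leases
    asset = ASSET_INVENTORY.get(host or "", {})            # breaks on unregistered hosts
    event["Hostname"] = host or "UNKNOWN"
    event["Owner"]    = asset.get("owner", "UNKNOWN")
    return event

print(enrich({"SourceIp": "10.1.2.33"}))   # enriched...
print(enrich({"SourceIp": "10.9.9.9"}))    # ...until the lookup tables lag behind reality
```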

So… if we must tag stuff, let’s make it work for us and our analysts, and make it act as the true data enrichment tool it was meant to be… If the event, the cluster of events, or the detection based on them is not actionable, then it’s… noise.

Excelling with sysmon configs

Writing your own sysmon config is a painful exercise. Well, maybe not if you start from scratch and rely only on your own research, because then it’s an organic growth that you fully control.

Sooner or later you will reach the end of your creative ideas though… and will start borrowing ideas from others. You will then want to compare your config against others.

You can find an existing tool that does it for you (recommended), write a proper parser (recommended), or try to cheat and use Excel 😉

Despite it looking like an impossible task, Excel can do a pretty good job of extracting rules from a sysmon config. We just need to use a bunch of formulas, and in the end we can ‘visualize’ the data using e.g. a pivot table like the one shown here:

or this:

From there, it’s not a big leap to comparing multiple configs, or even merging them in Excel (I know, I will burn in hell for saying that!).
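And if Excel is a bridge too far, the ‘proper parser’ route mentioned above doesn’t have to be much work either. Here is a rough, unpolished sketch in Python (the output columns and the CSV format are my arbitrary choices; nested/compound <Rule> groups are not expanded):

```python
# Rough sketch of a 'proper parser': flatten a sysmon config into rows
# (event type, onmatch, rule name, field, condition, value) that can be
# pivoted, diffed, or merged. Output format is an arbitrary choice, and
# compound <Rule> groups are not expanded in this sketch.
import csv
import sys
import xml.etree.ElementTree as ET

def extract_rules(config_path):
    root = ET.parse(config_path).getroot()
    for filtering in root.iter("EventFiltering"):
        for event_type in filtering.iter():
            onmatch = event_type.get("onmatch")
            if onmatch is None:            # not a rule container (e.g. RuleGroup)
                continue
            for rule in event_type:
                yield {
                    "event_type": event_type.tag,
                    "onmatch":    onmatch,
                    "rule_name":  rule.get("name", ""),
                    "field":      rule.tag,
                    "condition":  rule.get("condition", "is"),
                    "value":      (rule.text or "").strip(),
                }

if __name__ == "__main__":
    writer = csv.DictWriter(sys.stdout, fieldnames=[
        "event_type", "onmatch", "rule_name", "field", "condition", "value"])
    writer.writeheader()
    for row in extract_rules(sys.argv[1]):
        writer.writerow(row)
```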

Anyways… if you are interested in doing a similar analysis yourself, you can have a look at this workbook. It’s just one of many ways this can be done, and there is plenty of room for improvement.

And if you are wondering what config I analyzed with this ‘tool’, it is the one from ionstorm (kudoz!) & you can download it from here.