The Hour Between Dog and Wolf

10-15 years ago, DFIR / EDR / Threat Hunting were not even a ‘thing’. Apart from law enforcement efforts and a few consulting companies… there were literally no companies doing this sort of work, and those who did focused primarily on QIRA/QFI (today’s PFI), i.e. analyzing carding breaches, or on analyzing APT attacks targeting the US government and defense contractors.

At that time my big wishful thinking was that if I had at least a snapshot of volatile data from the system I wanted to analyze, I would already be better off compared to having to look at the content of the HDD image alone.

Many in this field of course agreed, and more than that, often led us all by example, so in the years that followed we went through iterations of different solutions… from basic volatile data acquisition batch/bash scripts, through memory acquisition tools, then memory dumpers supported by parsing scripts, until we finally ended up with EDR solutions that feed our logs just-in-time and fulfill our needs very well today.

Are we better off tho?

I am wondering…

The emergence of EDR evasions, living-off-the-land techniques, static EDR rule breakers, the reemergence of macro malware, new code injection techniques, PowerShell obfuscation, exploit-supported and fileless attacks, code signed with stolen certificates, supply chain attacks, etc. makes me believe that… EDR is going to be for the host what IDS/IPS ended up being for the network.

At first we all got power-drunk on firewall/IDS/IPS/proxy capabilities… a few years later, though, many companies literally ignore alerts from these systems as they generate too much noise.

I see a similar trend with EDR.

By comparison… we are very used to AV generating many alerts (especially when AV is configured in a paranoid and/or ‘heuristic’ and/or reputation-check mode), but AV itself is still a pretty high-fidelity business. And we often ignore the AV alerts that are lower fidelity.

When EDR joined the alerting battleground, we at first thought it was going to add a lot of value. After a few years of experience, we now face the very same alert fatigue we experienced with firewalls, IDS, IPS, AV, and proxies. Same old, same old. Just a different marketing spiel.

Along came Threat Hunting… a discipline that is hard to define, but one that somehow got its foundation solidly embedded in many companies thanks to the MITRE ATT&CK framework. Today’s definition of Threat Hunting is pretty much ‘the act of implementing MITRE ATT&CK in your org’. It is actually far more serious than it sounds, because it is far more difficult than many people realize. You get to implement a lot of detection in your own environment, one that almost by definition is poorly managed, doesn’t have a proper asset inventory, and where enforcement of rules is hard. It’s fun, but it’s VERY tough in practice. Yes, in practice, we walk through all the known MITRE tactics and techniques, we cross-reference them with our own org’s threat modelling/log situation, and then come up with new alerts/dashboards that help us cherry-pick the bad stuff…. hah… very easy… is it not…
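To illustrate just the cross-referencing step, here is a toy sketch; the coverage data is a hypothetical example I made up, not a real org’s matrix:

```c
// A toy sketch (hypothetical, made-up coverage data): cross-reference
// ATT&CK technique IDs against the telemetry we actually collect and
// flag the gaps that need a new alert, dashboard, or log source.
#include <stdio.h>
#include <stdbool.h>

struct coverage {
    const char *technique;   // MITRE ATT&CK technique
    const char *log_source;  // telemetry that could detect it
    bool        have_logs;   // do we actually collect it?
};

int main(void)
{
    const struct coverage matrix[] = {
        { "T1059.001 (PowerShell)",     "PowerShell 4104 script blocks", true  },
        { "T1053.005 (Scheduled Task)", "Security 4698",                 true  },
        { "T1574 (Hijack Exec Flow)",   "Sysmon 7 (ImageLoad)",          false },
    };

    for (size_t i = 0; i < sizeof matrix / sizeof matrix[0]; i++)
        printf("%-28s %-32s %s\n",
               matrix[i].technique, matrix[i].log_source,
               matrix[i].have_logs ? "covered" : "GAP: needs work");
    return 0;
}
```

Multiply those three rows by a few hundred techniques and sub-techniques, each needing its own log source, tuning, and asset context, and you see why it’s tough.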

So…

Now we have tons of alerts from ‘high-fidelity’ alert sources: AV, IDS/IPS, proxy, WAF. Then we have medium/low-fidelity alerts from EDR/AV/IDS/IPS/WAF/proxy. Then we have very FP-prone alerts/dashboards from Threat Hunting activities.

What is next?

I do believe it’s time to go deeper and trace the user’s activity at a spyware level. Ouch. Yes. I said it. It’s a very difficult topic from a legal perspective, but imho it’s the only way to link the user’s actions to the actual events we see on our blinkenlight boxes. Only if we can establish a solid link between the user clicking certain GUI elements, typing certain commands, credentials, etc. can we be sure of providing context for the events we observe in our logs. I mean… seriously… if we need to spend a lot of resources trying to link multiple Windows Event Logs together to narrow down activity that could easily be tracked back to the actual user’s behavior… then why not do it the other way around? Follow the user’s behavior and track it at a high level.

It’s not the first time I’ve touched on this topic, but I guess it finally has to be said: you can’t fully monitor the box if you don’t monitor its users’ activities _fully_.

Welcome to the world of next-gen, panopticon EDR solutions of tomorrow.

And for the record… take any advanced OCR/ICR dictionary software, desktop enhancer, IME, accessibility suite, etc., and you realize that, at least on the Windows platform, the problem of tracking/monitoring the UI, the data flow, and user interaction is already solved. Did I mention the existing spyware solutions used in enterprise environments? EDR can be cool, but it will never be as cool as a proper keylogger…
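To make that concrete, here is a minimal sketch of the kind of primitive such software has relied on for decades, assuming nothing beyond plain Win32 (user32): a global low-level keyboard hook that tags each keystroke with the foreground window title. An illustrative toy, not a design for a real agent:

```c
// A minimal sketch, assuming plain Win32: a global low-level keyboard
// hook that tags each keystroke with the foreground window title.
// Illustrative toy only -- a real monitoring agent would also need
// consent, secure telemetry handling, and a legal review.
#include <windows.h>
#include <stdio.h>

static HHOOK g_hook;

// Invoked by the system for every keystroke, before the target app sees it.
static LRESULT CALLBACK KbdProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_KEYDOWN) {
        const KBDLLHOOKSTRUCT *k = (const KBDLLHOOKSTRUCT *)lParam;
        char title[256] = "";
        // Correlate the keystroke with the active window for user context.
        GetWindowTextA(GetForegroundWindow(), title, sizeof(title));
        printf("vk=0x%02lx window=\"%s\"\n", (unsigned long)k->vkCode, title);
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main(void)
{
    // WH_KEYBOARD_LL is system-wide and needs a message loop on this thread.
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, KbdProc,
                               GetModuleHandleW(NULL), 0);
    if (!g_hook) return 1;

    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```

A real product would pair this with SetWinEventHook and UI Automation to capture which controls are being clicked, which is exactly the plumbing the accessibility suites mentioned above already ship.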

Time to hook more APIs, EDR vendors…

Le coût du développement des capacités (the cost of developing capabilities)

How much does it cost to develop la capacité?

I took a stab at it because there is an opinion out there suggesting that delayed, limited, or otherwise responsible disclosure of certain ‘open source’ security tools will hurt attackers by costing them the time and money needed to develop their own.

I argue that this cost is low. Low enough to be negligible. I base my assumption on a purely technical assessment of the code development task. Let me clarify a bit: I am assuming that the aim is to replace the tool only, while the existing operators and the processes they follow are already established. From this point of view, I believe my technical approach is not far-fetched.

The time to develop capabilities is hard to assess. There are coders who are magicians of assembly and produce very high-quality code, with novel ideas, tricks, and solutions, and do it quickly. And then there are those who use RAD tools to develop quickly, without much flair, but who may actually cut the time a lot. The end result is often similar tho: the capability exists and can deliver the desired results.

In order to make it easier, I split the assumed coding task into a few categories:

  • atomic operations (you need them to build everything else, e.g. create a file with content)
  • utilities (you need these as building blocks to code more complex features, e.g. save a screenshot)
  • rich features (quite complex coding tasks that require more time to code and test, and often research, e.g. a VNC client)
  • very complex stuff (some evasions, but primarily vulnerability research that helps to develop 0days)

Additionally, I introduced an extra time (and cost) ‘penalty’ for writing in assembly and as position-independent code (PIC). As many argue, and I agree with them, such extra time is usually negligible and in some cases non-existent, but I aim to present the worst-case cost scenario.

Another assumption I make is that the coder has 3-5 years of experience: knows how to program, but may need to research new topics and learn by trial and error. Last, but not least: the assessment is very Windows-centric.

I didn’t list all the features, and I bet I missed some, so please send me feedback on what I missed and I will add it to the sheet.

This assessment is based on a 50 USD/hour rate (at roughly 2,000 working hours a year, that translates to about 100K USD/year). You can easily adjust it to any other hourly rate. It is important to mention that 100K USD/year is a lot of money, and in many countries this number should be much smaller, e.g. closer to 15-45K. As such, the final cost may be far lower.
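To make the arithmetic concrete, here is a minimal sketch of the back-of-the-envelope model. The hours per category are placeholders made up for illustration, not the numbers from the sheet; adjust them and the rate to match your own estimates:

```c
// A minimal sketch of the cost model. The hours per category are
// made-up placeholders, NOT the numbers from the sheet; adjust them
// and the rate to taste.
#include <stdio.h>

struct task { const char *category; double hours; };

int main(void)
{
    const double rate = 50.0;            /* USD per hour, as above        */
    const double asm_pic_penalty = 1.5;  /* assumed worst-case multiplier */

    const struct task tasks[] = {
        { "atomic operations",  40   },
        { "utilities",          120  },
        { "rich features",      400  },
        { "very complex stuff", 1000 },
    };

    double total_hours = 0;
    for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
        total_hours += tasks[i].hours;

    total_hours *= asm_pic_penalty;      /* apply the ASM/PIC penalty */
    printf("~%.0f hours => ~%.0f USD at %.0f USD/h\n",
           total_hours, total_hours * rate, rate);
    return 0;
}
```

With these placeholder numbers it prints ~2340 hours, i.e. ~117K USD at 50 USD/h; at a 15-45K/year salary the same work costs a fraction of that.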

Also, you may have a team of coders working on different parts of the project, and only code cherry-picked bits, reducing both the time and the cost of development. Finally, imho a bored 16-20yo coder can kill it (except for the VR part) in ~3 months for no salary at all, but putting a dollar value on it helps to make it tangible data in any argument about the cost of capabilities.

The latest version is shown below:

You can also download the sheet and play around with it yourself.

If you have any comments, please let me know.