The Hour Between Dog and Wolf

10-15 years ago DFIR / EDR / Threat Hunting were not even a ‘thing’. Apart from law enforcement efforts and a few consulting companies… there were literally no companies doing this sort of work, and even those that did focused primarily on QIRA/QFI (today’s PFI), i.e. analyzing carding breaches, or on APT attacks targeting US gov and defense contractors.

At that time my big wishful thinking was that if I had at least a snapshot of volatile data from the system I wanted to analyze, I would already be better off than if I had to look at the contents of the HDD image alone.

Many in this field of course agreed, and more than that, often led us all by example, so in the years that followed we went through iterations of different solutions… from basic volatile data acquisition batch/bash scripts, to memory acquisition tools, then memory dumpers supported by parsing scripts, and we finally ended up with EDR solutions that feed our logs just-in-time and fulfill our needs very well today.

Are we better off though?

I am wondering…

The emergence of EDR evasions, living-off-the-land techniques, static EDR rule breakers, the reemergence of macro malware, new code injection techniques, PowerShell obfuscation, supported by exploits, fileless attacks, code signed with stolen certificates, supply chain attacks, etc. makes me believe that… EDR is going to be for the host what IDS/IPS ended up being for the network.

At first we all got power-drunk with firewall/IDS/IPS/proxy capabilities… a few years later though, many companies literally ignore alerts from these systems as they generate too much noise.

I see a similar trend with EDR.

By comparison… we are very used to AV generating many alerts (especially when AV is configured in a paranoid and/or ‘heuristic’ and/or reputation-check state), but AV itself is still a pretty high-fidelity business. And we often ignore AV alerts that are lower fidelity.

When EDR joined the alerting battleground we at first thought it was going to add a lot of value. After a few years of experience we now face the very same alert fatigue as we experienced with firewalls, IDS, IPS, AV, and proxies. Same old, same old. Just a different marketing spiel.

Along came Threat Hunting… a discipline that is hard to define, but one that somehow got its foundation solidly embedded in many companies thanks to the Mitre Att&ck Framework. Today’s definition of Threat Hunting is pretty much ‘the act of implementing Mitre Att&ck in your org’. It is actually far more serious than it sounds, because it is far more difficult than many people realize. You get to implement a lot of detection in your own environment. One that almost by definition is poorly managed, lacks a proper asset inventory, and where enforcement of rules is hard. It’s fun, but it’s VERY tough in practice. Yes, in practice we walk through all the known Mitre tactics and techniques, cross-reference them with our own org’s threat modelling/log situation, and then come up with new alerts/dashboards that help us cherry-pick the bad stuff…. hah… very easy… is it not…


Now we have tons of alerts from ‘high-fidelity’ alert sources: AV, IDS/IPS, proxy, WAF. Then we have middle/low-fidelity alerts from EDR/AV/IDS/IPS/WAF/proxy. Then we have very FP-prone alerts/dashboards from Threat Hunting activities.

What is next?

I do believe it’s time to go deeper and trace the user’s activity on a spyware level. Ouch. Yes. I said it. It’s a very difficult topic from a legal perspective, but imho it’s the only way to link users’ actions to the actual events we see on our blinkenlight boxes. If we can establish a solid link between a user clicking certain GUI elements, typing certain commands, credentials, etc., only then can we be sure that we can provide a context for the events we observe in our logs. I mean… seriously… if we need to spend a lot of resources trying to link multiple Windows Event Logs together to narrow down activity that could easily be traced to actual user behavior… then why not do it the opposite way? Follow the user’s behavior and track it at a high level.
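To illustrate the idea of stitching multiple log sources back into one per-user activity chain, here is a minimal Python sketch; the event records below are entirely hypothetical and heavily simplified stand-ins for real channels like Security/4688, Sysmon/1 or PowerShell/4104:

```python
from collections import defaultdict

# Hypothetical, simplified event records -- in reality these would be parsed
# out of Security (4688), Sysmon (1), PowerShell (4104) channels, etc.
events = [
    {"source": "Security", "event_id": 4688, "pid": 1234, "user": "alice",
     "detail": "cmd.exe spawned by explorer.exe"},
    {"source": "Sysmon", "event_id": 1, "pid": 1234, "user": "alice",
     "detail": "command line: cmd.exe /c whoami"},
    {"source": "PowerShell", "event_id": 4104, "pid": 5678, "user": "bob",
     "detail": "ScriptBlock: Invoke-WebRequest ..."},
]

def activity_by_user(events):
    """Group events per user so each alert carries user-level context."""
    timeline = defaultdict(list)
    for ev in events:
        timeline[ev["user"]].append((ev["source"], ev["event_id"], ev["detail"]))
    return dict(timeline)

timeline = activity_by_user(events)
# alice's chain now links the process-creation event with its command line
print(timeline["alice"])
```

This is of course the toy version; the point is that pivoting on the user first, rather than reassembling the user from scattered host events afterwards, turns the correlation problem inside out.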

It’s not the first time I refer to this topic, but I guess it finally has to be said: you can’t fully monitor the box if you don’t monitor its users’ activities _fully_.

Welcome to the world of next-gen, panopticon EDR solutions of tomorrow.

And for the record… take any advanced OCR/ICR dictionary software, desktop enhancer, IME, accessibility suite, etc. and you realize that, at least on the Windows platform, the problem of tracking/monitoring the UI, the data flow, and user interaction is already solved. Did I mention existing spyware solutions used in enterprise environments? EDR can be cool, but it will never be as cool as a proper keylogger…

Time to hook more APIs, EDR vendors…

Att&ck updates…

I like the recent update to Mitre Att&ck. For many reasons:

  • It finally covers the cloud as a separate entity!
  • It introduces cloud-specific techniques
  • And, most importantly, it breaks many assumptions

Many of us took Att&ck for granted. It is already there, it’s pretty established, and it doesn’t change much. New tactics and techniques are introduced on a regular basis, but in fairness — the changes have been very manageable.

This is why the recent update is so important. It emphasizes the volatile state of a framework that is still closer to in statu nascendi than to being fully formalized and complete.


We are so used to the OS platforms being Windows, Linux and macOS that we may find it surprising that it’s now a completely different game – the update includes the following platforms:

  • Linux
  • macOS
  • Windows
  • Office 365
  • Azure AD
  • Azure
  • GCP
  • AWS
  • SaaS

In terms of log sources, we now have:

  • File monitoring
  • Process monitoring
  • Process command-line parameters
  • Process use of network
  • API monitoring
  • Access tokens
  • Windows Registry
  • Windows event logs
  • Azure activity logs
  • Office 365 account logs
  • Authentication logs
  • Packet capture
  • Loaded DLLs
  • System calls
  • OAuth audit logs
  • DLL monitoring
  • Data loss prevention
  • Binary file metadata
  • Malware reverse engineering
  • MBR
  • VBR
  • Network protocol analysis
  • Browser extensions
  • AWS CloudTrail logs
  • Office 365 audit logs
  • Stackdriver logs
  • Netflow/Enclave netflow
  • Disk forensics
  • Component firmware
  • PowerShell logs
  • Host network interface
  • Network intrusion detection system
  • Kernel drivers
  • Application logs
  • Third-party application logs
  • Web application firewall logs
  • Web logs
  • Services
  • Anti-virus
  • SSL/TLS inspection
  • Network device logs
  • DNS records
  • Web proxy
  • Office 365 trace logs
  • Mail server
  • Email gateway
  • User interface
  • Windows Error Reporting
  • BIOS
  • Environment variable
  • Asset management
  • Sensor health and status
  • Digital certificate logs
  • Named Pipes
  • Azure OS logs
  • AWS OS logs
  • Detonation chamber
  • EFI
  • WMI Objects

These log sources are very wide in scope, and many of them now directly reference cloud-specific telemetry.

I would still like them broken down into even more granular pieces though, for instance: the Windows event logs. If you think of them as a single item you will fail to observe the following:

  • Not all Event Logs are enabled by default; many require specific audit policies to be turned on
  • Event Logs come from various buckets: System, Security and Application are the most popular, but there are more: PowerShell, BITS, WMI, Sysmon, etc. (you should follow @SBousseaden for more ideas) — in my opinion every Event Log Source/Event ID combo needs to be called out as a separate log source entry. It has to be configured, tested, and then used & monitored…
  • The metrics are as good as your input data; if you fly high, you miss the subtleties… and in the Blue Team game subtleties matter a lot; you do want to know how many systems are covered by each security control (whether provided by a vendor, or developed in-house)
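To make the “needs specific audit policies” point above concrete, here is a sketch of the configuration work that a single log-source entry implies, using the built-in wevtutil and auditpol tools (channel and subcategory names as they appear on recent Windows builds; verify them in your own environment):

```
:: Enable the PowerShell Operational channel (not guaranteed to be on everywhere)
wevtutil sl Microsoft-Windows-PowerShell/Operational /e:true

:: Inspect the channel's current state and size limits
wevtutil gl Microsoft-Windows-PowerShell/Operational

:: Turn on Process Creation auditing so Security/4688 events are produced at all
auditpol /set /subcategory:"Process Creation" /success:enable
```

Multiply this by every Event Log Source/Event ID combo across the fleet and the “it has to be configured, tested, and then used & monitored” remark stops sounding trivial.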

A few more observations about the log-source list as a whole:
  • Some of these logs do seem a bit redundant, e.g. Loaded DLLs vs. DLL monitoring
  • Windows event logs already cover PowerShell logs, yet the latter are listed separately
  • Named Pipes sounds like a weird log source — is it a subset of API monitoring(?)
  • API monitoring and System calls are hard to obtain; we have auditd on Linux to help, but full-blown API monitoring on Windows is hard to implement (performance hit)
  • I personally don’t like the term Detonation chamber – it suggests sandbox processing of some sort, but kinda misses the point of dynamic metadata extraction… while saying so, I can’t propose a better term, so I guess it’s probably the most accurate…
  • Some are not granular enough: Anti-virus logs alone require a dedicated book where every popular security solution is inspected for the crazy number of events it provides and in what form, let alone their fidelity (a distinction between various types of logs would help too)
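On the Linux side, the auditd route mentioned above is at least straightforward to set up; a minimal sketch for syscall-level process monitoring (the key name “exec_log” is an arbitrary label chosen here for illustration):

```
# Log every execve() on 64-bit syscalls - the closest auditd gets to
# process-creation monitoring out of the box
auditctl -a always,exit -F arch=b64 -S execve -k exec_log

# Query the resulting records by that key, with fields decoded
ausearch -k exec_log --interpret
```

Even so, execve auditing on a busy box generates a serious event volume, which is exactly the performance-hit caveat raised above.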

In any case… it’s a great update and a very exciting one. We have got 266 unique techniques defined as of today. It’s time to catch up!

Really great work from the Mitre Att&ck team – imho it’s a defining milestone for our industry.