
Att&ck updates…

October 24, 2019 in Mitre Att&ck

I like the recent update to Mitre Att&ck. For many reasons:

  • It finally covers the cloud as a separate entity!
  • It introduces cloud-specific techniques
  • and most importantly – it breaks many assumptions

Many of us took Att&ck for granted. It is already there, it’s pretty established, and it doesn’t change much. New tactics and techniques are introduced on a regular basis, but in fairness, the changes have always been very manageable.

This is why the recent update is so important. It emphasizes the volatile state of Att&ck, which is still closer to in statu nascendi than to a fully formalized and complete framework.

Proof?

We are so used to the OS platforms being Windows, Linux, and macOS that we may find it surprising that it’s now a completely different game. The update includes the following platforms:

  • Linux
  • macOS
  • Windows
  • Office 365
  • Azure AD
  • Azure
  • GCP
  • AWS
  • SaaS

In terms of log sources, we now have:

  • File monitoring
  • Process monitoring
  • Process command-line parameters
  • Process use of network
  • API monitoring
  • Access tokens
  • Windows Registry
  • Windows event logs
  • Azure activity logs
  • Office 365 account logs
  • Authentication logs
  • Packet capture
  • Loaded DLLs
  • System calls
  • OAuth audit logs
  • DLL monitoring
  • Data loss prevention
  • Binary file metadata
  • Malware reverse engineering
  • MBR
  • VBR
  • Network protocol analysis
  • Browser extensions
  • AWS CloudTrail logs
  • Office 365 audit logs
  • Stackdriver logs
  • Netflow/Enclave netflow
  • Disk forensics
  • Component firmware
  • PowerShell logs
  • Host network interface
  • Network intrusion detection system
  • Kernel drivers
  • Application logs
  • Third-party application logs
  • Web application firewall logs
  • Web logs
  • Services
  • Anti-virus
  • SSL/TLS inspection
  • Network device logs
  • DNS records
  • Web proxy
  • Office 365 trace logs
  • Mail server
  • Email gateway
  • User interface
  • Windows Error Reporting
  • BIOS
  • Environment variable
  • Asset management
  • Sensor health and status
  • Digital certificate logs
  • Named Pipes
  • Azure OS logs
  • AWS OS logs
  • Detonation chamber
  • EFI
  • WMI Objects

These log sources are very wide in scope, and many of them are now directly cloud-specific.

I would still like them broken down into even more granular pieces though. Take the Windows event logs, for instance: if you think of them as a single item, you will fail to observe the following:

  • Not all Event Logs are enabled by default; many require specific audit policies to be turned on
  • Event Logs come from various buckets: System, Security, and Application are the most popular, but there are more: PowerShell, BITS, WMI, Sysmon, etc. (you should follow @SBousseaden for more ideas). In my opinion, every combination of Event Log source and Event ID needs to be called out as a separate log source entry, because each one has to be configured, tested, and then used & monitored (see the sketch after this list)…
  • The metrics are only as good as your input data; if you fly too high, you miss the subtleties… and in the Blue Team game subtleties matter a lot; you do want to know how many systems are covered by each security control (whether provided by a vendor or developed in-house)
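
To make this more concrete, below is a minimal sketch (in Python) of what treating every Event Log channel/Event ID combination as its own log source entry could look like, each with its own lifecycle and coverage metric. The channels, Event IDs, host counts, and field names are my own illustrative assumptions, not anything the framework defines.

```python
# A minimal sketch (not anything Att&ck defines) of treating every Event Log
# channel / Event ID combination as its own log source entry, each with its own
# lifecycle (configured -> tested -> monitored) and its own coverage metric.
# The channels, IDs, counts, and field names below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LogSourceEntry:
    channel: str                 # e.g. "Security", "Microsoft-Windows-Sysmon/Operational"
    event_id: int                # e.g. 4688, 7
    description: str
    requires_audit_policy: bool = False   # not everything is enabled by default
    configured: bool = False
    tested: bool = False
    monitored: bool = False
    covered_hosts: int = 0       # how many systems actually ship this event
    total_hosts: int = 0

    def coverage(self) -> float:
        """Per-entry coverage; flying high over 'Windows event logs' hides gaps like this."""
        return self.covered_hosts / self.total_hosts if self.total_hosts else 0.0

inventory = [
    LogSourceEntry("Security", 4688, "Process creation", requires_audit_policy=True,
                   covered_hosts=1200, total_hosts=5000),
    LogSourceEntry("Microsoft-Windows-Sysmon/Operational", 7, "Image loaded",
                   covered_hosts=800, total_hosts=5000),
    LogSourceEntry("Microsoft-Windows-PowerShell/Operational", 4104, "Script block logging",
                   requires_audit_policy=True),
]

for entry in inventory:
    print(f"{entry.channel}/{entry.event_id} ({entry.description}): "
          f"coverage={entry.coverage():.0%}")
```

Nothing fancy, but once every channel/Event ID pair is its own row, the per-entry coverage numbers stop hiding behind a single “Windows event logs” checkbox.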

Also…

  • Some of these log sources do seem a bit redundant, e.g. Loaded DLLs vs. DLL monitoring
  • Windows Event Logs already cover PowerShell logs, yet the latter are listed separately
  • Named Pipes sounds like a weird log source; is it a subset of API monitoring?
  • API monitoring and System calls are hard to obtain; auditd helps on Linux, but full-blown API monitoring on Windows is hard to implement (performance hit)
  • I personally don’t like the term Detonation chamber – it suggests a sandbox processing of some sort, but kinda misses the point of dynamic metadata extraction… while saying so, I can’t propose a better term, so I guess it’s probably the most accurate…
  • Some are not granular enough: Anti-virus logs alone would require a dedicated book in which every popular security solution is inspected for the crazy number of events it provides, in what form, and with what fidelity (a distinction between the various types of logs would help too)

In any case… it’s a great update and a very exciting one. We have got 266 unique techniques defined as of today. It’s time to catch up!
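
If you want to reproduce that count yourself, a rough sketch like the one below works against MITRE’s public CTI repository. The URL and the revoked/deprecated filtering are my assumptions, and the number you get will drift as the bundle is updated, so treat it as a way of exploring the data rather than an official tally.

```python
# A rough sketch of counting unique techniques from MITRE's public CTI repo
# (a STIX bundle). The URL and the revoked/deprecated filtering are assumptions
# on my side; the number you get depends on the current state of the bundle.
import json
import urllib.request

URL = ("https://raw.githubusercontent.com/mitre/cti/"
       "master/enterprise-attack/enterprise-attack.json")

with urllib.request.urlopen(URL) as resp:
    bundle = json.load(resp)

techniques = [
    obj for obj in bundle["objects"]
    if obj.get("type") == "attack-pattern"
    and not obj.get("revoked", False)
    and not obj.get("x_mitre_deprecated", False)
]
print(f"{len(techniques)} active techniques in the current bundle")
```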

Really great work from the Mitre Att&ck team – imho it’s a defining milestone for our industry.

The story of an underTAG that tried to wear a mitre…

March 10, 2019 in Mitre Att&ck, Preaching, Random ideas, Sysmon, threat hunting

Today we tag everything with Mitre techniques. I like it, but I would also like a bit more flexibility. So I like to mix the ‘proper’ Mitre tags with my own (not only sub-technique tags, but also very specific tags of my own).

Why?

Say… you detect that System.Management.Automation.dll or System.Management.Automation.ni.dll is loaded into a process. Does it mean it is T1086: PowerShell? Or does it mean that the process is using a DLL that offers automation capabilities, and may not even be doing anything dodgy? I suspect (s/suspect/know/) that calling it T1086: PowerShell w/o providing that extra info loses a lot of context.

Why not call it what it is: <some magic prefix>: PowerShell Automation DLL?

Is loading WinSCard.dll always an instance of T1098: Account Manipulation, or T1003: Credential Dumping? Or is it just a DLL that _may_ be used for nefarious purposes, but most of the time is not (lots of legitimate processes use it; Sysmon log analysis is a nice eye-opener here)?

Why not call it <some magic prefix>: Smart Card API DLL?
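
Here is a minimal sketch of the dual tagging I have in mind, assuming a parsed Sysmon Event ID 7 (Image loaded) record as input. The “CUSTOM:” prefix, the Sigma-style attack.tXXXX tag format, the example paths, and the DLL-to-tag mapping are all mine and purely illustrative, not anything official.

```python
# A minimal sketch of dual tagging: map a loaded module to both the 'proper'
# Mitre technique tag and a more descriptive custom tag. Input is assumed to be
# a parsed Sysmon Event ID 7 (Image loaded) record; the "CUSTOM:" prefix and
# the mapping below are illustrative, not official.
from typing import Dict, List

DLL_TAGS: Dict[str, List[str]] = {
    "system.management.automation.dll":    ["attack.t1086", "CUSTOM: PowerShell Automation DLL"],
    "system.management.automation.ni.dll": ["attack.t1086", "CUSTOM: PowerShell Automation DLL"],
    "winscard.dll": ["attack.t1098", "attack.t1003", "CUSTOM: Smart Card API DLL"],
}

def tag_image_load(event: Dict[str, str]) -> List[str]:
    """Return every tag that applies to a Sysmon EID 7 event (ImageLoaded field)."""
    loaded = event.get("ImageLoaded", "").lower().rsplit("\\", 1)[-1]
    return DLL_TAGS.get(loaded, [])

# A legitimate-looking (hypothetical) process loading the automation DLL still
# gets the descriptive tag, so the analyst sees *what* was loaded, not only a
# technique ID. Paths are placeholders, elided for the example.
event = {
    "Image": r"C:\Program Files\InHouseInventoryTool\inventory.exe",
    "ImageLoaded": r"C:\...\System.Management.Automation.dll",
}
print(tag_image_load(event))
```

The point is not the mapping itself, it is that the descriptive tag travels with the event, so the analyst never has to guess why a technique ID was slapped on it.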

As usual, at the end of this tagged event, or cluster of events, there is a poor soul, the underdog of this story, who is tasked with analysing it. If our tagging is as conservative as the mindset of politicians who studied at Eton… then the quality of analysis, statistics, and actual response will be equally conservative.

And it is easy to imagine the confusion of analysts seeing events tagged with a vague name. For example, a net.exe command that accesses user/account data, plus the loading of WinSCard.dll, may make them assume that there is a cluster of ‘account manipulation’ events indicating an attack. Triaging such a vague cluster of events is time wasted… There is a response cost, and there is an opportunity cost at play here. The example is over-simplistic of course, but the devil is in the details. Especially for the analysts.

I’d say… given the way most events are logged today, often without much correlation and data enrichment at source, the event collection process should make every possible attempt to contextualize every single event as much as it can.

We can argue that combining data at its destination, in aggregation systems, SIEMs, collectors, or even ticketing systems, or on the way to them, is possible and actually desirable… but today’s reality is that we need that single event to provide as many answers as possible…

This is because we know that data enrichment at the destination is a serious pain in the neck and relies heavily on a lot of dependencies: up-to-date asset inventories, employee lists, DHCP mappings; then there is the lack of, or poor support for, nested queries and their poor performance, which forces us to use a lot of lookup tables that need to be kept up to date and require a lot of maintenance. And if we need to go back to the system that generated the event to fetch additional artifacts, or to enrich data manually, then we are probably still stuck in a triage process A.D. 2010…
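
To illustrate, here is a sketch of enrichment at the source/collector, under the assumption that the collector has cheap local access to an asset inventory and DHCP leases; every name and field in it is made up for the example.

```python
# A sketch of enrichment at collection time, so a single event already carries
# its context when it reaches the SIEM. The lookup sources (asset inventory,
# DHCP leases) and every field name here are illustrative assumptions.
from datetime import datetime, timezone

ASSET_INVENTORY = {"WKS-0042": {"owner": "j.doe", "criticality": "high", "site": "LHR"}}
DHCP_LEASES = {"10.1.2.3": "WKS-0042"}

def enrich_at_source(event: dict) -> dict:
    """Attach host/owner context to a raw event before it leaves the collector."""
    enriched = dict(event)
    host = DHCP_LEASES.get(event.get("src_ip", ""), event.get("hostname", "unknown"))
    enriched["hostname"] = host
    enriched.update(ASSET_INVENTORY.get(host, {}))
    enriched["enriched_at"] = datetime.now(timezone.utc).isoformat()
    return enriched

raw = {"src_ip": "10.1.2.3", "event_id": 4688, "image": r"C:\Windows\System32\net.exe"}
print(enrich_at_source(raw))
```

The lookups are trivial here on purpose: done at the source they are cheap, done at the destination they become the lookup-table maintenance problem described above.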

So… if we must tag stuff, let’s make it work for us and our analysts, and make it act as the true data enrichment tool it was meant to be… If an event, a cluster of events, or a detection based on them is not actionable, then it’s… noise.