Anti-* techniques refresh A.D. 2019

August 3, 2019 in Anti-*

Old-school malware used to detect reverse engineering tools by looking for artifacts created by this type of software. The most common artifacts include process names, DLL names, mutexes, files, Registry entries, and window classes/titles. It’s actually trivial to catch Sysinternals tools, Wireshark, OllyDbg, IDA, etc. by using simple Windows API calls that find a window with a specific class/title…
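The classic trick can be sketched as a simple lookup: enumerate top-level windows and match their classes against a list of known tool artifacts. This is a minimal, platform-neutral sketch; on Windows the (class, title) pairs would actually come from EnumWindows plus GetClassName/GetWindowText (e.g. via ctypes), and the artifact list here contains just two well-known examples.

```python
# Known window classes of popular analysis tools (illustrative subset).
# "OLLYDBG" is OllyDbg's main window class; "PROCMON_WINDOW_CLASS" is
# used by Sysinternals Process Monitor.
KNOWN_ARTIFACTS = {
    "OLLYDBG": "OllyDbg",
    "PROCMON_WINDOW_CLASS": "Process Monitor",
}

def detect_tools(windows):
    """windows: iterable of (class_name, title) tuples, e.g. as gathered
    by an EnumWindows callback. Returns the names of detected tools."""
    hits = []
    for cls, _title in windows:
        if cls in KNOWN_ARTIFACTS:
            hits.append(KNOWN_ARTIFACTS[cls])
    return hits
```

Real-world lists are far longer, which is exactly the point the next paragraph makes.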

Long, ever-growing lists of ‘interesting’ window classes/titles used by these tools have been circulating within the cracking/malware community for many years and are kind of a standard now. So standard, in fact, that they sometimes include artifacts as old as those created on Windows 9x (e.g. SoftICE references that are obsolete today).

Anyways….

I’ve recently been thinking about all these well-known tricks and it suddenly hit me that we don’t really hear much about software targeting newer tools on our scene:

It is handy to review these tools from an attacker’s perspective – we may be able to collect additional data points that can easily be converted into Yara sigs, etc. And of course, this is a new class of old-is-new-again tricks that may be out there and that we are just not focusing on finding yet – a.k.a. potentially missing them.

IDA’s Qt windows seem to be hard to spot using your standard window enumeration APIs — the drawing routines are all internal to Qt and there are no native window primitives used by the class other than a generic window belonging to the class Qt5QWindowIcon. Still, we can query its window text, and if it contains the .idb or .i64 strings (which refer to IDA database file extensions), chances are high that IDA is running.
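The title check itself is trivial. A minimal sketch, assuming the titles have already been gathered (e.g. via GetWindowText on windows of class Qt5QWindowIcon):

```python
def looks_like_ida(title: str) -> bool:
    """Heuristic: IDA's window title usually contains the open database
    name, whose extension is .idb (32-bit) or .i64 (64-bit)."""
    t = title.lower()
    return ".idb" in t or ".i64" in t
```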

Ghidra is Java-based, so its window classes are Java-related, e.g. SunAwtDialog or SunAwtFrame. The window titles will of course reveal references to the program name, e.g. Ghidra: <project name>.

PE-sieve is a command-line tool, so there are no windows created, but it can still be spotted by looking at the process list. Any process with pe-sieve in its name should be a red flag.
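The process-list check is a one-liner once you have the names. In this sketch the names are passed in as plain strings; on Windows they would come from CreateToolhelp32Snapshot/Process32Next or EnumProcesses:

```python
def find_pe_sieve(process_names):
    """Return any running process whose name mentions pe-sieve
    (e.g. pe-sieve32.exe, pe-sieve64.exe)."""
    return [p for p in process_names if "pe-sieve" in p.lower()]
```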

Detect It Easy (DiE) is written in Qt and, same as IDA, doesn’t use a native window hierarchy (just one class, QWidget). Still, its window title reveals the name of the program, e.g. Detect It Easy 1.01 or Die.

WinDbg with Time Travel Debugging (TTD) relies heavily on a bunch of DLLs that will be loaded into a target process:

  • ttdloader.dll
  • ttdplm.dll
  • ttdrecord.dll
  • ttdrecordcpu.dll
  • ttdwriter.dll

Detecting the presence of any of these should work as a neat anti-debug trick. (Note: I have not explored it enough yet, but it would seem that ttdrecordcpu.dll and ttdwriter.dll are always loaded into a debugged process; the others are helper libraries and may not be present in a debuggee’s address space; need to run more tests.)
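A sketch of the check, assuming the module names of the current process have been collected elsewhere (real code would walk the PEB loader list or call EnumProcessModules/GetModuleHandle):

```python
# The TTD-related DLLs listed above.
TTD_DLLS = {
    "ttdloader.dll", "ttdplm.dll", "ttdrecord.dll",
    "ttdrecordcpu.dll", "ttdwriter.dll",
}

def ttd_present(loaded_modules):
    """True if any TTD DLL appears among the loaded module names."""
    return any(m.lower() in TTD_DLLS for m in loaded_modules)
```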

x64dbg is a great debugger and it’s gaining more and more users as it breaks Olly’s hegemony when it comes to user-mode code analysis – it offers a debugger for both 32- and 64-bit programs. And since it was written in Qt as well, it kinda suffers from the same detection weaknesses as the other programs I described above. Still, the window class Qt5QWindowIcon and the window title x32dbg or x64dbg give it away. Same goes for process names.

Fakenet-NG is a nice local network redirector. When it’s running, a service called WinDivert xxx is in operation, so that’s one way to detect it. Others may include spotting the boilerplate file content that is delivered on monitored ports — if an analyst forgot to edit these files, the content returned by the local server is predictable and can be identified as a default FakeNet reply.
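Both tells can be combined into one heuristic. In this sketch the service names would come from EnumServicesStatus (or `sc query`) and the HTTP body from a probe request; treating a "FakeNet" marker in the reply as a giveaway is an assumption based on the unedited boilerplate files shipped with the tool:

```python
def fakenet_suspected(service_names, http_body=""):
    """Heuristic FakeNet-NG check: a WinDivert service is running,
    or a probed port returned the default boilerplate reply."""
    if any("windivert" in s.lower() for s in service_names):
        return True
    return "fakenet" in http_body.lower()
```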

As for Wireshark, there are tons of ways to detect it: the filenames in the default install directory, the Registry entries, the NPCAP/WinPCAP driver/service, the window class/title, the file extensions it takes over, etc. Notably, newer versions of Wireshark also use Qt, so you can look for a Qt5QWindowIcon class with the title The Wireshark Network Analyzer.

Sysmon and EDRs are a completely different category. If you see them running, you need to rely more on lolbins and/or other trickery (e.g. common whitelisting points, i.e. directories whitelisted by EDRs/analysts who often rely on ‘standard’ configs like SwiftOnSecurity’s). There is a growing body of knowledge that focuses on bypassing EDRs and it’s just a matter of time before it becomes a de facto part of attackers’ toolkits. Bugs, clever bypasses, code patching, etc. are on the rise. It’s also time we created a curated list of artifacts that EDR tools give away: program locations, running processes, Registry keys, services, etc.

For obvious reasons, I am not listing all the artifacts (to make it a little bit harder for the bad guys), but these detection capabilities are out there and it’s good to keep them in mind. Not only can they help malware detect defenders’ tools, they may also be useful for sandbox vendors / SOC analysts to identify a sample’s behavioral traits.

I guess, the Cat and Mouse game continues?

Moar and Moar Agents – sthap!

July 27, 2019 in EDR, Preaching

$Vendors love agents.

  • One does the AV
  • One does the DFIR
  • One does the EDR
  • One does the CIDS
  • One does the DLP
  • One does the FIM
  • One does the IAM
  • One does the SSO
  • One does the Event Forwarding
  • One does the Asset Inventory
  • One does the Client Proxy
  • One does the Managed Updates
  • One does the Vulnerability Management
  • One does the Employee Monitoring, on demand
  • One does the Conferencing
  • etc.

Some claim they are agent-less, but under the hood they use WMI, psexec, GPO, SCCM, etc.

Every single agent adds to the list of events that are generated and collected by the system, and often by other agents. Every single one steals CPU, RAM, and HDD cycles. Almost every single agent runs other programs. Almost every single agent works by spawning multiple processes at regular intervals. Almost every noisy agent renders detections for MITRE ATT&CK’s Discovery tactic useless.

A quick digression: I used to have a work laptop with 4GB of RAM. At least once a day my work would come to a halt. I always had Outlook, Chrome, and Microsoft Teams open. At that special time of day an agent would kick off its work and my computer’s CPU/RAM usage would jump to 100%. I couldn’t switch between apps and literally had to wait a good 5-10 minutes each time for the agent to stop before I could resume my work.

This has to sthap.

We all know that we need that Magic Unicorn single-vendor solution that works for Win/OSX/Lin + offers AV+EDR+DFIR+FIM+DLP+CIDS+VM+SSO+IAM in one + uses minimum resources + is cheap :). At the moment all of these features are typically addressed by solutions from different vendors, and the moar of them claim your box, the worse the performance gets.

Let me focus on EDRs here for a moment, as they ARE among the worst resource hogs, especially the ‘solutions’ that rely on polling. IMHO tools that primarily use this approach to collect data have to go, and pronto; I would personally never (re-)invest in them. Polling is not only very 2011, it literally misses stuff, adds a lot of stress to the endpoint, makes data synchronization and accuracy questionable, and so on and so forth. Ah, and these solutions often piss off analysts a lot – so often they want to triage a system and can’t, cuz the system is offline.

To elaborate on the ‘synchronization and accuracy’ bit:

  • system offline or on a different network –> no data accessible at all –> delays in triage/analysis
  • if you are doing env sweeps, you end up polling a few times to ensure you collect data from ‘all’ systems; the ‘all’ is just wishful thinking — you have no control over it; also, as a result, some systems that are always online end up being polled more than once (resources wasted)
  • datasets are not synchronized & you get duplicates, since you will get a few batches with different timestamps

So… IMHO polling will always give you imperfect data to work with; it just doesn’t work in a field that is so close to Digital Forensics, and it doesn’t help answer the questions that will be asked by management:

  • how many systems in our env have this or that artifact present? You will never be able to answer with 100% certainty
  • is our env clean? Yeah, right… 75% of boxes replied to our query with a negative result, the others didn’t, so… we are 75% clean

Plus, they often rely on third-party/OS binaries to do the job + often use interpreted languages (slow, cuz interpreters are often executed as external programs that add to the event noise, especially the ‘Process creation’ event pool).

What I find the most hilarious is the fact that actual malware can squeeze system info collection, password grabbing, screen grabbing, video recording, VNC modules, a shell, etc. into <100KB of code; most vendors use RAD, Java, scripts and end up with awful bloatware.

What I am trying to say is that EDR tools that are worth looking at are:

  • tools that integrate with the OS on the lowest possible level — AV is integrating on a low-level for a reason (also, look at Sysmon)
  • collect all real-time events
  • send data off the box ASAP (any data stored on the box can be compromised/deleted/modified)
  • send data out by any means necessary (multiple protocols?)
  • send stuff to cloud anytime box goes online (no matter what network)
  • use native code (machine code) for main event collector modules instead of interpreted language –> performance / minimal footprint
  • single service process (supported by kernel driver, when necessary) instead of multiple processes
  • doesn’t spawn other processes — native code-based modules collect data as per need, loaded as DLL or always present (the interception of events is a code that can be VERY lean; the bulkier the code, the crappier the solution; red flags: .NET, Java, Powershell, VBScript, Python, WMI, psexec, etc.)
  • run queries on data / analyze outside of the endpoint

Basically: the agent intercepts, collects, caches, sends out to cloud when any network is available & asap, then sleeps until the next event occurs.

Of course, the solution may have extra modes for deploying heavy-weight stuff e.g. scripts, DFIR modules (memory dumping, artifacts collection, etc) + prevention modules etc., but this is used only during actual analysis, not triage.

So, what I covered above are basic architectural requirements:

  • An agent acts as an event forwarder ONLY & sends events to a Collector + can launch heavy ‘forensic’ modules/programs as necessary
    • Events that are collected should ideally be configurable (pre-processing –> fewer events –> better performance/less storage/less bandwidth)
  • Collector acts as a repository of events
    • Just store & index
    • Perhaps apply some generic out-of-the-box rules/tests (VT, vendors’ IOCs, Yara, etc.) and trigger alerts
  • Console allows querying Collector events, setting up watch lists, managing rulesets, etc.
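The agent side of the architecture above can be sketched in a few lines: intercept, cache locally while offline, and flush to the Collector as soon as any network is available. This is a hypothetical toy model; the `send` and `online` callables stand in for a real transport (e.g. TLS to the cloud) and a real connectivity check, and actual interception would of course happen at a much lower level.

```python
from collections import deque

class Agent:
    def __init__(self, send, online):
        self.send = send      # callable: ship one event to the Collector
        self.online = online  # callable: is any network currently available?
        self.cache = deque()  # local spool, drained as soon as possible

    def on_event(self, event):
        """Called by the interception layer for each new event."""
        self.cache.append(event)
        self.flush()

    def flush(self):
        """Send cached events in order while the network is up."""
        while self.cache and self.online():
            self.send(self.cache.popleft())
```

The key property is that data leaves the box ASAP and nothing but the unsent tail ever lives on the endpoint.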

Coming back to agents as a whole — it’s time for some consolidation to happen… As usual, the big players will be the winners, as only they can afford to acquire and integrate.