
Moar and Moar Agents – sthap!

July 27, 2019 in EDR, Preaching

$Vendors love agents.

  • One does the AV
  • One does the DFIR
  • One does the EDR
  • One does the CIDS
  • One does the DLP
  • One does the FIM
  • One does the IAM
  • One does the SSO
  • One does the Event Forwarding
  • One does the Asset Inventory
  • One does the Client Proxy
  • One does the Managed Updates
  • One does the Vulnerability Management
  • One does the Employee Monitoring, on demand
  • One does the Conferencing
  • etc.

Some claim they are agent-less, but under the hood they use WMI, psexec, GPO, SCCM, etc.

Every single agent adds to the list of events generated and collected by the system, and often by other agents. Every single one steals CPU, RAM, and HDD cycles. Almost every single agent runs other programs. Almost every single agent works by spawning multiple processes at regular intervals. And almost every noisy agent renders detections for the whole MITRE ATT&CK Discovery tactic useless.

A quick digression: I used to have a work laptop with 4GB RAM. At least once a day my work would come to a halt. I always had Outlook, Chrome, and Microsoft Teams open. At that special time of the day an agent would kick off its work and my computer’s CPU/RAM usage would jump to 100%. I couldn’t switch between apps, and literally had to wait a good 5-10 minutes each time for the agent to stop before I could resume my work.

This has to sthap.

We all know that we need that Magic Unicorn single-vendor solution that works for Win/OSX/Lin + offers AV+EDR+DFIR+FIM+DLP+CIDS+VM+SSO+IAM in one + uses minimum resources + is cheap :). Atm all of these features are typically addressed by solutions from different vendors & the moar of them that make a claim to your box, the worse the performance will be.

Let me focus on EDR here for a moment, as these ARE among the worst resource hogs, especially the ‘solutions’ that rely on polling. IMHO tools that primarily use this approach to collect data have to go, and pronto, & I would personally never (re-)invest in them. Polling is not only very 2011; it literally misses stuff, adds a lot of stress to the endpoint, and its data synchronization and accuracy are questionable. Ah, and these solutions often piss off analysts a lot – it’s so often that they want to triage a system & they can’t, cuz the system is offline.

To elaborate on the ‘synchronization and accuracy’ bit:

  • system offline or on a different network –> no data accessible at all –> delays in triage/analysis
  • if you are doing env sweeps, you end up polling a few times to ensure you collect data from ‘all’ systems; the ‘all’ is just wishful thinking — you have no control over it; also, as a result, some systems that are always online end up being polled more than once (resources wasted)
  • datasets are not synchronized & you get duplicates, since you will get a few batches with different timestamps
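
To make the duplicates point concrete, here is a hypothetical sketch (all hostnames and timestamps are invented) of merging a few unsynchronized sweep batches: you end up deduplicating by ‘latest record wins’, and some hosts simply never appear at all.

```python
# Hypothetical illustration: three sweep batches polled at different times.
# Hosts answer some sweeps and not others, and the batches overlap, so naive
# concatenation yields duplicates with conflicting timestamps.

batches = [
    {"host-a": "2019-07-01T10:00", "host-b": "2019-07-01T10:02"},  # sweep 1
    {"host-a": "2019-07-01T11:30", "host-c": "2019-07-01T11:31"},  # sweep 2 (host-b offline)
    {"host-b": "2019-07-01T13:05"},                                # sweep 3 (only host-b answered)
]

seen = {}
for batch in batches:
    for host, ts in batch.items():
        # keep only the latest observation per host; the earlier ones are stale
        # (host-a was polled twice -- resources wasted)
        if host not in seen or ts > seen[host]:
            seen[host] = ts

all_hosts = {"host-a", "host-b", "host-c", "host-d"}
missing = all_hosts - seen.keys()
print(sorted(seen))      # hosts we have *some* data for
print(sorted(missing))   # hosts we know nothing about; the 'all' is wishful thinking
```

Note that nothing in the merged dataset even hints that host-d exists, which is exactly why ‘is our env clean?’ cannot be answered from polled data alone.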

So… IMHO polling will always give you imperfect data to work with; it just doesn’t work in a field that is so close to Digital Forensics + doesn’t help to answer questions that will be asked by management:

  • how many systems in our env have this and that artifact present? you will never be able to answer with 100% certainty
  • is our env. clean? yeah, right… 75% of boxes replied to our query with a negative result, others didn’t, so… we are 75% clean

Plus, they often rely on third party/OS binaries to do the job + often use interpreted languages (slow, cuz interpreters are often executed as external programs that add to the event noise, especially the ‘Process creation’ event pool).

What I find the most hilarious is the fact that actual malware can squeeze system info collection, password grabbing, screen grabbing, video recording, VNC modules, a shell, etc. into <100KB of code; most vendors use RAD, Java, scripts and end up with awful bloatware.

What I am trying to say is that EDR tools that are worth looking at are:

  • tools that integrate with the OS on the lowest possible level — AV is integrating on a low-level for a reason (also, look at Sysmon)
  • collect all real-time events
  • send data off the box ASAP (any data stored on the box can be compromised/deleted/modified)
  • send data out by any means necessary (multiple protocols?)
  • send stuff to cloud anytime box goes online (no matter what network)
  • use native code (machine code) for main event collector modules instead of interpreted language –> performance / minimal footprint
  • single service process (supported by kernel driver, when necessary) instead of multiple processes
  • doesn’t spawn other processes — native code-based modules collect data as per need, loaded as DLL or always present (the interception of events is a code that can be VERY lean; the bulkier the code, the crappier the solution; red flags: .NET, Java, Powershell, VBScript, Python, WMI, psexec, etc.)
  • run queries on data / analyze outside of the endpoint

Basically: the agent intercepts, collects, caches, sends out to cloud when any network is available & asap, then sleeps until the next event occurs.
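
That loop is trivial to sketch. Below is a hypothetical, heavily simplified Python mock-up; a real agent would of course implement this in native code driven by kernel callbacks, and every name here (`on_event`, `flush_once`, the placeholder network/collector functions) is invented purely for illustration:

```python
import queue

# Hypothetical sketch of the intercept -> cache -> forward loop described above.
# A real agent would be native code with kernel-level event interception.

event_cache = queue.Queue()   # local cache; flushed as soon as any network is up

def on_event(event):
    """Called by the (imaginary) interception layer for each real-time event."""
    event_cache.put(event)

def network_available():
    # placeholder: in reality, test for *any* usable route to the collector
    return True

def send_to_collector(batch):
    # placeholder: ship events off-box ASAP so on-disk data can't be
    # compromised/deleted/modified
    print(f"forwarded {len(batch)} events")

def flush_once():
    """One pass of the loop: if online and events are cached, forward them all,
    then the agent goes back to sleep until the next event occurs."""
    if not network_available() or event_cache.empty():
        return 0
    batch = []
    while not event_cache.empty():
        batch.append(event_cache.get())
    send_to_collector(batch)
    return len(batch)

on_event({"type": "process_creation", "image": "cmd.exe"})
on_event({"type": "network_connection", "dst": "203.0.113.7"})
flush_once()   # forwards both cached events in one batch
```

The point of the sketch is the shape, not the code: no spawned child processes, no polling timers doing busywork, just intercept, cache, and push.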

Of course, the solution may have extra modes for deploying heavy-weight stuff e.g. scripts, DFIR modules (memory dumping, artifacts collection, etc) + prevention modules etc., but this is used only during actual analysis, not triage.

So, what I covered is a basic architectural requirement:

  • An agent acts as an event forwarder ONLY & sends events to a Collector + can launch heavy ‘forensic’ modules / programs as per necessity
    • Ideally, the events that are collected should be configurable (pre-processing –> fewer events –> better performance/less storage/less bandwidth)
  • Collector acts as a repository of events
    • Just store & index
    • Perhaps apply some generic out-of-the-box rules/tests (VT, vendors’ IOCs, yara, etc.) and trigger alerts
  • Console allows querying Collector events, setting up watch lists, managing rulesets, etc.
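
The ‘configurable events’ bit can be as simple as an allow-list applied on the endpoint before anything hits the wire. A hypothetical sketch (the event type names and the `CONFIG` shape are invented for illustration):

```python
# Hypothetical pre-processing config: drop noisy event types on the endpoint
# before forwarding (fewer events -> less storage/bandwidth, better performance).

CONFIG = {
    "process_creation": True,
    "network_connection": True,
    "registry_read": False,   # far too noisy to forward wholesale
    "file_read": False,
}

def pre_filter(events, config=CONFIG):
    """Keep only the event types the ruleset says to forward;
    unknown types are dropped by default."""
    return [e for e in events if config.get(e["type"], False)]

events = [
    {"type": "process_creation", "image": "cmd.exe"},
    {"type": "registry_read", "key": "HKLM\\SOFTWARE\\Example"},
    {"type": "network_connection", "dst": "203.0.113.7"},
]
print(pre_filter(events))  # the registry_read event is dropped on the endpoint
```

Dropping by default (rather than forwarding by default) is the design choice that keeps a misconfigured ruleset from flooding the Collector.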

Coming back to agents as a whole — it’s time for some consolidation to happen… As usual, big players will be the winners as only they can afford to acquire and integrate.

The art of writing (for IT Sec)

May 19, 2019 in How to..., Off-topic, Preaching, Random ideas

When I wrote my first DFIR report it was terrible. After receiving the commented version back from my reviewers my heart sank. I felt I was not going to make it. While I love the technical and investigative bits, and had some good wins on that particular investigation… somehow, I was unable to communicate it. And since I always liked to write I was really surprised (a.k.a. shocked a.k.a. ego-hurt-badly).

All these hours of work put into the report didn’t matter, all these cool technical bits I described didn’t matter – when the doc came back to me it was pretty much a different document… Yup, so many comments and corrections. I literally couldn’t see my original content. There was so much ‘Adam, you are doing it wrong’… Ouch.

I must add that it was for a Law Enforcement case, so it was a big deal.

I went back and forth on these comments with my reviewers and finally…

  • Got that report into a decent shape & submitted it to the LE
  • Realized that writing for the general public or blogging is not the same as writing for DFIR, especially for LE

And it became especially clear when I received a letter to show up in court and testify… Imagine my horror. I was a noob and yes, that absolutely terrible report was going to be talked about. And I would be questioned on its content…

Holy cow.

It’s actually pretty intimidating. Confidence from the safety of your home or office seat is one thing, but talking about your work in Court is something completely different. And the guys who ask you questions will try to break you and paint you as an incompetent clown. And your report and work may lose credibility… After a mandatory panic attack I started asking around. Some of my peers had been through this before and gave me many hints: only answer questions, don’t add any extra info, don’t speculate, don’t be afraid to share a professional opinion but keep it concise, don’t get emotional, watch out for attempts to dismiss your evidence or target your credibility (personal attacks, etc.), etc. So… YES. That was pretty intimidating, to say the least.

I kinda got lucky on that one and eventually didn’t go to testify, because the guy pleaded guilty (my report actually helped to persuade him!!!), but from there on I learned to be more careful, more humble, and definitely more organized with regards to what I write, especially commercially.

It’s really easy to make claims; it’s much harder to support/describe evidence and build a proper case, argument, and timeline, or, in case there is no evidence, to at least offer an educated guess and share a professional opinion to support them (including contextualizing circumstantial evidence).

Think about it for a second: from a DFIR perspective we use a lot of tools to extract and interpret evidence. While we are happy building timelines, the whole process of data extraction and interpretation could be called into question. How do we know, or how are we so sure, that the programs we use extract and interpret data correctly?

Notably, what you know, or what you think you know, will be scrutinized in every possible way, so as you write your report you do need to re-read a lot of older documents or reference materials to avoid the mistake of making a statement that is easy to prove incorrect, inaccurate, or too general. This may ruin your case. To give you an example… Say… you describe that programs always load in a certain way under Windows, and that’s the only way to run programs. Be careful not to make an overstatement or misrepresentation. As it turns out, there are a lot of other ways to run code on Windows, whether via shellcode, exploits, side-loading, etc. The moment you are caught with statements that can be proven inaccurate, your credibility may suffer.

This is where this article begins.

Whether you write a DFIR report, a pentesting report, a malware write-up, or a Threat Intel doc, or just fill in a ticket, or even post on the blog or Twitter, think for a second about the following:

  • Who is your audience?
  • Who is your audience that you don’t know of?
    • Tickets are often reviewed by Compliance/Audit teams
    • Your most Senior Management may do it one day, even if whimsically
    • In case of a breach, tickets related to the breach-related events/incidents may become evidence in Court
  • How accurate is your description?
    • Did you state facts or share an opinion?
    • Did you use language that may not be fit for the purpose? Slang, vulgarisms, personal opinions, puns, jokes, commentary, etc. have no place in these cases
    • Can a non-technical person understand what you wrote? Will they understand how it will affect them?
    • If it is a ticket, is there a closure? You shouldn’t close tickets with no closure statements even if it’s just a simple ‘Based on the investigation, there is no further risk, and the ticket can be closed’; it helps you, helps your manager, and helps the org if these statements are there
  • Will the audience focus on the headline only, summary, or gore details?
  • Are you the first one to publish about it? Do your homework – and always give credit to any relevant older research, if you can find it. Update your post if you find the references later, or if someone provides you with a link (you will be surprised how many times people send me links to some long-forgotten blog/PDF from the early noughties discussing a topic similar to the one I just wrote about thinking it’s a novelty)
  • Assume that at least one person will come back to you with comments that will bring a revolution to your thought process (e.g. to point out gaps in your thinking, suggest/reference older /often better/ research on the same topic, or better, more efficient approach to the same problem); anticipate it and accept it in a humble way; remember to thank these guys – they not only read your stuff, they enrich your knowledge!!
  • Assume you may need to explain your claims in ELI5 fashion one day; and finally…
  • If possible, describe what you did so it can be replicated, and/or re-analyzed; share code, data, examples, queries, attach files, results, add comments how you interpreted them.

This sounds trivial and kinda overdone, right? Let’s see…

  • Twitter is mainly opinions – who cares
  • Tickets’ content is almost never read by anyone – who cares
  • Blogs are blogs – who cares
  • Malware reports are now so generic that they are primarily part of a PR machine, and are actually really easy to write (most of the time = quick intro, some IDA/Olly/Xdbg/Ghidra/DNSpy screenshots walking through the malware stages, finally a conclusion with a marketing bit, and then yara+IOCs); they can also be semi-automatically generated from sandboxes – who cares
  • Red Team/ Pentest reports are also semi-automated in many ways, and often just focus on an extensive list of vulnerabilities found by scanners, or ‘I pwned you, patch your systems, kthxbye’ bit if they managed to actually compromise some systems; notably, red teams, similarly to DFIR teams need a lot of willpower and incentive to keep logs of all the steps they take; why? because it’s often poking around w/o any success for many hours; it’s when they hit the jackpot, they immediately chase the leads (DFIR) or explore new paths (red team); this is _hard_ to document, because excitement takes over – still, who cares
  • DFIR reports, even if still manually written, more and more suffer/benefit from automation too; copypasta and generalizations are the norm, and a predictable TOC (often enforced by standards, e.g. in PFI breaches) is there too
  • Finally, Threat Intel is kinda a beast of its own; from literal forwards of PDFs, through copypasta exercises, to actual valuable intel pieces affecting your org (it was very bad a few years ago, but it’s getting better and better).

Notably, other industries suffer from templates and copypasta as well, so it’s not an infosec-centric phenomenon. So many T&S, commercial reports, surveys, searches, etc. are not only non-conclusive, but almost all of them are written in a ‘we don’t take any responsibility’ way. With regards to searches and reports, they are also typically direct exports from databases, and while in some cases they may get enriched by a quick, yet superficial ‘personal touch’ to make them more credible, they are just an easy source of revenue for the companies that own these databases. Sadly, infosec is following in these footsteps. And while we are all pressured by time, and billable hours are what matters… it will be quite a shame if we end up delivering the same vague content as a part of BAU (Business As Usual).

This is where this article begins being practical.

Lenny Zeltser published Writing Tips for IT Professionals. If you have not read it, please do so. This is a great tutorial on how to be strategic about your writing.

Also, for anything you write, assume that LE, C-level guys, firms engaged commercially to re-do/confirm/audit your DFIR / pentest analysis, and experts in the industry will read it at some stage. Also… assume these reports will become public… cuz… breaches.

So… try to write in a defensive way; make your lack of knowledge known (where applicable). Suggest avenues for additional research if you can. Don’t claim anything 100%, but at the same time use common sense so that your article doesn’t overuse words like ‘allegedly’, ‘possibly’, ‘probably’, ‘reportedly’, ‘supposedly’, etc. Be honest, be humble. Focus on facts, not editorializing.

Also… use the Alexiou Principle; it’s such a simple, yet powerful recipe for writing almost any report/write-up within the infosec space in a defensive way. If you include these 4 points, it’s almost guaranteed that all the questions asked by a client, LE, or sponsor will be addressed. The fewer follow-ups on the report, the better writer you are.

Finally, you need to practice. The more you write, the better you will get at it. Also, read documents aimed at the same audience spectrum — if you need to write DFIR reports, read available public reports about breaches. Cherry-pick language and statements, as well as formatting style and document organization.

And last but not least – do peer review, if possible. Ask more senior guys to look at what you write. Ask them if there is anything that sounds too vague. Correct it.

And to be honest, this post is a good example of bad writing. I mixed up a lot of things and didn’t have much structure here; if you read that far, thank you.