Why breaches happen under IR teams' noses

October 12, 2014 in Compromise Detection, Preaching

Having an IR team is not a guarantee of a breach-free life for the organization. In this short post I am trying to list very specific reasons why breaches happen despite IR teams being present and active. Instead of writing yet-another-smart-ass-who-knows-it-all post that talks about ‘events are ignored’, ‘teams are underfunded’, etc., I am trying to list very specific issues that negatively affect the IR team’s work and contribute to actual breaches happening (note: ‘events are ignored’ is not a root cause; it is a result of problems that are rooted much deeper).

So, here it is – it’s obviously subjective so use at your own risk:

  • I think the fundamental problem is that IR teams don’t hunt, i.e. they don’t look at the data their organization generates: AV alerts, proxy traffic, DNS queries, etc.
    • IMHO, apart from the alerts generated by security controls, this should be the major activity of any IR team
    • Triage should be a regular activity on every system; it sounds very difficult logistically, but at the end of the day all you need is something basic, e.g. did any Run key change, did any application appear in %APPDATA% – this can surely be automated company-wide with a few lines of a scripting language; introducing such a control requires the power to influence though [last point on the list below]
  • Instead:
    • They receive tons of emails daily – a few hundred emails a day is not uncommon, with 98-99% being absolutely useless.
    • They receive a lot of ‘threat intel’ feeds which they often have to parse manually and incorporate into their own security controls – these are important, but will never be more important than analysis of the internal data generated by the org.
    • They spend too much time evaluating ‘new’ software from vendors; eventually they end up being beta testers of the software instead of looking at the data.
    • They are often bound by the same rules as all other employees: a hacker who can download and use any tool imaginable is fought with dir, netstat, wmic and sometimes Sysinternals tools (if they are allowed).
    • They are asked to socialize, network and participate in many corporate activities. The number of man hours wasted by endless meetings is incredible.
    • They are often managed by people w/o the credentials to do the job – understanding IR requires skills from a large number of disciplines – unfortunately it is not uncommon for the managers to be typical corporate climbers who don’t have a passion for the job. They will also exercise their little authority to bring you down if you happen to step on their toes.
    • They are not allowed to work from home (there are some organizations that allow it, which is a huge benefit to the organization: working from home allows you to use your home lab to analyze malware, research, access resources banned by corporate policies, and freely network with others in the industry; it also allows you to really focus on analysis – this is probably the most important bit).
    • They work in an environment full of legacy applications. Old Java and enabled VBA macros are a major reason why infections happen – upgrading the environment should be really high on C-level folks’ agenda.
    • They are often trained to overestimate the remediation capabilities of security controls e.g. antivirus software (see my post).
    • They are often doing project management work, deploying solutions instead of actually using them. I would argue one needs separate roles for tool builders and tool users in a successful IR team.
    • They rarely have the power to influence at the C-level. They end up whining with their peers in their cubicles and… nothing changes.

I don’t think breaches can be prevented 100%; as I argue in my other post, every single infection detected by AV is a compromise. The same goes for network alerts. Giving the IR team the tools and time to deal with all of them is incredibly important so that these small fires can be extinguished quickly. And then give them even more time to hunt.

So… if you want to establish a successful IR program in your org, give your IR team the power to shrug off all the useless activities, kill useless emails at the source, train these guys like hell, give them monitored access to all security controls and, most importantly, let them be totally antisocial, but ensure their voice is heard at C-level.


The art of disrespecting AV (and other old-school controls), Part 2

October 5, 2014 in Compromise Detection, Preaching

In December 2013 I posted about ‘The art of disrespecting AV (and other old-school controls)‘. I saw people retweeting it at the time and was quite happy that it generated some feedback. It was meant to stimulate some discussion, but also to be a reflection on security controls in general – it’s sometimes good to just step back and think a bit more about what they are and how to use them properly.

Well, almost a year later we are witnessing an ever-increasing number of breaches exposed to the public almost daily (breaches that are often of gargantuan proportions, like JPM), and we also know that they often happened right under the noses of the companies involved.

How could that happen?

Many articles about these breaches mention that ‘alerts were seen, but ignored’.

It was this commentary that triggered me to write part two.

Before I begin though: when I published the first part, someone made a point – which I had missed – that AV is an extra attack surface/vector. Indeed, it is. I did not include it in the first part, so I am mentioning it now to tick all the boxes. I do argue in part two, though, that this is not such an important factor considering the tremendous and positive role AV plays in the protection of company assets.

Let me explain.

Over the last few years I have often witnessed a ‘romantic’ notion celebrated by IR/SOC teams which I believe is (sadly) quite universally accepted (and I really do hope I am just imagining it):

  • antivirus detecting stuff is a sign of a bad thing, but since AV includes remediation, it solves the problem when it removes the bad stuff; there are only two major scenarios considered here:
    • antivirus detection followed by removal –> good thing, let’s move on, nothing to see here
    • antivirus detection followed by a failed removal –> bad thing, we better check this out, but it’s probably a glitch/bug in the antivirus/signatures, or we simply need to reboot the host, and if that doesn’t work either, do some work removing the malicious piece manually or recommend a rebuild

This is a common approach in many companies. There are also variations, of course: are alerts from a fixed drive and a removable drive equally important? Is, for example, the omnipresent Sality or Virut virus picked up on a removable drive the real deal? Or should we just ‘eventify’ it and move on?

There is a very important bit missing in the approach described above, and I already mentioned it in the first part. Once the alert triggers, we have a few outcomes:

  • antivirus detected it and removed it – we are good
  • antivirus detected it and failed to remove it – we are not so good, but well, we just need to work on this system
  • antivirus detected it and maybe even removed it, but… what if it didn’t detect some extra malware present on the very same system/attached media?
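To make the point concrete, here is a minimal, hypothetical Python sketch of alert handling that encodes the missing bit: whatever the remediation outcome, the host always gets queued for triage. The outcome labels and the action fields are my own assumptions for illustration, not any vendor’s API:

```python
# Possible remediation outcomes reported with an AV alert (assumed labels).
REMOVED = "removed"
REMOVE_FAILED = "remove_failed"

def handle_av_alert(outcome):
    """Decide the follow-up actions for an AV alert.

    The key point: the remediation status changes the urgency, but never
    the need to triage -- the host saw a malicious agent either way, and
    undetected extra malware may still be present on it.
    """
    actions = {"triage": True}  # always triage the alerting host
    if outcome == REMOVED:
        actions["priority"] = "normal"
    elif outcome == REMOVE_FAILED:
        actions["priority"] = "high"
        actions["isolate"] = True  # failed cleanup: contain before it spreads
    else:
        # Unknown or partial outcome: assume the worst neighborhood.
        actions["priority"] = "high"
    return actions
```

Note that there is no branch returning “nothing to do”: a successful removal only lowers the priority of the triage, it never cancels it.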

Nowadays when AV detects stuff it often works in a very ‘malicious neighborhood’.

When it does pick up something you should always think of that ‘hood’. So say the AV picked up a PDF exploit, or a JAR file. The fact it deleted it doesn’t matter. Remember that this is a time of professionally written exploit packs and if one doesn’t work, another may. And will. And is most likely delivered the very same moment you received the first alert.

That leads me to a single conclusion which I hope you will start promoting in your organization:

When you see an AV alert you need to triage the system, because it has been compromised, and there may still be some undetected malware present on it.

To put it in a more formalized way:

  • Malware present on a system for which you have received an antivirus alert is an incident
  • It is an incident, because the system already got compromised
  • The reason for calling it a compromise is that an antivirus alert is equal to the detection of a malicious agent introduced into the environment (drive-by, removable device, malicious insider, unintentional action of the user, etc.); whether it executed or not doesn’t matter

Handling AV alerts should be a priority for any IR team – while a common resolution will be for the alert to end up as just an event or near-miss [e.g. a removable device and nothing else 'funny' running on the system], it should initially be treated as a compromise. And every AV alert should be seen as a big compromise waiting to happen. The scary part is that the number of alerts is crazy – they arrive daily, hourly, and in large companies every few minutes.

Under the umbrella of a ‘business case’ there is always an exposure, and rest assured that it’s much easier to exploit users’ gullibility or plain stupidity than to spend days trying to find a bug in the AV. Obviously this sort of statement requires some backing in numbers, and I can only say that you should ask a friendly administrator what the statistics on AV alerts in their company are. And then ask if they act on them. You may also compare that against statistics of AV vulnerabilities and how much havoc they spread. And I don’t mean to downplay this aspect of security – it’s just not that important.

I think the main problem with AV is not its presence, but actually the lack of it. Over the years, due to complaints from users, it became completely invisible to users and admins. It’s kinda ironic, because it is a direct result of the volume and intrusiveness of the alerts. In the end, alerts are ignored or misunderstood, and users don’t even know something happened. Only the dashboards look nice. (Update: just to clarify – by visibility to users I don’t mean the old-school intrusive alerts popping up on users’ desktops; there are more effective ways to get users to react to threats – escalation to managers, a daily email alert until it’s removed, followed by SMSs if 3 emails are ignored, and eventually a call from support, etc. – something along these lines.)
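The escalation workflow sketched in the update could look something like this – a toy Python function where the thresholds (3 ignored daily emails before SMS, then a support call) are assumptions mirroring the example above, to be tuned per organization:

```python
def escalation_channel(days_ignored):
    """Pick the next nudge for an unresolved AV alert on a user's host.

    days_ignored counts whole days the alert has gone unhandled; one
    email goes out per day, so 3 ignored days means 3 ignored emails.
    """
    if days_ignored < 3:
        return "email"         # daily email until the malware is removed
    if days_ignored < 6:
        return "sms"           # 3 emails ignored: start texting the user
    return "support_call"      # still ignored: a human picks up the phone
```

The point of encoding it at all is that the escalation becomes automatic and auditable, rather than depending on an analyst remembering to chase people.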

AV alerts mean trouble; they should be visible to the user, and you should look at them regularly and triage every single system an alert triggers on. Sounds daunting. And it is. Perhaps that’s why it is called a ‘job’ :-)