
But VT says so!

October 23, 2014 in Malware Analysis, Preaching


Removed some nonsense, fixed grammar mistakes, and added info about regression.

Old post

I am really tired today so the only thing I can do is preach :) I have a few (I hope cool) research posts piled up, but really no time recently to polish and publish them. Please wait for the second half of November when I am back from holidays.


Today I want to talk about Virus Total.

It’s an awesome web site that went from a resource known to a few to ‘yet another lucky guy acquired by Google’.

Now, let me tell you one thing: Virus Total’s perceived importance is the biggest B/S in the IR universe. Note that I love VT; I just hate the perception of its importance.

There are many reasons, but the simplest to pick up on is “but VT says so”.

It is not uncommon nowadays for people – often including those who can’t distinguish a virus from a trojan – to use VT on a daily basis and treat its statistics as a deity that tells them about law & order in the software/sample universe.

I uploaded the file XYZ to VT and it says: bad.


Let me tell you a little secret of the antivirus industry here:

  • Problem: Lots of samples. Lots of unknown samples.
  • Question: How to cut corners?
  • Answer: Use other AVs to tell us whether a sample is bad; if we get lucky, we generate an automatic def/sig for it and move on; we can be smarter and rely on the judgment only if at least 1, 2, 3… N of them say so, but we are still cutting corners.
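That corner-cutting amounts to a simple vote threshold, and it snowballs. A toy sketch of the idea in Python – the engine names and threshold are purely illustrative, not how any real vendor’s pipeline works:

```python
def auto_signature(engine_hits, threshold=1):
    """Auto-generate a detection once at least `threshold` other engines flag the sample."""
    return len(engine_hits) >= threshold

# The snowball: each bot that trusts the others adds its own 'detection'.
hits = ["EngineA"]  # a single (possibly false) positive seeds the cascade
for engine in ["EngineB", "EngineC", "EngineD", "EngineE"]:
    if auto_signature(hits, threshold=1):
        hits.append(engine)

print(len(hits))  # one unreviewed hit has become five
```

With a threshold of 1, a single hit is all the “evidence” any other bot needs; even with a higher threshold, once enough engines copy each other the cascade is self-sustaining.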

Yup. You heard that right.

Your VT score is now worth crap. One guy detects it, and suddenly everyone detects it. Human involvement = 0, maybe 1. All of it is bots at work. And sometimes the first guy realizes it was an FP and removes the sig, yet the others who blindly followed don’t. Not everyone cares about or can afford regression testing, and the file remains ‘detected’ forever.

VT is a resource that presents aggregated information from various providers including, but not limited to:

  • antivirus vendors
  • sandboxes
  • results of running various proprietary tools
  • etc.

but it DOES NOT tell you how many of these detections/hits are derived from each other – vendors copying other vendors, entries triggered by others on the list, or simple heuristic rules.

Read the score. Understand it. But also understand the context of it.

It’s unreliable & you should NOT use it blindly, or you will let bots replace you as the decision maker.

Why breaches happen under IR teams’ noses

October 12, 2014 in Compromise Detection, Preaching

Having an IR team is no guarantee of a breach-free life for an organization. In this short post I try to list very specific reasons why breaches happen despite IR teams being present and active. Instead of writing yet another smart-ass-who-knows-it-all post about ‘events are ignored’, ‘teams are underfunded’, etc., I am trying to list very specific issues that negatively affect an IR team’s work & contribute to breaches actually happening (note: ‘events are ignored’ is not a root cause; it is a symptom of problems rooted much deeper).

So, here it is – it’s obviously subjective, so use at your own risk:

  • I think the fundamental problem is that IR teams don’t hunt, i.e. they don’t look at the data their organization generates: AV alerts, proxy traffic, DNS queries, etc.
    • IMHO, apart from handling the alerts generated by security controls, this should be the major activity of any IR team.
    • Triage should be a regular activity on every system; it sounds very difficult logistically, but at the end of the day all you need is something basic, e.g. did any Run key change, did any application appear in %APPDATA% – this can surely be automated company-wide with a few lines of a scripting language; introducing such a control requires the power to influence, though [last point on the list below].
  • Instead:
    • They receive tons of emails daily – a few hundred emails a day is not uncommon, with 98-99% of them being absolutely useless.
    • They receive a lot of ‘threat intel’ feeds which they often have to parse manually and incorporate into their own security controls – these are important, but will never be more important than analysis of the internal data generated by the org.
    • They spend too much time evaluating ‘new’ software from vendors; eventually they end up being beta testers of the software instead of looking at the data.
    • They are often bound by the same rules as all other employees: a hacker who can download and use any tool imaginable is being fought with dir, netstat, wmic; sometimes Sysinternals tools (if they are allowed).
    • They are asked to socialize, network and participate in many corporate activities. The number of man-hours wasted on endless meetings is incredible.
    • They are often managed by people w/o the credentials to do the job – understanding IR requires skills from a large number of disciplines – unfortunately it is not uncommon for the managers to be typical corporate climbers who don’t have a passion for the job. They will also exercise their little authority to bring you down if you happen to step on their toes.
    • They are not allowed to work from home (some organizations do allow it, which is a huge benefit: working from home lets you use your home lab to analyze malware, do research, access resources banned by corporate policies, and freely network with others in the industry; it also lets you really focus on analysis – this is probably the most important bit).
    • They work in an environment full of legacy applications. Old Java and enabled VBA macros are a major reason why infections happen – upgrading the environment should be really high on C-level folks’ agenda.
    • They are often trained to overestimate the remediation capabilities of security controls, e.g. antivirus software (see my post).
    • They are often doing project management work deploying solutions instead of actually using them. I would argue one needs separate roles for tool builders and tool users in a successful IR team.
    • They rarely have the power to influence at the C-level. They end up whining with their peers in their cubicles and… nothing changes.
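The “few lines of scripting” triage check mentioned above could look something like this – a minimal sketch in Python that diffs a directory (e.g. each user’s %APPDATA%) against a saved snapshot; the state-file location is an assumption, and on Windows the Run keys could be diffed the same way using the stdlib winreg module:

```python
import json
import os

def snapshot(directory):
    """Collect the set of file paths currently present under a directory tree."""
    found = set()
    for root, _dirs, files in os.walk(directory):
        for name in files:
            found.add(os.path.join(root, name))
    return found

def new_files(directory, state_file):
    """Return files that appeared since the last run; update the saved snapshot."""
    current = snapshot(directory)
    previous = set()
    if os.path.exists(state_file):
        with open(state_file) as fh:
            previous = set(json.load(fh))
    with open(state_file, "w") as fh:
        json.dump(sorted(current), fh)
    return sorted(current - previous)
```

Run company-wide once a day, anything it reports gets a second look – crude, but it turns “triage on every system” from a project into a cron job.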

I don’t think breaches can be prevented 100%; as I argue in my other post, every single infection detected by AV is a compromise. The same goes for network alerts. Giving the IR team the tools and time to deal with all of them is incredibly important, so that these small fires can be extinguished quickly. And then give them even more time to hunt.

So… if you want to establish a successful IR program in your org, give your IR team the power to shrug off all the useless activities, kill useless emails at the source, train these guys like hell, give them monitored access to all security controls and, most importantly, let them be totally antisocial – but ensure their voice is heard at C-level.