The story of a possible prank

September 25, 2015 in Preaching

In 2011 a security researcher pulled what I believe was a prank on a well-known org. He made them publish a paper with an appendix containing nonsensical data. I reported this to the org in 2012, as soon as I discovered it. I was actually flabbergasted at the time that someone could be bold enough to pull the org's leg this way (risking both the author's and the org's credibility), but it was still 2 months before the infamous Nmap Guide made it to the news and trolling security orgs became the norm.

I forgot about it for a long time, but recently it came back to me and I checked the org's web site to see if they had pulled the paper. The paper is still there, 3+ years after I reported it, and the goofy appendix is of course still there as well.

I must emphasize that I do not have proof that it is a prank, but the nonsensical information included in the paper cannot be the result of a typo or an accident; it looks like someone deliberately made stuff up. Of course, if it is just the result of the author's ignorance, or it was an intern who wrote it, that would make for even more lulz.

I don't want to mention the gory details, for many reasons. Thanks for understanding.

I do want to mention, though, two interesting side effects of this paper being published:

  • The information was copied to other blogs (not too many, but still).
  • Based on the information in this paper, someone created IDS signatures – talk about quality & testing.

You may be wondering why I am posting such vague info at all.

It’s simple: question everything you read.

I personally make tons of mistakes. I sometimes read some of my older posts and I find bugs. Not only typos, but actual logical bugs that make me really ashamed. I don't like to be wrong, I really don't, but if I am the only one finding out, then what about the poor guys who believed it then and still believe it now?

A writer, a researcher, has a certain responsibility to ensure the quality of the writing is at an appropriate level. But that is impossible if there is no feedback – especially critical feedback.

To a certain extent I can understand the frustration of HC when he insists on receiving feedback from readers. Seeing people retweet without reading can certainly be disheartening. In my opinion a blog writer's expectations should be very low here, and that keeps me sane, writing & babbling anytime I feel like it – at a certain level I don't even care – these are more my notes that I feel may be interesting to share than an interest or a will to change the world (we all die; I am great at parties :) ).

But if there is one thing I care about, it is accuracy. If I make a mistake and no one tells me, it really sucks. And the fact is that most people don't even bother to read in-depth anymore. Everything is 'just in time' – you only read stuff when you need it. I do it all the time. Skimming is a necessity. And this is fine, as long as the stuff you read is correct.

But it rarely is 100%.

So if you read this – please read whatever you read with the assumption that it may not be 100% right. It is especially important with materials endorsed by orgs. Like everyone who has gotten their hands dirty & sinned by publishing, they sometimes publish bad-quality stuff. Only those who don't do anything make no mistakes at all.

Keep your eyes open.

Enter Sandbox – part 9: Message is in a bottle, and sometimes in a box

September 24, 2015 in Batch Analysis, Reversing, Sandboxing

Running programs and expecting them to behave is nothing but wishful thinking. When you start processing thousands of files you quickly discover that the reality of automated dynamic software analysis is quite harsh. Component and software dependencies, missing command line arguments, crashes, annoying nag screens, installers using non-standard GUI toolkits, software written for older OSs or frameworks, expired evaluation versions of software protection schemes, trials, evaluation copies of shareware, pranks, corrupted files, uninstallers, and many more make the samples misbehave.

Once executed, many samples simply exit – not necessarily in a very graceful way. The analysis fails.

There is no easy way to force these applications to actually run – typically, manual analysis is required to create a new behavioral rule (often with a patch) that will force this app, and similar apps in the future, to execute further, beyond the exit condition. Sometimes it's not even possible. Notably, patches can be applied not only to the samples, but also to the analysis system (f.ex. installing missing dependencies like a specific version of .NET, old-school OCX files, old Borland libraries, etc.). It may also be necessary to bypass software protection schemes, i.e. crack samples – the legality of which is somewhat shady.
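To give a rough idea of what such a 'behavioral rule with a patch' may boil down to, here is a minimal sketch in Python. The family fingerprint and the byte offsets are made-up placeholders – in reality they come out of the manual analysis of the sample that exits early – and the patch is applied to a working copy of the sample before it is queued again.

```python
from dataclasses import dataclass

@dataclass
class PrerunPatch:
    fingerprint: bytes   # crude family marker expected somewhere in the file (hypothetical)
    offset: int          # file offset of the early-exit check (hypothetical)
    original: bytes      # bytes expected at that offset, as a sanity check
    replacement: bytes   # e.g. NOPs over a conditional jump

def apply_prerun_patches(sample_copy_path: str, rules):
    """Apply any matching pre-run patches to a *copy* of the sample."""
    with open(sample_copy_path, 'r+b') as f:
        data = f.read()
        for rule in rules:
            if (rule.fingerprint in data and
                    data[rule.offset:rule.offset + len(rule.original)] == rule.original):
                f.seek(rule.offset)
                f.write(rule.replacement)
```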

To apply these patches and workarounds, one needs to analyze the existing conditions that cause these samples to fail. Surprisingly, lots of them can be identified by reading the message box captions and texts. As usual, this is not a trivial task, since we deal with many languages, many different cases, and uncertainties, but it is possible – patterns can be observed.
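For those curious how the message box text can be grabbed in the first place, here is a minimal, Windows-only sketch: it assumes the collector runs inside the guest and simply polls the desktop for windows of the standard dialog class '#32770' (which MessageBox uses), pulling the caption and the text of any Static child control, which is where the message body normally lives. A real sandbox would more likely hook MessageBoxW/MessageBoxA directly, so treat this as an illustration only.

```python
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
WM_GETTEXT, WM_GETTEXTLENGTH = 0x000D, 0x000E
EnumProc = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)

def _text(hwnd):
    # WM_GETTEXT works across processes, unlike GetWindowText for child controls
    n = user32.SendMessageW(hwnd, WM_GETTEXTLENGTH, 0, 0)
    buf = ctypes.create_unicode_buffer(n + 1)
    user32.SendMessageW(hwnd, WM_GETTEXT, n + 1, buf)
    return buf.value

def _class_name(hwnd):
    buf = ctypes.create_unicode_buffer(256)
    user32.GetClassNameW(hwnd, buf, 256)
    return buf.value

def collect_dialog_text():
    """Return a list of (caption, body) pairs for visible dialog windows."""
    dialogs = []

    def on_child(hwnd, _):
        if _class_name(hwnd) == 'Static':
            t = _text(hwnd)
            if t:
                dialogs[-1][1].append(t)
        return True

    def on_top(hwnd, _):
        if _class_name(hwnd) == '#32770' and user32.IsWindowVisible(hwnd):
            dialogs.append((_text(hwnd), []))
            user32.EnumChildWindows(hwnd, EnumProc(on_child), 0)
        return True

    user32.EnumWindows(EnumProc(on_top), 0)
    return [(caption, ' '.join(parts)) for caption, parts in dialogs]
```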

For starters, let's look at expired software protection schemes. There are lots of malware samples where the author used an evaluation version of a protection scheme with the aim of hiding the actual payload. When the sample is executed, the protection scheme checks its conditions and, if the evaluation period has expired, simply prevents the app from running. One could argue that detecting an evaluation / trial / unregistered version is alone a good enough condition to classify the sample as at least potentially unwanted. Still, detecting this sort of sample does require signatures (either static or dynamic).
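A dynamic signature of that kind can be as simple as a handful of patterns matched against the captured message box text. The patterns below are purely illustrative – real nag texts vary per protector and per version, so in practice the list is built from observed samples:

```python
import re

EXPIRED_PROTECTION_PATTERNS = [
    re.compile(p, re.I) for p in (
        r'\btrial (period|version) (has )?expired\b',
        r'\bevaluation (period|version|copy)\b',
        r'\bunregistered (version|copy)\b',
        r'\bplease (register|purchase)\b',
    )
]

def looks_like_expired_protection(caption: str, body: str) -> bool:
    """Flag message boxes that hint at an expired/evaluation protection layer."""
    text = f'{caption} {body}'
    return any(rx.search(text) for rx in EXPIRED_PROTECTION_PATTERNS)
```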

Here are some examples:

The missing component scenario is also very common. While Visual Basic and Borland C are no longer that popular, there are lots of old samples out there belonging to software written on these old programming platforms. A sandbox should expect these in its queue…
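A sketch of how such messages can be turned into an actionable hint – i.e. which dependency to pre-install in the sandbox image – could look like the snippet below. The list is illustrative only (the exact wording of the 'file not found' messages differs between Windows versions, hence matching on the DLL name rather than the full text):

```python
import re

MISSING_RUNTIME_HINTS = {
    r'\bmsvbvm50\.dll\b': 'Visual Basic 5 runtime',
    r'\bmsvbvm60\.dll\b': 'Visual Basic 6 runtime',
    r'\bvbrun300\.dll\b': 'Visual Basic 3 runtime',
    r'\.ocx\b': 'legacy OCX control (needs regsvr32 registration)',
    r'\bborland\b|\bbde\b': 'Borland runtime / Borland Database Engine',
}

def missing_dependencies(caption: str, body: str):
    """Guess which runtimes are missing based on a 'file not found' style message."""
    text = f'{caption} {body}'.lower()
    return [dep for pattern, dep in MISSING_RUNTIME_HINTS.items()
            if re.search(pattern, text)]
```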

Again, a few examples:

Large repositories of samples can't escape programs written for localized versions of Windows. Running such applications on an English OS leads to 'garbage' message boxes full of gibberish that has no apparent meaning, and it's hard to deduce what they actually say until analyzed.

Here is an example of such message box:

It turns out that it’s not a crash – just a message telling the user:

  • 제거를 위해 모든 익스플로러창을 닫게됩니다.

which – after Google translation – says:

  • To remove any Explorer window it will be closed.

I don't speak Korean, but judging by the GT output I assume it is just a notification that the program will close all (Internet) Explorer windows before it removes some app. Whatever the exact meaning, one has to make sure it is analyzed so that the sample (and potentially similar ones) actually works properly.
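When the 'garbage' is just an ANSI string rendered under the wrong codepage, the original text can often be recovered automatically by re-encoding it with the codepage it was shown under and decoding it with a few likely source codepages. The cp1252/cp949 pair below is only an assumption fitting this Korean example; in practice several candidates have to be tried:

```python
CANDIDATE_SOURCE_CODEPAGES = ['cp949', 'cp932', 'cp936', 'cp950', 'cp1251']

def recover_mojibake(garbled: str, shown_as: str = 'cp1252'):
    """Try to undo a wrong-codepage rendering of a captured message box string."""
    raw = garbled.encode(shown_as, errors='replace')
    candidates = []
    for cp in CANDIDATE_SOURCE_CODEPAGES:
        try:
            candidates.append((cp, raw.decode(cp)))
        except UnicodeDecodeError:
            continue
    return candidates
```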

Many samples crash – this is overwhelming, and I think the only way to handle it is to signal in the report that the app has crashed. Again, not a trivial problem to solve. You may detect a Dr Watson launch, .NET crashes, or other default crash windows popping up, but you can't expect them all the time – many frameworks handle crashes gracefully and as such, the sandbox needs to recognize these properly. There are also samples I came across that don't even indicate the crash – one needs to recognize it from the flow of code execution, i.e. the program's business logic following the 'something is wrong' path (f.ex. some installers do it).
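A very rough first pass at the 'default' cases could look like the sketch below: watch for the well-known crash handler processes and a few stock crash dialog phrases. It assumes psutil is available in the guest and that the phrases match the OS and language in use – both assumptions, and as the next paragraph shows, nowhere near sufficient on their own:

```python
import re
import psutil

CRASH_HANDLER_PROCESSES = {'werfault.exe', 'drwtsn32.exe', 'dwwin.exe', 'dumprep.exe'}
CRASH_CAPTION_PATTERNS = [
    re.compile(p, re.I) for p in (
        r'has stopped working',
        r'has encountered a problem and needs to close',
        r'unhandled exception',
    )
]

def crash_indicators(window_texts):
    """Return True if a crash handler is running or a stock crash dialog is visible.

    `window_texts` is an iterable of (caption, body) pairs, e.g. from the
    dialog collector sketched earlier.
    """
    running = {p.info['name'].lower()
               for p in psutil.process_iter(['name']) if p.info['name']}
    if running & CRASH_HANDLER_PROCESSES:
        return True
    for caption, body in window_texts:
        text = f'{caption} {body}'
        if any(rx.search(text) for rx in CRASH_CAPTION_PATTERNS):
            return True
    return False
```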

If you think crash detection would only require a quick regexp over a couple of commonly used 'crash' words ('error', 'crash', 'corrupt', etc.) – think again – here are some examples of such messages, and in reality there are hundreds of variants, if not more:

Last, but not least – some malware intentionally shows fake message boxes. These may contain misleading information and can confuse naive engines looking for specific keywords or even phrases – relying on the messages alone is not enough to make the final call.

Yup, sandboxing can be perceived as pretty hopeless :)