Choosing your reverse engineering poison (a very subjective post on tools!)

November 10, 2018 in Tips & Tricks

15-20 years ago it was pretty easy to choose your reversing tools. Like in the good old 'Soviet Russia' meme, you wouldn't choose them – they would choose you.

You would probably get hold of a (legal or illegal) copy of SoftICE and W32Dasm; maybe a few years later you would grab a copy of DataRescue's IDA, LordPE, PETools, Stud_PE, PEiD, and finally OllyDbg (which has probably been the most popular user-mode debugger out there for the last 15 years), Windows spies, file converters, etc. Then tons of Olly plugins, scripts, maybe a bit of WinDbg, and that's it. And of course lots of reading of MSDN, crack forums, and the then-emerging 'modern' forums like rootkit.com, Sysinternals, OpenRCE, etc.

The year is now 2018.

There are TONS of tools. Not only because everyone who can code a bit releases a tool now :-P, but also because it's a really mature and busy field. Thanks to the malware and cybersecurity 'markets' as a whole, we are really spoiled with a lot of fantastic tools.

There are a number of platforms bundling lots of tools, e.g. REMnux and, to some extent, SIFT; disassembler engines (Capstone, BEA, etc.); IDA has fantastic plugins (the Hex-Rays Decompiler for x86, x64, ARM, then there is Diaphora, Pigaios, the FLARE Team Reversing Repository, and lots of other cool plugins); JEB, Hopper, Binary Ninja, radare; lots of hex editors, resource editors, .NET decompilers/debuggers/deobfuscators (ILSpy, dnSpy, de4dot), unpackers, locally run sandboxes, file identifiers (DiE, oletools), process viewers (Process Explorer, Process Hacker), process monitors (Procmon), system monitors (Sysmon, Event ID 4688 with command-line logging), API monitors (rohitab's), WMI monitors, diffing tools (Regshot), event tracers, dumpers and analyzers for physical and logical memory (Volatility), hook analyzers (XueTr, PE-sieve), hiding tools (the old HideToolz), Java decompilers, Flash decompilers, rootkit detectors, network analysis tools (Wireshark), memory viewers and editors, game analysis tools (Kernel Detective), Yara, sample management tools, quarantine file decrypters, deobfuscators, malware configuration decrypters, browser developer tools, and lots and lots of other goodness. Plus fakenet tools, VPNs, Tor redirectors, pewpew maps, sinkholing tools – even the data analysis tools are better. Plus VirusTotal, sample-sharing groups, Twitter, Slack channels, and cons. And finally – closely related to analysis – a large repo of forensic tools, both for file/malware analysis and enterprise mammoths helping to detect stuff early (EDR) and cross-reference it with other sources (threat intel tools).

We work so much faster now.

When we look at the reversing man-hour, it's so much easier than before. We have moved to rely heavily on sandboxes and VM snapshots. Instead of rebooting the system every time we make a serious mistake, we can just revert to the last snapshot, re-image the system, etc. Some sandboxes offer a full trace of what has happened, including network, memory, and PE dumps. Add fast SSD drives to that and… making reversing mistakes is now really cheap. This lets you test your luck and step through some code quicker, as you can almost instantly determine whether it does what you want/assume. And if it doesn't, you can quickly revert.

Despite all this, the question remains: how do you choose the best toolkit for your manual analysis?

It's actually extremely hard, and it depends on what you do. Again, a decade or two ago you would have had an easy choice. Today a lot depends on the circumstances and what you do. Firmware analysts and iOS researchers will use different tools than an AV researcher, an AV researcher analyzing a state-sponsored campaign, or a forensic investigator who just needs to get a basic understanding of a malicious file.

Time for the ‘me’ bit.

Despite the passing years, I remain relatively conservative.

I use a core set of tools that I rely on heavily and often, and only when I hit a wall do I go and explore the alternatives. It's probably not the best strategy toolkit-wise, but it keeps me away from chasing after the latest and probably not necessarily the best (seriously, some tools from the early noughties still work like a charm, while today's .NET monsters often crash randomly and don't really have that 'finesse' of the good old crackscene progs). Okay, truth be told, I do keep an eye on my fav tools and try to get the new versions when possible. BUT. I keep older versions as well – when new versions come out, the old plugins often stop working, etc. This is definitely the case with IDA. It's pretty annoying, actually. Sometimes it's just easier to launch an older version of IDA with the older plugins working than to make the old plugins work with the new version.

First things first, I don't use Windows Explorer at all. I use a file manager that replaces Explorer: Total Commander is awesome for this task, and I am a strong believer (since the good old times of Norton Commander, Volkov Commander, DOS Navigator) that it offers the easiest and most reliable approach to managing files without giving you much chance to accidentally execute the malware; it also streamlines many operations for anyone who works with files a lot. You just can't go wrong with this tool. IMHO no command line or drag-and-drop can replace it.

IDA + Hex-Rays plugins remain my main set of disassemblers and decompilers. I love it and hate it. I wish it did string recognition better, but when it comes to analyzing code, I don't know a better tool. As I said before, I sometimes run an older version of IDA because the newer one pisses me off (plugins not working :).

For debugging, my user-mode debugger was Olly for a long time, but I find myself relying on x64dbg more and more. It supports both 32- and 64-bit code, and it is not as ugly as WinDbg. x64dbg has a lot of the feel of good ol' Olly, with new features being added on a regular basis (less now, though – I think the project has fewer releases than, say, 2 years ago). It has a lot of cool features like early system breakpoints or the built-in Scylla.

For VM snapshots I use VMware Workstation.

Why?

I tried Virtual PC and VirtualBox, and in my opinion VMware is the smoothest. I've been using it for a long time, and while it has hiccups every once in a while, it is highly reliable. It's also easily configurable, and you can manage the risk of being detected by one of the anti-* tricks.

I kinda don't talk about hardware, but let me mention one more time: if you have a chance to use an SSD, please do so. When you revert a snapshot stored on an SSD for the first time, you will know what I am talking about.

When it comes to the OS – this is controversial. Should we analyze on Win 10? Win 8? Win 7? Or even Win XP? Believe it or not, I still do a lot of work on XP and Win7. (Note: I know I ignore OSX and Linux and even browsers, plus iOS, Android, etc. – but most malware is still on Windows, so it's obviously a biased, Windows-centric post; perhaps I will expand on other platforms if there is interest.)

Why WinXP and Win7?

They are 'light', they don't clutter your logs with telemetry or your UI with unnecessary gimmicks like anything Win8+, and… they are less restrictive, security-wise. I only use Win10 when I have to.

These are quick stats on the number of files you can find in C:\Windows (a quick sketch of how to count them yourself follows the list):

  • WinXP: 22,698
  • Win7: 64,800
  • Win10: 117,260
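
As promised, a quick Python sketch of how you might reproduce that kind of count (just an illustration – the exact numbers will obviously vary with patch level and installed components):

```python
# Count all files under C:\Windows recursively; directories we cannot read
# are silently skipped by os.walk, so treat the result as a rough figure.
import os

def count_files(root=r"C:\Windows"):
    total = 0
    for _, _, files in os.walk(root):
        total += len(files)
    return total

if __name__ == "__main__":
    print(count_files())
```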

When you diff snapshots using Regshot and similar tools, it will take much longer to 'screen' Win10 than XP, and you will get tons of additional noisy artifacts. And I know, XP is outdated, but most samples still run on it, or you can modify them to make them run! No ASLR, and quick reverts may save you a lot of time.
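To make the 'diff snapshots' idea concrete, here is a naive sketch of the before/after comparison such tools do – file system only, no registry, so nowhere near what Regshot actually covers, but it shows why a Win10 image gives you so much more to wade through:

```python
# Take a file listing before and after running a sample, then report what
# appeared or disappeared. Real snapshot-diffing tools also track the
# registry and modified files; this only illustrates the principle.
import os

def snapshot(root=r"C:\Windows"):
    paths = set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            paths.add(os.path.join(dirpath, name))
    return paths

before = snapshot()
# ... detonate the sample here ...
after = snapshot()

for path in sorted(after - before):
    print("+", path)
for path in sorted(before - after):
    print("-", path)
```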

For file identification I rely on an automated tool I wrote myself, and I use a really great tool called DiE; I don't use PEiD and I don't recommend it. It's old and you will see a lot of FPs. Protectors are not that commonly seen anymore either, so you can do yourself a favor by ditching this tool. Same goes for TrID. It's almost a joke that this tool is still used by VT (i.e. what does it mean that 'there is a 10% chance that this file is of XYZ type'? You can't say that in 2018 – it doesn't fly in the reverse engineering or forensic world).
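For illustration only – this is not my identification tool and not how DiE works internally – here is the kind of 'first look' you can script yourself with the pefile library; the entropy threshold is an arbitrary assumption:

```python
# Quick triage of a suspected PE: confirm it parses, print basic header
# facts, and flag high-entropy sections that often (not always) mean packing.
import sys
import pefile

def triage(path):
    try:
        pe = pefile.PE(path, fast_load=True)
    except pefile.PEFormatError:
        print(f"{path}: not a valid PE file")
        return
    arch = "x64" if pe.FILE_HEADER.Machine == 0x8664 else "x86/other"
    print(f"{path}: PE ({arch}), {pe.FILE_HEADER.NumberOfSections} sections")
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        entropy = section.get_entropy()
        note = "  <-- high entropy, possibly packed" if entropy > 7.2 else ""
        print(f"  {name:<10} entropy={entropy:.2f}{note}")

if __name__ == "__main__":
    triage(sys.argv[1])
```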

For sample recognition, I use a bunch of Yara rules. Some are mine, some are scavenged from the net. BUT… for manual work I rarely use them, because I make an effort to look at each file myself. Yup. There are polyglot files, there are tricky files, there are files that are wrappers (most often – so what you see is not what you get; bye bye, tools that look at the PE file and its 'static' properties), and there are files that are simply corrupted. You eyeball, you run, you know.
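That said, if you want to fold Yara into your own triage scripts, a minimal sketch with the yara-python bindings looks something like this (the rule here is a throwaway example, not one of the rules I actually use):

```python
# Compile an in-line rule and run it against one file; a real setup would
# compile whole rule directories and feed matches into sample management.
import sys
import yara

RULE = r"""
rule mz_with_url
{
    strings:
        $mz  = { 4D 5A }            // 'MZ' at the very start
        $url = "http://" ascii wide
    condition:
        $mz at 0 and $url
}
"""

rules = yara.compile(source=RULE)
for match in rules.match(sys.argv[1]):
    print("matched:", match.rule)
```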

For process viewing and exploring, I use Process Hacker (PH). Process Explorer (PE) was good for many years and was my fav until I started working with PH – its feature list is way longer than PE's, and its actual source code is available. There is nothing better for learning a bit of Windows internals than using a tool that smoothly gives us access to many, usually hidden, properties of running processes… Highly recommended.
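If you just need that kind of overview inside a script (say, a quick check on the analysis VM), a few lines of psutil give you the basics – a minimal sketch, nothing close to what PH actually exposes:

```python
# List running processes with PID, name, image path and command line -
# roughly the first columns you glance at in Process Hacker.
import psutil

for proc in psutil.process_iter(attrs=["pid", "name", "exe", "cmdline"]):
    info = proc.info
    print(f'{info["pid"]:>6}  {info["name"] or "?":<25} {info["exe"] or "?"}')
    if info["cmdline"]:
        print("        " + " ".join(info["cmdline"]))
```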

For API analysis, I rely on rohitab's API Monitor for 64-bit processes, and for 32-bit processes I use my own monitoring tool. If mine fails, I fall back to Rohitab's and some old-school tools, e.g. the csrss process watcher (I wonder how many ppl know that one).

For memory dumping I use Process Hacker, Scylla, sometimes LordPE, sometimes userdump, sometimes my own tools.
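For completeness, here is the bare-bones version of what all these dumpers do underneath – open a process and read a memory region with ReadProcessMemory. A Windows-only ctypes sketch that takes the PID, address and size as command-line assumptions and does none of the PE reconstruction that Scylla or LordPE take care of:

```python
# Read one memory region from another process and write it to disk.
# Usage (illustrative): python dump_region.py <pid> <hex_address> <size> out.bin
import ctypes
import sys
from ctypes import wintypes

PROCESS_VM_READ = 0x0010
PROCESS_QUERY_INFORMATION = 0x0400

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)
kernel32.ReadProcessMemory.restype = wintypes.BOOL
kernel32.ReadProcessMemory.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                       ctypes.c_void_p, ctypes.c_size_t,
                                       ctypes.POINTER(ctypes.c_size_t))
kernel32.CloseHandle.argtypes = (wintypes.HANDLE,)

def dump_region(pid, address, size, out_path):
    handle = kernel32.OpenProcess(
        PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        buf = ctypes.create_string_buffer(size)
        read = ctypes.c_size_t(0)
        if not kernel32.ReadProcessMemory(handle, address, buf, size,
                                          ctypes.byref(read)):
            raise ctypes.WinError(ctypes.get_last_error())
        with open(out_path, "wb") as f:
            f.write(buf.raw[:read.value])
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    dump_region(int(sys.argv[1]), int(sys.argv[2], 16),
                int(sys.argv[3]), sys.argv[4])
```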

For memory analysis, there is just one simple answer: Volatility.

For editing I rely on UltraEdit; it's both a text and hex editor, and for quick binary hacks it's pretty handy! When it comes to text editing, I really haven't found a better editor so far (I know, I know, vi exists :).

That’s pretty much it.

I also think Windows is a better platform for analysis than OSX or Linux. I know some people rely on a *NIX or even an OSX setup most of the time. BUT. There are so many tools on Windows that are far superior to those on OSX and Linux that it's impossible to be a fully capable reverser if we just stick to these two non-Windows platforms. And on that note, I must emphasize that I do NOT avoid OSX and Linux when they are needed. For example, some FOSS tools can only be used on Linux, and I have had my share of actually compiling these for my own purposes. It's not a pretty sight / time when you have to go and manually fix a lot of warnings and errors before you get a running tool, but this is how it works. One of the trickiest ever was a Lua decompiler that, to work properly, relies on the integer size being identical to the one stored inside the compiled Lua script. So, we need to be flexible.

And there is nothing really right or wrong here. It’s all about setting up an environment where you can comfortably and quickly analyze code.

Fundamentally, it’s your workflow and you just need to make sure it works for you.

I probably forgot a lot of cool tools or approaches, but as I said – the core remains the same. Everything else is really on demand. For example, if you do Office analysis, often Office 2003 will give you better insight than Office 2016. Then you can do some conversions between formats, save/downsave files as .rtf, .doc, .docx, and get some interesting insight. But these are more techniques than tools. So, a topic for a different post.

