Timestomping and event spoofing in the cloud?

January 23, 2019 in Silly

When events are logged on the endpoint, the log-writing code places an unspoken trust in the OS: the trust that the OS will provide an accurate timestamp, so the log can be valid. This trust is very important, because without it the logs are not worth a dime… Something happened, but no one knows exactly when.

Obviously, anyone who ever changed time on their system, cracked a basic, time-based software protection, used a timestomp tool, or manually edited binary timestamps knows that these can be manipulated.
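At the simplest level, this sort of manipulation needs nothing more than the OS's own file APIs. A minimal, hypothetical sketch (plain POSIX-level timestomping via Python's `os.utime`; note that dedicated tools on NTFS typically go further and also touch the `$FILE_NAME` timestamps, which this call cannot reach):

```python
import os
import tempfile
from datetime import datetime, timezone

# Create a scratch file whose timestamps we will backdate.
fd, path = tempfile.mkstemp()
os.close(fd)

# Pick a fake timestamp in the past (here: Jan 1, 2010 UTC).
fake = datetime(2010, 1, 1, tzinfo=timezone.utc).timestamp()

# os.utime rewrites the access and modification times in one call;
# no special privileges are needed for a file we own.
os.utime(path, (fake, fake))

st = os.stat(path)
print(datetime.fromtimestamp(st.st_mtime, tz=timezone.utc))  # 2010-01-01 00:00:00+00:00

os.remove(path)
```

Any timeline built purely from such timestamps inherits whatever the attacker chose to write there.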

I wonder to what extent forensic tools have been tested to see how such manipulations affect the results of analysis, in particular the resulting timelines, and perhaps even how they have affected some court cases. If you know of any research on this subject, I'd love to hear from you.

The complications the network brings to the table are even more interesting. If we can barely trust the timestamps on the host, what about a host managed over the network? Assuming we have a number of systems that collect and cache/forward events from the endpoint to the server, could these events be cleverly timestomped / manipulated / spoofed in any way?

This is purely theoretical; I have not tested any security solution for susceptibility to this kind of attack, but I thought it's an interesting idea to throw out there. I will be happy if anyone debunks it, or perhaps someone on the red team side finds enough interest in it to take it further and maybe even develop a POC. Plus, I bet I am not the only one thinking of this, so maybe there already exists a body of knowledge that deals with it, and I just missed it?

Say, we have two systems, A and B. A is clean, and B is under the control of an attacker.

If the logs:

  • are collected on the endpoint, with events either cached and sent in batches, or forwarded immediately
  • are collected / forwarded in their ‘final’ stage, i.e. data-enriched and including all the fields that must be transmitted to the central repository – i.e. tuples ready to be stored on the server side
  • can be manipulated as tuples (on disk or in memory)
  • are ingested by the central repository as they arrive, i.e. without verifying the content of the individual events, e.g.:
    • are all timestamps OK?
    • is the hostname/IP in the logs the same as the hostname/IP that claims to be the source of the logs?
    • perhaps other checks (integrity, etc.)

If this is the case, then there is a theoretical possibility of host B being able to inject fake events for host A and have them stored on the central server.
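What would such a forged tuple look like? A purely hypothetical sketch below; the field names and hosts are invented for illustration and are not taken from any real product's schema. The point is simply that if the collector trusts whatever JSON arrives, host B can claim to be host A with a timestamp of its choosing:

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical event schema -- field names made up for illustration,
# loosely modeled on a typical endpoint telemetry tuple.
def make_event(hostname, ip, message, when):
    return {
        "hostname": hostname,          # claimed source host, not verified
        "ip": ip,                      # claimed source IP, not verified
        "timestamp": when.isoformat(), # claimed time, not verified
        "message": message,
    }

# Host B (attacker-controlled) crafts an event that claims to come
# from clean host A, backdated by a week.
fake_when = datetime.now(timezone.utc) - timedelta(days=7)
forged = make_event("host-a.corp.example", "10.0.0.11",
                    "cmd.exe spawned powershell.exe", fake_when)

payload = json.dumps(forged).encode()
# A real PoC would now ship this to the collector, e.g.:
#   sock = socket.create_connection(("collector.corp.example", 6514))
#   sock.sendall(payload)
# Left commented out here -- there is no real collector in this sketch.
print(payload.decode())
```

If the repository performs none of the checks listed above, this blob is indistinguishable from a genuine event from A once it lands in the store.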

The consequences could be very interesting:

  • Alerts about bad stuff would be coming from A instead of B.
  • Timelines of both systems could be manufactured / redacted.
  • Many man-hours could be lost chasing unicorns and doing manual forensics.

Again, I have not tested it, and it could be just a silly idea that cannot be implemented in practice, because the logs are handled properly by all the security solutions; or perhaps there is a flaw in my reasoning (e.g. some sort of non-repudiation / integrity check / challenge for the sources, timestamps and content has been present in all security solutions for ages?).
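For what it's worth, one way such a non-repudiation check could work is per-event signing: the agent on each host signs every event with a host-specific key, and the server rejects anything whose signature does not verify. A minimal sketch using stdlib HMAC (key provisioning and rotation, the genuinely hard part, is handwaved, and if B fully compromises A's key this check buys nothing):

```python
import hmac
import hashlib
import json

# Per-host secret, provisioned out of band (hypothetical value).
HOST_A_KEY = b"example-shared-secret-for-host-a"

def sign_event(event: dict, key: bytes) -> dict:
    # Canonicalize the event (sorted keys) before signing.
    body = json.dumps(event, sort_keys=True).encode()
    signed = dict(event)
    signed["sig"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return signed

def verify_event(event: dict, key: bytes) -> bool:
    event = dict(event)
    sig = event.pop("sig", "")
    body = json.dumps(event, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

genuine = sign_event({"hostname": "host-a",
                      "timestamp": "2019-01-23T10:00:00Z",
                      "message": "logon"}, HOST_A_KEY)
print(verify_event(genuine, HOST_A_KEY))   # True

# Host B tampers with the timestamp; the signature no longer matches.
tampered = dict(genuine, timestamp="2018-01-23T10:00:00Z")
print(verify_event(tampered, HOST_A_KEY))  # False
```

Whether any given solution actually does something like this is exactly the question the post is asking.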

In any case, it’s the good ol’ idea of IP address spoofing applied to the events. Timestomping can be an added bonus.
