The Future of SOC

Over the last few years we have moved away from a SOC that used to be almost solely focused on network and Windows events and artifacts (probably a strong fintech bias here) towards the Frankenstein’s monster we see today – very fractured, multi-dimensional, multi-platform, multi-architectural, multi-device and multi-everything-centric, plus certainly multi-regional (regulated markets, data across borders) and privacy-savvy, on- and off-prem, covering *aaS and endpoints/servers/mobile/virtualization/containers/CI/CD pipelines – and did I mention multi-cloud, public and private environments, vendor vs. proprietary? – with a bonus of over-eager employees who keep sending ‘dangerous’ stuff to the SOC because they have been trained well to report <insert any suspicious event here>. And finally, one where NO ONE knows even the basics of all the existing, rapidly emerging, and increasingly confusing technologies anymore, let alone the gamut of ideas and solutions that help to address (or at least detect) many of these security problems.

I think we moved away from a fairly well-understood model that held between… let’s say 2000–2018 /COMFORTABLE/ towards one that is (as of today at least) 2018+ and full of unknown unknowns /VERY UNCOMFORTABLE/…

How do we deal with it today?

Usually a bridge, a Slack or Teams channel with 100-200 people on it.

I think divide and conquer is the only way to deal with it. Also, more work than ever before has to focus on building bridges with the internal owners of technologies and with architects. This includes, for instance, a lot of DevSecOps work, shifting left, early involvement in app development improvements and release cycles, security-oriented feature and LOG requests, a heavy red-team footprint on breaking it all, and – in opposition to the previous decade – lots of very hands-off work. Lots of commanding and coordination.

Borrowing the quote from dre: “Blue Teaming is 90 percent social capital today.”

Times have definitely changed…

And in parallel:
– stronger-than-ever reliance on vendors
– real (as in ‘old school’) cyber skills are in steep decline — what took years to acquire and master is now gamified by vendor offerings that dumbify a lot of problems and requirements; I am not against it, because we need help, and while it sometimes comes in the form of b/s and extrapolations, we must admit that many non-technical analysts today, even w/o having read a single RFC in their lives, can easily handle many incidents by just… talking, and via vendor consoles – this would have been impossible 10 years ago
– seriously, the tools of today are fantastic: advanced sandboxes, threat intel portals, bug bounty portals — and the whole social media sharing scene makes it far easier to find and share information that used to be available only to a few in the past
– the environments are getting more complicated — we need to work towards universal playbooks that cover heavily regulated regional markets
– portability is key (works in one place -> works everywhere w/o many changes) — this affects multiple instances of systems of record, SOPs, detections, metrics (again, regional/regulated markets, plus the ability to quickly recover in case of a breach)
– Follow the Sun is now more complicated, as it also includes Follow the Regulated Market
– from log deprivation to log over-saturation — time for some log governance, at the source? common models for field naming? not only naming conventions, but also… and I really mean it… one, common, universal… TIMESTAMP FORMAT? (see the sketch after this list)
– optimization efforts should be the norm — most detection engineering and threat hunting teams add to the workload; we need an opposite force that asks — hmm, is this really necessary? the same goes for a ruthless approach towards email fatigue — convert to tickets or kill at the source, disable, decommission
– how many emails are your workflows and automations sending today? can you trim that down?
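
One way to picture the ‘one universal TIMESTAMP FORMAT’ point: normalize everything to RFC 3339 UTC as close to the source as possible. Below is a minimal sketch; the input formats are illustrative assumptions, so swap in whatever your own log sources actually emit.

```python
# Minimal sketch: normalize assorted log timestamps to one universal
# format (RFC 3339, UTC) as close to the source as possible.
# The input formats below are illustrative assumptions.
from datetime import datetime, timezone

KNOWN_FORMATS = [
    "%Y-%m-%dT%H:%M:%S%z",    # ISO 8601 with numeric offset
    "%d/%b/%Y:%H:%M:%S %z",   # Apache access-log style
    "%b %d %H:%M:%S",         # classic syslog (no year, no zone)
    "%m/%d/%Y %I:%M:%S %p",   # US style with AM/PM
]

def normalize(ts: str, assume_year: int = 2024) -> str:
    """Return the timestamp as RFC 3339 UTC, e.g. 2024-05-01T10:00:00Z."""
    for fmt in KNOWN_FORMATS:
        try:
            dt = datetime.strptime(ts, fmt)
        except ValueError:
            continue
        if dt.year == 1900:               # syslog omits the year
            dt = dt.replace(year=assume_year)
        if dt.tzinfo is None:             # naive timestamp: assume UTC
            dt = dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    raise ValueError(f"unrecognized timestamp: {ts!r}")

print(normalize("01/May/2024:12:00:00 +0200"))  # -> 2024-05-01T10:00:00Z
```

The hard part, of course, is not the conversion but enforcing it at the source, before the logs fan out into a dozen systems of record.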

The above is what scares me. It’s currently hardly manageable, and it’s not sustainable. It’s whack-a-mole on steroids. We were meant to stop the whack-a-mole. And not only is it happening now, it has intensified a lot in recent years, and imho this trend will continue. Just when we all started to really like the idea of having EDRs at our disposal… the *aaS happened, and there is no way back. Suddenly all our incident response playbooks and SOPs need to focus on a completely different type of threat. Lots of this work is actually focused more on proper access management than on infiltration by APT actors via well-known TTPs. A lot of work is also focused on shared responsibility — when you deal with alerts on-prem, on endpoints, it’s all nice and cozy, but when you are *aaS, there is a moment (say, an external password spray hitting a client application running on a server you host) when one has to decide where the transfer of security responsibility occurs. Is it a threat to the hosting environment? To the instance of the app? Both? It’s… complicated.

What is the SOC of the future?

I think there is no SOC in the future. There is a cross-organizational incident response committee (you don’t wanna know how much I hate this word!) that actively engages in tackling the issues at hand, and that ‘incident-commands’ the respective teams leading those issues to closure. Security becomes part of day-to-day operations. Representatives from many functions actually talk to each other, often, and the ‘old security’ in isolation is no longer a topic of any conversation. What is, though, is addressing the ‘are we affected?’ question on a VERY REGULAR basis. To help with that, advanced asset inventories covering hardware, software, *aaS, SBOMs, and packages, all aiming at exposure assessment, potential containment, and closing communication loops, are a MUST. It’s no longer, strictly speaking, a technical problem. It’s a problem that has a stage, and that stage is not only political, but also visionary. Whoever puts in the effort to collect and maintain the best asset inventory, then predict, plan to contain, and finally close, will be the winner of the many brownie points to be distributed in this area in the future.
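
To make the ‘are we affected?’ loop concrete, here is a minimal, hedged sketch of that lookup: a toy in-memory inventory keyed by package name. All field names and data are made up for illustration; a real inventory would sit behind a CMDB/SBOM store, not a Python list.

```python
# Toy "are we affected?" lookup over an SBOM-style asset inventory.
# Hosts, packages, and versions below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    host: str
    package: str
    version: str

INVENTORY = [
    Asset("web-01", "openssl", "1.1.1k"),
    Asset("build-07", "log4j-core", "2.14.1"),
    Asset("api-03", "log4j-core", "2.17.2"),
]

def are_we_affected(package: str, bad_versions: set[str]) -> list[Asset]:
    """Return every asset running a known-bad version of `package`."""
    return [a for a in INVENTORY
            if a.package == package and a.version in bad_versions]

# Example: a new advisory names these versions as vulnerable
for a in are_we_affected("log4j-core", {"2.14.0", "2.14.1", "2.15.0"}):
    print(f"AFFECTED: {a.host} runs {a.package} {a.version}")
```

The point is not the code but the latency: if answering ‘are we affected?’ takes days of spreadsheet archaeology, the inventory has already failed.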

And that’s why the future belongs to the TRIAGE function.

That Omelas child, the punchbag, the scapegoat. The first line of defense, and the most important. Yet so often neglected.

Clear SOPs for Triage will help handle most of the incoming ‘requests’. You want that Triage team to be supported as hell. Their procedures must be simple, to the point, and with clear paths to both closure and escalation. Such a triage function will train the best IR practitioners of the future: jacks of all trades, outspoken, cooperative, and assertive.

The game is changing and we need to adapt. It’s time you take your Triage team out for a good dinner.

Adobe: JSX and JSXBIN files

I wrote about older Adobe scripting before. I recently discovered that Adobe products support scripting via the so-called ExtendScript language, with code stored either in a source-level JSX file or in its binary equivalent – JSXBIN (actually considered legacy at this stage). Add these file extensions to your watch list.
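
As a toy illustration of that watch list, here is a hedged sketch that filters file-creation events from an EDR/telemetry pipeline for ExtendScript extensions. The event shape (a dict with a "path" field) and the sample event are made-up assumptions, not any particular vendor’s schema.

```python
# Hedged sketch: flag file events concerning ExtendScript extensions.
# The event dict layout is an assumption, not a real sensor schema.
WATCHED_EXTS = (".jsx", ".jsxbin")

def is_extendscript_file(event: dict) -> bool:
    """True if a file event concerns a JSX/JSXBIN file."""
    return event.get("path", "").lower().endswith(WATCHED_EXTS)

# Hypothetical event as it might arrive from a sensor
evt = {"host": "WKS-042", "path": r"C:\Users\jd\Downloads\invoice.jsxbin"}
if is_extendscript_file(evt):
    print(f"review: {evt['path']} on {evt['host']}")
```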

The documentation [PDF warning] suggests that some security precautions are in place:

1.5 Activating full scripting features
The default is for scripts to not be allowed to write files or send or receive communication over a network. To allow
scripts to write files and communicate over a network, choose Edit > Preferences > General (Windows) or After Effects Preferences > General (Mac OS), and select the Allow Scripts To Write Files And Access Network option.
Any After Effects script that contains an error preventing it from being completed generates an error message from the application. This error message includes information about the nature of the error and the line of the script on which it occurred. The ExtendScript Toolkit (ESTK) debugger can open automatically when the application encounters a script error. This feature is disabled by default so that casual users do not encounter it. To activate this feature, choose Preferences > General, and select Enable JavaScript Debugger

I don’t have access to Adobe products, but these seem to be interesting features.

Defenders should look for:

  • unusual processes spawned by Adobe products
  • invocations of afterfx.exe – scrutinize its command line
  • on macOS: use of AppleScript to invoke DoScript
  • presence/invocation of autostart scripts in Adobe’s Startup and Shutdown folders (the exact location is unknown to me; if you have Adobe products, please let me know and I will update this post) – a hunting sketch follows below

1.6.5 Running scripts automatically during application startup or shutdown
Within the Scripts folder are two folders called Startup and Shutdown. After Effects runs scripts in these folders
automatically, in alphabetical order, on starting and quitting, respectively

Yes, it’s not a lot to pivot from, but if you have access to a large number of systems, you may want to keep an eye on process trees spawned around Adobe products. Just to stay ahead of the game.
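
Putting the bullets above together, here is a minimal, hedged hunting sketch. The folder locations and the -r/-s command-line switches are assumptions drawn from typical After Effects installs and its scripting guide; as noted above, verify the exact paths against a real install.

```python
# Hedged hunting sketch for JSX/JSXBIN abuse around After Effects.
# All paths and switches below are assumptions; verify locally.
import glob
import os

# Hypothetical candidate locations of the Scripts/Startup and Shutdown folders
CANDIDATE_PATTERNS = [
    r"C:\Program Files\Adobe\Adobe After Effects *\Support Files\Scripts\Startup\*",
    r"C:\Program Files\Adobe\Adobe After Effects *\Support Files\Scripts\Shutdown\*",
    "/Applications/Adobe After Effects */Scripts/Startup/*",
    "/Applications/Adobe After Effects */Scripts/Shutdown/*",
]

SCRIPT_EXTS = {".jsx", ".jsxbin"}

def autostart_scripts() -> list[str]:
    """List scripts that would run automatically at app start/quit."""
    hits = []
    for pattern in CANDIDATE_PATTERNS:
        for path in glob.glob(pattern):
            if os.path.splitext(path)[1].lower() in SCRIPT_EXTS:
                hits.append(path)
    return hits

def suspicious_afterfx_cmdline(cmdline: str) -> bool:
    """Flag afterfx.exe invocations that look like they run a script
    (-r/-s are an assumption drawn from the AE scripting guide)."""
    c = cmdline.lower()
    return "afterfx" in c and (" -r " in c or " -s " in c
                               or ".jsx" in c)  # ".jsx" also matches .jsxbin

for p in autostart_scripts():
    print(f"autostart script found: {p}")
print(suspicious_afterfx_cmdline(r"afterfx.exe -r C:\temp\evil.jsx"))  # True
```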

Other info: