
Detection, Evasion, and the Pursuit of Immutable Artifacts

You’re probably familiar with the classic thought experiment: If a tree falls in a forest and no one is around to hear it, does it make a sound? In cybersecurity, we can ask a similar question in SOC terms: If a log falls in the SIEM, does it generate an alert?

After years of drowning in telemetry, chasing ghost alerts, and fine-tuning SIEM rules, I’ve come to believe detection isn’t about more data — it’s about intentional signal design. So if a log does fall in the SIEM, does it matter? Should we focus on detecting the log falling, or something even more fundamental?

Since starting as a security researcher at CardinalOps, I’ve gotten broad visibility into how organizations across the globe approach detection and how they leverage our platform to engineer their rules. No matter how mature their programs are, the best detection engineering teams continuously evolve their mindset and shed older paradigms for a more agile, adaptive detection philosophy to stay ahead of adversaries.

My detection mindset has also shifted. In this post I’ll share some reflections on my journey as I shift my focus to quality over quantity, building rules with less noise, more context, and a sharper focus on what really matters in detection: immutable artifacts.

What Makes a Good Detection?

So what makes a “good” detection rule? A strong detection strikes a balance between precision and practicality. It’s not too broad (where it floods your SOC with noise), and not too atomic (where it misses broader malicious behavior). It’s forward-looking, resilient, and designed with real-world performance in mind—particularly in your SIEM of choice.

Here’s what I believe separates a solid detection from the rest:

  • Balanced Scope: Neither overly specific nor too general. It captures intent, not just artifacts.
  • Resilience by Design: Built to withstand tool variations, obfuscation, and minor changes in attacker tradecraft.
  • SIEM-Aware Engineering: Designed with cost, cardinality, and performance in mind—because a well-crafted detection is worthless if it overloads your pipeline.
  • Technique-Focused: Anchored in the behavioral essence of a technique—not just chasing volatile IOCs or superficial strings.

A strong detection isn’t just a rule — it’s a reflection of intentional design. After years of tuning, breaking, and rebuilding detection logic, the framework I’ve come to trust focuses detections on immutable artifacts.

What Are Immutable Artifacts?

In detection engineering, immutable artifacts are the golden signals—the unchangeable traces an attacker leaves behind, no matter what tool, technique, or obfuscation they use. Think of them as the fingerprints of behavior, not the clothes the attacker wears.

Here’s the core idea: no matter how an attacker performs a technique, there are certain artifacts they must generate to succeed. These are consistent, unavoidable (for now), and ideal for durable detections.

Unfortunately, most detection rules in public databases such as SIGMA, SnapAttack, and SOCPrime rely on artifacts that attackers can change. We should aspire instead to write detections that key on the immutable artifacts the attacker is going to leave behind 100% of the time, artifacts that are not subject to obfuscation, source code modifications, or anything else the attacker controls.

How to Write Better Detections (With Examples)

We tend to write detections based on isolated signals — specific process names, file paths, known command-line flags. But attackers aren’t working off a checklist. They’re navigating a graph of possibilities, finding new paths toward the same objective.

“Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.” – John Lambert

To outpace attackers, we need to shift focus from detecting how they do something, to what must happen no matter the method. This is where immutable artifacts come into play, again and again.

Example 1: Service Creation in Windows

Let’s dig into a common example: detecting service creation in Windows.

Traditionally, you’d look for Event ID 4697 in the Security log, or Event ID 7045 in the System log — a solid starting point. But is that the only way to detect this behavior? Not even close.

Shifting your focus to immutable artifacts anchors your detections on behaviors that must occur regardless of the tool or method. Remember, a tool is simply a representation of a technique, but the technique itself remains constant.

When detecting service creation, no matter the tool — PowerShell, sc.exe, WinRM, or a custom binary — every one of them will create a new subkey for the service under HKLM\SYSTEM\CurrentControlSet\Services. That’s your immutable artifact. That’s what lasts.
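
To make that concrete, here is a minimal sketch (Python, Windows-only, purely illustrative rather than a production detection). The Services key path is the real location where the Service Control Manager registers services; the snapshot-and-diff approach is just a quick way to watch the artifact appear regardless of which tool created the service.

```python
# Illustrative sketch: diff the Services hive before and after a service is
# created with ANY tool (sc.exe, PowerShell, WinRM, a custom binary).
# The new subkey is the immutable artifact; the diffing is only a demo.
import winreg

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

def list_services():
    """Return the set of service subkey names currently registered."""
    names = set()
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as key:
        subkey_count, _, _ = winreg.QueryInfoKey(key)
        for i in range(subkey_count):
            names.add(winreg.EnumKey(key, i))
    return names

before = list_services()
input("Create a service with whatever tool you like, then press Enter... ")
after = list_services()
print("New service subkeys (the immutable artifact):", sorted(after - before))
```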

Sure, this introduces challenges: enabling SACL auditing, managing increased storage, and handling more verbose logs. But resilient detection often comes with a price — and it’s worth paying if you want to detect the behavior, not the wrapper.

Example 2: Enabling RDP on Windows

If an attacker wants to enable Remote Desktop Protocol (RDP) on Windows, then to the best of my knowledge and based on examples from the wild, they MUST set the registry value HKLM\System\CurrentControlSet\Control\Terminal Server\fDenyTSConnections to 0. This happens no matter what tool or technique the attacker uses.

By contrast, indicators like process names, command-line flags, or file hashes can be renamed, encoded, swapped, or evaded. But the underlying action — the behavioral truth — can’t be hidden forever. So when building detections, focus on the traces the attacker can’t change.

For example, Metasploit’s implementation of this technique enables RDP through its own code path, and an attacker living off the land can accomplish the same thing with a one-line WMIC call. Different tools, different command lines, different process logs; yet in both cases the same artifact is generated on the host: the fDenyTSConnections value is written.

The registry value must be set to zero. That’s your immutable artifact. Whether the attacker uses reg.exe, PowerShell, WMIC, Metasploit, or a sneaky C2 implant, this registry value has to be flipped.
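
Here is a minimal sketch of the idea (Python, Windows-only, illustrative only). The value path below is the real, documented location; the point-in-time check just makes the artifact tangible. In practice you would anchor the rule on registry value-set telemetry for this path, such as Sysmon Event ID 13 or, with auditing enabled, Event ID 4657.

```python
# Illustrative check for the immutable artifact of RDP being enabled:
# HKLM\System\CurrentControlSet\Control\Terminal Server\fDenyTSConnections == 0.
# Whatever flipped it (reg.exe, PowerShell, WMIC, Metasploit, an implant),
# the value ends up the same.
import winreg

KEY_PATH = r"System\CurrentControlSet\Control\Terminal Server"
VALUE_NAME = "fDenyTSConnections"

def rdp_enabled():
    """Return True if the artifact is present (fDenyTSConnections set to 0)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _value_type = winreg.QueryValueEx(key, VALUE_NAME)
    return value == 0

if rdp_enabled():
    print("fDenyTSConnections is 0: RDP has been enabled on this host.")
```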

Example 3: Malicious PowerShell Use (The One Where PowerShell Lied!)

Early in my detection journey, I was proud of a rule I wrote to catch malicious PowerShell use. It looked for powershell.exe with a base64-encoded command—classic, right? It worked… until it didn’t.

One day, red team activity sailed right past my rule. Why? They’d used pwsh.exe (the executable name for PowerShell Core, starting with version 6.0), used a different encoding flag, and ran the payload through a dropper that launched PowerShell via System.Management.Automation.
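
In hindsight, my logic amounted to roughly the following (a simplified Python sketch of the rule’s intent, not the actual SIEM query; the sample command lines are made up for illustration):

```python
# A simplified sketch of the brittle rule: alert only when the image is
# literally powershell.exe AND the literal "-EncodedCommand" flag is present.
def brittle_powershell_rule(process_name, command_line):
    return (
        process_name.lower() == "powershell.exe"
        and "-encodedcommand" in command_line.lower()
    )

# The activity that sailed past it: a different binary, an abbreviated flag,
# or no PowerShell process at all (System.Management.Automation host).
print(brittle_powershell_rule("pwsh.exe", "pwsh.exe -enc <base64>"))    # False
print(brittle_powershell_rule("dropper.exe", "dropper.exe --stage2"))   # False
```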

My rule saw nothing.

Turns out, I’d anchored my detection to how the attack looked, not what it did. When I retraced the attack using Procmon, I realized it still made the same system-level changes: downloading a payload, writing to disk, and establishing outbound C2 via HTTP. These were the immutable artifacts — and I’d missed them because I was too focused on the wrapping paper.

Lesson burned into my brain: If you’re detecting tactics by name, you’re probably behind the attacker by three steps. Detect what must happen, not what might happen. Anchor your logic on detecting the immutable artifacts.

Lessons from the Progressive Learning Journey

I also believe security professionals need to understand that while some of us are born with a talent and mindset suited to this area, progressive, persistent learning is still king.

Don’t get discouraged because you can’t immediately jump to the methods and processes that the big-league players are using. It takes years to gain the knowledge and wisdom to do so properly. Take my journey, for example:

  • At first, I relied heavily on process names.
  • Then, I progressed to detecting based on command line patterns.
  • Eventually, I realized that even those were mutable. So I dug deeper — studying the tools themselves, testing them, tracing their impact. That’s where the real magic is: the system-level changes an attacker can’t avoid.
  • Each step forward required unlearning, relearning, and a lot of trial and error. For example, did you know that some commands are not logged in Event ID 4688, and that using the pipe (“|”) character in a single CMD command will split the event into two entries? Go further into the rabbit hole 🙂

So with that, here’s a high-level list of dos and don’ts to incorporate into your detection approach:

Detection Do’s

  • Use tools such as Procmon that trace the actual changes (registry, file operations, network) an attack tool makes; this will help you understand its immutable artifacts.
  • In the context of process creation logs, account for all known command parameter variations (PowerShell’s “EncodedCommand” can be invoked via at least 24 different flag variations!); see the sketch after this list.
  • Use adversary simulation platforms and tools (Atomic Red Team is a great place to start).
  • Frequently reference best practices, like MITRE’s Summiting the Pyramid methodology, for creating more durable detections. 
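
On that parameter-variation point, here is a minimal Python sketch of why hardcoding a single flag spelling is fragile. PowerShell’s parameter binding accepts unambiguous prefix abbreviations, so -EncodedCommand can appear as -enc, -enco, -encodedc, and so on; treating any prefix of length two or more as a hit is a simplifying assumption for illustration, not an exhaustive or false-positive-free rule.

```python
# Illustrative matcher: flag any command-line token that is a prefix
# abbreviation of -EncodedCommand (e.g. -enc, -enco, -encodedcomman).
def looks_like_encoded_command(command_line):
    for token in command_line.split():
        if token.startswith("-"):
            flag = token[1:].lower()
            if len(flag) >= 2 and "encodedcommand".startswith(flag):
                return True
    return False

print(looks_like_encoded_command("powershell.exe -enc <base64>"))       # True
print(looks_like_encoded_command("pwsh.exe -EnCoDeDcOmMaNd <base64>"))  # True
print(looks_like_encoded_command("powershell.exe -Command whoami"))     # False
```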

Detection Don’ts

  • Rely on preconceived notions of detection standards without verifying and testing them yourself — this is a mistake that caused me to fail many times.
  • Blindly follow detection databases (SIGMA, Elastic, Splunk). They are such a fantastic contribution to the community, but I treat them as lead/idea generators that need to be tuned to your own environment.

A More Durable Detection Philosophy

So… if a log falls in the SIEM, does it generate an alert? Maybe. But that’s not the real question anymore. The real question is: Will that alert mean something? Will it be heard in a sea of noise — or buried in the rubble of false positives and misfired logic?

The point of this article is to challenge security professionals (myself included) who tend to default to detecting “known” attacker tools. Instead, focus on what the attacker is going to leave behind 100% of the time: the immutable artifacts that are not subject to obfuscation, source code modifications, or anything else the attacker controls.

This year, my detection philosophy is less about catching every possible log and more about listening for the ones that matter. It’s about anchoring detection in behaviors, not the wrapping of those behaviors. It’s about tuning for resilience, not volume. And about accepting that a great detection isn’t one that looks cool — it’s one that works when it matters most.

Detection engineering isn’t a checklist anymore. It’s a design discipline. And in this forest of signals, I’d rather hear one meaningful alert than a thousand that lead nowhere.

TL;DR: The Immutable Artifacts Detection Manifesto

  • 🎯 Detect intent, not syntax.
  • 🔒 Anchor logic in Immutable Artifacts — not tool names or strings.
  • 📉 Noise kills. Tune hard.
  • 🧠 Understand the system before you try to protect it.
  • 🧰 Test everything — especially the “standard” rules.
  • 🧭 Map to behavior, not branding. A tool is not a threat.
  • 🔍 Every detection is a hypothesis. Validate it with telemetry.
  • 🛠️ Make detections that will still matter when the TTPs evolve.