Is the “intelligence” in Threat Intelligence actually a misnomer?
Intelligence implies analyzing and interpreting raw, unprocessed information to make decisions and solve problems. Information becomes intelligence when it’s actionable.
That’s the gap in most threat intelligence implementations. Practitioners struggle to actually do something with it. It’s difficult to operationalize threat intelligence to guide strategic decisions, like implementing purposeful controls, refining detection rules, and holistically managing exposures.
In this post, we’ll unpack the key factors holding threat intelligence back from its full potential and look at how AI can fill the gaps.
The Promise and Pitfalls of Threat Intelligence
The promise of threat intelligence is clear. You get in-depth insights on the latest adversary behaviors and TTPs, emerging threats, APT profiles, attack campaigns, and the latest CVEs. In theory, this helps you anticipate attacks, prioritize risk, and align exposure management efforts with real-world adversary tradecraft.
In reality, threat intelligence often fails to live up to this promise. Why? Because the best way to operationalize threat intelligence is not always clear, even for mature security organizations. Let’s unpack some of the main challenges.
Formatting and Structure
Threat intelligence comes in wildly different formats: PDFs, blog posts, CSV files, JSON feeds, email newsletters, and more. Some vendors provide lengthy reports with paragraphs of in-depth descriptions of attacker behavior, plus detailed countermeasure recommendations. Others provide sparse indicators of compromise (IOC) data feeds.
Vendors that offer the prose-based, descriptive reports favor narrative approaches, like “this attack employed base64-encoded PowerShell for detection evasion.” That’s great until you need to revisit the report to quickly reference the specific command or string the adversary used. The paragraphs describe what it is, but don’t actually include the string. Also, different vendors describe behaviors at varying levels of abstraction. Some use specific ATT&CK technique names, while others don’t mention MITRE at all. Mapping these behaviors manually is a subtle task, which makes it difficult to standardize processes.
Meanwhile, feed-based TI vendors provide machine-readable data, most commonly JSON formatted IOC feeds designed for automated ingestion into other security tools. That’s all good, until a junior analyst with minimal knowledge of JSON syntax and limited context on your threat landscape and attack surfaces has to make sense of it all.
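For illustration, here’s what minimal automated handling of such a feed might look like. The feed shape below (type, value, confidence, and tag fields) is a hypothetical simplification; real vendor schemas vary widely:

```python
import json

# Hypothetical IOC feed entries; real vendor schemas differ significantly.
feed = json.loads("""
[
  {"type": "sha256", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
   "confidence": 85, "tags": ["APT29"]},
  {"type": "domain", "value": "bad.example.com", "confidence": 40, "tags": []},
  {"type": "ipv4", "value": "203.0.113.7", "confidence": 90, "tags": ["C2"]}
]
""")

def high_confidence(iocs, threshold=80):
    """Keep only indicators at or above the confidence threshold."""
    return [i for i in iocs if i.get("confidence", 0) >= threshold]

for ioc in high_confidence(feed):
    print(f'{ioc["type"]}: {ioc["value"]}')
```

Even this trivial filter encodes a judgment call (the confidence threshold) that a junior analyst would otherwise have to make by hand, entry by entry.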
There’s no right or wrong format. They all have their place. But the lack of standard formatting increases reliance on human interpretation and intuition, making automation and scale difficult to achieve.
Vendor Diversity and Prioritization
Enterprise security teams commonly cobble together several TI sources. Developed teams with mature SOC processes may actually need all the different formats mentioned above for their key use cases. Unfortunately, this introduces another set of challenges. Now your team has multiple collections of reports to synthesize and prioritize (not to mention multiple vendors and contracts to manage).
We’ve covered the formatting differences, but the general focus can also differ significantly between vendors. Some specialize in technical IOCs, while others profile new threat actors. Some dig into campaigns and TTPs with an industry vertical lens, while others examine activity in the context of the broader geopolitical climate. These different framings may give you critical context for your strategy and tactics, or it may bog you down with unnecessary details.
This leads to more and more information for your team to sift through, and less and less clarity on what actually matters. Are the details relevant to your attack surfaces and risk profile? With high volumes of reports across many threat intelligence sources, prioritization itself becomes an impediment to value.
Manual Translation Across Disjointed Tools
When (if?) you’ve distilled the most useful TTPs and IOCs from your threat intel, the most important step remains: translating them into tactical actions that actually reduce risk and improve your security posture. This is a manual process highly dependent on individual expertise and knowledge of the organization’s IT footprint and security stack.
A TI report might mention that a relevant threat actor uses a specific command when establishing persistence. A detection engineer still needs to interpret how that could be used in their environment, then write a corresponding detection rule for their SIEM. As they’re writing the rule, they need to know whether the relevant logs are actually available. If they are, additional parsing or data enrichment may be required, potentially adding new functional requirements (and work) for ingestion pipelines.
Disjointed tools and siloed processes make matters worse. Analysts are usually the primary users, but TI is relevant to several other teams, including IT/ops, security architecture, vulnerability management, detection engineering, and incident response. Siloed processes make it difficult to correlate intelligence and context between these teams. An analyst may flag a novel adversary behavior and create a fire drill for detection engineering to deploy a new SIEM rule… only to find that another team had implemented a compensating control that same day that covered the exposure.
Getting More Value from TI with AI
Unfortunately, there isn’t a magic solution to completely translate TI insights into tailored implementations of new prevention controls, pretuned SIEM detections, incident response runbooks, or other targeted tactics. But AI developments have opened new possibilities to streamline processes and resolve some gaps. Let’s take a look.
LLMs for Parsing, Summarizing, and Mapping to TTPs
Large language models (LLMs) offer one of the most immediate and impactful use cases in this space: parsing unstructured text and normalizing disparate input formats. LLMs can ingest any of the formats discussed above (PDFs, markdown blog posts, JSON feeds, or STIX/TAXII data) and classify the data into standardized schemas for downstream processing.
Let’s say you have a 30-page threat report describing an espionage campaign by APT29. Summarizing the most important techniques and mapping them to MITRE would be tedious work for human analysts (and prone to error and inconsistencies). That makes it a perfect candidate for AI support. An LLM can summarize the relevant TTPs (e.g. use of regsvr32.exe, registry modifications), then map them to the corresponding MITRE ATT&CK techniques (e.g. T1218.010 and T1112), all in seconds, with high fidelity.
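The target of that normalization step can be sketched as a small structured schema. In practice the LLM itself performs the extraction and mapping; the keyword table and behavior strings below are illustrative assumptions, not a real or complete mapping:

```python
# Sketch of the structured output an LLM-based pipeline might produce:
# free-text behaviors from a report, normalized to ATT&CK technique IDs.
BEHAVIOR_TO_ATTACK = {
    "regsvr32": ("T1218.010", "System Binary Proxy Execution: Regsvr32"),
    "registry modification": ("T1112", "Modify Registry"),
    "base64-encoded powershell": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
}

def map_behaviors(extracted: list[str]) -> list[dict]:
    """Normalize free-text behavior descriptions into a standard schema."""
    mapped = []
    for behavior in extracted:
        for keyword, (tid, name) in BEHAVIOR_TO_ATTACK.items():
            if keyword in behavior.lower():
                mapped.append({"behavior": behavior,
                               "technique_id": tid,
                               "technique": name})
    return mapped

report_behaviors = [
    "Use of regsvr32.exe to proxy execution",
    "Registry modification for persistence",
]
print(map_behaviors(report_behaviors))
```

The value is in the output shape: once every report, regardless of vendor format, lands in the same schema, downstream automation becomes possible.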
Domain-specific language models (DSLMs), or large language models specifically trained on cybersecurity datasets, are well positioned to take these concepts even further. DSLMs have the potential to understand more nuanced attacker behaviors and distinguish them from benign activity. For example, a DSLM can separate routine PowerShell usage from malicious remote code execution, or tease out the intention behind certain encoded parameters. These models can automatically classify IOCs and assign them confidence levels, suggest relevant ATT&CK IDs, and even score their relevance to an organization’s specific environment using asset intelligence and the current status and context of available security controls.
The ability of LLMs to turn unstructured information into structured, contextual TTP mappings lays the foundation for the use cases below.
Generative AI for Detection-as-Code and Contextual Reasoning
Once TTPs are mapped and key behaviors are identified, the next step in operationalizing TI with AI is taking action: creating SIEM rules or EDR queries that turn intelligence into contextualized alerts. Generative AI can function like a detection co-pilot that takes insights from the TI; crafts detections with appropriate logic, conditions, and filters; then creates alerts with relevant runbooks to guide effective triage and response efforts.
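As a rough sketch of what such a co-pilot might draft, here’s a function that turns a mapped technique and a command fragment into a simplified Sigma-style rule. The field names follow Sigma conventions, but the rule is illustrative and would still need engineer review and tuning before deployment:

```python
# Illustrative sketch: drafting a Sigma-style detection rule from a
# mapped TTP. In practice a generative model would produce this draft
# and a detection engineer would review, test, and tune it.
def draft_sigma_rule(technique_id: str, command_fragment: str) -> dict:
    return {
        "title": f"Suspicious command line mapped to {technique_id}",
        "status": "experimental",
        "tags": [f"attack.{technique_id.lower()}"],
        "logsource": {"category": "process_creation", "product": "windows"},
        "detection": {
            "selection": {"CommandLine|contains": command_fragment},
            "condition": "selection",
        },
    }

rule = draft_sigma_rule("T1059.001", "-EncodedCommand")
print(rule["detection"]["selection"])
```

Marking the draft as `experimental` and routing it through review mirrors the human-in-the-loop workflow described above: the model does the toil, the engineer keeps the judgment.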
Generative tools can also explain why a detection actually matters in the context of the threat landscape and your environment. That contextual reasoning is super helpful for both junior analysts and overwhelmed veterans. GenAI can help teams quickly understand implementation trade-offs, such as additional ingestion requirements, tuning process considerations, and false positive risks.
With the proliferation of code-centric generative AI tools, detection-as-code is the obvious candidate for AI to streamline processes. But a genAI co-pilot can also support a broader exposure management program. That contextual reasoning can also help teams understand tradeoffs between different mitigation approaches while reinforcing opportunities for cross-functional collaboration that reduce risk.
Agentic Workflows for Detection Coverage, Tuning, and Refinement
Sure, parsing data and generating detections is powerful. But the real magic happens when AI agents autonomously identify coverage gaps, propose new rules, and tune existing ones, continuously incorporating new threat intelligence and adapting in real-time.
Imagine an agentic detection pipeline that continuously monitors TI feeds for new adversary behaviors, maps them to ATT&CK techniques, then cross-references them with existing detection coverage in your SIEM. If there are gaps, it proposes new rules for the team to review and tests them for validation. After the team reviews the test results, the system either deploys the rule or iterates on the rule’s logic.
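The gap-identification step in that pipeline can be sketched in a few lines. The rule and technique data here are hypothetical:

```python
# Minimal sketch of the coverage-gap check: compare techniques surfaced
# by threat intel against techniques your deployed rules already cover,
# and flag the uncovered ones for review.
def find_coverage_gaps(intel_techniques: set[str],
                       deployed_rules: list[dict]) -> set[str]:
    covered = {t for rule in deployed_rules for t in rule.get("techniques", [])}
    return intel_techniques - covered

deployed = [
    {"name": "Encoded PowerShell", "techniques": ["T1059.001"]},
    {"name": "Registry persistence", "techniques": ["T1112", "T1547.001"]},
]
gaps = find_coverage_gaps({"T1059.001", "T1218.010", "T1112"}, deployed)
print(sorted(gaps))  # techniques with no deployed detection
```

An agent would run this comparison continuously as new intelligence arrives, then hand each gap to the rule-drafting and validation stages rather than stopping at a report.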
An agentic system like this could also track rule health over time, proactively monitoring for issues that break rules, like log schema drift or missing fields in parser logic, and proposing a fix. This power extends to tuning as well: if a rule generates a sudden spike in false positives, the agent can suggest ways to reduce noise without letting real signals go unnoticed. Humans remain involved at every step, but the toilsome components are outsourced.
Depending on your outlook on AI more generally, the prospect of this actually happening anytime soon may seem farfetched. But AI advancements are happening fast, making this potential reality closer than you might think. And when embedded into exposure management programs, agentic workflows can become force multipliers that act on intelligence, learn from context, and continuously strengthen your security posture.
The Bridge Between Intelligence and Action
LLMs, GenAI tools, and autonomous agentic systems offer paths to closing the operational gap that has plagued threat intelligence for years. By combining language and domain understanding, structured reasoning, code generation, and virtuous feedback loops, AI doesn’t just enhance detection engineering. It redefines it.
Instead of security teams poring over reports and manually crafting rules, AI systems can read the reports, understand the threats, suggest controls, create rules, assess coverage, recommend mitigations, and continuously propose improvements. That’s the promise of AI’s role in operationalizing threat intelligence.
At CardinalOps, we’re helping leading enterprise security teams get more value out of their Threat Intelligence investments, partly with the help of the AI concepts in this post. We also have exciting new developments in store for using AI to incorporate TI into broader exposure management programs to programmatically eliminate risk. Sound interesting? Let’s chat.
