Organizations deploying artificial intelligence face a new frontier of security threats that traditional frameworks were never designed to address. AI-enabled adversary attacks surged 89% compared to prior years according to industry threat intelligence research, up from a 72% increase reported in 2025. This escalation demands a structured approach to understanding and defending against adversarial threats to AI systems. Enter MITRE ATLAS — the Adversarial Threat Landscape for Artificial-Intelligence Systems — a comprehensive adversarial ML knowledge base designed specifically to catalog how attackers target machine learning and AI systems.
For security teams already familiar with MITRE ATT&CK, ATLAS (sometimes referenced as Atlas MITRE in search) provides a natural extension into AI security territory. This guide delivers everything security analysts, SOC leaders, and AI engineers need to operationalize ATLAS against adversarial AI attacks — from framework fundamentals to practical detection strategies.
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible adversarial ML knowledge base that documents adversary tactics, techniques, and procedures (TTPs) specifically targeting artificial intelligence and machine learning systems. Modeled after the widely adopted MITRE ATT&CK framework, this adversarial AI knowledge base provides security teams with a structured approach to understanding, detecting, and defending against AI-specific threats. The MITRE ATLAS framework serves as the definitive machine learning security framework for AI threat modeling.
As of version 5.1.0 (November 2025), the framework contains 16 tactics, 84 techniques, 56 sub-techniques, 32 mitigations, and 42 real-world case studies according to the official MITRE ATLAS CHANGELOG, up from 15 tactics and 66 techniques in October 2025. The February 2026 update (v5.4.0) added further agent-focused techniques. This rapid growth reflects the accelerating evolution of AI threats.
Adversarial machine learning — the study of attacks on machine learning systems and defenses against them — encompasses four main attack categories as documented by NIST: evasion, poisoning, privacy, and abuse attacks. ATLAS organizes these attack patterns into a matrix structure that security practitioners can immediately put to use.
MITRE created ATLAS to address a critical gap in the security landscape. While ATT&CK effectively catalogs threats to traditional IT and OT infrastructure, it lacks coverage of attacks that exploit the unique characteristics of machine learning systems. ATLAS fills this void by providing the same rigorous, community-validated approach to AI threat intelligence.
The framework also connects to MITRE D3FEND, which provides defensive countermeasures that organizations can map against ATLAS techniques.
Understanding the distinction between ATLAS and ATT&CK helps security teams determine when to apply each framework.
Table: Comparison of MITRE ATT&CK and MITRE ATLAS frameworks
ATLAS inherits 13 tactics from ATT&CK — including Reconnaissance, Initial Access, Execution, and Exfiltration — but applies them specifically to AI contexts. The two AI-specific tactics unique to ATLAS are:
AML.TA0004): Describes how adversaries gain access to target ML models through inference APIs or direct artifact accessAML.TA0012): Covers how adversaries prepare attacks targeting ML models, including training data poisoning and backdoor insertionSecurity teams should use both frameworks together for comprehensive coverage — ATT&CK for traditional infrastructure threats and ATLAS for AI-specific attack vectors.
The MITRE ATLAS official knowledge base organizes threat intelligence using the same matrix structure that made ATT&CK successful. Understanding this structure enables effective threat detection and AI threat modeling.
The MITRE ATLAS matrix (sometimes called the MITRE framework matrix for AI or the AI threat matrix) displays tactics as columns and techniques as rows. Each cell represents a specific method adversaries use to achieve tactical goals against AI systems. This visual organization allows security teams to quickly identify coverage gaps and prioritize defenses.
The framework components work together:
ATLAS data is available in STIX 2.1 format, enabling machine-readable integration with security tools and platforms. This standardized format supports automated ingestion into threat intelligence platforms and SIEM systems.
The framework receives regular updates through community contributions and MITRE's ongoing research. The October 2025 update through Zenity Labs collaboration added 14 new agent-focused techniques, followed by the November 2025 v5.1.0 release that expanded the framework to 16 tactics with 84 techniques. The February 2026 v5.4.0 update added further techniques including "Publish Poisoned AI Agent Tool" and "Escape to Host," demonstrating the framework's active evolution.
Tactics, techniques, and procedures (TTPs) form the core vocabulary of threat-informed defense. In ATLAS:
AML.TXXXX.AML.T0051) includes sub-techniques for direct and indirect injection methods.This hierarchy enables progressively detailed threat modeling. Teams can start with tactic-level coverage analysis and drill down to specific techniques based on their AI system's exposure.
ATLAS organizes 84 techniques across 16 tactics that span the complete adversarial lifecycle, up from 15 tactics and 66 techniques in October 2025. This comprehensive breakdown addresses a significant content gap identified in competitor analysis — no existing guide covers all tactics with detection-focused guidance.
Table: MITRE ATLAS tactics with key techniques
The November 2025 v5.1.0 release added a 16th tactic — Command and Control (AML.TA0015) — along with 18 new techniques and 6 new mitigations focused on AI agent security controls. This brought the total from 66 to 84 techniques and from 33 to 42 case studies.
The attack lifecycle begins with reconnaissance, where adversaries gather information about target ML systems. Key techniques include:
AML.T0051): Adversaries craft malicious inputs to manipulate LLM behavior — this maps to OWASP LLM01These AI-specific tactics describe how adversaries interact with and exploit ML models:
Threat actors maintain access and avoid detection through:
Later-stage tactics focus on achieving adversary objectives:
AML.T0020): Data poisoning corrupts training data to manipulate model behavior — a critical data exfiltration vectorUnderstanding lateral movement patterns helps security teams track how attackers progress through these tactics.
ATLAS provides free, practical tools that transform the framework from documentation into actionable security capabilities. This tools ecosystem addresses a major content gap — few competitors cover these resources comprehensively.
Table: MITRE ATLAS official tools ecosystem
The ATLAS Navigator provides an interactive web interface for visualizing the framework matrix. Security teams use Navigator for:
Navigator integrates with the ATT&CK Navigator, enabling unified views across both frameworks. Teams already using ATT&CK Navigator will find the ATLAS interface immediately familiar.
In March 2023, Microsoft and MITRE announced collaboration on Arsenal — a CALDERA plugin enabling automated adversary emulation against AI systems. Arsenal implements ATLAS techniques without requiring deep machine learning expertise.
Key capabilities include:
Arsenal supports threat hunting by validating detection coverage against realistic attack simulations. For incident response teams, Arsenal helps understand attacker capabilities and test response procedures.
The AI Incident Sharing Initiative enables organizations to share and learn from AI security incidents. This community-driven platform provides:
This intelligence feeds directly into ATLAS updates, ensuring the framework reflects current threat patterns.
Security teams often ask which AI security framework to adopt. The answer: use all three for complementary coverage. This comparison helps teams understand when to apply each framework, addressing a common PAA question.
Table: AI security framework comparison: ATLAS vs OWASP vs NIST AI RMF
According to Cloudsine's framework analysis, these frameworks serve different phases of the AI security lifecycle:
Table: Framework crosswalk for common AI vulnerabilities
Understanding vulnerabilities across all three frameworks enables comprehensive coverage. Teams should map their AI assets to relevant techniques in each framework.
Integrating ATLAS into security operations requires mapping techniques to detection capabilities and workflows. According to ThreatConnect's SOC integration guide, approximately 70% of ATLAS mitigations map to existing security controls. The remaining 30% require new AI-specific controls.
Steps for SOC integration:
Effective detection requires mapping ATLAS techniques to specific log sources and detection logic.
Table: Example detection mapping for priority ATLAS techniques
Network detection and response capabilities complement application-layer detection. User and entity behavior analytics (UEBA) helps identify anomalous access patterns to AI systems.
Track these metrics to measure ATLAS operationalization:
Quarterly threat model reviews ensure coverage keeps pace with framework updates and emerging threats.
ATLAS includes 42 case studies documenting real-world attacks against AI systems, up from 33 in October 2025. Analyzing these incidents provides actionable defensive insights that go beyond theoretical threat modeling.
In November 2025, MITRE ATLAS published a case study documenting deepfake attacks against mobile KYC (Know Your Customer) liveness detection systems. According to Mobile ID World's coverage, this attack targeted banking, financial services, and cryptocurrency platforms.
Attack chain progression:
Reconnaissance -> Resource Development -> Initial Access -> Defense Evasion -> Impact
Defensive recommendations:
This case study demonstrates how attackers combine social engineering with AI tools to defeat security controls, potentially leading to data breaches.
The SesameOp case study, added to ATLAS in late 2025, documents a novel backdoor technique that leverages AI assistant APIs for command and control. Instead of building traditional C2 infrastructure, adversaries repurposed legitimate agent service APIs as covert control channels — blending malicious activity into normal AI workflows. This attack pattern maps to the new AI Service API technique (AML.T0096) and demonstrates how agentic AI infrastructure creates command and control channels that evade conventional network detection.
ATLAS expanded from 33 to 42 case studies between October 2025 and February 2026. Notable additions include:
The HiddenLayer analysis of ATLAS case study AML.CS0003 documents how researchers bypassed an ML-based endpoint security product:
AI security threats require specialized detection approaches that go beyond traditional security controls. With AI-enabled adversary attacks surging 89% compared to prior years — up from a 72% increase in 2025 — organizations need proactive defense strategies.
Defense checklist for AI security:
Organizations should align AI security investments with both phishing prevention (AI-generated phishing is rising rapidly) and ransomware defense (AI enables more sophisticated attacks).
AML.T0051)Prompt injection is the most prominent ATLAS technique, cataloged as AML.T0051 under the Initial Access tactic. Large language models face unique attack vectors that traditional security cannot address, and ATLAS catalogs these threats systematically.
Table: LLM threat types with ATLAS mapping and detection methods
Recent CVEs demonstrate these threats in practice:
Identity threat detection and response capabilities help detect credential theft attempts through LLM exploitation.
The October 2025 ATLAS update specifically addresses autonomous AI agents — systems that can take actions, access tools, and persist context across sessions. New techniques include:
AML.T0058 AI Agent Context Poisoning: Injecting malicious content into agent memory or thread contextAML.T0059 Activation Triggers: Embedding triggers that activate under specific conditionsAML.T0060 Data from AI Services: Extracting information through RAG database retrievalAML.T0061 AI Agent Tools: Exploiting agent tool access for malicious purposesAML.T0062 Exfiltration via AI Agent Tool Invocation: Using legitimate tool calls to extract dataSecurity principles for AI agents:
According to CISA's December 2025 AI/OT guidance, organizations should embed oversight and failsafes for all AI systems operating in critical environments.
The Model Context Protocol (MCP) — an open standard for connecting AI agents to external tools and data sources — introduces attack surfaces that ATLAS now explicitly addresses. MCP exploits allow adversaries to manipulate the tool-calling layer between AI agents and enterprise systems, bypassing traditional security controls.
ATLAS techniques relevant to MCP security include:
AML.T0061): Adversaries exploit MCP server configurations to invoke unauthorized tool actions or access restricted dataAML.T0062): Attackers leverage legitimate MCP tool calls to extract sensitive data through sanctioned channelsAML.T0096, added 2026): Exploiting AI orchestration APIs for stealthy command and controlThe January 2026 ATLAS update (v5.3.0) added three new case studies specifically covering MCP server compromises, indirect prompt injection via MCP channels, and malicious AI agent deployment. Security teams should validate all MCP server configurations, restrict tool permissions to least privilege, and monitor tool invocation patterns for anomalies.
The AI security landscape evolves rapidly, with regulatory pressure and industry collaboration driving framework adoption. Organizations must prepare for both emerging threats and compliance requirements.
The MITRE Secure AI Program, supported by 16 member organizations including Microsoft and JPMorgan Chase, focuses on expanding ATLAS with real-world observations and expediting AI incident sharing.
Regulatory developments:
AI security threats continue to accelerate, with 87% of organizations reporting AI-powered cyberattack exposure and 92% expressing concern over agentic AI security implications according to industry research.
Vectra AI's Attack Signal Intelligence methodology applies behavior-based detection principles that align with ATLAS framework objectives. By focusing on attacker behaviors rather than static signatures, organizations can detect the techniques cataloged in ATLAS — from prompt injection attempts to data exfiltration via inference APIs — across hybrid cloud environments.
This approach enables security teams to identify and prioritize real AI-related threats while reducing alert noise. Network detection and response combined with identity threat detection provides visibility across the attack surface that AI threats now target.
MITRE ATLAS provides the structured approach organizations need to defend AI systems against sophisticated adversaries. With 16 tactics, 84 techniques, and continuous updates reflecting emerging threats like agentic AI attacks and MCP exploits, the framework delivers actionable intelligence for security teams.
The rapid expansion from 15 tactics in October 2025 to 16 tactics and 84 techniques by February 2026 demonstrates ATLAS's commitment to keeping pace with AI evolution. As AI-assisted attacks continue to surge and regulatory requirements like the EU AI Act take effect, organizations cannot afford to treat AI security as an afterthought.
Start with these immediate actions:
For organizations seeking comprehensive AI security beyond framework adoption, Vectra AI's Attack Signal Intelligence provides behavior-based detection that identifies the adversary techniques ATLAS catalogs — enabling security teams to find and stop AI threats across hybrid environments.
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible knowledge base that catalogs adversary tactics, techniques, and case studies specifically targeting AI and machine learning systems. Modeled after MITRE ATT&CK, ATLAS provides a structured framework for understanding AI-specific threats. As of February 2026, it contains 16 tactics, 84 techniques, 56 sub-techniques, 32 mitigations, and 42 real-world case studies. Security teams use ATLAS for threat modeling, detection development, and red teaming AI systems. The framework is freely available at atlas.mitre.org.
While ATT&CK focuses on traditional IT/OT threats, ATLAS specifically addresses attacks targeting AI and machine learning systems. ATLAS includes two unique tactics not found in ATT&CK: ML Model Access (AML.TA0004) and ML Attack Staging (AML.TA0012). Both frameworks use the same matrix structure and TTP methodology, making ATLAS accessible to security teams already familiar with ATT&CK. Organizations should use both frameworks together — ATT&CK for infrastructure threats and ATLAS for AI-specific attack vectors. The frameworks share common tactics but apply them to different technology contexts.
As of February 2026 (v5.4.0), MITRE ATLAS contains 16 tactics, 84 techniques, and 56 sub-techniques, up from 15 tactics and 66 techniques in October 2025. The November 2025 v5.1.0 update added a 16th tactic and expanded to 84 techniques, 32 mitigations, and 42 case studies. The February 2026 v5.4.0 update added further agent-focused techniques including "Publish Poisoned AI Agent Tool" and "Escape to Host." This represents significant growth from earlier versions — some older sources cite 56 or 66 techniques, which reflects pre-November 2025 counts. Always reference the official ATLAS CHANGELOG for current statistics.
Prompt injection (AML.T0051) is an Initial Access technique where adversaries craft malicious inputs to manipulate LLM behavior. ATLAS distinguishes between direct prompt injection (malicious content in user input) and indirect prompt injection (malicious content embedded in external data sources the LLM processes). This technique maps to OWASP LLM01 and represents one of the most common attack vectors against LLM applications. Detection focuses on input pattern analysis and output behavior monitoring. Recent CVEs including CVE-2025-32711 (EchoLeak) demonstrate real-world exploitation.
Use ATLAS Navigator to visualize the framework and create custom layers mapping your AI assets to relevant techniques. Start by inventorying all ML models, training pipelines, and AI-enabled applications. Identify which tactics apply to your ML pipeline stages based on system architecture. Prioritize techniques based on exposure and likelihood. Map detection capabilities to create coverage visualizations. Integrate ATLAS into existing threat modeling methodologies like STRIDE alongside ATT&CK for comprehensive coverage. Review and update threat models quarterly as the framework evolves.
ATLAS offers several free tools. Navigator provides web-based matrix visualization for threat modeling and coverage mapping, with export capabilities in JSON, Excel, and SVG formats. Arsenal is a CALDERA plugin for automated AI red teaming, developed in collaboration with Microsoft. The AI Incident Sharing Initiative enables community threat intelligence sharing through anonymized incident reports. The AI Risk Database provides searchable incident and vulnerability information. All tools are accessible at atlas.mitre.org and through MITRE's GitHub repositories. These tools transform ATLAS from documentation into actionable security capabilities.
ATLAS and OWASP LLM Top 10 serve complementary purposes. ATLAS provides an adversary-centric TTP framework for threat modeling and detection, while OWASP offers a developer-centric vulnerability list for secure development. Use OWASP during development and code review phases; use ATLAS for operational security, threat modeling, and detection development. Many vulnerabilities appear in both frameworks with different perspectives — for example, prompt injection is ATLAS technique AML.T0051 and OWASP LLM01. The best approach combines both frameworks with NIST AI RMF for governance.
The MITRE ATLAS adversarial ML knowledge base is a structured repository of adversary tactics, techniques, sub-techniques, mitigations, and real-world case studies focused on attacks against AI and machine learning systems. As of February 2026 (v5.4.0), the knowledge base contains 16 tactics, 84 techniques, 56 sub-techniques, 32 mitigations, and 42 case studies. It is freely accessible at atlas.mitre.org and its data is available in STIX 2.1 format for machine-readable integration with security tools and threat intelligence platforms.
MITRE ATLAS case studies are documented real-world incidents where adversaries targeted AI and machine learning systems, mapped to specific ATLAS tactics and techniques. As of February 2026, ATLAS contains 42 case studies covering attacks ranging from deepfake-based KYC bypass and ML model evasion to AI agent backdoors and financial transaction hijacking through AI assistants. Notable examples include the iProov deepfake case study targeting mobile banking, the SesameOp AI agent backdoor (AML.CS0042), and MCP server compromise scenarios. Security teams use these case studies to understand real attack chains and validate detection coverage.
ATLAS Navigator provides an interactive web interface at atlas.mitre.org for visualizing the ATLAS matrix. To export coverage layers, create a custom layer by selecting relevant techniques and assigning scores based on your detection coverage. Navigate to Layer Controls and export in JSON format for programmatic use, Excel for analyst review, or SVG for visual reporting. Teams can combine multiple layers using score expressions (such as summing two layers) to visualize overlapping coverage or gaps. Navigator integrates with the ATT&CK Navigator, enabling unified views across both traditional and AI-specific threat frameworks.