AI as a Strategic Capability in Defense
Artificial intelligence is no longer a speculative technology in the defense context. It has become a central axis around which NATO, the European Union, and national defense ministries are restructuring capability planning, threat assessment, and operational readiness. During my time as a consultant at ISDEFE -- the Spanish state-owned engineering company supporting the Ministry of Defence -- I witnessed firsthand how the gap between AI policy and AI deployment shapes outcomes in defense and aerospace programs.
The challenge in defense AI is not technological sophistication alone. It is the disciplined identification, scoping, and deployment of use cases that deliver measurable operational advantage while respecting the legal, ethical, and accountability frameworks that democratic nations demand. This article maps the strategic landscape -- the frameworks, the competitions, and the practical pathways from policy to fielded capability.
"The Alliance must be prepared for a world in which AI is pervasive, enabling both threats and opportunities across every domain of operations."
-- NATO AI Strategy, Revised July 2024
The NATO AI Framework: Revised 2024 Strategy
In July 2024, NATO published its revised AI Strategy, updating the original 2021 document to reflect the rapid evolution of foundation models, generative AI, and adversarial AI capabilities. The revision was significant: for the first time, NATO explicitly recognized AI-enabled disinformation as a distinct area of concern requiring coordinated alliance response. The updated strategy organizes NATO's AI ambitions around four strategic aims.
The 2024 revision reflects a maturation in NATO's thinking. The Alliance moved from aspirational language about AI potential toward concrete implementation mechanisms. The establishment of the NATO Innovation Fund -- a EUR 1 billion venture capital fund spanning 15 years -- and the DIANA (Defence Innovation Accelerator for the North Atlantic) test centres for AI Testing, Evaluation, Verification & Validation (TEV&V) are not just policy signals. They are institutional commitments that reshape how Allies develop and procure AI capabilities.
In July 2024, the European Investment Fund partnered with the NATO Innovation Fund, creating a bridge between European defense innovation financing and transatlantic AI capability development. This partnership underscores a recognition that defense AI cannot be developed in isolation from the broader technology ecosystem.
Principles of Responsible Use
Underpinning the entire NATO AI strategy are six Principles of Responsible Use (PRUs), agreed by Allies as the normative foundation for any defense AI system. These are not abstract ideals -- they function as design requirements and procurement criteria that shape how AI use cases are scoped, developed, tested, and fielded.
Each principle carries operational weight. Lawfulness mandates compliance with international humanitarian law and existing legal review processes. Responsibility & Accountability requires that human agents remain identifiable and accountable for decisions made with AI support. Explainability & Traceability demands that AI systems be auditable -- a nontrivial requirement for deep learning models deployed in time-critical environments. Reliability addresses robustness and safety across the full operational envelope. Governability ensures human operators can intervene, override, or deactivate AI systems. And Bias Mitigation confronts the reality that training data and model architectures can encode discriminatory patterns that are unacceptable in any context but especially consequential in defense.
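To make that operational weight concrete, consider a minimal sketch of how a program office might encode the PRUs as explicit review gates. The enum values and the evidence-dictionary structure are illustrative assumptions, not NATO-defined artifacts:

```python
from enum import Enum

class PRU(Enum):
    # The six NATO Principles of Responsible Use
    LAWFULNESS = "lawfulness"
    ACCOUNTABILITY = "responsibility_and_accountability"
    EXPLAINABILITY = "explainability_and_traceability"
    RELIABILITY = "reliability"
    GOVERNABILITY = "governability"
    BIAS_MITIGATION = "bias_mitigation"

def design_review(evidence: dict[PRU, bool]) -> list[PRU]:
    """Return the principles that still lack supporting evidence.
    The use case does not advance until this list is empty."""
    return [p for p in PRU if not evidence.get(p, False)]

# Hypothetical example: a decision-support prototype with an unresolved audit trail.
gaps = design_review({p: True for p in PRU} | {PRU.EXPLAINABILITY: False})
print("Blocked on:", [p.value for p in gaps])  # ['explainability_and_traceability']
```

The point of encoding the principles this way is that a gap becomes a blocking artifact in the review pipeline, not a paragraph in a compliance annex.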
"The responsible use of AI is not a constraint on military capability. It is a prerequisite for the trust and interoperability that alliance operations demand."
-- Adapted from NATO PRU Framework Documentation
RAND's Four Competitions in Future Warfare
The RAND Corporation has produced two landmark studies that frame how AI reshapes military strategy. The 2024 report "Strategic Competition in the Age of AI" analyzed the risks and opportunities of military AI adoption across great power competition. Building on this, the 2025 report "How AI Could Reshape Four Essential Competitions in Future Warfare" introduced a framework of four fundamental tensions that AI recalibrates.
These four competitions are not hypothetical. They are unfolding now. The RAND analysis highlights that AI does not simply enhance existing capabilities -- it restructures the competitive dynamics between adversaries. A military that deploys AI effectively in the hiding-vs-finding competition, for example, can neutralize an opponent's stealth investments through persistent AI-enabled surveillance and pattern recognition.
AI Use Case Categories in Defense & Aerospace
Drawing from my consulting work and the broader literature, AI use cases in defense and aerospace cluster into distinct operational categories. The table below maps these categories to representative applications and the NATO PRUs most relevant to each.
| Use Case Category | Representative Applications | Key PRUs | Maturity |
|---|---|---|---|
| Intelligence, Surveillance & Reconnaissance (ISR) | Satellite imagery analysis, multi-sensor fusion, pattern-of-life detection | Explainability, Bias Mitigation | Fielded |
| Predictive Maintenance & Logistics | Fleet readiness prediction, supply chain optimization, component failure forecasting | Reliability, Traceability | Fielded |
| Cyber Defense | Anomaly detection, automated threat hunting, network behavior analysis | Reliability, Governability | Scaling |
| Command & Control (C2) Decision Support | Operational planning assistance, course of action generation, wargaming | Accountability, Explainability | Piloting |
| Autonomous Systems | UAV swarm coordination, unmanned maritime vehicles, autonomous resupply | Governability, Lawfulness | Piloting |
| Counter-Disinformation | Deepfake detection, narrative analysis, social media monitoring for influence operations | Bias Mitigation, Lawfulness | Emerging |
| Training & Simulation | Adaptive adversary simulation, synthetic environment generation, AI-driven red teaming | Reliability, Traceability | Scaling |
| Natural Language Processing for OSINT | Multilingual document exploitation, entity extraction, automated reporting | Explainability, Bias Mitigation | Fielded |
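A mapping like the one above lends itself directly to portfolio tooling. As a hedged sketch (category keys abbreviated, field names assumed), the table can be queried programmatically, for example to find which fielded capabilities hinge on explainability evidence:

```python
# Illustrative encoding of the use case table; not an official taxonomy.
USE_CASES = {
    "isr":                    {"prus": {"explainability", "bias_mitigation"}, "maturity": "fielded"},
    "predictive_maintenance": {"prus": {"reliability", "traceability"},       "maturity": "fielded"},
    "cyber_defense":          {"prus": {"reliability", "governability"},      "maturity": "scaling"},
    "c2_decision_support":    {"prus": {"accountability", "explainability"},  "maturity": "piloting"},
    "autonomous_systems":     {"prus": {"governability", "lawfulness"},       "maturity": "piloting"},
    "counter_disinformation": {"prus": {"bias_mitigation", "lawfulness"},     "maturity": "emerging"},
    "training_simulation":    {"prus": {"reliability", "traceability"},       "maturity": "scaling"},
    "osint_nlp":              {"prus": {"explainability", "bias_mitigation"}, "maturity": "fielded"},
}

fielded_explainable = [
    name for name, uc in USE_CASES.items()
    if uc["maturity"] == "fielded" and "explainability" in uc["prus"]
]
print(fielded_explainable)  # ['isr', 'osint_nlp']
```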
From Policy to Deployment: A Practical Framework
One of the most persistent problems in defense AI is the transition from strategy documents to operational capability. RAND research has found that roughly 80% of AI projects fail to move beyond proof-of-concept (Ryseff, De Bruhl & Newberry) -- a figure that is not unique to defense but is amplified by the sector's additional constraints around classification, interoperability, and testing standards.
From my experience at ISDEFE, I came away convinced that successful defense AI deployment requires a structured pipeline mapping directly from operational need to validated capability. The following framework captures the essential stages.
Operational Need Identification starts with the warfighter, not the technology. The most common failure mode I observed was the reverse -- teams with a solution searching for a problem. Genuine use case definition requires embedding AI expertise within operational planning staffs, not in isolated innovation labs.
Use Case Scoping & PRU Alignment forces early confrontation with the responsible use principles. If an AI system cannot satisfy the governability requirement -- if operators cannot meaningfully intervene in its decision loop -- then the use case must be redesigned before any code is written.
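What "meaningful intervention" looks like in software is a design decision, but the following hypothetical sketch shows one pattern: a decision loop in which no AI recommendation executes without an identifiable human verdict, and the operator can always abort. All names here are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Verdict(Enum):
    APPROVE = auto()    # execute the AI recommendation as-is
    OVERRIDE = auto()   # substitute a human-chosen action
    ABORT = auto()      # deactivate the loop entirely

@dataclass
class Recommendation:
    course_of_action: str
    confidence: float
    rationale: str  # supports the explainability and traceability principle

def decision_loop(recommend: Callable[[dict], Recommendation],
                  situation: dict,
                  operator: Callable[[Recommendation], tuple[Verdict, str]]) -> str:
    """Governability gate: the human verdict, not the model output,
    determines what is executed; ABORT always wins."""
    rec = recommend(situation)
    verdict, human_action = operator(rec)
    if verdict is Verdict.ABORT:
        raise SystemExit("Operator deactivated the AI decision loop")
    return rec.course_of_action if verdict is Verdict.APPROVE else human_action
```

The structural property to notice is that the model's output never reaches execution except through the operator function, so accountability remains attached to an identifiable human.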
Data Readiness Assessment is where many defense AI initiatives stall. Military data is often siloed, inconsistently labeled, classified at multiple levels, and generated by legacy systems with incompatible formats. Without a realistic assessment of data quality and availability, even the most promising use case will fail in practice.
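In practice this assessment can be reduced to a blunt, quantified gate. A minimal sketch, with illustrative weights and a made-up threshold that any real program would calibrate:

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    name: str
    label_consistency: float      # 0-1: fraction of records with agreed labels
    format_compatibility: float   # 0-1: fraction parseable by the target pipeline
    single_classification_level: bool  # mixed levels complicate aggregation

def readiness_score(p: DatasetProfile) -> float:
    """Weighted readiness score; the weights are assumptions, not doctrine."""
    score = 0.5 * p.label_consistency + 0.4 * p.format_compatibility
    return score + (0.1 if p.single_classification_level else 0.0)

# Hypothetical legacy maintenance logs: inconsistent labels, incompatible formats.
logs = DatasetProfile("fleet_maintenance", 0.62, 0.45, False)
if readiness_score(logs) < 0.8:  # 0.49: stall here, not at integration
    print(f"{logs.name}: remediate labeling and formats before prototyping")
```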
Prototype & TEV&V leverages facilities like NATO's DIANA test centres, which provide standardized environments for testing AI systems against operational scenarios. Testing, Evaluation, Verification and Validation is not an afterthought -- it is the mechanism through which trust in AI systems is built.
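The shape of such testing can be sketched simply. Assuming a classifier-style model and a set of scenario suites (both hypothetical; DIANA test centres define their own standardized protocols), a verification gate might require the threshold to hold on every suite, not just on average:

```python
from typing import Callable, Iterable

def tevv_gate(model: Callable, suites: Iterable[dict], threshold: float) -> bool:
    """Pass only if accuracy meets the threshold on every scenario suite.
    Failing closed on any degraded envelope blocks release."""
    for suite in suites:
        hits = sum(model(x) == y for x, y in suite["cases"])
        accuracy = hits / len(suite["cases"])
        print(f"{suite['name']}: accuracy = {accuracy:.2f}")
        if accuracy < threshold:
            return False
    return True

# Toy usage: a trivial model tested across two synthetic scenario suites.
suites = [
    {"name": "clear_weather", "cases": [(1, 1), (0, 0), (1, 1), (0, 0)]},
    {"name": "degraded_sensors", "cases": [(1, 1), (0, 1), (1, 0), (0, 0)]},
]
print("Release approved:", tevv_gate(lambda x: x, suites, threshold=0.9))
```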
Operational Integration demands attention to human-machine teaming, training doctrine, and the organizational change management that accompanies any new capability. An AI system that operators do not trust, understand, or know how to use is operationally irrelevant.
Continuous Monitoring closes the loop. Defense environments are adversarial by definition. AI models degrade, adversaries adapt, and operational conditions shift. Monitoring ensures that deployed AI systems remain reliable and aligned with their intended purpose.
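One widely used monitoring primitive is the population stability index (PSI), which compares the feature distribution seen at validation time against live operational data. A self-contained sketch (the 0.25 alert threshold is a common rule of thumb, not a standard):

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """PSI between a reference distribution and live operational data."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)    # feature distribution at validation
in_theatre = rng.normal(0.6, 1.3, 5_000)  # shifted operational conditions
if population_stability_index(baseline, in_theatre) > 0.25:
    print("Drift detected: trigger revalidation before continued use")
```

A drift alert of this kind is what converts "monitoring" from a dashboard into a trigger for the TEV&V loop described above.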
EU Defense AI Landscape
The European Union has pursued its own defense AI trajectory, primarily through the Permanent Structured Cooperation (PESCO) framework. Among the most relevant PESCO projects is AMIDA-UT (Automated Modelling, Identification and Damage Assessment of Urban Terrain), a multinational effort that applies AI to sensor and imagery data, including feeds from unmanned platforms, to model and assess urban operating environments.
The EU approach differs from NATO's in its emphasis on industrial sovereignty and dual-use technology development. The European Defence Fund allocates resources to collaborative AI research, while the EU AI Act establishes a horizontal regulatory framework: systems developed exclusively for military purposes fall outside its scope, but dual-use and defense-adjacent AI systems do not. The interaction between the EU AI Act's risk-based classification and NATO's PRUs creates a complex but navigable governance landscape for European defense AI.
The July 2024 partnership between the European Investment Fund and the NATO Innovation Fund signals a strategic convergence. European defense startups can now access transatlantic capital and testing infrastructure, while NATO gains access to the EU's deep technology ecosystem. For practitioners, this means that defense AI use cases increasingly need to satisfy both NATO PRUs and EU AI Act requirements -- a dual compliance challenge that favors well-designed, transparent systems.
"AI will not replace strategic judgment. But organizations that fail to integrate AI into their strategic processes will find their judgment increasingly outpaced by those that do."
-- Adapted from RAND, "Strategic Competition in the Age of AI" (2024)
The Responsible AI Imperative
I want to close with what I consider the most important lesson from my work in defense AI: responsible AI is not a constraint on capability -- it is the foundation of trust upon which all capability depends. In allied operations, interoperability requires that partners trust each other's AI systems. That trust is built on shared principles, transparent testing, and clear accountability chains.
The NATO PRUs provide a common vocabulary. DIANA's TEV&V infrastructure provides a common testing baseline. The NATO Innovation Fund and its partnership with the European Investment Fund provide capital to sustain the innovation pipeline. But none of these mechanisms matter if individual programs do not internalize responsible AI as a core engineering discipline rather than a compliance checkbox.
The 80% failure rate identified by RAND is not inevitable. It reflects a pattern of AI initiatives that skip the hard work of operational need identification, data readiness, and human-centered design. Defense organizations that invest in these foundations -- that treat use case definition as a strategic discipline and responsible deployment as an engineering requirement -- will be the ones that successfully field AI capabilities at the speed that the security environment demands.
The defense sector does not need more AI experiments. It needs the discipline to move from experimentation to deployment: rigorous use case definition, alignment with responsible use principles, realistic data assessments, and institutional commitment to testing and continuous monitoring. The frameworks exist. The funding is mobilized. What remains is execution.
The security landscape will not wait for organizations to perfect their AI strategies. Adversaries are deploying AI-enabled capabilities -- from autonomous drones to AI-generated disinformation campaigns -- at increasing pace. The question is not whether defense organizations will adopt AI, but whether they will do so with the rigor and responsibility that democratic accountability demands.
References
- NATO. "Summary of the NATO Artificial Intelligence (AI) Strategy." Revised July 2024. nato.int/cps/en/natohq/official_texts_227237.htm
- NATO. "Principles of Responsible Use of Artificial Intelligence." nato.int/cps/en/natohq/official_texts_187617.htm
- RAND Corporation. "Strategic Competition in the Age of AI." 2024. rand.org/pubs/research_reports/RRA3295-1.html
- RAND Corporation. "How Artificial Intelligence Could Reshape Four Essential Competitions in Future Warfare." rand.org/pubs/research_reports/RRA4316-1.html
- NATO Innovation Fund. "About the NATO Innovation Fund." nif.fund
- NATO. "DIANA -- Defence Innovation Accelerator for the North Atlantic." diana.nato.int
- European Investment Fund / EIB Group. "EIF and NATO Innovation Fund join forces to unlock private capital for Europe's defence and security future." July 2, 2024. eib.org/en/press/all/2024-241
- Council of the European Union. "Permanent Structured Cooperation (PESCO) -- List of PESCO projects." consilium.europa.eu/en/policies/defence-security/pesco/
- ISDEFE -- Ingeniería de Sistemas para la Defensa de España. isdefe.es
- Ryseff, J., De Bruhl, B. F., & Newberry, S. J. "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed." RAND Corporation, 2024. rand.org/pubs/research_reports/RRA2680-1.html