When will AI be able to fire without human control?

The decision to fire without humans in the loop remains effectively prohibited. What is needed to allow it? Law, ethics, technology, stakeholders, and timing.

In summary

Militaries are already testing automated defenses and drone swarms, but the central question remains: meaningful human control over the use of force and clear accountability. Today, no treaty explicitly authorizes lethal autonomy without humans in the loop; several national frameworks strictly regulate it and require legal reviews and high-level validations. Progress is being made in multi-sensor perception, real-time planning, jamming resilience, and software “assurance.” Manufacturers such as Anduril, General Atomics, Shield AI, and Palantir offer military AI components for navigation, detection-identification, and decision support, while defensive systems (CIWS, C-RAM, Iron Dome) already operate in highly automated modes under human supervision. In the short term, limited authorization could emerge in ultra-constrained defensive contexts, if a solid ethical and legal framework is adopted: traceability, Article 36 reviews, certification, safeguards, independent audits, and a clear chain of criminal responsibility.

The heart of the debate: definitions, limits, and levels of control

Lethal autonomous weapons systems are those that, once activated, can select and engage a target without further human intervention. The literature distinguishes between human-in-the-loop (mandatory human validation), human-on-the-loop (supervision with veto rights), and “out of the loop” (no intervention during the action). The sticking point is not automation per se, but the absence of “intention” or “consciousness” in the machine, which prevents any direct attribution of moral responsibility. From an operational point of view, autonomy combines perception (electro-optical sensors, radar, radio frequency), classification (deep learning), decision-making (optimization, reinforcement learning, reactive planning) and execution (guidance, control). From a legal standpoint, even though military AI can accelerate the sense-decide-act cycle, it does not replace the human judgment required by the law of armed conflict to assess distinction, proportionality, and precautions. U.S. guidelines define these categories and require that systems allow for the exercise of an “appropriate level of human judgment” on the use of force, without systematically requiring a human in the tactical loop.
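
To make these three levels of control concrete, here is a minimal Python sketch of how a firing request might be gated under each mode. It is an illustrative model only: the class names, threshold, and target labels are assumptions, not drawn from any fielded system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # mandatory human validation before any shot
    HUMAN_ON_THE_LOOP = auto()      # supervision with a right of veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # no intervention during the action


@dataclass
class EngagementRequest:
    track_id: str
    classification: str   # e.g. "incoming_missile", "unknown_aircraft"
    confidence: float     # classifier confidence in [0, 1]


def authorize_fire(request: EngagementRequest,
                   mode: ControlMode,
                   human_approved: Optional[bool],
                   veto_received: bool) -> bool:
    """Illustrative gating logic: who decides, and when, under each mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # The machine only proposes; a human must explicitly approve.
        return human_approved is True
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # The machine may proceed unless the supervising human vetoes in time.
        return not veto_received
    # Out of the loop: the decision rests entirely on machine-set criteria.
    return request.classification == "incoming_missile" and request.confidence >= 0.99


# Example: an on-the-loop engagement proceeds because no veto was received.
request = EngagementRequest("TRK-042", "incoming_missile", 0.97)
print(authorize_fire(request, ControlMode.HUMAN_ON_THE_LOOP,
                     human_approved=None, veto_received=False))
```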

The international and national legal framework

The foundation remains international humanitarian law (IHL) and its principles of distinction, proportionality, and precautions. Article 36 of Additional Protocol I requires States to legally review any “new weapon, means, or method of warfare” to verify its compliance. This review now covers autonomous functions, with a particular focus on emerging behaviors, traceability, and governability. Several ICRC guides and SIPRI studies detail the methodological requirements of these reviews and their current limitations in the face of machine learning.

On the policy side, in December 2023, the UN adopted a resolution calling for the development of measures on autonomous weapons systems; in 2024, a second text broadened the discussion to all States, signaling a growing consensus to negotiate a more binding instrument. The ICRC recommends banning “unpredictable” autonomous weapons and those designed to use force against people, and strictly restricting all others (limitations on targets, areas, and duration). These milestones do not yet create positive obligations on the authorization of human-free firing, but they do structure the future framework.

At the national level, the US directive DoD 3000.09 (updated on January 25, 2023) authorizes the development and use of autonomous functions under certain conditions: safety requirements, governability, and validation by senior officials for systems capable of selecting and engaging targets. The United Kingdom has published an “Ambitious, Safe, Responsible” approach, supplemented by the JSP 936 standard on “dependable” AI. The EU has adopted the AI Act, but military uses are excluded from its scope; NATO, for its part, updated its AI strategy in 2024, setting out principles of responsibility and traceability.

The technologies at stake: perception, decision-making, assurance, and governance

Three technical blocks determine lethal autonomy. First, perception and data fusion: AESA radars, EO/IR imaging, RF sensing, and spatio-temporal correlation produce robust tracks in jammed environments (GNSS-denied). Next, real-time decision-making: neural networks for detection/identification, reinforcement learning for interception maneuvers, reactive planning under constraints (geofencing, no-strike lists). Finally, “assurance”: verification/validation through extensive testing, red teaming, runtime assurance, tamper-proof logs (black boxes), degraded modes, and kill switches. Modern architectures impose safeguards: machine-readable rules of engagement, spatial/temporal boundaries, prioritization of targets that are “by nature” military, and adversarial testing against deception and data poisoning. Recent UN technical notes emphasize sufficient explainability, traceability of training data, and governability (the ability to suspend, cancel, or reverse an action). This triptych is essential for any political authorization of firing without humans in the loop, even in a defensive context.
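
As an illustration of these safeguards, the following sketch encodes machine-readable rules of engagement as hard pre-conditions (geofence, time window, permitted target classes, no-strike list). All names and values are invented for the example; it is not a representation of any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import FrozenSet, Tuple


@dataclass(frozen=True)
class RulesOfEngagement:
    """Hypothetical machine-readable ROE: spatial, temporal, and target-class limits."""
    area: Tuple[float, float, float, float]   # lat_min, lat_max, lon_min, lon_max
    valid_from: datetime
    valid_until: datetime
    permitted_classes: FrozenSet[str]         # objects that are military "by nature"
    no_strike_ids: FrozenSet[str]             # tracks on the no-strike list


def engagement_permitted(roe: RulesOfEngagement, track_class: str, track_id: str,
                         lat: float, lon: float, now: datetime) -> bool:
    """Every check is a hard safeguard; failing any single one blocks engagement."""
    lat_min, lat_max, lon_min, lon_max = roe.area
    return (roe.valid_from <= now <= roe.valid_until
            and lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
            and track_class in roe.permitted_classes
            and track_id not in roe.no_strike_ids)


roe = RulesOfEngagement(
    area=(50.0, 50.5, 29.0, 30.0),
    valid_from=datetime(2025, 6, 1, tzinfo=timezone.utc),
    valid_until=datetime(2025, 6, 2, tzinfo=timezone.utc),
    permitted_classes=frozenset({"air_defense_radar", "hostile_uav"}),
    no_strike_ids=frozenset({"TRK-205"}),
)
print(engagement_permitted(roe, "hostile_uav", "TRK-117", 50.2, 29.4,
                           datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)))
```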

The frontrunners and what they offer

In the United States, Anduril and General Atomics have been selected by the US Air Force for Increment 1 of the CCA program: “teammate” drones capable of collaborative combat alongside piloted fighters, with a production decision expected in FY2026. The strategic interest lies in volume (an announced target of hundreds of aircraft), unit cost, and the integration of AI for navigation, avoidance, and mission allocation. Initial autonomy flight tests have been announced for this year.

Shield AI is deploying Hivemind (onboard autonomy) on the MQ-35 V-BAT and, in 2025, demonstrated autonomous flight on the BQM-177A for the US Navy, showing navigation and planning capabilities under jamming and link-loss conditions.

In the analysis-and-targeting segment, Palantir won a contract in May 2024 worth up to $480 million to develop the Maven Smart System, designed to aggregate sensor data and intelligence in order to identify points of interest and accelerate targeting; NATO has meanwhile acquired an MSS version for allied use. In the Ukrainian context, the company reports operational use in targeting. These systems remain decision-support tools and do not, in themselves, give an algorithm firing authority.

Israel (Rafael, IAI), South Korea (Hanwha), and European players (MBDA, Rheinmetall) are also integrating advanced automation functions into short-range air defense and counter-drone systems, with a clear trend toward human-on-the-loop supervision.

Current applications: what is already authorized

Several defensive systems have operated for years in automatic modes at very high engagement rates. Phalanx-type CIWS can detect, track, and open fire on supersonic threats within a few hundred milliseconds, with parameters set in advance and an operator supervising. C-RAM systems can, in autonomous mode, intercept rockets and mortar rounds in flight. Iron Dome and its naval derivative C-Dome run prioritized detection-engagement chains with integrated threat-sorting and firing-decision algorithms within a restricted perimeter. In all cases, human controllability, space-time limitations, and a strictly defensive role justify their current legal acceptability. Conversely, loitering munitions incorporate autonomous search capabilities, but in publicly documented practice the authorization to fire on a human target is validated by an operator.
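
A simplified sketch of how such a highly automated defensive chain can be modeled: an automatic engagement decision constrained to a pre-set envelope, with an operator inhibit acting as the human-on-the-loop veto. Values and names are illustrative assumptions, not taken from any real system's documentation.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    track_id: str
    closing_speed_mps: float  # closing speed in metres per second
    range_m: float            # distance to the defended platform in metres
    inbound: bool             # trajectory converging on the defended asset


# Pre-set engagement envelope, fixed by operators before activation (assumed values).
MAX_ENGAGE_RANGE_M = 2000.0
MIN_CLOSING_SPEED_MPS = 300.0


def auto_engage(threat: Threat, operator_inhibit: bool) -> bool:
    """Automatic terminal-defense decision inside a pre-set envelope,
    with a supervising operator retaining an inhibit (veto) switch."""
    if operator_inhibit:            # the human-on-the-loop veto always wins
        return False
    return (threat.inbound
            and threat.range_m <= MAX_ENGAGE_RANGE_M
            and threat.closing_speed_mps >= MIN_CLOSING_SPEED_MPS)


print(auto_engage(Threat("TRK-009", closing_speed_mps=680.0, range_m=1500.0, inbound=True),
                  operator_inhibit=False))
```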

The ethical red line and blind spots

Three main points stand out. First, the requirement for meaningful human control is not intended to slow down decision-making, but to preserve responsible, contextualized judgment, particularly when data is ambiguous or corrupted. Second, criminal responsibility remains human: it rests with commanders, programmers, and political decision-makers, since the absence of “intent” in machines prevents them from being subjects of law. Finally, robustness against adversarial conditions (visual decoys, RF jamming, sensor deepfakes) remains a practical obstacle to authorizing fully autonomous firing, except in terminal interception scenarios. These considerations inform the positions of the ICRC and recent national strategies.

The likely timeline and conditions for authorization

When will an algorithm be legally permitted to decide to fire without a human in the loop? Three scenarios are emerging.

In the short term (2025-2028), limited authorization could arise in ultra-constrained defensive contexts, where human latency makes interception impossible (terminal anti-missile, naval platform protection), subject to enhanced Article 36 reviews, independent certification, and tamper-proof action logs. Current CIWS/C-RAM practice suggests that the remaining progress is mainly normative.
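
A back-of-the-envelope calculation, with deliberately round and assumed figures, illustrates why these zero-latency defensive scenarios come first:

```python
# Illustrative arithmetic (assumed figures): a sea-skimming missile at roughly
# Mach 2.5 (~850 m/s) detected at 4 km leaves under five seconds before impact,
# less than a realistic human recognize-decide-act cycle.
detection_range_m = 4000.0    # assumed radar detection range
missile_speed_mps = 850.0     # assumed closing speed (~Mach 2.5 at sea level)
human_decision_s = 8.0        # assumed human recognize-decide-act time

time_to_impact_s = detection_range_m / missile_speed_mps
print(f"time to impact: {time_to_impact_s:.1f} s")                         # ~4.7 s
print("human-in-the-loop feasible:", time_to_impact_s > human_decision_s)  # False
```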

In the medium term (2028-2032), very limited offensive authorizations could cover targets that are military “by nature” (radars, ground-based missiles, identified hostile drones) in areas devoid of civilians, with mandatory geofencing, time windows, and fail-safe mechanisms. UN dynamics (the 2023-2024 resolutions) and political statements on the responsible use of military AI point to a hybrid international instrument: targeted bans plus restrictions and audit requirements.

In the longer term, general authorization for firing without human supervision in mixed environments is unlikely without a formal treaty and verification mechanisms. NATO/allied strategies are pushing for the adoption of AI, but reaffirm traceability, governability, and accountability. The sector will therefore remain “human-centric” until societal acceptability and proof of safety have reached a new level.

Governance to be established before any authorization

A credible framework combines eight building blocks:

  1. Default human-on-the-loop requirement, with documented justification for any “out of the loop” mode.
  2. Machine-readable rules of engagement incorporating distinction criteria and collateral damage thresholds (see the sketch after this list).
  3. AI-adapted Article 36 reviews (training data, adversarial attack risks, mitigation plans).
  4. Security/reliability certification with operational testing in a cluttered environment and public reference datasets.
  5. Full logging and traceability (sensor data, decisions, overrides) for investigation and accountability.
  6. Strong governability: dual-key armament, kill switch, degraded modes, geo-temporal limits.
  7. Qualified human supervision, specific training, and fatigue management.
  8. Regular audits by independent authorities and penalties for non-compliance. These requirements extend the DoD 3000.09 directive, the British approach, and the ICRC recommendations, adding a layer of software assurance specific to machine learning.
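
To make blocks 2, 5, and 6 more concrete, here is a minimal, hypothetical sketch combining a machine-readable engagement policy, dual-key arming, and a hash-chained, append-only decision log. Field names, keys, and values are invented for illustration and do not describe any real program.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical machine-readable engagement policy (block 2); fields are invented.
POLICY = {
    "mode": "human_on_the_loop",   # default; "out_of_the_loop" needs documented justification
    "geofence": {"lat": [50.0, 50.5], "lon": [29.0, 30.0]},
    "window_utc": ["2025-06-01T00:00Z", "2025-06-02T00:00Z"],
    "permitted_target_classes": ["air_defense_radar", "hostile_uav"],
    "collateral_damage_threshold": 0,
}


def arm(key_commander: str, key_safety_officer: str) -> bool:
    """Dual-key arming (block 6): two distinct, authorised key holders must both agree."""
    return {key_commander, key_safety_officer} == {"CMD-7F3A", "SAF-91C2"}


_log = []  # append-only decision log (block 5)


def log_decision(event: dict) -> None:
    """Append a record chained to the previous record's hash (tamper evidence)."""
    prev_hash = _log[-1]["hash"] if _log else "GENESIS"
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    _log.append(record)


log_decision({"type": "armed", "ok": arm("CMD-7F3A", "SAF-91C2"), "policy": POLICY})
log_decision({"type": "engagement_proposed", "track": "TRK-117", "class": "hostile_uav"})
print(len(_log), _log[-1]["hash"][:16])
```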

Companies and real-world uses as a barometer

Industrialization will come with evidence from real use. The Anduril and GA-ASI CCAs will provide a measure of autonomy in collaborative combat; the V-BAT and BQM-177A trials will test robustness in contested environments; and the Maven/MSS suites will show whether targeting assistance remains properly “human-centered.” Provided these programs demonstrate auditability, safety, and measurable operational benefits (fewer fratricide incidents, higher interception rates, reduced collateral damage), the political window will open, first for defensive uses, then for machine-versus-machine engagements. Conversely, an uncontrolled incident would abruptly close the debate.

A pragmatic outcome rather than a sudden shift

Lethal autonomy without humans will not arrive by sudden decree, but through successive expansions of strictly limited use cases, accompanied by verifiable safeguards. At this stage, the realistic path is the conditional authorization of autonomous fire functions in zero-latency defensive scenarios and, later, in geo-temporal offensive bubbles against materiel targets. As long as AI has neither consciousness nor intention, keeping humans responsible and equipped to “take back control” will remain the ethical and legal cornerstone. Countries that invest early in traceability, assurance, and certification will have the upper hand when the time comes to authorize, or prohibit, unmanned firing based on objective criteria rather than algorithmic promises.
