
20 May 2025
An overview of the capabilities and limitations of artificial intelligence in modern fighter jets.
The integration of artificial intelligence into the onboard systems of modern fighter jets is one of the most strategic areas of technological development in the defense sector. Promoted as a solution to relieve fighter pilots of increasing cognitive overload, onboard AI promises to manage sensor fusion, automatic target identification, and autonomous tactical navigation. But behind the marketing demonstrations and optimistic projections, the operational reality remains more mixed.
While platforms such as the F-35 Lightning II, the Dassault Rafale F4, and unmanned aircraft prototypes such as the XQ-58A Valkyrie and the Loyal Wingman already use elements of AI, these technologies are still a long way from replacing human judgment in the cockpit, particularly in electronic warfare environments or when facing unpredictable adversaries.
This article takes a close look at the current limitations of onboard artificial intelligence, distinguishing between the technical obstacles, operational constraints, security risks, and human barriers that restrict its actual use in air combat.
Perception capabilities still dependent on humans
Data fusion: effective but not autonomous
AI-based decision support systems are currently used mainly to aggregate and interpret vast data streams from sensors such as AESA radar, infrared, communications, and electronic warfare receivers. The F-35 combines this information into a unified visual environment presented to the pilot via its Helmet Mounted Display System (HMDS).
But these algorithms do not “understand” the situation. They classify signals, detect correlations, and assign priorities according to predefined rules. Their usefulness therefore depends on the quality of the sensors, the relevance of the database, and the clarity of the tactical scene. In a cluttered environment, with stealth emissions or decoys, AI quickly loses its reliability and requires human validation.
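To make the "predefined rules" point concrete, here is a minimal Python sketch of rule-based track fusion and prioritization. Everything in it is an illustrative assumption (the sensor names, tolerances, emitter types, and scoring weights are invented for the example); real avionics fusion is vastly more sophisticated, but the principle is the same: correlation and scoring, not understanding.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    sensor: str        # hypothetical source label, e.g. "radar" or "irst"
    bearing: float     # degrees
    range_km: float
    emitter_type: str  # classified signal type, "unknown" if unresolved

def fuse(contacts, bearing_tol=2.0, range_tol=5.0):
    """Group contacts from different sensors that plausibly describe
    the same object (a crude nearest-neighbour association)."""
    tracks = []
    for c in contacts:
        for track in tracks:
            ref = track[0]
            if (abs(c.bearing - ref.bearing) <= bearing_tol
                    and abs(c.range_km - ref.range_km) <= range_tol):
                track.append(c)
                break
        else:
            tracks.append([c])
    return tracks

def priority(track):
    """Predefined rules, not 'understanding': closer contacts and
    known hostile emitter types rank higher."""
    score = 0.0
    for c in track:
        score += max(0.0, 100.0 - c.range_km)   # rule: proximity
        if c.emitter_type == "hostile_fire_control":
            score += 500.0                      # rule: lock-on radar
        elif c.emitter_type == "unknown":
            score += 50.0                       # rule: flag ambiguity
    return score

contacts = [
    Contact("radar", 45.0, 60.0, "hostile_fire_control"),
    Contact("irst", 45.5, 62.0, "unknown"),
    Contact("radar", 170.0, 90.0, "unknown"),
]
# Radar and IRST contacts on the same bearing merge into one track,
# which outranks the distant unknown contact.
tracks = sorted(fuse(contacts), key=priority, reverse=True)
```

Note how brittle the sketch is: a decoy emitting a convincing signature, or degraded sensor data that breaks the association tolerances, silently corrupts the ranking — exactly the failure mode that forces human validation.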
Friend/foe identification: a critical uncertainty
Target identification remains one of the most sensitive functions. While AI can recognize known patterns or radar signatures, it is unable to interpret ambiguous behavior or contextual actions. In asymmetric warfare or in scenarios involving massive jamming, automated identification can produce major errors.
This is why no embedded AI system is currently authorized to open fire without human confirmation. AI acts as an advisor, but the lethal decision remains under the control of the fighter pilot or their command.
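The human-in-the-loop constraint can be pictured as a hard gate in software. The sketch below is purely illustrative (the function, verdict names, and threshold are assumptions, not any real weapons-control interface): however confident the classifier is, its output remains advisory until a human explicitly confirms.

```python
from enum import Enum

class Verdict(Enum):
    ADVISE_ONLY = "advise_only"  # AI may recommend, nothing more
    CLEARED = "cleared"          # human has confirmed engagement

def engagement_decision(classification, confidence, human_confirmed,
                        min_confidence=0.9):
    """AI output is advisory; weapons release additionally requires
    an explicit human confirmation, whatever the model's confidence."""
    ai_recommends = classification == "hostile" and confidence >= min_confidence
    if ai_recommends and human_confirmed:
        return Verdict.CLEARED
    return Verdict.ADVISE_ONLY
```

The design choice worth noting is that `human_confirmed` is a separate input, not something the algorithm can derive: even a 99%-confident hostile classification cannot clear the gate on its own.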

Autonomy limited by the complexity of real combat
Tactical AI remains dependent on a predictable scenario
AI systems operate through supervised learning or neural networks, trained on massive databases derived from simulated scenarios. But real air combat does not follow a script. Unforeseen maneuvers, in-flight failures, changing rules of engagement, or adverse human reactions make reliable generalization difficult.
In exercises, AI piloting drones such as the Australian Loyal Wingman, or combat algorithms developed by DARPA, have demonstrated capabilities superior to humans in limited scenarios. However, as soon as the situation moves outside the learned framework, performance degrades dramatically.
For example, AI may excel at a swarm attack against a fixed target but fail to recognize a decoy tactic or a combined ambush.
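One simple way to think about "outside the learned framework" is out-of-distribution detection: before trusting a model, check whether the current input even resembles the training data. The sketch below is a deliberately toy version of this idea (the feature, values, and threshold are invented for illustration): the gate remembers the spread of one feature seen in simulated scenarios and flags inputs far outside it, so the system can defer to the pilot rather than extrapolate.

```python
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

class ScenarioGate:
    """Remembers the spread of a feature (say, target closing speed)
    seen in training scenarios, and flags inputs far outside that
    spread as out-of-distribution."""
    def __init__(self, training_values, k=3.0):
        self.m = mean(training_values)
        self.s = std(training_values)
        self.k = k  # how many standard deviations count as "familiar"

    def in_distribution(self, value):
        return abs(value - self.m) <= self.k * self.s

# Hypothetical closing speeds (m/s) from simulated engagements.
gate = ScenarioGate([300, 320, 310, 290, 305])
```

A decoy tactic or combined ambush is precisely the kind of input no such gate was trained to anticipate: the numbers may still look "in distribution" even though the tactical meaning is entirely novel, which is why detection of this kind mitigates but does not solve the generalization problem.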
Priority and stress management
One of the most difficult areas for AI to master remains dynamic priority management under stress. A trained fighter pilot constantly reassesses threats, objectives, retreat options, and the status of teammates. This process relies not only on sensors, but also on subjective elements such as intuition, memory of the enemy, and strategic anticipation.
No current AI is capable of reliably processing these unstructured elements. The pilot’s cognitive overload is partially relieved, but not replaced.
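The fragment of this process that software *can* handle is continuous re-scoring: the same threat list is re-ranked on every update as the aircraft's own state changes. The sketch below is a hedged illustration (the scoring rules, fuel threshold, and multipliers are all invented); what it cannot capture — intuition, memory of the enemy, anticipation — is exactly what the paragraph above describes.

```python
def reassess(threats, own_fuel_kg, wingman_ok):
    """Re-score every known threat on each update: the same contact
    can rise or fall in priority as fuel, teammates, and geometry
    change."""
    def score(t):
        s = 100.0 / max(t["range_km"], 1.0)     # closer is worse
        if t["locked_on"]:
            s *= 3.0                            # active lock escalates
        if own_fuel_kg < 1500:
            # Low fuel: distant threats matter less than getting home.
            s *= 1.0 / max(t["range_km"] / 20.0, 1.0)
        if not wingman_ok:
            s *= 1.5                            # cover a damaged wingman
        return s
    return sorted(threats, key=score, reverse=True)

threats = [
    {"id": "A", "range_km": 10, "locked_on": False},
    {"id": "B", "range_km": 25, "locked_on": True},
]
```

With full tanks the distant, locked-on threat "B" ranks first; once fuel drops below the (hypothetical) bingo threshold, the nearby contact "A" overtakes it — a crude stand-in for the constant reassessment a pilot performs.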
Persistent technological, human, and ethical barriers
Reliability and cybersecurity issues
A highly connected fighter jet relies on software systems that are constantly exposed to the risk of cyberattacks, calculation errors, or chain failures. Embedded AI that receives corrupted data can make dangerous decisions. This is one of the reasons why most air forces maintain redundant analog systems or a compartmentalized software architecture.
Total autonomy is therefore hampered by stringent military cybersecurity requirements. Artificial intelligence protocols must be validated, verified, tamper-proof, and capable of operating offline.
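The redundancy idea mentioned above is classically implemented as majority voting across independent channels. The sketch below is a simplified, assumed example (a real flight-critical voter is far more elaborate): the median reading is accepted only if a quorum of channels agree within tolerance, and a corrupted frame is rejected rather than acted upon.

```python
def vote(readings, tolerance=1.0):
    """Majority vote over three redundant channels: accept a value
    only if at least two channels agree within tolerance; otherwise
    reject the frame rather than act on possibly corrupted data."""
    a, b, c = sorted(readings)
    if b - a <= tolerance:
        return (a + b) / 2.0
    if c - b <= tolerance:
        return (b + c) / 2.0
    return None  # no quorum: fall back to the pilot / backup system
```

Returning `None` instead of a best guess reflects the compartmentalized philosophy described above: when the data cannot be trusted, the safe behavior is to degrade to a human or analog fallback, not to let an algorithm improvise.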
Cultural and institutional resistance
The human factor remains decisive. Engagement doctrines, air force traditions, and the confidence of senior officers still severely limit the use of AI in critical functions. The idea of entrusting an algorithm with the decision to shoot down an aircraft, change course in a hostile area, or cancel a mission without human confirmation is met with deep resistance in air forces.
The fighter pilot remains at the heart of the decision-making loop. Even autonomous combat drone programs (such as the XQ-58A Valkyrie) currently require constant human supervision.
Ethical issues and rules of engagement
International conventions on the laws of war explicitly limit the use of fully autonomous lethal systems. Artificial intelligence embedded in a fighter jet has no intention, conscience, or moral framework. Any armed decision without human validation would pose a major legal problem, particularly in the event of a mistake or airspace violation.
This dimension is slowing down investment and large-scale implementation of embedded combat AI. It requires a systemic architecture in which humans remain in the loop.
Despite significant advances in data fusion, navigation assistance, and autonomous surveillance, artificial intelligence in fighter jets remains an assistive tool rather than a substitute. It performs well in controlled environments but remains incapable of replacing the flexibility, intuition, and judgment of a fighter pilot in real combat situations.
Future progress will depend not only on algorithms and sensors, but also on how the armed forces redefine rules of engagement and command architectures. As things stand, fighter jet flying remains fundamentally human, even when assisted by a machine.
Get in touch to live a unique fighter jet experience: we fly in France, and you can take the controls!