Israel: How Gospel and Lavender select targets in Gaza

Gospel and Lavender AI targeting systems

Israel uses AI systems such as Gospel and Lavender to generate targets in Gaza. Their exact role, level of automation, and human impact are the subject of major controversy.

In summary

Since 2023, the Israeli military has increasingly relied on artificial intelligence to select targets in Gaza, using systems such as Gospel and Lavender. These platforms aggregate massive amounts of data and produce lists of human or infrastructure targets at an unprecedented rate, sometimes hundreds of targets in a matter of days, where human analysts would have taken months. Israeli authorities claim that these tools increase the accuracy of strikes and reduce collateral damage by “prioritizing” military targets. But investigations conducted in 2024 by +972 Magazine, Local Call, The Guardian, and other media outlets report much more aggressive use, with lists of approximately 37,000 people marked as potential targets and a reported error rate of approximately 10%. These systems were reportedly used with minimal human oversight, in a context where very high levels of civilian casualties were accepted from the outset. The controversy centers not only on the technology itself, but on how it accelerates the tempo of strikes, dilutes individual accountability, and reconfigures the practice of military targeting in the age of AI.

The acknowledged use of AI in targeting in Gaza

Israel no longer hides its use of AI systems to identify and prioritize targets in the Gaza Strip, particularly since the war that began after October 7, 2023. As early as 2021, the Israeli army described that year’s operation as the “first AI war,” citing tools used to assist in target selection.

The Gospel system is presented by Israeli officials as an analysis platform that fuses intelligence from multiple sources—intercepts, imagery, electronic signals, databases—to recommend targets: command infrastructure, weapons depots, rocket launchers, firing positions. According to experts interviewed, a group of about 20 intelligence officers previously produced 50 to 100 targets in nearly 300 days; with Gospel and associated systems, around 200 targets could be generated in 10 to 12 days, an acceleration by a factor of at least 50.
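That multiplier can be checked with a rough back-of-the-envelope calculation based solely on the figures cited above: 50 to 100 targets over roughly 300 days works out to about 0.2 to 0.3 targets per day, whereas 200 targets in 10 to 12 days works out to about 17 to 20 per day, an acceleration on the order of 50 to 100 times, consistent with the "at least 50" cited.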

In the first weeks of the 2023-2024 campaign, the army claimed more than 22,000 strikes in Gaza, with up to about 250 targets hit per day; by comparison, roughly 1,500 targets were hit during the entire 2021 operation, about 200 of which came from Gospel.
AI is not the only factor behind this intensification, but it has been explicitly cited as a capability multiplier.

The Gospel and Lavender systems: two targeting philosophies

Behind the generic term “military AI,” two systems play a central role: Gospel and Lavender, associated with the intelligence division responsible for targets.

Gospel is geared toward infrastructure targets. It identifies buildings, complexes, tunnels, or weapons locations suspected of harboring Hamas or Islamic Jihad fighters or capabilities. It functions as a prioritization system: it makes suggestions, and a human is supposed to validate them.

Lavender is a more radical development. Revealed in 2024 by a joint investigation by +972 Magazine, Local Call, and The Guardian, this system is a probabilistic model that assigns each individual in the Gaza Strip a score from 1 to 100 indicating the probability that they belong to the military wings of Hamas or Islamic Jihad. On this basis, Lavender identified up to 37,000 Palestinians as potential targets at the start of the war, mainly men of fighting age.

The difference is fundamental: Gospel targets locations, Lavender targets people. In practice, according to testimonies from former officers, Lavender’s recommendations were treated “like a human decision,” with only cursory human review, sometimes just a few seconds per target.

The degree of automation and the role of human control

Officially, the Israeli army insists that “AI does not decide” and that each strike requires human validation to comply with international humanitarian law. Systems such as Gospel and Lavender are described as decision-support tools that increase the effectiveness of existing units.

Internal accounts paint a more automated picture. Israeli sources cited in 2024 explain that, for certain categories of targets, operators almost systematically validated Lavender’s suggestions, simply checking that the name, gender, and address matched an expected profile, without reconstructing the model’s reasoning.

Operational parameters reinforce this impression of extensive automation:

  • A score threshold could be lowered to quickly expand the list of suspects, at the cost of an estimated error rate of approximately 10%.
  • “Quotas,” or production goals for new targets, were reportedly set, encouraging a high rate of validation.
  • For low-ranking targets, more permissive collateral damage rules were reportedly applied, allowing the deaths of dozens of civilians to neutralize a low-level combatant.

This framework does not mean that the system strikes on its own, but that it reduces the role of the human analyst to a quick confirmation gesture within an industrialized decision-making chain, where the machine defines most of the space of possible targets.

Impact on the ground: volume of strikes and collateral damage

The adoption of Gospel and Lavender is part of a bombing campaign whose human cost is unprecedented in Gaza. By the end of 2024, estimates put the number of Palestinians killed in the tens of thousands, with a very high proportion of women and children, although the exact figures remain disputed and politicized.

Investigations into Lavender describe a recurring pattern: the main target is struck at home, often at night, when the family is most likely to be present. Internal sources mention accepted levels of collateral damage that could reach several dozen civilians for mid-level combatants, and up to around 300 civilians for a high-ranking commander, as in a bombing on October 17, 2023.

In this context, AI plays a role primarily in terms of volume and speed:

  • Thousands of human targets produced in a matter of weeks by Lavender.
  • Massive strikes on residential buildings associated with these profiles, often identified by data correlations (phone, social media, public employment).
  • An increase in so-called “opportunity” strikes, when a signal deemed suspicious is detected by the algorithms.

This technological dynamic overlaps with a political choice: to favor a strategy of high intensity and rapid neutralization of Hamas leaders, at the cost of an exceptionally high level of civilian risk. AI does not create this strategy, but it enables its implementation on a large scale.

Concrete examples of Gospel and Lavender in action

Several emblematic episodes illustrate the use of these systems.

During the 2021 operation, Gospel was tested to detect rocket launch pads and firing positions. The army announced that approximately 1,500 targets were hit, including 200 identified by Gospel, using a combination of imagery and intelligence data. This phase served as proof of concept for the integration of AI into the “target chain.”

In 2023-2024, Lavender was deployed across the entire population of Gaza, estimated at 2.3 million people. Each individual was assigned a score; men between the ages of 18 and 50 who exhibited certain digital behaviors (use of certain phones, contacts, movements) were more easily marked as potential combatants.

According to investigations, between October 7 and the end of November 2023, at least 15,000 deaths in Gaza were directly linked to Lavender-guided strikes, a controversial figure but consistent with the scale of destruction observed.
The sources interviewed claim that, during this period, operators “trusted the machine” and validated en masse the lists of human targets generated each day.

Arguments in favor of military AI in targeting

Proponents of these systems put forward several structured arguments.

First, processing capacity. In a densely populated and highly digitized theater, manual analysis of millions of pieces of communication, geolocation, and imaging data is impossible on a large scale. AI can identify patterns—contacts with known numbers, repeated visits to sensitive sites—that human analysts would miss or identify too late.

Second, the theoretical possibility of reducing human error. Models can be calibrated and audited, whereas an officer’s individual biases are difficult to measure. Israeli officials argue that Gospel produces “more accurate” targets than traditional methods and that AI helps “minimize damage to non-combatants,” although these claims remain poorly documented publicly.

Finally, for an army facing an organization like Hamas, which operates in urban areas and uses civilian infrastructure, the promise of increasing lethality against combatants while maintaining informational superiority is a powerful argument. This logic explains why other armed forces—American, British, Chinese—are closely following these Israeli experiments.

Criticisms: ethical, legal, and strategic risks

Critics, on the other hand, consider that Gospel and especially Lavender represent a dangerous automation of violence that is difficult to reconcile with international humanitarian law. Experts and NGOs highlight several points.

The first is that of proportionality. Accepting in advance very high civilian casualty thresholds, even for low-value targets, would amount to using AI to justify strikes that would be considered disproportionate if decided on a case-by-case basis by humans.

The second concerns the reliability of the models. A 10% error rate means that, out of 37,000 people marked as human targets, several thousand probably had no operational link to Hamas, while living in densely populated buildings. Added to this error is collateral mortality, which further increases the number of civilians killed on the basis of erroneous algorithmic recommendations.
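Put in concrete numbers (a simple application of the figures reported by the investigations, not an independently verified count): 10% of 37,000 flagged individuals corresponds to roughly 3,700 people potentially marked in error, before any collateral casualties around them are counted.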

The third issue is that of responsibility. If a system like Lavender produces a list and operators validate it in a matter of seconds, who is responsible for the illegal strike? The model designer, the commander who set the thresholds, the officer who clicked, or the political chain of command that issued the rules of engagement? For many legal experts, this dilution complicates the attribution of blame and could encourage other states to adopt similar systems to hide their decisions behind an algorithmic “black box.”

Finally, several analysts point to a strategic risk: the lowering of the political cost of using force.

If AI makes it possible to conduct massive campaigns more quickly, with a greater sense of distance, decision-makers may be tempted to choose the military option more often, or for longer, than they would have done with a slower, more human targeting process.

The bottom line: an ambiguous revolution in aerial warfare

The use of artificial intelligence in strikes in Gaza, via Gospel and Lavender, represents a major break with the way in which a modern army can industrialize the production of targets. From a strictly technological standpoint, these systems demonstrate AI’s ability to process masses of data, detect patterns, and fuel an air campaign of unprecedented intensity in real time.

But their deployment took place in a context where operational rules allowed for a very high level of civilian casualties, transforming a promise of “precision” into a vehicle for structured mass destruction. AI does not neutralize political choices; it amplifies them.

The question is no longer simply whether AI can be used responsibly in targeting, but who will set the safeguards, with what transparency, and with what consequences for those who cross them. The Israeli precedent, widely documented but poorly regulated, will serve as a reference—positive for some military leaders, alarming for many legal experts—for future algorithmic wars. This debate, which touches on law, morality, and strategy, can no longer be avoided as AI becomes central to the conduct of military operations.
