How AI Is Used in War: The Future of Technology in Conflict
- Naya Chardon
- Dec 18, 2025
- 5 min read
We all use AI. And why not? It’s a useful tool, and a mark of the technological evolution we are currently undergoing. You have probably also heard about its impact on the environment, given the energy, water, and mineral extraction required to train and run it. But I’m sure you already knew this; we all do. Yet we continue to use it because it is so convenient.
Artificial intelligence is constantly developing, and the industries in which it is used are expanding with it. We’ve heard of it being used in finance to predict stock market trends, in education by students and teachers alike, and even in your Spotify Wrapped. A far less discussed field, however, is war. Beyond the familiar personalised playlists and chatbots, the same kinds of pattern‑finding systems are now being used to scan satellite images, track enemy data, and even select targets in active warzones. In other words, technology that feels harmless in everyday life has quietly become part of the decision‑making machinery of war, where a single mistake or malfunction can cost countless lives.
In modern warfare, AI already has many roles, all of which raise ethical questions about who, or “what”, is entrusted with lethal power. For example, AI can analyse satellite images, phone data, and drone footage far more efficiently than a human analyst, which allows it to generate targeting recommendations for militaries, with humans only approving strikes later in the process.
Nor is AI limited to making recommendations: it is already fully integrated into drones and computer systems, through programmes such as the United States’ Project Maven and Replicator initiatives and under NATO’s own artificial intelligence strategy, to predict enemy behaviour, model potential scenarios, or map out operations and troop placement.
These are only a few of the ways in which AI has been incorporated into warfare, but they share a central concern: automation bias, i.e. trusting a machine by default. While these systems may sound futuristic and novel, they are already being deployed on real battlefields. One recent, widely discussed case is the Israel Defense Forces’ (IDF) Target Administration Division, established in 2019 under then chief of staff Aviv Kochavi, which relies on an AI decision support system (DSS). Kochavi has commented on the evolution of this technology, which now allows the IDF to identify as many targets in a month as it previously did in a year.
One of the main programmes the IDF employs is called Lavender, which uses machine learning to assign residents of Gaza a numerical score indicating their suspected likelihood of being a member of Hamas or Palestinian Islamic Jihad (PIJ). An investigation by +972 Magazine, based on the testimonies of six anonymous Israeli intelligence officers, revealed that in the early weeks of the current war Lavender marked some 37,000 Palestinians, along with their homes, as potential targets. This was possible because the criteria fed into the system defined potential targets only broadly, through stereotypical specifications such as being a “young male, living in specific areas of Gaza, or exhibiting particular communication behaviours”. Meeting these criteria is enough to be entered into a large biometric database of targets, used to justify arrest or attack. As a result, even civil defence personnel such as police officers, along with the people around them, could be targeted, often with “dumb bombs”, another term for unguided munitions.
Taken together, these examples suggest that the ethical problem is not simply that AI is being tested in war, but that it is being treated as if it were more reliable and objective than it really is. Systems like Lavender promise unprecedented speed and ease in generating targets, yet even the Israeli officers who use it admitted that it “makes mistakes” in roughly ten percent of cases: applied to a list of 37,000 people, that margin of error would mean thousands wrongly flagged, and in a military context it translates directly into human lives. Once the model assigns a score and a name enters a database, it becomes easier for the humans involved to trust its judgement, a form of confirmation bias, because the criteria behind that judgement are built on stereotypes that already align with the users’ existing beliefs and preconceptions.
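To see what that margin of error means at scale, here is a minimal, purely hypothetical Python sketch. It is not Lavender or any real military system: the 100,000-person population, the random scores, and the thresholds are invented for illustration, and only the roughly 37,000 flagged people and the officers’ “roughly ten percent” error estimate come from the reporting cited above.

```python
import random

random.seed(0)

FLAGGED = 37_000      # figure reported in the +972 investigation
ERROR_RATE = 0.10     # the officers' own "roughly ten percent" estimate

# The simplest consequence of the article's numbers: even a "small" error
# rate, applied to tens of thousands of machine-generated targets, means
# thousands of people wrongly flagged.
wrongly_flagged = FLAGGED * ERROR_RATE
print(f"~{wrongly_flagged:,.0f} people wrongly flagged at a {ERROR_RATE:.0%} error rate")

# A toy score-and-threshold "classifier": each person is just a random score
# between 0 and 1, and the threshold plays the role of the broad criteria
# described above. Loosening the criteria (lowering the threshold) inflates
# the target list tenfold.
scores = [random.random() for _ in range(100_000)]
for threshold in (0.99, 0.95, 0.90):
    flagged = sum(score >= threshold for score in scores)
    print(f"threshold {threshold:.2f}: {flagged:>6,} of 100,000 people flagged")
```

Even in this toy version the point is visible: the size and quality of the list depend entirely on how the criteria are set, and a modest error rate is no longer modest once the list runs into the tens of thousands.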
Furthermore, as AI is granted ever more power in real violent conflicts, the question of accountability will become ever more pressing: once technology’s share of a decision exceeds a certain threshold, the humans involved can more easily deny responsibility for it, in what is on track to become an age of “AI wars”.
In the end, it is impossible to separate “our” AI from “their” AI. The same algorithms that recommend a song to us or correct our Service as Action or CAS reflections are being scaled up and adapted for use in missile guidance, surveillance analysis, and automated targeting. As AI becomes part of our daily routines, we feed it information and actively help it evolve, while companies and governments gather more data to train larger models and launch new projects such as Project Nimbus, a 1.2 billion dollar contract signed by Google and Amazon with the Israeli government in 2021 to “store, process, and analyse data, including facial recognition, emotion recognition, biometrics and demographic information”.
We might like to think of our daily interactions with AI as harmless conveniences, but in reality, because these models learn from the data we give them, we are actively contributing to the improvement of a technology that weighs heavily not only on the environment but on human lives. So is the solution to abandon AI entirely (unlikely), or to start questioning how, and for whom, it is being developed, and what kind of future we are helping to build every time we choose to use it?
Bibliography
Abraham, Yuval. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, 3 Apr. 2024, www.972mag.com/lavender-ai-israeli-army-gaza/.
Clement, Sven. NATO and Artificial Intelligence: Navigating the Challenges and Opportunities. Special Report, 24 Nov. 2024, zero5g.com/wp-content/uploads/2025/07/download-file.pdf.
Mamediieva, Gulsanna. “Military AI: Lessons from Ukraine.” Tech Policy Press, 20 Mar. 2025, www.techpolicy.press/military-ai-lessons-from-ukraine/.
ICRC. Submission to the United Nations Secretary-General on Artificial Intelligence in the Military Domain. 11 Apr. 2025, www.icrc.org/sites/default/files/2025-04/ICRC_Report_Submission_to_UNSG_on_AI_in_military_domain.pdf.
Kwet, Michael. “How US Big Tech Supports Israel’s AI-Powered Genocide and Apartheid.” Al Jazeera, 12 May 2024, www.aljazeera.com/opinions/2024/5/12/how-us-big-tech-supports-israels-ai-powered-genocide-and-apartheid.
Lin, Chin Yang, and João Alexandre Lobo Marques. “Stock Market Prediction Using Artificial Intelligence: A Systematic Review of Systematic Reviews.” Social Sciences & Humanities Open, vol. 9, Jan. 2024, p. 100864, https://doi.org/10.1016/j.ssaho.2024.100864.
Marr, Bernard. “How AI Is Used in War Today.” Forbes, 18 Sept. 2024, www.forbes.com/sites/bernardmarr/2024/09/17/how-ai-is-used-in-war-today/.
McKernan, Bethan, and Harry Davies. “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets.” The Guardian, 3 Apr. 2024, www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.
Mhajne, Anwar. “Gaza: Israel’s AI Human Laboratory.” The Cairo Review of Global Affairs, 12 June 2025, www.thecairoreview.com/essays/gaza-israels-ai-human-laboratory/.
Nadibaidze, Anna, et al. AI in Military Decision Support Systems: A Review of Developments and Debates. Center for War Studies, 2024, usercontent.one/wp/www.autonorms.eu/wp-content/uploads/2024/11/AI-DSS-report-WEB.pdf?media=1629963761.
NATO. “Summary of NATO’s Revised Artificial Intelligence (AI) Strategy.” NATO.int, 10 July 2024, www.nato.int/en/about-us/official-texts-and-resources/official-texts/2024/07/10/summary-of-natos-revised-artificial-intelligence-ai-strategy.
Nguyen, Ngoc. “AI in Military: Top Use Cases You Need to Know.” SmartDev, 10 Sept. 2025, smartdev.com/ai-use-cases-in-military/.
Serhan, Yasmeen. “How Israel Uses AI in Gaza—and What It Might Mean for the Future of Warfare.” TIME, 18 Dec. 2024, time.com/7202584/gaza-ukraine-ai-warfare/.
Snapes, Laura. “Massive Attack Remove Music from Spotify to Protest against CEO Daniel Ek’s Investment in AI Military.” The Guardian, 18 Sept. 2025, www.theguardian.com/music/2025/sep/18/massive-attack-remove-music-from-spotify-to-protest-ceo-daniel-eks-investment-in-ai-military.
White, Andrew. “Why Musicians Are Leaving Spotify – and What It Means for the Music You Love.” The Conversation, Nov. 2025, https://doi.org/10.64628/ab.d5vsf6v4c.
Wiese, Lisa, and Charlotte Langer. “Gaza, Artificial Intelligence, and Kill Lists.” Verfassungsblog, May 2024, https://doi.org/10.59704/07a0756a3c08e64a.
Zewe, Adam. “Explained: Generative AI’s Environmental Impact.” MIT News, Massachusetts Institute of Technology, 17 Jan. 2025, news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.