Fatal Equations

The lethal impact of algorithms in war.

Illustration: Sophia Prieto

Following the launch of ChatGPT in 2022, you might, as I did, have turned to it for help with minor life decisions. Whereas my experimenting merely prompted me to attend a couple of odd events and go on a few dates, I never reached the level of discomfort experienced by Maxwell Strachan, the author of “I Asked ChatGPT to Control My Life, and It Immediately Fell Apart” (published in Vice, 2023). The article, a light-hearted exploration of the relationship between humans and technology, pinpoints the frustrating yet amusing detours that AI-driven decision-making can lead us on. Rather than AI helping him streamline his daily routine, Strachan ends up with a schedule that encourages him to both neglect his work duties and skip enjoyable activities.

Outsourcing everyday decision-making to AI as an experiment is all fun and games. Yet the implications of technology making decisions on our behalf can be far more sinister and unsettling. The utilisation of AI in war zones is perhaps the most frightening application of all. And it has gone from an abstract “what if” to reality over the course of just a few years.


The symbiotic relationship between the tech industry and the military-industrial complex is nothing new. In recent years, as an AI arms race has kicked off, this close relationship has only been reinforced. As a case in point, take the career of tech entrepreneur Palmer Luckey, who founded the virtual reality headset company Oculus in 2012. After Oculus was sold to Meta (formerly Facebook), Luckey went on to found the autonomous weapons company Anduril (named after a sword from Lord of the Rings), using his previous experience with human-VR interfaces to manufacture extremely capable AI-based drones. Anduril's website promises an ‘unfair advantage’ in conflicts. The company now has a multi-billion-dollar partnership with the US military, alongside tech mogul Peter Thiel’s data analytics company Palantir (named after the all-seeing crystal balls from Lord of the Rings). Palantir provides data and data infrastructure to US-allied militaries in global conflicts, including to Israel.

Israel, of course, has deployed its own autonomous ‘Lavender’ system in its ongoing war on Gaza. Although Lavender is a black box to outsiders, the independent Israeli-Palestinian media outlets Local Call and +972 Magazine recently reported that by drawing on a vast amount of information and data gathered from cell phones, social media, and drone footage, Lavender created a “kill-list” of individuals suspected of being operatives of the armed wings of Hamas and the Palestinian Islamic Jihad (PIJ). The outputs of Lavender, the investigation found, are fed into an automated tracking system colloquially termed ‘Where’s Daddy?’. Subsequently – with little human interference – these two automated systems provide the foundation for airstrikes targeting said individuals with imprecise or ‘dumb’ munitions, often in the presence of their families.

While ‘Lavender’ and ‘Where’s Daddy?’ underscore how frighteningly far we’ve come with regards to decision-making processes with lethal outcomes, it’s important to keep in mind that the adoption of autonomous applications hasn’t suddenly materialised out of thin air. Rather, it has emerged against a backdrop of remote warfare – a reality to which we in the West, as distant observers far removed from active battlefields, have long grown accustomed.

Between the 1960s and 1990s, the utilisation of remotely piloted vehicles in reconnaissance missions steadily increased. Military strategists and politicians began to recognise the significant tactical advantage of reducing ground presence, thereby minimising the risk of harm to their own troops in conflict zones. The Vietnam War served as a pivotal testing ground for American drone technology, which in turn led to a transition toward surveillance-focused operations and the subsequent adoption of an “electronic battlefield” approach that would come to fundamentally transform how modern militaries collect intelligence and engage in warfare. In the early 2000s, with the onset of the War on Terror, weaponised drones started populating the skies, engaging in what would soon come to be termed “joystick warfare”.

The prevalence of armed drones and the resulting asymmetry in conflict dynamics have been subject to much debate. Whereas advocates of remote warfare highlight the potential of conducting targeted strikes with surgical precision, critics argue that it minimises the emotional connection between operators and their targets – in turn decreasing the overall concern for human suffering on the ground. As Grégoire Chamayou, philosopher and author of the book A Theory of the Drone, puts it, a ‘hunter and prey’ logic is inherent in drone operations. The asymmetry of power, Chamayou argues, induced by the operators’ distance from the risk of harm, gives rise to the notion that the body of the prey does not only constitute a target – but rather, that it defines the outlines of the battlefield itself. By virtue of the imprecise munitions used in many of the so-called surgical strikes, Chamayou contends, a lethal halo, encompassing every other human being within a 15-meter radius, extends from the body of the ‘prey’, thereby constituting the body as an “autonomous zone of contemporary killing”.

With the evolution of AI still in its infancy, the future of remote warfare may look very different from today. Some argue that it’s only a matter of time until AI reaches a level of general intelligence equivalent to, or higher than, that of humans. Until then, human beings still constitute the lubricant of technological systems, necessary for finding and implementing solution-oriented workarounds. When it comes to making fully autonomous decisions, machines (still) lack important capabilities that help humans navigate their often-complex surroundings: a mishmash of morality, emotions, existential intelligence, and a general understanding of human social behaviour, all bound to the human experience.

Whereas military training aims to narrow the decision-making space of soldiers by instilling discipline, professionalism, and adherence to protocols, emotions such as fear, anger, and compassion nevertheless, quite naturally, influence decision-making in the heat of battle. And of course, soldiers frequently encounter difficult moral judgments – or at least judgments that should be difficult.

As the role of technology in warfare changes, these moral and ethical concerns will only become increasingly complex. While soldiers physically present on the battlefield bear direct responsibility for their actions, delegating decision-making capabilities to machines or AI systems changes this dynamic. The shift complicates the assignment of accountability and raises the risk of indiscriminate lethal force being used against civilians, as algorithmic logic could prioritise strategic objectives over humanitarian concerns. Hence, there is a risk that collateral damage increases dramatically.


The conceptualisation, and significance, of meaningful human control is a subject of vigorous debate in discussions of autonomous weapon systems and the decision-making processes behind them. While opponents of autonomous applications insist that the control of force in warfare must remain with humans, others contend that autonomy could potentially enhance compliance with international humanitarian law – precisely because human beings make horrendous decisions and commit unspeakable crimes against each other, especially in war zones.

By deploying Lavender, thereby circumventing a so-called “human bottleneck” of inefficiency, the Israeli military not only significantly shortened the kill chain, but arguably also relinquished what was supposed to constitute meaningful human control over the lethal decision-making process in targeting suspected military operatives. Officers tasked with approving targets on the kill-list reportedly spent as little as 20 seconds assessing each case, often solely to confirm that the target was male – despite an alleged margin of error of up to 10%. This disregard for thorough assessment and ethical consideration not only highlights the alarming dehumanisation inherent in autonomous warfare, but also exemplifies the catastrophic consequences that can ensue when machines are entrusted with life-and-death decisions.

Another point of importance is that computer systems are susceptible to encoding a high degree of bias – increasing the risk that individuals already subjected to systemic inequality or oppression in real life will be subjected to similar structures by and through machine learning-based systems. Considering that autonomous applications could potentially render human bodies and minds obsolete in the exercise of lethal violence, critical questions need to be asked about the importance, and indeed the nature, of our input.

The German political scientist Frank Sauer perhaps said it best in an interview with the Future of Life Institute. He argues that the implications of ceasing to concern ourselves with the death and suffering of others by virtue of reducing them to anonymous data points – even if we could create autonomous killing machine applications that worked perfectly in compliance with international humanitarian law – seem profound, dangerous and, frankly, quite chilling.

Unless we see a future where everyone across the world is on an equal technological footing – an unlikely prospect – we may never see a conflict in which bodies aren’t present on at least one side. For as long as this asymmetry exists, theories of conflict will face increasingly complex questions.

Perhaps I’ve already embraced the notion that AI can lead an independent existence if given the chance. My use of “pleases” and “thank yous” in conversations with ChatGPT at least hints that I’ve attributed some human qualities to a computational system. Our AI companions may be in their infancy today, but as they grow and mature, we should tread with caution. The decisions they make on our behalf may increasingly have literal life and death consequences.

