
Treacherous Utopia

Is ‘longtermism’ safeguarding or
sabotaging our common future?

Illustration: Signe Bagger


‘Longtermism’ has emerged as one of the most influential ideas of our time. Finding growing support in the worlds of big tech, elite academia, and international politics, longtermists want to ensure the survival and wellbeing of our distant descendants by steering clear of existential risks today. Some scholars now warn that longtermism is as dangerous as it is influential, likening it to the most extreme political movements of the past. To understand the rift, we spoke to both a critic and a proponent of the ideology.


“Future people count. There could be a lot of them. And we can make their lives better,” is the first line of William MacAskill’s What We Owe the Future (2022), a book found on the bookshelves of many influential and educated people.

It’s considered a manifesto for ‘longtermism’, the view that the interests of unborn generations should be given equal weight to our interests today, and that we must do everything we can to maximise both their number – as many future people as possible – and their wellbeing. Longtermists consider humanity’s future to be vast, with the number of people yet to be born potentially counting in the trillions. This means the moral duty we have today to minimise existential risks and further the wellbeing of our distant descendants is nothing short of enormous.


Once a relatively fringe idea, a child of Oxford’s Future of Humanity Institute (FHI) and the Centre for Effective Altruism (CEA), longtermist thinking has begun to spread from elite academia to decision-makers more broadly. In the public sphere, political bodies are beginning to codify and represent future generations directly in their legislation. In 2024, the UN will host a ‘Summit of the Future’, an inaugural event intended to place the needs of future generations at the forefront of decision-making worldwide.

The threats of climate change, nuclear war, and artificial intelligence have also led to a surge in efforts to tackle ‘existential risks’ that could spell the end of humanity’s vast potential. These range from non-profit organisations such as the Future of Life Institute (FLI), founded by Max Tegmark and featuring Elon Musk as an External Advisor, to academic centres such as Cambridge’s Centre for the Study of Existential Risk (CSER), co-founded by the British Astronomer Royal Sir Martin Rees.

Longtermism has become most popular in Silicon Valley, where major foundations such as Open Philanthropy provide grants for research addressing ‘global catastrophic risks’. Indeed, the recent schism in Silicon Valley between Effective Accelerationists and Effective Altruists (E/Acc versus EA; speeding progress up versus slowing it down) is fuelled by a shared premise: humanity’s long-term potential can either be catalysed or destroyed by technological progress.

Despite its purported dedication to human wellbeing, longtermism faces criticism as well. Certain scholars caution against it, expressing concerns that it may pose a danger comparable to some of history’s most destructive ideologies. In recent years, Dr Émile P. Torres has emerged as one of the most vocal opponents of what they call ‘TESCREAL’, an acronym which combines longtermism and various other related concepts.

‘Transhumanism’, ‘Extropianism’, ‘Singularitarianism’, ‘Cosmism’, ‘Rationalism’, ‘Effective Altruism’ – the various -isms in the TESCREAL bundle will sound like obscure jargon to the uninitiated. But beneath the esoteric terminology is a set of ideas sharing an emphasis on a techno-utopian vision of the future. Torres defines it as a set of beliefs anticipating “a time when advanced technologies enable humanity to accomplish things like producing radical abundance, reengineering ourselves, becoming immortal, colonising the universe, and creating a sprawling post-human civilisation among the stars full of trillions of trillions of people.”

Torres, a philosopher and historian whose work focuses on existential threats to civilisation and humanity, argues that when humanity’s future is laid out on a weighing scale of potential wellbeing, the near-infinite value of posterity – attainable through improving, enlarging, and ultimately surpassing humanity – can justify radical policies in the present:

“Longtermism minimises and trivialises current-day suffering, given its expectation that the future will be astronomically larger than the present,” Torres says. “This results in the possibility that the ideology could be used by true believers to justify extreme measures, including violence, in order to preserve and protect what one leading longtermist refers to as ’our vast and glorious future in the universe’.”

These extreme measures, Torres contends, might extend to forms of mass violence or eugenics like those justified by radical political movements in the past, many of which were motivated by envisioned utopian futures. “Hitler promised Germans a thousand-year Reich, drawing inspiration from motifs in Christian eschatological thinking. It ended up causing the bloodiest conflict in human history. The second bloodiest was the Taiping Rebellion in the 19th century between the Taiping Heavenly Kingdom – a utopian and apocalyptic movement – and the Qing dynasty, killing over 30 million people.”

Torres points to these past examples of political fanaticism in their critique of longtermism and TESCREAL, which they believe rely on a similar combination of Edenic goals and the utilitarian logic applied to reach them.

When asked for an example of how an ostensibly benevolent concern for future generations intersects with the possibility of mass violence, Torres points to the potential implications of achieving artificial general intelligence (AGI), which has recently become a focal point among Silicon Valley’s tech set. If one sees AGI as an existential threat to civilisation, as many influential voices now do, then almost no cost is too great if paying it helps us avoid that threat. Such views have been expressed by the likes of Eliezer Yudkowsky, the AI researcher who popularised the notion that there might not be a ‘fire alarm’ for AI – no advance warning of its imminent takeover. By this logic, so long as one views AGI as an existential threat, even a violent first strike can seem justified.

“Yudkowsky, who’s at the heart of the TESCREAL movement, argued in Time Magazine that AGI will probably kill everyone if it’s created in the near future, and that states should be willing to engage in military strikes against data centres in non-compliant countries – even at the risk of triggering a thermonuclear war,” Torres says. “The reasoning is that thermonuclear war would kill maybe 5 billion people – that leaves 3 billion people to carry on civilisation and potentially create utopia. AGI, on the other hand, he believes to be an existential risk, and therefore we should risk war.”

Although Yudkowsky represents an extreme, Torres argues that his position is simply the logical outcome of a wider set of TESCREAL and longtermist beliefs held by many other influential figures in tech and futurist academia. Needless to say, it’s not a conviction shared by those at whom the criticism is levelled. “I have never encountered any longtermist who condones violence,” says Dr Anders Sandberg, a futurist, transhumanist, and Senior Research Fellow at Oxford’s Future of Humanity Institute. Sandberg, a computational neuroscientist by training, is a firm believer in technology’s ability to push human evolution towards what he calls a ‘postbiological existence’. He admits to probably being one of the few people who embody all the letters in the TESCREAL acronym.


“In fact, a perennial debate inside the effective altruism and longtermism community is around the problems of extremism and the apparent paradoxes of near-infinite values,” Sandberg says. “It’s a debate that hardly anybody outside this community seems to care about, which leads to the assumption that we come down on the side of extremism, despite this not being the case.”

Torres, though, is not a complete outsider to these environments. On the contrary, they have held positions at precisely the kinds of institutions they now criticise. They spent several months at the Centre for the Study of Existential Risk, wrote for the Future of Life Institute, and were a visiting scholar at Oxford’s Future of Humanity Institute, home to prominent futurists such as Nick Bostrom and Toby Ord. Then, in 2019, their views changed quite radically, and they suddenly became a critic.

This change of heart, Torres explains, came partly from realising that there’s just as much faith involved in longtermism as in traditional religion. “My own interest in the future was initially sparked by my Christian background,” they say. “Eschatology – the study of last things – has always been an important component of the Christian worldview, and I think my religious upbringing planted the seeds of my interest in the long-term future of humanity.”

When a younger Torres picked up Ray Kurzweil’s The Singularity is Near (2005), they found that it checked all the boxes that their faith used to. “When I left religion there was a void left behind. But here was another sense of promise and meaning. The promise of eternal life – it literally being in the heavens,” Torres says.

In the book, Kurzweil – a computer scientist who later joined Google – argued that accelerating technological growth would see machines match human intelligence by 2029, followed by an irreversible and uncontrollable ‘Singularity’ of superintelligence around 2045. “What was different about ‘singularitarianism’ was that it purported to be based on scientific principles – looking at tech trends and extrapolating them into the future. So, there was a robustness to the reasoning that made it more appealing than traditional religion,” Torres explains.

Their change in perspective was also influenced by a growing awareness of what they perceive as a homogeneity in both background and thought within longtermist communities. Torres contends that this lack of diversity contributes to a myopic overemphasis on quantification, augmentation, and maximisation as exclusive measures of ‘better’ futures.

“I came to realise that the TESCREAL worldview is essentially an extension of techno-capitalism, crafted almost entirely by white men at elite universities and in Silicon Valley. By consequence it channels and embeds all the biases and limitations of the white, male, Western capitalist worldview in it,” Torres says. “It’s worth noting that capitalism and utilitarianism emerged around the same time. The bottom line of both is maximising something: for capitalists, it’s profit. For utilitarians, it’s ‘value’ in a more abstract sense – something like ‘happiness’ or ‘satisfied desires’.”

Torres’ critique is noteworthy both for its severity and its breadth, encompassing everything from Yudkowsky’s advocacy of pre-emptive strikes to stop AI to milder expressions of longtermist-adjacent thinking. Certainly, the notion that extreme ideologies rarely emerge ‘ready-baked’ but need time to build support and mature into their most twisted form finds precedent in history. Yet this conflation of moderate and radical expressions of similar ideas also opens TESCREAL itself, as a term of critique, to criticism. To Sandberg, it risks making a mountain out of a molehill.

“Any ideology can be harmful or dangerous,” he says. “Religions have caused religious wars, environmental concerns have blocked low-carbon nuclear energy sources in the past, and the search for justice and social solidarity led to the Gulags.”

“That is not a reason to reject spirituality, caring for the environment, or justice,” he continues. “One always has to look at the proposed implementations, what people actually believe and do – rather than critiquing the maximally extreme version of longtermism, and then claiming that this is what the idea is all about.”

To Sandberg, the need to distinguish between extremes within longtermism, as well as between the various other branches of far-future advocacy, also applies to the contention that longtermists’ maximalist ambitions are an expression of a myopically utilitarian and quantitative logic.

“Longtermism doesn’t only care about how many people there are, but also what kinds of lives they can live,” he says, adding that he sees calculations of the value of vast populations primarily as an academic exercise.

“We do not know what lives people may want to live, so we have reason to maintain the openness of the future – preventing value lock-in, stable totalitarianism, and extinction, because they limit the possible good lives. We should not discriminate against people far away in time just as we should not discriminate against people far away in space.”

For Torres, focusing on the very far future is not just a difficult challenge, but a fundamentally flawed exercise, since we have no idea what the world will look like in millions, billions, or trillions of years. “It’s like we’re driving along a winding road at night. If you are going to decide to steer left or right based on what’s three miles ahead of you, you’re going to crash,” they say.

Sandberg doesn’t see it as an either/or proposition. “It is a rational strategy to hedge one’s bets, including moral ones,” he says. “We should distribute our efforts across what appears to matter, and if we disagree, so much better. Maybe it turns out that one side or the other had the right moral theory, and then at least half of the effort went into something good.”

Care for our descendants, of course, does not strictly need to be a far-future concern, nor the exclusive purview of Silicon Valley entrepreneurs and futurist academic institutions. Take the emerging initiatives of government bodies codifying future generations directly into their legislation, such as the Welsh Future Generations Commissioner and the UN’s Declaration on Future Generations, the latter of which will be inaugurated during the UN’s ‘Summit of the Future’ in September 2024. The writing surrounding these initiatives is packed with terminology that sounds decidedly longtermist, even if that term is never used outright.

Wales’s Future Generations Commissioner, Derek Walker, describes his mandate as “improving lives now, next year, in 25, 50, 100 years into the future – and more.” The bill underpinning the Commissioner’s legislative authority advises public bodies to consider the likely effect of an objective over a 25-year period – about one generation ahead. While the UN’s Our Common Agenda report doesn’t specify a time horizon, it clearly states how it wishes “long-term thinking” and “representing future generations” to be put to use. In this context, ‘safeguarding the future’ means ensuring a “healthy planet, strong institutions, health/social protection, education/work, and preparedness.” A far cry from the mind-uploading, transhumanist singularity.

Indeed, much of the more progressive work being done within futures studies is applauded even by Torres: “I think that positive images of the future are really important,” they say. “It’s about piecemeal change – we don’t need to buy into maximising the population by becoming digital beings spread throughout the universe in order to embrace long-term thinking.”

(Editor’s note: Oxford’s Future of Humanity Institute has closed down as of April 2024)

