Why Public Intellectuals Promote an Overly Simplistic Future

Our public intellectuals are not responsibly informing the public about the future(s). But that’s ok, says Alex Fergnani, foresight expert and Ph.D. in Management and Organization. As an Associate Professor of Strategy & Fetzer Scholar, Fergnani conducts research on corporate foresight, corporate strategy, and futures and foresight theories and methods. In this essay, he shares his views on why public intellectuals so often fail to communicate the multiplicity and complexities of the future – and what they might do to better live up to that responsibility.


Following recent developments in large language models, many well-known public intellectuals have shared their visions of the future of artificial intelligence (AI). These visions have ranged from humans merging with machines to extreme doomsday scenarios where AI will spell the end of the human species.

Since it’s clear that many of these visions are not supported by systematic analysis, it is necessary to ask ourselves whether our public intellectuals are responsibly informing the public about the future. Indeed, public intellectuals are often asked by journalists to offer opinions on the future of a variety of themes that may or may not be in the purview of their primary area of expertise. These range from the future of capitalism and the global economy to the future of AI, along with many other topics. Unable to escape such demands, many public intellectuals go out on a limb to respond with their forecasts.

However, by doing so, they often venture into foreign territories while lacking the conceptual tools they would need to responsibly navigate such queries. This is because the emerging discipline of futures studies, which is where these tools reside, is not well represented in academia where public intellectuals tend to start their careers. The sobering result is that their projections, visions, and insights about the future are at times overly singular, simplistic, and emotionally loaded. Many well-known public intellectuals fall into this trap, regardless of political leaning or background, which suggests that fame itself may contribute to further decreasing the rigour of scrutiny that they face when informing the public about the future.

The insufficiency of exclusive images of the future

If we were to deconstruct this problem in more detail, we would find that public intellectuals are often biased towards a specific image of the future and that they relentlessly promote this one image while discounting others. In futures studies – the field that systematically investigates the multiple futures ahead of us – an image of the future is a description or portrayal of what the world or a particular domain might look like at a certain point in the future. The domain can range from pandemics to geopolitics in the Balkans. Images of the future are usually loaded either positively or negatively, depending on whether they are represented optimistically or pessimistically.

Since the future has not occurred yet, the only information we have about it comes through the images we construct of it. It goes without saying that to responsibly inform the public about the future of any topic, we would therefore have to impartially present more than one image of the future: at the very least a primarily positive one and a primarily negative one, if not several more nuanced scenarios of the given topic. Anything that does not at least attempt to showcase this variety is a flawed representation of reality, as no particular image of the future has scientific validity, given its inherently prognostic nature.

Showcasing that variety, on the other hand, has the benefit of encouraging the public to realise that the future is not predetermined, and that there is room to manoeuvre through it. On a side note, this is why specialists in futures studies may prefer the plural future(s) over the singular future. Think of a fair representation of various images of the future as analogous to statistics. Anyone who rejects the universally acknowledged notion that men are, on average, taller than women based on the evidence that a particular woman is taller than a particular man is using an anecdotal fact to explain reality, which is flawed. Equally, anyone who fails to impartially showcase a variety of images of the future because of his or her opinion about how the future will inevitably unfold is showing a bias towards one specific future. Unfortunately, public intellectuals often do not escape this bias.

The Israeli historian Yuval Noah Harari, for instance, is often asked about the future of a variety of themes, spanning nuclear war, ecological collapse, and AI. When discussing the future of AI, Harari tends to dedicate most of his attention to deliberating the potential risks associated with a predominantly negative image of the future. In it, machines will govern us by “hacking” our brains, and organisations will colonise entire countries thanks to their access to data, leading to a new “data-based” imperialism. Harari has also argued that AI may create new religions or cause political polarisation through its ability to persuade individuals through intimate conversations. He is indeed aware that this is not an inevitable future, and that the impact of AI on our species may range from utter destruction to human enhancement. Yet the level of nuance Harari applies to his negative image of the future of AI is remarkable when compared to that of his positive image. When discussing the former, Harari will go into detailed descriptions of how your refrigerator will come to know you better than your partner thanks to its access to sensitive data, and warn of the ensuing problems. In contrast, his positive image is often painted with a very light brush, offering no more than vague suggestions for global cooperation.

Jordan Peterson, the famous Canadian psychologist, also does not escape this bias. When asked in a recent interview about the impact of ChatGPT on humanity, Peterson’s response was to list a number of things that large language models can do better than humans, accompanied by the affirmation “Hang on to your hats, ladies and gentlemen […] because giants are going to walk the earth once more, and we are going to live through that. Maybe!”. Peterson does not elaborate on other plausible images of the future that deserve equal standing, including ones where artificial intelligence releases us from drudgery and unleashes more talent.

Of course, public intellectuals don’t always lean towards solely negative future scenarios. Some also frequently promote a one-sidedly positive future. Jordan Peterson, Steven Pinker, Johan Norberg, and Hans Rosling, to name a few, point to a number of global trends that paint a rosy image of the future. These include a decline in global poverty, advancements in technology, increases in life expectancy, a reduction in violence, better quality of life, and progress in human rights.

These optimistic perspectives are frequently advocated as an understandable response to an academic trend of postmodern pessimism about humanity’s future – a pessimism that, in turn, stems from the notion that the capitalist system responsible for these advancements is inherently unsustainable. Yet the presentation of this rosy image of the future would be more valuable to the public if accompanied by an acknowledgement of the multiple global trends indicating a less favourable future. These encompass climate change, escalating inequality (in both economic and psycho-social terms), political volatility, and the prevalence of mental health issues. Similar arguments can be raised about techno-optimist and singularitarian public intellectuals such as Ray Kurzweil and Peter Diamandis, who proclaim a future where exponential technology will solve virtually all of humanity’s most wicked problems, while disregarding the many ways this could go spectacularly wrong.

To be sure, exclusively or disproportionately informing the public about positive images of the future is likely not as harmful as doing so with negative images of the future. Indeed, in a study performed at the National University of Singapore, we found that when individuals are exposed to a negative future, they tend to exhibit more unpleasant feelings in response to it than when they are exposed to a positive future or to a mix of the two. Yet they do not necessarily change their plans for the future compared to the other two conditions.

However, the exclusive or disproportionate representation of positive images of the future still traps the public in a predetermined future that does not seem to need their intervention. In another study, published in Psychological Science, it was found that when the media repeatedly promote a positive and rosy future, individuals are prone to idealisation and thus to inaction. The study further suggested a link between such collective idealisation and subsequent declines in the Dow Jones Industrial Average.

In either case, leaning towards any specific image of the future, positive or negative, implicitly assumes and communicates that the future is predetermined, and misses the opportunity to educate the public to think about the future in a more nuanced manner. By instead stressing the existence of numerous scenarios, each of which can be affected by our agency, public intellectuals would also convey a more subtle but equally important message: that reality is more complex than what both utopian and dystopian images would lead us to believe. Indeed, any future that is desirable for some stakeholders may well be undesirable for others.

The role of emotions

Let us dig into the problem further. Why do public intellectuals focus so vehemently on a specific and often polarised image of the future rather than showcasing a range of future possibilities? Of course, much of this has to do with the fact that promoting dystopias and utopias contributes to one’s distinctive personal ‘brand’ of thought leadership. Exclusively positive images of the future fuel hope and industriousness, while exclusively negative images of the future fuel fear and discontent towards the status quo. Both are guaranteed to receive more attention than a careful, balanced (and likely more boring) analysis of multiple scenarios.

Yet there are also deeper reasons at work. First, images of either extreme are often driven by emotions. Indeed, emotions also partly explain the overrepresentation of negative images of the future in the current surge of public attention towards developments in large language models. Not only prominent public intellectuals such as Yuval Noah Harari, the philosopher Sam Harris, and the psychologist John Vervaeke, but also various industry thought leaders and celebrities like Geoffrey Hinton, Bill Gates, Nick Bostrom, Bill Joy, James Barrat, and Stuart Russell have promoted strong narratives of worry over the risks and dangers of AI. A close analysis of these narratives reveals that they often lack consistency and detail, and that they instead rely on an appeal to emotions as their driving force.

Harari, for instance, has repeated on multiple occasions that the key difference between AI and other technologies is that the former can take decisions by itself. In a recent interview, he states that “if AI is an amoeba right now, imagine what it would be when it will be a T-rex a few years from now!”. This grants an undue amount of agency to language models. The truth is that AI will not wake up in the morning and apply for a human job, much less pose a physical threat to us, unless an algorithm commands it to. Unlike an eagle, a plane does not dive down to catch a mouse, because it lacks the evolutionary drive to do so. Similarly, it is unlikely that language models will exhibit certain human behaviours unless we program them to do so.

Vervaeke, who has also warned of AI spiralling out of control, repeatedly refers to large language models as “machines”. By doing so, he taps into the dystopian allegory from science fiction of a superior physical organism suppressing our civilisation. AI’s ability to mimic certain human behaviours causes him and other observers to anthropomorphise it and grant it a vastly outsized degree of agency. These inconsistencies suggest that the emotional appraisal of AI trumps any consideration of how to represent it based on its actual and projected capabilities.

Sam Harris takes this logic even further. In a 2016 TED Talk, he suggested that when AI surpasses human intelligence, it will inevitably choose to destroy us at the slightest divergence between its goals and ours, comparable to how we view and treat ants (Harris states that he still subscribes to this argument today). This claim rests on a number of assumptions: that AI will achieve autonomy, that it will pursue evolutionarily driven goals out of self-interest, and that it will not be aligned with humans. None of these assumptions is a given, even in a scenario where AI superintelligence is achieved. Indeed, judging from the current capabilities of large language models, which are certainly impressive, huge leaps of logic and faith must be made to arrive at the notion that these models will develop self-interest. From self-interest, equally grandiose leaps must be made to argue that AI will destroy our species.

Similarly, Harari’s assumption that AI intimately conversing with humans on the internet and convincing them to buy products will lead straight to civilisational collapse is not warranted. Although scenarios like this are certainly not unthinkable, they would require an extraordinary sequence of simultaneous events at the local level (e.g., the spread of fake information and corrupted datasets, incidents driven by autonomous weapons or drones, cybersecurity incidents, errors in AI-driven management of infrastructure, bad actors taking control of AI, etc.), followed by their sudden catastrophic escalation at the global level. The lack of detail provided on this chain of events once again suggests that emotions about AI are the more likely force at play behind these images of the future.

A counterpoint to the above, often raised by proponents of such doomsday scenarios, is that to infer the future development of AI we must look at the exponential growth of technology. This would lead us to realise that we are currently at the “elbow” of a surge of truly exponential AI developments. Yet we must acknowledge that not only do we have a blind spot that makes us fail to notice some exponential curves; we also have one that makes us see them everywhere in support of our emotionally loaded opinions about the future. We may well not be at the elbow of the exponential curve, as large language models are increasingly costly and unwieldy to train, and the next paradigm of AI development is nowhere near established.

Another often-raised counterpoint relies on a 2022 survey among AI experts, which found that 50% of them believe there is a 10% or greater chance that humans will go extinct due to our inability to control AI. This statistic has since been promoted widely, including in a talk given by technology ethics advocates Tristan Harris and Aza Raskin. Yet they and others arguing this point provide no detail on the many possible scenarios in which such an AI takeover might occur. Additionally, AI experts tend to give higher estimates of AI extinction risk than superforecasters do, and we know that superforecasters provide more accurate estimates than domain experts (a superforecaster is a person whose forecasts are consistently more accurate than those of the general public or experts, ed.).

Once again, it is entirely possible that today’s talk of an AI apocalypse reflects deeper fears that precede current developments in large language models. It may in fact be symptomatic of a more general frustration and anxiety at our inability to control modernity. By making people fear along with us, we make them interested in our field; the field gains relevance and status, and so do we, and our fear abates (in the same vein, this essay is meant to increase the public’s interest in futures studies, which I’d like to transparently acknowledge).

Images of the future of existential risk should certainly be developed for the public and policymakers alike, if only to improve our capacity to imagine preposterous futures and to prevent the worst outcomes. For the same reason, images of “plausible utopias” should also be promoted to spark hope and industriousness. However, to the utmost degree possible, these images should be scrutinised for their emotional loading, the chain of events presented in them should be detailed, and they should be impartially counterbalanced by equally extreme opposite futures that we ought to either prevent or achieve. They should not be irresponsibly spread among the public in an unsubstantiated and unbalanced form, lest the outcome be fearmongering or idealisation.

The inflation of the present

An additional reason why public intellectuals focus so vehemently on a specific and often polarised image of the future, on top of their emotional appraisal, is that they sometimes overestimate the impact of current trends and events on the future.

The Covid-19 pandemic, for instance, caught many so off-guard that they could not think of a future unaffected by it. In an interview given in May 2020, the famous Slovenian philosopher Slavoj Zizek asserted that the pandemic would change everything and that nothing would return to the way it was before. Studying the history of pandemics, however, reveals that the opposite tends to be the case: the social and economic upsets caused by pandemics tend to return to a state of ‘normalcy’ after just a few years.

Sam Harris has also intensified his worries over AI-driven extinction following recent developments in large language models. Yet, as discussed above, the chain of events linking current AI developments to extinction scenarios remains undefined. The pace of developments in large language models today might not be sustained in the long term, and the trajectory of technological change could shift in completely unexpected directions, with both positive and negative effects. Concluding that the extinction of our species is now more likely by extrapolating from the last few months of technological progress means overemphasising short-term trends and events over a more careful analysis of long-term patterns of change.

Current trends and events indeed colour our cognition, and we must look beyond them to be able to investigate the future in an informed manner. This is a founding principle of the discipline of futures studies, yet one that many public intellectuals are not familiar with.

What images of the future should we demand from our public intellectuals?

In sum, public intellectuals should impartially discuss multiple images of the future to teach the public that the future is not predetermined. They should also meticulously examine the visions of the future they present, taking into account the emotional load they carry, in order to steer clear of fearmongering or excessive idealisation. Additionally, it is crucial for them to ensure that these visions are not influenced by fleeting trends and immediate events.

Having said that, merely rebuking present-day public intellectuals for not adhering to these principles is not enough. We should also forgive them, rise above, and move forward, while holding both current and future public intellectuals to higher standards.

Indeed, futures studies is a relatively young discipline, one that is just beginning to be embraced by intellectuals outside the field itself. It was also partly born outside of, and as a reaction to, the academic establishment. ‘Pracademic’ experts and specialists in futures studies tend to shun the public spotlight, the media, and professionalisation more generally, for fear that their intellectual freedom will be compromised by the exigencies of mainstream institutions such as academia, journalism, and science. This is why it is not entirely the fault of public intellectuals from other disciplines that they are unfamiliar with the field.

Additionally, it has to be acknowledged that public intellectuals do say useful things about the future despite the above biases. Peterson, for instance, is aware of the impossibility of predicting the long term – although he does not propose scenarios as a solution – and repeatedly warns of the danger of falling for utopian visions. Harari, likewise, is aware that our capacity to imagine the future is one of the greatest human powers, although he does not stress that this capacity is more valuable when we imagine multiple futures rather than a single one.

The most prominent public intellectuals of our age, from Sam Harris and Peterson to Slavoj Zizek and Harari, are also well aware of the lack of a grand future narrative – a shared vision driving our civilisation – and of the necessity to foster one. Yet despite their virtues, the public should demand more of our public intellectuals when they discuss the future. We have the right and the responsibility to both forgive them and demand more from them. Our future(s) are at stake.

Disclaimer: What is discussed above does not apply to every public intellectual. Some public intellectuals talk about the future responsibly, and many do not talk about the future at all. Yet as public intellectuals rise in prominence and are increasingly called upon to publicly discuss the future, they become more likely to fall into the biases outlined above.

