
From Socrates’ warnings about writing leading to forgetfulness, to today’s anxieties about AI-induced brainrot, every new media technology has sparked fears of cognitive decline. But does generative AI pose a uniquely profound threat to our ability to think for ourselves?
Illustration: Sophia Prieto
AI slop. Content soup. Brainrot. Just a few of the disparaging terms springing up in recent years that attempt to capture the ill effects of artificially generated content flooding social media and the wider web.
As Meta envisions a future with platforms populated by artificial users, and governments like the UK promising to ‘mainline AI in the veins’ of their populations, there’s a growing worry over the effects of overreliance on technology that does the critical and creative thinking for us. “Will Generative AI Make Us Dumber?” asks Forbes. “Will AI Make Us Stupid?” asks The Guardian. The ubiquity of generative AI tools, coupled with our incomplete understanding of their effects – from the neurological to the societal level – is causing a minor panic.
Are worries over AI merely fear of the new, or is there genuine cause for concern that it’s making us forget how to think for ourselves?
New research suggests that the answer to the latter question may be “yes” – at least when it comes to our capacity for critical thinking. That’s the conclusion of a study published in the January 2025 issue of Societies, titled “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.”
The study compared participants’ critical thinking abilities with their AI use, identifying “cognitive offloading” – a reliance on AI for tasks typically requiring human judgment – as the key factor linking the two. Individuals who relied more heavily on AI for their reasoning also exhibited a reduced ability to evaluate information critically and engage in reflective problem-solving, suggesting an association between the two.
One younger participant in the study remarked, “It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”
It’s too early to draw definitive conclusions. The study concerns short-term cognitive effects of AI use and doesn’t show how enduring the observed impacts are. But the implications point to what many already suspect: relying on AI to do too much of our work diminishes our ability to disengage from this ‘second brain’. As AI grows more capable, our own capabilities shrink, until we need it as a crutch to think. That’s a risk in a future where the likes of Sam Altman and Bill Gates tell us that anyone who fails to fully embrace AI will be outpaced by those who do.
Not all of this is entirely new, of course. The cognitive effects of new media technology have been observed and debated throughout history. Socrates said that writing introduced “forgetfulness of the soul” and discouraged dialogue and internal reflection.
Writing also led to a more literal kind of forgetfulness. Ancient Greece was an oral culture that practiced advanced techniques for long-term memory that would seem superhuman to us today. The ‘method of loci’, discovered by the Greeks, turns the mind into a memory palace – an imagined physical place in which facts can be stored in the form of objects for later retrieval – like a mental hard drive. This is how Greek and Roman orators taught themselves to learn lengthy speeches by heart. Writing as a tool for information storage eliminated the need for techniques like these.
Later, with the rise of print media, anxieties shifted in this new direction. The comments of 15th century Venetian editor Hieronimo Squarciafico read like a distant precursor to contemporary worries over the digital erosion of our mental faculties. Writing about the impact of the printing press, Squarciafico saw the power of the technology in safeguarding and spreading information, but he was also concerned that the overabundance of material now at the disposal of his contemporaries encouraged superficial reading and diluted scholarly rigour.
Squarciafico knew something that we have had to relearn since: scarcity of information is not a bad thing in itself. Too much information can ruin our ability to make sense of it.
If the printing press overloaded us with information, then the internet buried us in it. Books, however abundant, exist in the analogue, tangible realm, and their information requires a degree of focus and attention to dislodge and digest. The internet, by contrast, enables fragmented, borderless, surface-level, and open-ended activity – clicking, scanning, prompting – with endless mental distractions designed to derail the attention of users.
With generative AI, all the contents of the net become infinitely malleable. It liquefies information and can explain advanced concepts so that a ten-year-old can comprehend them. It can transform text to image to video to sound, write and spell and think of phrases, invent synonyms or metaphors, construct arguments, weigh pros and cons of decisions, or add a dose of politeness or sternness to an email. It’s a machine for removing the friction of thinking hard about problems and training your brain to come up with solutions.
There’s a ‘use-it-or-lose-it’ effect at play when we offload cognitive processes to technology. When GPS became a default feature in smartphones, habitual use was found to diminish our spatial memory. When the ubiquity of search engines made all the world’s information accessible, it also made us stop recalling the information itself and instead start recalling where to access it (known as the ‘Google effect’ or ‘digital amnesia’). The camera’s impact on recollection – by turning photos into “memory aids” for details that are otherwise forgotten – is also well documented.
Our tools also shape the output. The typewriter, allowing faster revisions and cleaner manuscripts, influenced sentence structure and pacing in literature. The telegraph, with its cost-per-word pricing model, encouraged brevity in journalism and business. Twitter limited online speech on the platform to 140 characters, and by doing so greatly shaped public discourse. Generative AI is in the process of once again moulding the shape of human work, thought, and creativity.
It’s a process that’s endemic to our relationship with technology. Some philosophers of cognition believe that our minds do not reside exclusively in our bodies but extend into the physical world and the tools we use. Supporters of this ‘extended mind’ thesis would see generative AI as merely another enhancement of our thinking selves, like the word processor, camera, or telegraph before it. As we invent new technologies, our plastic brains adapt around our use of them, and perhaps those cognitive muscles are freed up to do other things. Technology is not an external force corrupting the ‘human’ but an inseparable part of us.
That doesn’t mean our ability to maintain some cognitive discipline in the face of abundance couldn’t use a helping hand. It’s Squarciafico’s dilemma updated for the age of AI: technology makes all the world’s media and information available to us, but abundance can ruin our ability to engage with it in a thoughtful manner.
As tools of creation, the impact of generative AI extends beyond injecting the internet with the neutered GPT style. Through near-instant prompt-to-output, it also removes many of the human limits on content production. Soon, AI-generated ‘liquid media’ will flow freely across platforms, blending formats and adapting to suit the context and needs of its audience. A long-form magazine article becomes a TikTok, a technical manual is explained to you by a pair of chirpy artificial podcast hosts. A future is in store where all the world’s content and information exist in a quantum state – potentially in all formats and on all platforms at the same time – and is generated at volumes and speeds that would have driven Hieronimo Squarciafico mad and made Socrates’ head explode.
In practice, this may not make much of a difference to most people. It’s more stuff piled on an already overabundant internet. But it does raise the question of what cognitive discipline and media literacy looks like in a completely AI-mediated and liquid information landscape. What’s a necessary ‘baseline’ capacity for things like critical thinking, and what do its safeguards look like? There’s a balance to be found somewhere, in between the extremes of total complacency on one end and setting fire to OpenAI’s HQ on the other.
Those questions become especially urgent in education, where the dead ends of digitalisation are starting to appear.
A digital frontrunner, Sweden has begun reintroducing textbooks, handwriting, quiet reading time, and other traditional ways of learning to schools after years of investment in tablets, touchscreens, and digital learning devices, with the centre-right government expressing a desire to end digital learning entirely for children under the age of six.
This policy reversal comes on the back of a decline in Swedish children’s reading comprehension measured between 2016 and 2021. While the decline is difficult to attribute directly to digital learning, Swedish lawmakers suspect the two are at least connected.
“Technologies that make writing abundant always require new social structures to accompany them,” the writer Clay Shirky commented in the late 2000s in response to an ongoing debate over Google, the internet, and attention spans ignited by Nicholas Carr’s essay “Is Google Making Us Stupid?”, published in The Atlantic.
To retain cognitive discipline, perhaps the future of AI-mediated abundance should be accompanied by new structures of managed restraints, however those might look. That doesn’t necessarily mean reverting entirely to older, more cumbersome forms of media and banning the use of generative AI – although in some contexts it might – or promoting ‘digital detox’ culture and no-phone summer camps. There’s a way to use AI as a tool for cognitive enhancement without letting it wholly replace the hard work of thinking.
Embracing scarcity in learning, and process over product in creation, can counter calls to ‘mainline’ AI in order to ‘keep up’. It’s not too late to take back some cognitive control, and the science suggests you are not a luddite for wanting to do so.