The Institute’s very own media and technology experts, Sofie Hvitved and Bugge Holm Hansen, share their thoughts on the future of AI and the media industry.
Photo: Natalie Walker
A conversation with CIFS futurists Sofie Hvitved (Head of Media) and Bugge Holm Hansen (Head of Innovation and Technology) about AI anxiety, why we don’t hear much about the metaverse anymore, and the future of the media industry.
How do you see your role as futurists and foresight practitioners in helping people understand our future with AI?
Sofie: Personally, I think my role as a futurist is to ensure that we have a holistic understanding of AI development. One focused on direction, rather than the technologies themselves.
AI development will likely cause fundamental changes to life as we know it. It presents scenarios that are difficult for us to grasp. For example, AI is not just a tool for making things faster, cheaper, more efficient or (potentially) better. If every aspect of our lives becomes AI-mediated, it would fundamentally reshape the way we share knowledge, innovate, and engage in decision-making – the way we think.
Bugge: As futurists, we want to look beyond immediate technological advancements and understand broader systemic shifts. It’s not our job to predict AI’s trajectory, but to ensure that we, as societies, prepare ourselves for all its possible directions.
This is why we shouldn’t just focus on AI as a singular tool, but instead on the relationship between AI and larger systems: something that is becoming an invisible layer across all aspects of society. It’s not just about automation or optimisation. AI influences governance, power structures, and even the way we perceive reality itself.
Here, futures thinking is essential, because AI is evolving faster than our traditional frameworks for understanding change. If we let AI development be dictated solely by technological capability and market forces, we risk losing sight of the bigger picture.
Sofie: At the moment, there is a tendency towards picking sides in the public debate, where people box themselves into extremes on the AI spectrum: either AI is a silver-bullet solution or it is an existential risk to humanity. Our job is to promote neither techno-optimism nor fear, but to make people aware of the nuances in the discussion.
How do you, as futurists who specialise in media and AI and use these tools, avoid becoming a walking, talking human version of ChatGPT? Do you limit yourselves in how you use AI?
Sofie: Well, how do you know it’s not my AI futurist clone answering this question? And does it even matter, if the message and intent are still mine?
Look, I get it. There’s a fear that we might lose agency to the machine. And of course, AI should not replace our thinking but be more like a collaborative assistant. If we’re mindful of how, when, and why we use it, then I think it can have a positive impact. It can help challenge our own perspectives, introduce new insights by reframing ideas, and simplify or contextualise our thoughts. Is that such a bad thing?
Of course, it’s easy to get carried away. I limit myself in many situations. It’s about figuring out when these tools are meaningful to use. And of course, there are legitimate concerns about the environmental consequences of AI and the striking lack of transparency from many AI companies. Plus, I don’t want to make myself dependent on AI infrastructure, either! Now, I don’t believe that we will completely lose human agency, but AI will undoubtedly affect the way I work. It already is.

Bugge: There’s this difference between traditional slow foresight and AI-mediated fast foresight. AI excels at the latter, giving us the ability to enhance pattern recognition, analyse large datasets, summarise, and synthesise information. Is one better or more qualified than the other?
I believe foresight is about gaining a deep understanding. AI is great at a lot of things if we can guide it correctly, but it still tends to struggle with the messy, ambiguous, and speculative nature of futures thinking. That’s why we must be intentional in how we use it, ensuring that it doesn’t flatten our thinking or somehow limit our perspectives. Otherwise, we risk not just sounding like ChatGPT – but thinking like it too.
Would it be a bad thing to become a GPT?
Bugge: The real danger of ‘becoming a GPT’ isn’t about turning into a machine, but about outsourcing too much of our thinking to predictive systems. If AI is trained on what we already know, does it really help us imagine new futures, or does it simply reinforce existing narratives?
Sure, let’s experiment with AI versions of ourselves – but let’s make sure they push our thinking forward rather than simply mirroring us back.
Sofie: What does it even mean to become a Generative Pre-trained Transformer? I don’t think I can ever turn into a machine, but I do think that 2025 will be the year that I will try to make an AI clone of myself. So, if there’s anybody out there who’d like to interact with my clone or have a presentation delivered by my Futurist Sofie-clone, feel free to reach out!
We don’t hear much chatter about the metaverse anymore, even though just a few years ago it was proclaimed as the future of media by Meta. Is that because we’ve passed the peak of the hype cycle, and we’ll soon reach the ‘plateau of productivity’? Is it because the technology is not there yet? Or, is it because no one really wants it?
Sofie: Oh, the M-word. I feel frustrated. The word ‘metaverse’ was never meant to be colonised by Mark Zuckerberg’s vision. To me, what the word means is the continuing convergence of our physical and virtual lives. And that is still happening with content being able to move across platforms and mediums.
We can call it something else if you’d like. Immersive worlds. Spatial experiences. Regardless, the development of tools bridging digital and physical worlds is still evolving. Take immersive interfaces with XR, smart ecosystems, and even physical AI. It’s just not about Meta’s vision of the Metaverse in VR worlds. The convergence will never be ‘complete’, it’s an ongoing evolution with a long-term time horizon – i.e. the next 5-10 years.
Technology is one of the few domains where we are very susceptible to thinking in deterministic and linear ways. Technology is often thought of as the sole shaping force in society, operating outside the bounds of politics and culture. How much room is there for imagining alternative futures when it comes to technology – specifically the kinds of technology being designed and pushed by some of the world’s biggest companies, which have the power to steer its direction on their own?
Bugge: Technology doesn’t exist in a vacuum. The idea that technology evolves outside politics and culture is a myth, often perpetuated by the very companies that stand to benefit from it.
When we talk about alternative futures for technology, we’re really talking about alternative power structures. Who gets to decide what technology is built, for whom, and to what end? Right now, much of AI’s development is concentrated in a handful of private companies with little transparency, accountability, or democratic oversight.
If we allow the future of AI to be dictated solely by profit motives or geopolitical rivalries, we risk locking ourselves into a future designed by others. That’s why foresight is so important – we need to expand the space of possibility and challenge dominant narratives, making sure that technology serves the human visions of the future, rather than the other way around.

What would you say to the people who are either anxious about AI’s growing role in society or are actively trying not to use it? Are those people going to be left behind, or is there a possible flourishing future for them?
Sofie: Hmm, it’s a difficult question. I mean, they are right to be anxious. I’m anxious too. But I’m mostly anxious because I think our fears may lead to the false belief that we can stop AI development with heavy regulations, which might lead us to not bother taking part in building these systems.
Understanding how to collaborate with these new tools and upskilling for an AI-mediated world will obviously give you a competitive advantage. But I don’t think people will be ‘left behind’ if they don’t start now. The great thing is that these tools will become more accessible over time and, eventually, they’ll simply become an integrated part of our lives. Google Maps helps us navigate roads today. Tomorrow, AI agents are likely to assist us in navigating all parts of our lives just as naturally.
Bugge: We’ve seen this before. Not everyone became a programmer when the internet emerged, but understanding how digital technologies transformed society became essential. AI is following a similar path.
The real challenge isn’t whether individuals are left behind – it’s whether we collectively ensure that AI develops in ways that prioritise human needs over pure efficiency and economic gain in an AI-mediated world.
What is your take on the current media industry and its approach to AI?
Sofie: The media industry’s current approach to AI is too incremental and focused on short-term efficiencies. Yes, automating workflows and enhancing content personalisation is important, but we’re moving towards a future where AI agents actively curate, verify, and even shape the narratives audiences engage with. News and content become more liquid, and websites will not be built as they are now. The nature of journalism could shift from being content-driven to curation-driven, where trust is built not just through human reporting but through AI’s ability to filter, contextualise, and validate information at scale. And this brings a need to look at completely new business models fit for a new AI-mediated information ecosystem.
Bugge: The media industry is playing it too safe with AI, and if it doesn’t take control of the ongoing shift, tech platforms will take over its market. It’s about staying relevant. Those who see AI just as an efficiency tool might end up falling behind. Those who embrace it as a foundation for the future will be part of defining the industry.
In my opinion, the biggest question the media industry should be asking is this: who controls this incoming AI-mediated ecosystem?
If media organisations don’t take the lead in shaping the role of AI in news, for example, then other external forces will define the landscape for them. News organisations need to move beyond seeing AI as a tool and start thinking about what an AI-native media future looks like – one that ensures transparency, accountability, and trust in a world of infinite, liquid content.

This article was first published in Issue 13: The Generative Future