How the BBC Uses Foresight to Prepare for an AI-mediated Future

A Q&A with Antonia Kerle, Chief Technical Advisor in BBC R&D

Illustration: Sophia Prieto

How is the world’s leading public service broadcaster preparing for the future of AI-driven media? In collaboration with BBC R&D and the BBC’s GenAI Programme, CIFS led a foresight initiative exploring the future impacts of Generative AI on the media industry and its implications for public service media.

We spoke with Antonia Kerle, Chief Technical Advisor in BBC R&D, about how the insights from the project inform how the BBC is navigating the challenges and opportunities of AI.


First of all, it’s great that we’re able to share a glimpse of the work we’ve done together. Can you share a bit about your role as Chief Technical Advisor, and how it intersects with foresight and decision-making at the BBC?

My role as Chief Technical Advisor is a relatively new addition to BBC R&D, which has been around for almost as long as the BBC itself. My team has been set up to track trends and explore emerging technologies like AI, distributed ledgers, and quantum computing, with the goal of not just understanding these technologies, but figuring out what they actually mean for the BBC and public service media. You can always buy a trend report, but it won’t reflect the BBC’s unique perspective – that’s where my team comes in.

My own background is actually in benchmarking and economic forecasting, but I’ve come to see that this is just one small part of thinking about possible futures. For me, it’s about combining a structured, data-driven approach with “futuring” – helping us build a bigger picture and a narrative of how things might evolve and fit together.

I think one of the unique aspects of BBC R&D is that we think about solving the BBC’s problems five or ten years out. For example, if we’re working on standards today, the goal is that in five years, when those standards are widely adopted, the BBC is already prepared.

If I’m honest, a big part of what we do is building relationships across the organisation. A lot of things happen in different pockets, so it’s about figuring out who’s actually making decisions and working with them as closely as possible. Rather than throwing information at people, we focus on being a resource – making sure they can come to us with questions and that we can help inform key decisions.

We’re really starting to see a shift. There’s a recognition of the challenges in the media and technology landscape, and an accelerating rate of change means that things that once looked far away have become immediate challenges. As a result, we’re seeing a lot more people coming to us.

So how do you and your team see the media landscape evolving, and how do you see it affecting public service media?

I think that across public service media there is a real sense of existential threat and a concern that we will lose our direct relationship with audiences given the changes in the tech and media landscape. The BBC, by its nature, is slow-moving, and rightly so – we’re funded by the public and can’t just ‘move fast and break things’ – but there’s a real need to better understand technology developments and figure out how to respond confidently and quickly.

Generative AI is a great example of this. R&D has been exploring it for years, but when ChatGPT launched and AI entered the mainstream conversation, our head of AI Research, Danijela Horak, was one of the few people who understood what was technically possible and what wasn’t. We were able to work with our colleagues across the organisation – most notably Peter Archer, who went on to establish the BBC’s GenAI Programme, which has become a major pan-organisational initiative.

Can you share how collaborations with external organisations like CIFS support your work and decision-making, and why you chose to use scenarios?

I think the most valuable aspect of scenario work is that – unlike traditional forecasting, where you’re constrained by specific data sets, proxies, and model assumptions – it allows for a broader exploration of possibilities. For an organisation like the BBC, which isn’t just focused on customers or shareholders but has a wide range of stakeholders, there are many different forces that could impact the way things turn out for us. One interesting learning from our scenario work was how even small tweaks – whether in policy, AI capabilities, or industry dynamics – can lead to significantly different outcomes.

What really stood out was how the process created space for people to voice concerns. Sometimes people fall into extremes – either assuming a technology will be the biggest disruptor ever, or that it won’t matter at all. Scenarios create space for shades of grey, ensuring a more nuanced and productive dialogue along the whole spectrum. The ‘critical uncertainty’ approach allows everyone to acknowledge that something could matter, even if we don’t yet know to what extent.

I know that you take the development of generative AI very seriously at the BBC and recently presented these three key principles: serving the public, prioritising creativity, and ensuring transparency when working with AI. How do these principles shape your approach?

In R&D our focus is on how we can ensure AI benefits creative industries and society. We are working on foundational AI capabilities that serve people, industry and the UK.

We want to offer our audiences the best experiences – some of which will be enabled by generative AI. But these experiences need to be accessible to everyone, wherever they might be. Whether you’re in London or a remote part of Scotland, everyone should have the same experience, which requires the right infrastructure and networks – across all devices.

Another major role we play is being a voice for the creative industries in discussions with the UK government and regulators around policy issues like copyright. Our AI Research team gives us the technical expertise to engage at a deep level and informs our colleagues in the policy team.

Ultimately, we want to set a gold standard for ethical, safe, and accessible use of generative AI in the UK’s creative industries. It’s about ensuring AI creates real value for people, is used responsibly, and remains universally accessible – not just for those who can afford it.

The BBC isn’t just focusing on how we use this technology, but also on how we can support the wider creator economy. It’s not only about teaching people how to use generative AI – it’s about lifting the entire UK creative sector through the BBC as a platform. Beyond applying AI in our newsroom, we have an opportunity to play a bigger role in empowering creators and driving innovation across the industry.

From a future-focused perspective, what do you see as the biggest risks of generative AI?

The biggest risk – and frankly, the one I’m least sure how we’ll navigate – is AI-driven gatekeepers controlling access to audiences. The media industry already struggled when social media became the main intermediary. With AI-generated search results, this challenge is intensifying, and AI-driven recommendations for streaming services put another barrier in the way of people finding and enjoying what we make. If audiences can’t find us, we don’t exist – posing an even greater risk than disinformation, sustainability issues, or other challenges.

How do you see the development of AI impacting how you approach foresight and strategy at the BBC? Do you already see internal changes?

It’s interesting you ask, because we’ve been having this exact conversation. We don’t have a fully formed view on how AI will reshape foresight, but within my team, we’re exploring how we can become ‘super users’ of generative AI.

We’re going through all the necessary BBC training, but even before that, we had already started experimenting with things like a tool for summarising research interviews. So rather than running away, we’re leaning in and looking at how AI can support what we do. Right now, we still feel a bit ‘Stone Age’ – writing papers in traditional ways – but we know there are ways to integrate AI more effectively. It’s an ongoing conversation.

Can you share some thoughts on how you gain internal support and buy-in for the foresight work you do at R&D?

I’ve always believed that the more people are involved from the start, the more likely they are to buy into the outcome. Our work with CIFS reinforced how important it is to bring people along in the process – giving them space to surface concerns, acknowledge uncertainties, and integrate their ideas. That made it much easier to take those ideas forward. The engagement in the workshops was incredible, and multiple colleagues told me it was one of the best sessions they’d ever had. The quality of that experience built trust in the process and the outcome.

Since the project we did together is internal, I can’t share much, but I can say we’ve already applied its insights in several key areas. It has shaped our R&D strategy, guided the generative AI programme’s future work plan, and been incredibly valuable in our strategic planning.

What are the most critical questions public service media should be asking about AI over the next 5 to 10 years?

From my perspective, one of the biggest questions is how the BBC can help the UK find its place in the global AI landscape. With an AI arms race between the US and China, how do we support and enable the UK to find its niche within the larger ecosystem? How do we carve out a spot for ourselves?

With new players like DeepSeek, we already see these traditional dynamics playing out, where China may offer low-cost models while the US dominates with larger, more expensive solutions. Cultural verticals still remain, so the UK has to figure out who we are in that context. I think we have an opportunity to be a safe destination for creators – ensuring AI is used in ways that support creators, protect IP, and provide both the tools and safeguards they need.

Finally, there’s the challenge of discoverability, attribution, and the fragmentation of the information ecosystem. How do we deal with that, and can we still play a role in maintaining a shared reality and a shared truth? These are all critical questions that we don’t yet have answers to.

Do you see any new shifts in the BBC’s approach to innovation?

One shift I see emerging is the BBC being clear about its role as a global digital public service organisation. As the British Broadcasting Corporation, our focus has traditionally been on serving UK audiences (although the reach for News, Sport and World Service has been far wider). But when we look at the macro landscape, the real economic growth is happening in markets in the Global South – particularly in Asia. There’s often an assumption that innovation will come from Silicon Valley or Europe, but countries like Nigeria and others are leapfrogging legacy systems and adopting entirely new ways of doing things. Public service broadcasters, including the BBC and European public service media, should consider how to engage with innovation happening beyond Europe and North America.

This also raises a bigger question: How can the BBC continue to serve the UK as the major public service broadcaster while also meeting the needs of a global audience, beyond the World Service and news? What would that look like? How do we connect with audiences in India as effectively as we do in Manchester? For example, in China, audiences can watch a show and instantly purchase what the main actress is wearing. That level of media-commerce integration is incredibly innovative. The question is, how do we participate in such trends in a way that aligns with our public service mission? It’s an exciting challenge, but one that won’t be easy.

Looking ahead, what do you see as the greatest opportunity for public service media in an AI-mediated future?

With the development of generative AI there are immediate, operational benefits, such as AI helping us work faster, reduce costs, and handle background tasks like metadata tagging. But what we are really trying to wrestle with is: what does a public service media organisation mean in the age of AI, moving into the 2030s? It’s not just about producing content but reimagining what the BBC could become. Could we move beyond broadcasting to become part of the UK’s civic infrastructure? Could we fill the space for a digital public sphere left by platforms like X and Instagram? If we see AI as a tool for enabling creators, supporting citizen journalists in recording stories, and helping surface the most truthful content out there, then the opportunity is much bigger than efficiency – it’s about shaping the future of public discourse.


This article was first published in Issue 13: The Generative Future