The Anything-to-Everything Future Is Here

How Google DeepMind’s Matthieu Lorrain sees liquid content reshaping storytelling.

Image: Elise Racine / Better Images of AI / Is This Even Real III

Only a few years have passed since large language models were first introduced to the broader public. Since then, there has been a shake-up of notions like authorship, ownership, and authenticity in creativity and content creation.

With every new model released, the limits of what’s possible are pushed further. For creators who utilise AI, what was once fixed and final has become fluid and adaptive. AI is enabling a world of ‘liquid content,’ where stories and images flow seamlessly across formats – in constant flux and always open to reimagination by creators and audiences alike.

For most of human history, storytelling was fluid, shaped by both the narrator and their audience, and evolving with each retelling. The invention of writing allowed stories to scale across time and space, but at the cost of adaptability. With mass media, creative output and storytelling solidified into even more static forms. What was fluid became solid – and is now being liquefied once more.

It started with algorithms and social media, which allowed the automatic and responsive tailoring of any kind of online content to individual preferences. With generative AI, the output itself is reduced to its fundamental components and remixed into something else.

Services like ElevenLabs can turn any text into a conversational podcast staffed by AI hosts. Graphic models currently under development, like Google’s Genie 2, can generate interactive 3D environments from as little as a single image, turning an illustration into a playable video game, with procedurally rendered environments for the player to move around in and explore. Increasingly, AI becomes ‘multimodal’, allowing multiple types of input and enabling multiple forms of output as well.

What these tools suggest is that formats will no longer bind expression – a piece of writing becomes a podcast; an image becomes a video game. Length can be adjusted to audience preferences – from feature film to highlight reel. Audiences can provide feedback in real-time, making the process of media creation much more iterative. Art direction becomes fluid as well, transitioning seamlessly from bleak cyberpunk to bubbly children’s cartoon. Finally, story customisation becomes nearly endless; instead of Emily in Paris, a viewer in Spain might experience Paolo in Barcelona, with adaptations to locations, cultural references, and even character arcs.

What does this all mean for the role of human creators? Will we become co-directors in AI-empowered creative processes, or will the very definition of authorship change as AI agents aid our storytelling and creativity? What does creative ownership mean in a future where content flows between different states, narrative forms, and is adapted through interaction? Will the act of intention become the final frontier of creativity?

We reached out to Matthieu Lorrain, Creative Lead of GenMedia Research at Google DeepMind, for his take on these questions. Lorrain’s work at the intersection of AI and media shows us a future-that-might-be where the creative landscape expands in previously unimagined ways.

Text-to-everything

“We’re seeing quality jumps that are on par with the leaps made in LLMs a few years ago,” Lorrain says, referring to new graphic models like Genie 2 and Veo 2, Google’s upcoming video generation model.

“The same kind of disruption we saw with LLMs is coming to video, storytelling, and immersive content.”

AI can increasingly act as a collaborator, helping creators remix and restructure elements in real time. The new models are powerful, but Lorrain doesn’t expect them to take over the creative process; humans will continue to play a key role. There will always be intention behind the output. But creativity, especially in visual storytelling, might lean more towards systems thinking and iterative worldbuilding, using AI as a co-creator.

“I work with filmmakers, and film is an incredible craft. The process and pipeline of making a film is what ultimately brings it to life. People have always studied the ‘how’ to achieve the ‘what’ – and that won’t change,” Lorrain says. “But with AI, the process may eventually become more powerful. Navigating and directing these complex systems will be a craft of its own.”

The role of AI will be that of a creative partner, Lorrain believes, capable of curating aesthetics, shaping story structures, and generating ideas. He envisions AI-driven collaborators, with virtual agents modelled on artists like Brian Eno or Rick Rubin acting as creative guides. Human creators set the vision and intent, AI helps shape the output, while audiences act as co-creators.

The big ‘why?’

So why should we wish for more liquid, more customisable, more granular, and platform-agnostic media? Who is this ultimately for?

“The ‘why’ is always an important question – probably one of the most important ones to ask when it comes to creativity,” Lorrain says. “Ultimately, this is about enabling more people to express themselves. I have so many ideas, but a lot of them are lost because I don’t have the time to execute them. With AI, I can bring more ideas to life and share them with others. That, to me, is the value I also see in generative AI.”

Google, of course, is one among many companies in the generative AI space. As developers compete to push out ever more powerful models, concerns grow about an overwhelming flood of low-quality content that pollutes the media and information ecosystems and comes with an environmental cost as well – a kind of ‘fast content’ phenomenon much like the fast fashion industry. With the volume of AI-generated output increasing, the question is whether this will lead to a decline in the quality of creative work, or whether the abundance of AI slop today will be looked back on as a product of imperfections in current models – a temporary problem that will go away as the technology improves.

“This isn’t just an AI challenge – it’s something we’ve already been dealing with in social media for a long time,” Lorrain comments. “Yes, there will be more low-quality content, human-generated as well, but there will also be much higher-quality content because AI is making it easier to create it.”

“These technologies are constantly being improved and optimised. What it took to generate an image two years ago is very different from what it takes today, and it will be even better tomorrow. I’m optimistic that we’re moving toward a future where overall quality will improve, and creating content will become more resource efficient as well.”

Letting go of control

Generative AI has given rise to new questions of artistic control, which will only become more pressing as the models mature and their output becomes more liquid. Creators have long held authority over their work, defining its final form and intent. With liquid content, will the notion of a ‘finished’ piece begin to dissolve? Not necessarily, Lorrain says. “The finished state is what you’re going to see as an audience, even if it might be different for you and for me.”

It may still mean that creators will need to relinquish a certain amount of control. For artists, this can feel like an existential threat to their creative integrity – and in the context of news and journalism, it can even be dangerous.

Others may see it as an opportunity that invites collaboration and audience participation. The role of the creator shifts from sole author to facilitator, setting parameters rather than delivering fixed, unchangeable works.

Hallucinations as creative friction

Art and creativity thrive on ‘happy accidents’: “When I think about creation, I realise that you don’t always want full control. Sometimes, the best moments come from happy accidents,” Lorrain says.

Often considered unwanted flaws, AI hallucinations can also be creative sparks. “We often try to eliminate hallucinations in AI, but in creativity, maybe they are exactly what we need.”

Lorrain recalls working with a renowned filmmaker when an AI mistakenly placed a torchlight inside a helmet, creating a surreal effect. The filmmaker loved it. “That’s where the magic also comes from.” Creative agency will increasingly require setting boundaries – defining what can be adapted and what remains fixed.

“Constraints have always played a role in creativity. Technical constraints help artists push their limits, whether those constraints come in the form of the paint, the canvas, or the film,” Lorrain says.

“These limits force artists to find new ways to work around them, which I think is a good thing. Automation can free up time for new ideas, but I don’t think there’s a perfect link between time spent and quality – some take long and don’t achieve much, while others can quickly create something great.”

Media apathy vs. creative intention

Impressive as new models like Genie 2 may be, Lorrain doesn’t see liquid content replacing all forms of media and creativity. However, he is convinced that real-time adaptation is changing how we think about storytelling – and that AI tools can also help artists define limits.

“I might be fine with my story being between 20 minutes and three hours, but never less than 20. That maintains creative agency,” he says.

The challenge now is learning to use the tools to enhance artistic expression rather than dilute it. This requires mastering a creative language that is still being written.

“Cinema was invented in France over 130 years ago, in 1895, by the Lumière brothers – not far from where I’m from. They were engineers, not artists. It took pioneers like Méliès to turn their invention into an art form, shaping filmmaking and special effects,” Lorrain says.

“I believe we’re in a similar moment today. Researchers at places like DeepMind are developing groundbreaking tools, but it’s up to creators and artists to transform them into a new language of creativity.”

