
Embodying Artificial Intelligence

As machine learning systems advance at an accelerating pace, the need to ground their computational understanding in our physical world only increases.

Image: Britain’s first robot – Eric – taking breakfast with its inventor William Richards in 1930.
German Federal Archives.

When the makers of The Terminator film series decided to depict Skynet on screen – Skynet being the fictional super-AI system that achieved self-awareness and took unfavourably to the idea of being shut down – it was given the physical guise of some well-known actors. In part, this was a visual choice: a big metal box with flashing buttons would have proven both unexciting and unrelatable. But even within the story, Skynet wants to go beyond its static nature: as an extension of itself, it builds killer robots to go out into the world.

Might the conscious artificial intelligence of the real future have the same desires, or have them built in by its makers? Might they deem it an absolute necessity for AI to reach its full potential? That’s the idea behind the hypothesis of ‘embodied AI’ – that, as cognitive science and psychology suggest, human-level intelligence in AI programs cannot be fully realised unless they are able to interact with and learn from their physical and linguistic environments, much as human babies do. So much of human intelligence, it’s argued, is encoded in, and affected by, our physiology. Perhaps the same is true for machines as well.

AI needs a body too if it is to reach its full potential – to move beyond the limitations of the digital-only large language model (LLM) AIs that, so far, account for the most widespread interactions with AI. Most machine learning software currently in use is completely invisible, running as the back-end algorithms and automated processes that many of us take for granted. LLMs work from patterns and structure in language-based training data – and while there’s an awful lot of this data, once it’s been weighted, categorised, and randomised, it doesn’t connect in any way to a physically tangible, real-world perspective.

Embodiment will allow AI to learn through interaction with the 3D physical world, generating its own data and feedback loops. Experiencing mistakes and challenges firsthand is essential, as they may affect not just the machine learning model but potentially the people around it. To draw a comparison: a purely digital artificial intelligence can extrapolate an overview that provides it with a map of an environment, whereas an embodied AI will gradually learn about that environment through its exposure to it – it will have to explore to make its own map. This, it’s argued, will be transformative; it would be a step backwards to ignore that we humans exist in the physical, social, and linguistic world too. If AI is to work for us, it may be better that it perceives the world as we do.
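
To make that feedback loop concrete, here is a minimal sketch in Python of the perception-action cycle that embodied learning relies on, written against the open-source Gymnasium API. The environment and the random placeholder policy are illustrative assumptions; a real embodied system would be wired to physical sensors and motors rather than a toy simulation.

```python
# A minimal perception-action loop: the agent acts, the world "pushes back"
# with an observation and a reward, and that feedback becomes the agent's
# own training data. Environment and policy are illustrative stand-ins.
import gymnasium as gym

env = gym.make("CartPole-v1")        # stands in for a physical environment
observation, info = env.reset(seed=42)

experience = []                      # the agent's self-generated dataset
for step in range(1_000):
    action = env.action_space.sample()   # placeholder for a learned policy
    next_obs, reward, terminated, truncated, info = env.step(action)
    experience.append((observation, action, reward, next_obs))
    observation = next_obs
    if terminated or truncated:          # a firsthand "mistake" ends the run
        observation, info = env.reset()
env.close()
```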

“In my view, an agent needs to be able to push against the world and have the world push back. AI can’t understand cause and effect very well without being embodied,” argues Josh Bongard, a professor of computer science at the University of Vermont in the US. Bongard is a pioneer in the development of ‘xenobots’, biological robots built from living cells, and a leading proponent of the embodied AI hypothesis. Like humans, he argues, AI needs to have a sense of its own fragility – a sense that informs all that we do. Being embodied automatically imposes constraints on all decisions. In a similar vein, there are now moves to apply parameter guardrails to LLMs, too.

“In fact, I would argue that without embodiment, AI is unsafe,” Bongard stresses. “It’s a fantastic bullshit artist and will come to the wrong conclusions. Without embodiment it doesn’t have any skin in the game. A machine that doesn’t realise that it may be at risk is a dangerous machine”.

This doesn’t mean that embodiment would take the form of the kind of uncanny robots loved by science fiction. Self-driving cars are an instance of embodied AI; so too is the kind of factory robot that builds them. A lot of agricultural equipment – autonomous harvesters, for instance – can be thought of as embodied AI. There’s a space – and demand – for embodiment in education, healthcare, and entertainment. As Bongard notes, humans are differently embodied too, both at different stages of development and from each other.

So, The Terminator’s picture of intelligent, humanoid machines wandering the streets – preferably without machine-guns – is not necessarily the most efficient form to take, nor the most useful in the majority of cases. Bongard suggests the most convincing portrayal of intelligent machines of the future that he’s seen on screen is instead more like the ‘micro-bot’ swarm of ‘Big Hero 6’: “machines made of machines made of machines”.

Andrew Philippides, professor of bio-robotics at the University of Sussex, argues that the nature of embodiment will be more a question of purpose, the task to which the AI is set. “Natural intelligence is embodied, and if we want to understand that then we need to think of AI in the context of embodied systems. We move through the world,” he says. But a machine can have what he calls ‘situatedness’ – being in an environment and aware of that fact – without a body being important.

Since letting an embodied AI loose in the real world is a risky proposition, giving AI the opportunity to learn by running a virtual body in a simulated world – one running much, much faster than reality – looks set to be the best way forward. In 2019, Meta (then Facebook) unveiled just such an open-source platform, AI Habitat, where AI agents can learn to open doors, fetch objects, and perform other similar tasks. Last October, the latest version began teaching AI how to interact with people as avatars.
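
The appeal of simulation is easy to illustrate: a virtual body can accumulate ‘lived’ experience far faster than a physical robot ever could. Below is a toy sketch in Python; the fixed-timestep dynamics are an illustrative stand-in of our own, not AI Habitat’s engine, and the numbers will vary by machine.

```python
# Why simulation helps: stepping a simple world model accumulates simulated
# experience far faster than real time. The dynamics below are a toy
# stand-in for a real physics engine.
import time

SIM_DT = 0.02          # each simulated step advances the world by 20 ms

def step_world(state):
    # placeholder dynamics: a body coasting and slowly decelerating
    position, velocity = state
    return (position + velocity * SIM_DT, velocity * 0.999)

state = (0.0, 1.0)
steps = 500_000
start = time.perf_counter()
for _ in range(steps):
    state = step_world(state)
wall = time.perf_counter() - start

simulated = steps * SIM_DT
print(f"simulated {simulated:.0f}s of experience in {wall:.2f}s wall time "
      f"(~{simulated / wall:.0f}x real time)")
```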

Not everyone is convinced by this approach, not least because even the simplest real-world physics – a bouncing ball, say – is incredibly hard to simulate, and the varying intentions, goals, and perspectives of humans even more so. This is known as the ‘sim-to-real gap’. That such training is virtual and speedy may be just as well, because, as Philippides says, “embodied AI is totally happening” – it’s already being trialled by Amazon, BMW, and Mercedes-Benz, for example – “just so long as we don’t think of it in terms of a two-legged walking robot, which is a long way off,” he laughs.
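
One common engineering response to the sim-to-real gap is domain randomisation: varying the simulator’s physical parameters between training episodes so that a policy cannot overfit to any single, inevitably imperfect, model of reality. Here is a minimal sketch in Python; the parameters and their ranges are illustrative assumptions, not drawn from any particular simulator.

```python
# Domain randomisation: perturb the simulated world each episode so the
# learned behaviour has to work across many plausible "realities".
import random

def make_randomised_world():
    # illustrative parameters a robotics simulator might expose
    return {
        "gravity":  random.uniform(9.6, 10.0),   # m/s^2
        "friction": random.uniform(0.3, 1.0),    # surface coefficient
        "mass":     random.uniform(0.8, 1.2),    # kg, e.g. a held object
        "latency":  random.uniform(0.0, 0.05),   # s of sensor delay
    }

for episode in range(3):
    world = make_randomised_world()
    print(f"episode {episode}: training with {world}")
    # ... run a training episode inside this perturbed simulator ...
```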

Indeed, developments in robotics tend to proceed at a much slower pace than in artificial intelligence, because hardware breaks down – given those hard knocks from reality – in ways software does not. The success of embodied AI will in large part depend on the development of technology that the disembodied kind never has to worry about – sensors, motors, and the like – that would allow AI to interact with its surroundings.

“The ones we have aren’t very good yet. Just try to get a robot to handle objects with a sense of touch comparable to that of humans, for example. The same with olfaction. Or consider the complexity required to move through uncertain and dynamic terrain,” says Philippides. Then there’s estimating where in a terrain you are, or how far you’ve travelled from a starting position, relating language to everything in that terrain, and communicating with the other (human) agents in it too – each a huge challenge for embodied AI developers. “Just think of the energy efficiency of our biological sensors alone. As it was once put to me, the problem with embodied AI is ‘batteries, batteries, batteries’,” he says.

He adds that the quality, kind, and even the arrangement of sensory input matter too, since, as with humans, at the most basic level all the information we have about our environment is filtered through the body. As humans, we “outsource computation to the body.” Spiders, he suggests by way of example – not least because bio-mimesis will, he believes, eventually prove to be “the only game in town” in robotics – can feel vibrations coming from all directions because they have eight legs placed around a central hub; likewise, the evolution of an AI within something manufactured to carry it will be moulded by the right spatial arrangement of its sensors. We’re only beginning to consider such detail.

“But the interesting part of this is the matter of active perception: we move in order to [better] perceive, and that requires some kind of body,” he adds. “Then there’s the question of how the way we move affects the stream of information. Engineering now may be trying to iron out those changing signals rather than, as animals do, make use of them.”

Science fiction can be chilling in its vision of intelligent, mobile machines, especially when they go wrong – and from ‘Westworld’ to ‘Alien’, ‘RoboCop’ to ‘Ex Machina’, they often do go wrong. But perhaps scientific reality is already disturbing enough, even though, for the present, we are still only dealing with non-conscious systems. A robotics start-up called Figure recently unveiled a demonstration of a robot that, after just 13 days of OpenAI training, could understand speech, execute verbal commands and even, to a point, explain why it was doing what it was doing. Meanwhile, “Boston Dynamics [the US company behind Spot, an agile four-legged robot] is doing a good job of terrifying people,” Philippides notes. “The drones used [in the Ukraine war] too.” As AI becomes more embodied – as more robots enter the professional, domestic, and military spheres – it’s clear we will need a huge level of ethical and technological consideration to get the deployment of this technology right.

Bring it on, says Bongard, of such concerns. He’s in favour of “the breakneck development” of embodied AI precisely because “we have to see just how scary it is” in order to put the right checks and balances in place. “The robots [of sci-fi] unsettle us and that’s a good thing. The genie [of AI] isn’t going back into the bottle. And humans need help,” says Bongard. “But safe AI is possible.” If we act in advance to make it so, that is.

