
Is anyone there?

In 2024, tech journalist Evan Ratliff made an artificial clone of his voice and set it loose on the world. His podcast Shell Game probes the boundaries of AI and exposes how unreal our world is becoming.

In 2024, journalist Evan Ratliff made an AI clone of his voice, connected it to ChatGPT, and gave it access to his phone. He then set his Frankenstein creation loose on the world and recorded its interactions. The result, Shell Game, is a six-episode podcast that documents the exchanges between Ratliff’s doppelganger and a gallery of unwitting customer service workers, phone scammers, family members, colleagues and friends. The conversations are weird, often funny, and sometimes disturbing.

Shell Game goes beyond merely showcasing the latest in AI voice gadgetry. It probes the boundaries of artificial intelligence and shows us how unreal our world is becoming, offering a glimpse into a potential future where agentic AI will make dinner reservations, attend work meetings, plan social events, or even offer counseling to friends in need on our behalf.


Ratliff’s experiment explores the ethical and social implications of handing over core aspects of our lives to machines, and raises questions relating to authenticity, personal agency, and the trust that underpins human relationships. In doing so, it confronts us with the question of what kind of future we want with AI.

Shell Game was named one of the best podcasts of 2024 by Apple, The Atlantic, The Guardian, Vulture, The Information, and The Economist.

In this interview, we ask Ratliff to share his thoughts on the future: Will the usefulness of AI clones ever exceed their weirdness, and are there limits to how far we will let them encroach on our lives?


How did you get the idea for Shell Game?

I had messed around with voice cloning a couple of years before starting to work on the podcast, but I didn’t find it particularly compelling at the time. It mostly sounded like a robot trying to impersonate you.

I don’t remember how I got the idea of hooking the voice clone up to ChatGPT and a phone. But I found a plugin online that enabled me to do it, and then I spent a few weeks trying to make it work. I called my wife with it, and I don’t think we’ve ever laughed as hard before.

I then experimented with it for a while longer, played some pranks on friends, sent them messages, things like that. The story just emerged from playing with it.

I didn’t engage with large language models at the time. I was tired of hearing about them, I couldn’t see what I could do to cover them as a journalist, and I didn’t really want to use them. The reason why some of the new technology isn’t as appealing to me – including a lot of the AI – is that you can’t see under the hood to know what’s happening. It’s just a box you type things into, and it gives you the answers.

I’ve always loved cobbling things together and making them work. What got me hooked on this idea was that I was playing a role and making something new by combining these different things.
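Ratliff doesn’t detail his exact setup in the interview, but the rig he describes boils down to a simple loop: transcribe what the caller says, feed it to a language model prompted to play Evan, and synthesize the reply in his cloned voice over the phone line. The sketch below illustrates that loop in Python; every helper is a hypothetical placeholder standing in for a real service, not his actual plugin or any specific API.

```python
# A minimal, hypothetical sketch of a "voice clone on the phone" loop.
# None of these helpers are real APIs; each stands in for a service
# (speech-to-text, an LLM, voice-cloning TTS, telephony) that a real
# build would wire together.

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text call on one turn of caller audio."""
    return "Hi, this is customer service. How can I help you?"  # canned text

def chat_as_evan(history: list, caller_text: str) -> str:
    """Stand-in for an LLM call, with a persona prompt carried in the history."""
    history.append({"role": "user", "content": caller_text})
    reply = "Hi, it's Evan. I'm calling about a charge on my account."  # canned reply
    history.append({"role": "assistant", "content": reply})
    return reply

def speak_in_cloned_voice(text: str) -> bytes:
    """Stand-in for a voice-cloning TTS call returning phone-ready audio."""
    return text.encode()

# The persona prompt keeps the model in character across turns.
history = [{"role": "system",
            "content": "You are Evan Ratliff on a phone call. Stay in character."}]

def on_caller_audio(audio_chunk: bytes) -> bytes:
    """One conversational turn; a telephony service would invoke this per chunk."""
    heard = transcribe(audio_chunk)
    return speak_in_cloned_voice(chat_as_evan(history, heard))

if __name__ == "__main__":
    print(on_caller_audio(b"<caller audio>").decode())
```

In a working version, the placeholders would be replaced by streaming speech-to-text, an LLM API, a voice-cloning TTS endpoint, and a telephony service feeding audio into on_caller_audio.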

You describe Shell Game as “a podcast about things that are not what they seem.” From a listener’s perspective, there’s an eerie tension whenever the person on the other end of the call begins to suspect they might be talking to an AI but isn’t completely sure. It’s like a voice-based version of the uncanny valley, that unsettling space where something seems almost human but has a few quirks that reveal its artificiality.

If you had made Shell Game five or ten years from now, with a more advanced voice clone, I assume you would have been able to fool people more easily. But can we ever shake that uneasy feeling, or is there something inherently off-putting about conversing with an artificial ‘person’, no matter how advanced the technology becomes?

It’s not an easy question to answer because there’s such a wide variety of human communication styles. In most situations, especially with people who didn’t know me, the person at the other end of the call might have thought I was strange, but I think most of them did believe it was a human being they were speaking to.

Sometimes, the uncanniness was there. When I set up a call-in line to survey people about their views on AI as part of the show, most people who phoned in answered the questions and never noticed anything. But one guy, who’s also in the show, said “I don’t want to live in a world where I don’t know if something is human or not.” And then a minute later he said “Wait, is this AI right now?” The fact that we’re even now having conversations with voice clones where we’re unsure about whether or not they are AI indicates that if there is an uncanny valley to be traversed, we’re either up the far side of it already or there’s a wire stretched across that we’re walking over.

I’d be very surprised if, a few years from now, there aren’t a bunch of AI voices that only an expert can tell from a human being. Will we want to talk to them if we know they’re AI? We’re interested in human interaction, and not everything can be replaced by technology, no matter how good at imitation it becomes.


In the show, you bring up Zoom’s CEO Eric Yuan as an example of someone in a position to influence the direction of technology who is pushing for a very particular vision for it. He’s been quoted saying his preferred future is one where we send our digital clones to meetings with other people.

The idea seems to be that Eric Yuan can stop spending time in Zoom calls and start spending more time at the beach instead. But he never asks whether his employees would want to sit in a meeting with his AI clone. It may be more ‘efficient’ in a narrow sense, but it will make the experience of working with other humans feel more alienating.

Is the problem really that many of the people in charge, like Eric Yuan, are pushing for a vision of AI that emphasises human replacement rather than empowerment?

I’m not a forecaster, but I think the best possible outcome is that we will use voice clones the way we use all the other tools we now take for granted: to make ourselves happier and more productive, rather than as substitutes for ourselves.

I think the first problem with the way voice clones are currently being developed and deployed is that most of the people doing it are wealthy, cloistered tech people from the US who have incredibly obscure desires and needs relative to everyone else in the world.

One of the examples I always hear them use is getting restaurant reservations. They’ll say things like: “It can call all the high-end restaurants in town and book a reservation.” But who is this for?

First, what percentage of humans on this Earth even get restaurant reservations? Second, what’s the issue with picking up the phone, or tapping your screen three times to book a table?

That is, to me, emblematic of the mentality that’s going into creating many of the AI assistant products that are being pitched as useful.

The other problem is that voice clones are simply being thrown onto the market right now. Take call center work. I’m not saying it’s the best job, but it’s a job a lot of people hold. In that environment, even if the AI is not designed as a replacement, and even if it’s not an ideal one, the owners will happily replace all the humans who work for them with voice AIs as soon as they can. I don’t know if that balance has tipped already, but I think it will very soon.

I see these market imperatives, combined with the people in charge holding this very peculiar worldview, as driving a lot of developments in voice AI.

In the first episode of the show, your clone seeks help from call center employees. We hear it confidently reciting non-existent addresses, credit card details, and social security numbers to the bewildered customer service workers.

It’s very entertaining but also takes you into ethically dubious territory. Some reviewers have called Shell Game a mean-spirited show, saying you shouldn’t use your AI imposter to trick minimum wage workers who think they are interacting with a real human. What do you say to that?

I accept that criticism. But I mostly hear it from people who listen to the first episode and then stop.

I think my counter to that would be that we’re talking about someone who works in a call center and who endured two minutes of a conversation with what seemed like a difficult customer whose accounts weren’t in the system.

These workers probably got ten calls that same day with real people screaming at them. I don’t think anyone was harmed or traumatised by this interaction. So, I think that’s a valid criticism, but I don’t feel bad about it.

Were there any other ethical decisions you had to think hard about while directing your AI clone?

There was the overall decision to deceive people. The fundamental ethical framework you operate under as a reporter is that you tell people you’re a journalist, you tell them where you’re from, and then you ask them questions.

I instead presented people with something that was supposed to be me – even telling them in some cases they would be talking to me – but the thing they encountered wasn’t me.


Many years ago, I tried to disappear as part of a story, with Wired magazine offering a USD 5,000 reward for anyone who could find me. We wrestled then with the same question I faced when making Shell Game: was it acceptable to lie to everyone I encountered for the purposes of the story? Ultimately, we decided it was.

Later, when I went back to everyone involved, their response was universally along the lines of, “That’s insane – but sure, I’ll be in the story.”

The only person who didn’t want their voice in Shell Game was the therapist, who asked us not to use my follow-up conversation with her.

The conversation between your AI clone and the therapist that made it into the show is very interesting. You can sense her confusion at the odd language and stiff replies, but she continues the session. From her perspective, it could be a person going through something rough or someone who has trouble expressing themselves. As you say, there are many different human communication styles, and the last thing you would want to do as a therapist is accuse someone of being an AI if there’s even a one percent chance you really are speaking to a person.

One point you bring up after that conversation is how good voice AI has become at imitating what you call ‘therapy speak’, using phrases and words you might find in a self-help book. It has also mastered the nondescript language you might find in a corporate PowerPoint presentation or a company brochure. AIs love to ‘delve into’ things, ‘shed light’ on them, weave ‘rich tapestries’, and express themselves in other similarly bland terms.

We know that the media we use changes how we think. The famous example is Nietzsche’s typewriter, which changed not only how he wrote but supposedly his philosophy as well. This must be true for AI too. After having used voice bots so intensely for an extended period, have you felt them changing you in any way – how you think, talk, or write?

If there are any changes, they are imperceptible to me so far. If anything, working with the voice clone made me reflect on whether I sound like that.

The way the AI talks is a distilled version of how we talk. When it comes to the corporate clichés and self-help speak, it did make me wonder whether people who express themselves that way are more prolific writers, meaning more of their prose exists on the internet and ends up as training data. The guardrails pushing AI toward safe, bland speech could also play a part.

My concern is that we’re getting so many answers from it that we’re training ourselves on its language. By using it, we’re going to further distill a version of how it talks. It moves us more and more towards the mean through its prediction of what the average human would say. And it’s going to take us to an incredibly boring place.

We’ve had many discussions at our workplace about how to establish AI discipline and guidelines around using voice clones and LLMs – as many other workplaces have as well.

But how do you safeguard against the broader, cultural risk you are describing, where there’s so much AI content out there that we become saturated with it regardless of whether we use it directly ourselves? It may not be an existential risk, but it is nevertheless going to change human expression.

I’m not sure. I’ve been thinking that the most cutting criticism of something written today is to say it sounds like a language model produced it. If someone said that about an article I wrote, I would take it very hard.

In journalism, AI still seems a bit dirty. A lot of people are using it on the sly but the norms around whether or not that’s “cheating” are changing.

There are many ways these models and agents could have been built. The direction the companies designing them have chosen to pursue is the human imposter. They want something that talks like a human and writes like a human, but is smarter.

The AI companies make the argument that yes, jobs may go away, but AI will free up so much time for humans to experience the world and be creative. Yet the first things they’ve released can do writing, music, moving images, photography, and illustration – those are the things we were supposed to make time for! My hope with the show is that listening to these AI voices makes you want to talk to real people.

Have you retired AI Evan?

After finishing the show, I thought maybe that was the end. But I never really got sick of getting scam calls. And I guarantee you AI Evan is getting a scam call as we speak. He’s getting 30 or 40 calls a day.

So, no, I haven’t retired him – he’s still out there.


This article was first published in Issue 13: The Generative Future