“When it comes to consciousness, I call the problems of explaining intelligence, and explaining behaviour in general, the easy problems,” David Chalmers says. The Australian philosopher and cognitive scientist has made it his life’s work to take on some of the big questions relating to the nature of reality, the relationship between mind and body, and consciousness in the age of digital technology.
Although Chalmers admits his ‘easy’ problems are not so easy after all, we at least have some idea of how to go about solving them, he says. They can be explained by computational or neural mechanisms like, say, the ability to discriminate, categorise, and react to environmental stimuli; the reportability of mental states; the focus of attention; the deliberate control of behaviour; and the difference between wakefulness and sleep.
The so-called hard problem of consciousness is much more difficult to solve: “The hard problem is really the problem of explaining how physical processes in the brain could give rise to conscious, subjective experience,” says Chalmers, who is Professor of Philosophy and Neural Science at New York University, where he also serves as co-director of the school’s Center for Mind, Brain, and Consciousness.
It is a fundamental question in both philosophy and cognitive science, and one that Chalmers has grappled with for decades. When he first used the phrase “hard problem of consciousness” in April 1994, at a talk he delivered in Tucson, Arizona, it made a significant impact on the philosophical and scientific community. Then came his first book, The Conscious Mind (1996), in which he described the hard problem of consciousness as no less than “the biggest mystery and the largest outstanding obstacle in our quest for a scientific understanding of the universe.”
The book, which was greatly influential in its time, also claimed that no explanation of consciousness is possible in purely physical terms. Standard methods of neuroscience and cognitive science are always faced with having to explain consciousness on a more fundamental level, Chalmers argued.
To solve the hard problem, Chalmers believes science needs something more than physical explanations, which account only for objective structures and dynamics. If science can’t explain consciousness in terms of existing fundamental properties (space, time, mass, and so on) and existing fundamental physical laws, then it may need to posit new fundamental properties in nature, he argues. Perhaps consciousness is itself fundamental, he speculates.
Chalmers has dedicated much of his career to the topic of consciousness. But over the last few years he has become interested in another fundamental philosophical question: what is reality? These two mysteries, he claims, are inextricably linked. “Consciousness is part of reality, so if you want an explanation of reality, you’d better be able to understand consciousness,” he says.
Chalmers explores this connection in a new book, Reality+: Virtual Worlds and the Problems of Philosophy, in which he argues that technologically simulated realities, like those found in virtual reality, are just as genuine as physical reality. That is, virtual worlds are not illusions. In fact, Chalmers believes they can provide just as much meaning and value as the physical world can.
Virtual reality, to Chalmers, becomes a way to engage with some of the deep questions that have troubled philosophers for centuries. He points to the French philosopher René Descartes, who back in the 17th century was already beginning to pose questions about the relationship between the mind and reality. Descartes also raised the issue of what Chalmers calls “the problem of the external world”: How do you know anything at all about the reality that exists outside you? The philosopher famously found himself unable to rule out the possibility that everything he experienced was a dream, and that ‘reality’ therefore was, put in modern terms, a simulation of sorts.
For many years Chalmers thought he didn’t have much to say about this question. But thinking about virtual reality when writing his book gave him a new perspective on this topic, he says.
Although today’s virtual reality worlds are primitive, Chalmers admits, he believes their temporary technological limitations will pass, and that they will eventually become indistinguishable from the nonvirtual world.
Perhaps we’ll eventually plug into machines through a brain-computer interface, bypassing our eyes and ears and other sense organs. Chalmers claims the simulated environments that await us in the future may even be occupied by simulated people, with simulated brains and bodies, who will undergo the whole process of birth, development, aging, and death.
“Some hold the view that consciousness is essentially biological. But in principle, I don’t see why silicon systems cannot achieve it.”
As the technology develops and virtual worlds become increasingly sophisticated, the philosopher predicts that we will eventually be faced with a crucial question: should we move our lives entirely to a virtual world? “The short answer is yes,” Chalmers says. “There is no difference, in principle, between meaning and value in both the physical and virtual worlds. So, there is no barrier preventing us from living morally and ethically in a virtual world.”
This leads us back to Descartes’ dream simulation. Speculating about the future of virtual reality leads Chalmers to pose the question: how do you know you’re not in a virtual simulation right now?
This idea, known as the simulation hypothesis, is one that Chalmers takes very seriously. Popularised by philosopher Nick Bostrom, and famously depicted in The Matrix movies, it posits that our entire existence is, in fact, a simulated reality – and that what seems on the surface to be an ordinary physical world is really the result of connecting human brains to a giant bank of computers.
“I would say there is a 10 percent probability that we are living in a simulation,” Chalmers says. “But it would be very hard to demonstrate. If it’s a perfect simulation that’s indistinguishable from our own world, then no scientific experiment will ever be able to prove this, and it will remain a philosophical hypothesis, rather than a scientific hypothesis.”
Chalmers believes science fiction provides philosophers with great thinking tools and thought experiments that can be used to envision these kinds of mind-warping hypotheticals. He points to The Matrix as being partly responsible for his own “entry into the simulation arena.” The filmmakers had a significant interest in philosophy, and shortly after the movie was released, several philosophers were invited to write articles for the movie’s website. Chalmers accepted the invitation. In 2003, he published an article entitled “The Matrix as Metaphysics”, which argued that the simulated world at the heart of the movie might, in fact, not be an illusion.
Star Trek also gets a mention in Chalmers’ new book. In a chapter entitled “Can there be consciousness in a digital world?”, the author analyses an episode of the TV show in which a trial is held to determine whether the android Data is sentient. One character, Captain Picard, asks the court to define the term ‘sentient’, to which Starfleet cyberneticist Bruce Maddox replies: “Intelligent, self-aware, and conscious.”
The episode raises an interesting question, Chalmers points out: can a digital system like Data be conscious, or is that a trait reserved for humans and animals? “Some hold the view that consciousness is essentially biological,” he says. “But in principle, I don’t see why silicon systems cannot achieve it.”
Chalmers takes this idea a few steps further. Once we have consciousness in a functional reproduction of the brain – say, a silicon brain – it would be a very small step from there to having consciousness in a simulated brain. A simulated brain would have the advantage of maximising similarity to a human brain, Chalmers explains. In such a device, every neuron would be simulated perfectly, as would all other cells throughout the brain. All the electrochemical activity, meanwhile, would be simulated too, as would any other bodily activity, such as blood flow.
“So, think about replacing biological neurons, gradually, with silicon chips, or some other substrate in the brain, while keeping the information processing the same, because that would carry consciousness over into machine consciousness,” Chalmers says. “In fact, given how quickly artificial intelligence is developing right now, I think we could have consciousness in machines in the next few years.”
In principle, then, could consciousness be uploaded into a computer? It’s a concept often discussed in the transhumanism community and referred to as ‘mind uploading’. According to Chalmers, this might be possible, although not anytime soon.
“By building a very detailed simulation of the brain, we would potentially be able to take the contents of the brain and upload them entirely to a computer system,” Chalmers explains. “We are still not able to build that kind of simulation. But with advances in neuroscience, maybe in a few decades, mind uploading could be possible. Maybe we’ll build backups of ourselves in case something goes wrong in our lives and restore ourselves from those backups, or perhaps when the brain is dying, near the end of life, we’ll be able to upload ourselves to the cloud with a silicon brain,” he says.
Philosophically it’s an interesting idea. But will it work? And, more importantly, if you upload your brain to a computer system, what kind of identity will that information represent? Will it be conscious? Can it be called an individual? Will there be anybody home, from the first-person perspective?
Chalmers doesn’t claim to have definite answers to difficult questions like these. But philosophy, as he keeps reminding me, naturally contains more paradoxical perplexities and fewer clear-cut answers than clinical definitions. “These are all very deep philosophical questions about personal identity,” Chalmers concludes. “But once the right technology becomes available, I think we’ve got a fairly extensive philosophical analysis to figure out whether we want to use that technology or not.”
This is an article from FARSIGHT: Visions of a Connected Future