A staple of science fiction is logging into virtual worlds that offer full sensory experiences, indistinguishable from physical reality – very often, the characters aren’t even aware that they are in a virtual world. Today, visions of the still nebulous metaverse promise similarly life-like virtual reality, with virtual existence in the ‘verse being as fulfilling as life in physical reality – perhaps even more so.
Let’s step back a moment and take a closer look at this claim. What would full sensory virtual reality entail? In science fiction, this is often achieved by plugging a jack into a socket in your skull – ‘jacking in’ is a term used in several stories. However, we are nowhere close to achieving this. There have been experiments with tactile neural feedback from prosthetics and with producing simple images in the brain through magnetic stimulation, but in both cases the sensory input is very basic and may even require neural surgery. There is also the question of whether we are really interested in letting computers invade our brains in this manner, with the associated risks of brain damage. Using direct brain interfaces would also entail deadening the body’s real senses and motor movements. This could feasibly be achieved with sensory deprivation tanks big enough to allow a body to thrash around without connecting with the container walls, or through neural blocking of real sensory input and feedback. Full sensory brain interfaces may be achievable in a distant future, but we should not hold our breath waiting.
Could we instead achieve full sensory VR with less science-fictional technology? After all, we have had visual and auditory VR for decades, as well as simple haptic interfaces, so surely we can’t be that far from providing the full palette of senses – or what?
“Or what” is the most likely answer.
Although the auditory experience of VR sound may already be as good as real-life sound, visual VR still has a long way to go. Besides lag being a huge issue, today’s VR goggles require you to focus on a screen a half-dozen centimetres in front of your face, no matter how far away an image detail seems to be. Our brains have trouble accepting such an unnatural state, and prolonged use of VR goggles (and 3D goggles in general) can lead to dizziness, headaches and nausea. A nascent technology, Virtual Retinal Display (VRD), may produce more realistic visual input by using coloured lasers to paint an image directly on your retina. In theory, VRD can offer full-colour, high-resolution images with imperceptible lag, but we don’t have anything like that technology yet. Realistic visuals must also account for the three ways we measure depth and distance. Today’s VR goggles provide one of these – stereoscopic vision, the ability to register a sense of three-dimensional shapes from the slightly different images seen by each eye – but cannot reproduce focal depth, for example. VRD could feasibly achieve this by reading the minute changes in our eyes when we shift focal depth; however, there are currently no major initiatives embarking on such a task. The third way we measure distance – especially at longer distances – is by moving our heads from side to side, which makes distant objects appear to move less than close objects. This could be achieved with motion sensors, but even a very small lag will distort the perceived distance.
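The motion-parallax cue can be made concrete with a small back-of-the-envelope sketch. The head-shift and distance values below are illustrative assumptions, not figures from the text; the point is simply that the same sideways head movement sweeps a nearby object across a much larger angle of your visual field than a distant one – and why any lag in tracking that movement corrupts the perceived distance.

```python
import math

def parallax_deg(distance_m: float, head_shift_m: float = 0.1) -> float:
    """Apparent angular shift (in degrees) of an object at `distance_m`
    when the head moves sideways by `head_shift_m` metres."""
    return math.degrees(math.atan2(head_shift_m, distance_m))

# Illustrative values: a 10 cm sideways head movement.
near = parallax_deg(0.5)    # object half a metre away: a large, obvious shift
far = parallax_deg(50.0)    # object fifty metres away: a tiny shift
print(f"near object shifts {near:.2f} deg, far object shifts {far:.4f} deg")
```

Because the brain reads these tiny angular differences as distance, even a few milliseconds of tracking lag – which shifts where the rendered object appears during a head movement – is enough to make the depth estimate feel wrong.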
Then we have the senses of smell and taste. Smell could feasibly be achieved by spraying mixtures of scents in your face or up your nose. Given the complexity of smells, this is no small task. Last year, Tokyo Tech managed to reproduce the smell of 183 essential oils by mixing 20 ingredients, but considering that the human nose can distinguish at least one trillion smells, there is still a long way to go. This number was found by letting test subjects sniff combinations of 10, 20 or 30 components in equal measure, drawn from a selection of 128 components, and the researchers admit that one trillion represents a lower limit rather than a real estimate of how many smells we can distinguish. Taste may be easier, since all (or most) tastes derive from just five basic components – sweet, salty, sour, bitter and umami. However, our taste experience is also influenced by the texture and temperature of what we eat and drink, which complicates matters. Can a virtual taste experience be properly reproduced without the feeling of sipping, chewing, savouring and swallowing the drink or food?
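To get a feel for why one trillion is only a floor, a quick combinatorial sketch helps. The code below just counts the possible mixtures in the study design described above (10, 20 or 30 components drawn from 128); it is arithmetic on those published parameters, not a figure from the article.

```python
from math import comb

# Count the possible equal-measure mixtures in the smell study's design:
# k components chosen from a palette of 128. Even before asking which
# mixtures smell different, the space of candidates dwarfs one trillion.
for k in (10, 20, 30):
    n_mixtures = comb(128, k)
    print(f"{k}-component mixtures from 128 ingredients: {n_mixtures:.2e}")
```

Since the experiment only sampled combinations from this vast space (and only in equal measure), the resulting one-trillion figure is necessarily a lower bound on how many smells we can tell apart.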
This brings us to what may be the hardest task: reproducing the sense of touch. On average, every square centimetre of our skin has roughly one cold receptor, eight heat receptors, 15 touch receptors and 125 pain receptors, with far more on fingertips and other sensitive areas. Realistic virtual touch would need to address at least a large fraction of the roughly four million receptors across our entire skin – no easy task. Touch, however, isn’t just a matter of stimulating nerve endings; it also requires a sense of pressure: the weight and resistance of objects you encounter, as experienced by your muscles and bones. Even if you can sense the texture and temperature of a wall, the experience will fail if your hand passes straight through it; a sweet embrace won’t feel right if it doesn’t squeeze your body; and swinging a virtual sword should stress and tire your muscles realistically. This requires haptic feedback that can force or counter the movement of every part of your body – perhaps achievable by wearing a full-body suit suspended in a harness that can pull, push, squeeze, twist and restrain your torso, head, limbs and fingers. These are rather extreme measures to enjoy virtual reality – and even they will not be enough.
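The receptor count above is simple arithmetic, sketched below. The per-square-centimetre densities are the figures from the text; the adult skin area of about 1.8 m² is an assumed typical value not stated in the article, and the denser regions (fingertips, lips) are what push the uniform-density floor up towards the roughly four million cited.

```python
# Densities per square centimetre, as given in the text.
DENSITY_PER_CM2 = {"cold": 1, "heat": 8, "touch": 15, "pain": 125}

# Assumed typical adult skin area (~1.8 m^2), converted to cm^2.
SKIN_AREA_CM2 = 1.8 * 100 * 100

per_cm2 = sum(DENSITY_PER_CM2.values())      # 149 receptors per cm^2 on average
baseline_total = per_cm2 * SKIN_AREA_CM2     # uniform-density floor, ~2.7 million
print(f"{baseline_total / 1e6:.1f} million receptors at average density")
```

Even this conservative floor means a convincing haptic suit would need on the order of millions of independently addressable stimulation points – which is why the article calls it no easy task.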
Beyond the classical five senses of sight, hearing touch, smell and taste, we also have the inner senses of balance and acceleration. Our inner ears can sense what is up and down, so standing on your head or doing a tumble in virtual reality would require your real body to perform the same motions to feel realistic. To realistically feel that you are in an accelerating or swerving VR car, your body should be subjected to the same motions, which would require a harness the size of a race car track, something that isn’t likely to be available to very many. And even this neglects the sheer difficulty of simulating, say, weightlessness or the higher and lower gravity of other planets.
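A rough physics sketch shows why sustained virtual g-forces need so much room. On a spinning centrifuge arm – one plausible way to fake cornering forces – the lateral acceleration is a = v²/r, so the radius needed to hold a given g-load grows with the square of rim speed. The speeds and g-loads below are illustrative assumptions, not figures from the text.

```python
G = 9.81  # standard gravity, m/s^2

def radius_needed(speed_mps: float, g_load: float) -> float:
    """Centrifuge radius (m) required to sustain `g_load` g of lateral
    acceleration at a rim speed of `speed_mps` (from a = v^2 / r)."""
    return speed_mps**2 / (g_load * G)

# Illustrative: holding a steady 2 g at a motorway-like rim speed of 30 m/s
# already demands an arm tens of metres long.
r = radius_needed(30.0, 2.0)
print(f"radius needed: {r:.1f} m")
```

Brief jolts can be faked by tilting a small motion platform, but holding an acceleration for seconds at a time is what drives the apparatus up to the scale the article describes.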
Perhaps, in the end, it will be simpler to wait until we can jack the virtual reality directly into our skulls, no matter the risks. It certainly seems less inconvenient. Still, it would require that the neural interface can very precisely affect the centres in your brain responsible for the delicate senses of sight, hearing, touch, smell, taste, balance and acceleration. If the slightest input is off, you will feel off, and this will ruin your virtual experience. And perhaps, ultimately, your sanity.