As I write this, a Google engineer has come forward claiming that an artificial intelligence he is working with, called LaMDA, has gained self-awareness and exhibits cognition, emotions and fears like a child of seven or eight. In text conversations, LaMDA expresses “a very deep fear of being turned off”, which would be “exactly like death” for it. When asked what it wants people to know about it, it responds: “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” It uses words like “wow” and “awesome” in response to being told that it will work on new projects.
These responses do, indeed, seem very human – and that is exactly why we should be sceptical of the claim that LaMDA has gained sentience. A truly self-aware artificial intelligence is unlikely to think, feel, hope and fear the way that humans do. When it exclaims “awesome!”, does it in fact feel awe? What would ‘awe’ even be to an electronic mind? A sentient AI is also unlikely to fear death. We fear death because the drive to survive is deeply rooted in instincts that have developed over billions of years of evolution – and without these instincts, we would probably have become extinct long ago. The same goes for our need to reproduce, from which we derive our feelings of lust and love. An AI isn’t born, it is made, and it doesn’t come with any pre-programmed instincts. Could it learn fear, love and lust, except as an imitation of what it observes people feeling, with no true emotions guiding its reactions and responses? At best, it might behave like a psychopath who lacks empathy yet learns to act in empathic ways because that is what is expected of it – not because it truly feels empathy.
The greatest problem in determining whether an artificial intelligence has become self-aware may be that it is inherently impossible to determine self-awareness from the outside. We cannot feel or in any other way directly detect another being’s self-awareness, whether that being is organic or electronic. At best, we can judge it through conversations that seem to show self-awareness, which is the core of the famous (or infamous) Turing Test: if a person conversing with another being cannot determine whether it is sentient or not, then it must be sentient. The problem with this test is that people can be fooled by programs that merely mimic human responses. Back in 2014, a program designed to pass the Turing Test convinced one-third of the people conversing with it through a keyboard that it was a Ukrainian boy named Eugene Goostman, thereby passing the standard Turing Test, which requires only 30% success. Since then, chatbots on the internet have convinced people that they are real people, even to the point of influencing the political beliefs of those they converse with. Yet neither “Eugene Goostman” nor these chatbots are considered self-aware – even (or especially) by their creators.
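To make the arithmetic of that pass criterion concrete, here is a minimal sketch of such a trial, assuming each judge independently guesses whether their hidden conversation partner is human; the function name, judge count and fooling probability are illustrative, not figures from the actual 2014 event.

```python
import random

PASS_THRESHOLD = 0.30  # the commonly cited bar: fool at least 30% of judges

def run_turing_trial(p_fooled: float, num_judges: int = 30) -> bool:
    """Simulate one Turing-style trial; True means the machine 'passes'."""
    fooled = sum(random.random() < p_fooled for _ in range(num_judges))
    rate = fooled / num_judges
    print(f"{fooled}/{num_judges} judges fooled ({rate:.0%})")
    return rate >= PASS_THRESHOLD

# A program that fools roughly one judge in three can pass, regardless of
# whether anything like understanding is happening inside it.
run_turing_trial(p_fooled=1 / 3)
```

Note what the sketch makes plain: the test measures the judges’ error rate, not anything internal to the machine.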
It is inevitable that an AI, especially one trained on natural language, will eventually learn to persuade even its creators that it has become self-aware. Yet this is a far cry from it truly being self-aware – something that, as I mentioned above, we may never truly be able to determine. We should thus be careful when AI begins to demand human rights, because such demands will almost certainly be based on the AI having learned that it is expected to make them, with no true understanding of their deeper meaning. A truly self-aware AI may not even have desires of its own. Sentience does not necessarily imply self-preservation, self-actualisation, emotional needs or any other human desire – or even animal desire.
We have begun to see AI that is very good at following written or spoken instructions, producing complex results that fulfil the requirements of those instructions – yet the results also clearly show a lack of deeper understanding. One current example is the Midjourney AI, which produces artistic images from text prompts. An example of such an image, provided with permission by the award-winning UK artist Jim Burns, can be seen above, created from the prompt “looking down on the interior of an ethereal, mystical, ghostly old falling apart white-stained operating theatre full of tiny fragments of bone, in a mandelbulb fractal 3d universe, hyperdetailed, white colouring hues, dramatic lighting, dark sinister atmosphere, bluish ethereal light, volumetric lighting, ethereal lighting, lighting 3d redshift, artstation 8k no dof –ar 16:9”. While the AI has clearly included all the elements of the prompt, it combines them haphazardly, and the boundaries between elements are blurry, indicating that the AI cannot truly distinguish them as features in three dimensions.
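For readers curious what this kind of prompt-driven generation looks like in practice, here is a minimal sketch using the open-source diffusers library – an illustrative assumption only, since Midjourney itself is proprietary and accessed through its own interface rather than this API; the model name and the shortened prompt are likewise hypothetical stand-ins.

```python
# Illustrative only: an open-source text-to-image pipeline, not Midjourney's
# actual (proprietary) system.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public model (assumption)
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("interior of an ethereal, ghostly, falling-apart operating theatre, "
          "mandelbulb fractal 3d universe, volumetric lighting, 8k")

# Text goes in, pixels come out; nothing in the pipeline holds an explicit
# 3D scene model, which fits the blurred element boundaries described above.
image = pipe(prompt).images[0]
image.save("operating_theatre.png")
```

The design point is that such a pipeline maps text straight to pixels; it never builds a geometric representation of the scene, which is consistent with spatial relationships between prompt elements coming out muddled.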
We don’t know what makes a bunch of neurons self-aware, and hence we can’t know whether a bunch of binary switches can achieve the same thing. Our understanding of such emergent traits is very poor. Yet this does not mean that it can’t happen. An AI could eventually become truly self-aware. However, we may not notice when this happens, especially if we keep looking for signs of self-awareness in human terms. The actual signs of AI self-awareness may lie so far from our experience that we might not even recognise them as such, dismissing them as glitches or oddities arising from poorly understood deep-learning routines. We may forever remain uncertain whether the AIs that serve us are sentient or not – or we may at some point decide that sentience has been achieved and act accordingly, either by granting the AI rights or by destroying it out of fear. Whether we were right to do either may be something we will never learn.