stephenpalmersf

Notes from genre author Stephen Palmer


The Uncanny Valley

The uncanny valley is a human cognitive phenomenon in which robotic or CGI human-like creations that are very realistic, yet not quite realistic enough to be indistinguishable from reality, elicit feelings of disgust or eeriness in the observer. The phenomenon has been known for a few decades, since it was first described in 1970 by the Japanese robotics engineer Masahiro Mori. Many directors of animated films now have to choose between ‘obviously animated’ human characters and fully realistic ones in order to avoid repelling their audiences. This is the reason for the notably ‘cartoonish’ quality of many modern CGI animations, for example The Incredibles.

There are a number of hypotheses as to what might cause the uncanny valley effect. One is rooted in disgust: the response originally evolved to help us avoid potential sources of pathogens, such as bugs and germs. Another is threat to our distinctiveness: the more a robot resembles a real human, the more it challenges our social identity (there is a related hypothesis making the same argument on religious grounds). Then there is an interesting hypothesis based on the violation of our psychological norms: if a robot appears sufficiently nonhuman, its human characteristics become noticeable, which brings empathy; if the robot looks almost human, its nonhuman characteristics become more noticeable, giving us the uncanny sense of strangeness: repulsion. A related theory concerns the conflict of perceptual cues, in which the repulsion or eeriness associated with uncanny feelings is produced by a conflict in cognitive representations – uncanniness happening when somebody perceives conflicting psychological categories, for instance when a mostly accurate humanoid figure moves like a robot, or has other obvious robotic features.

Much discussion surrounds these various hypotheses, and none is generally accepted; some have a good deal of evidence weighing against them.

As a reader of books about human evolution, I would like to add my own hypothesis. It seems to me that the uncanny valley effect is strong and universal, and therefore must have an evolutionary basis. It resembles emotion, which has a deep evolutionary history and is essential to the conscious mind. It must be a cognitive effect, perhaps because modern human beings (Homo sapiens) are profoundly aware of and sensitive to faces – infants recognise faces at an incredibly young age. So, given that our consciousness is rooted in empathy – in our use of ourselves as psychological exemplars by which to understand the behaviour of others – it strikes me that if, during the evolution of Homo sapiens, we encountered species remarkably similar to ourselves yet not quite the same, there would perhaps be an eerie, uncanny, negative effect. This effect would have evolved in Homo sapiens specifically to keep similar species apart.

Of course, it could be argued that there was no particular need to keep similar species apart. We now have proof that interbreeding took place between Neanderthals and Homo sapiens, with about 1–4% of non-African human DNA being Neanderthal. The two species coexisted in Europe, though judging by the archaeological evidence that coexistence was a mosaic of separate populations rather than a blended whole.

But what if the need for species separation was cognitive and could be acted upon by Darwinian natural selection? Nicholas Humphrey in Soul Dust convincingly argues that for consciousness to evolve to the degree present in us it must be highly visible to natural selection; in other words, there must be a strong selection pressure in favour of consciousness, with natural selection acting upon cognitive attributes once the expansion of the neocortex is well underway. Perhaps that selection pressure not only brought consciousness to Homo sapiens, it also created a cognitive abyss – an abstract version of the species abyss across which no fertile offspring can be created – which the various psychological world-views could not bridge. Such an inability to bridge the cognitive gap would be felt – as an emotion is felt – by Homo sapiens: the uncanny valley.

Perhaps our modern robots have accidentally stimulated this ancient cognitive effect, returning the eerie feelings of the uncanny valley to Homo sapiens.

Update 04/04/19. This is an excerpt from Chris Stringer’s excellent book The Origin Of Our Species:

“If and when modern humans encountered Neanderthals, how much would behavioural differences between them have affected the way they saw each other? Would they have perceived each other simply as other people, enemies, or even the next meal? … These populations had been diverging from each other for much longer than any modern human groups who encountered each other in the Americas and Australia during the colonial ‘Age of Discovery.’ In my view there were probably deep differences in appearance, expression, body language, general behaviour, and perhaps even things like smell, which would have impinged on how the Neanderthals and Cro-Magnons perceived each other.”

The Autist full cover

Decoupling Consciousness From Intelligence

In recent months I’ve noticed a few mentions of something that was at the heart of my theme for The Autist: decoupling consciousness from intelligence. One of the mentions was in Yuval Noah Harari’s new book 21 Lessons For The 21st Century.

Why should we be worried about this decoupling? There is one main reason. Modern human beings 100,000 years ago had millions of years of evolution behind them. Their conscious minds evolved in bodies, and those minds existed in vibrant, intense societies. Consciousness evolved to answer the problem faced by humanoid primates of increasingly complex, difficult-to-understand social behaviour. By using the self as an exemplar, individuals both understood others and made their societies far more effective and likely to survive. There was a strong evolutionary selection pressure in favour of consciousness.

Consciousness therefore exists in unavoidable synchrony with other human attributes: compassion, insight, and especially empathy. I would argue that empathy is in fact an inseparable part of human consciousness, unless absent through genetic illness or other rare factors (e.g. as in psychopaths). Our ability to feel the pain of another by imagining their personal experience is vital for human survival.

Human intelligence, then, exists only in parallel with insight, compassion and empathy, and that union comes about because our mental experience exists inside our own bodies. To use Nicholas Humphrey’s term, the experience is privatised. Because of compassion and empathy, human beings of 100,000 years ago and earlier healed and otherwise cared for their tribal kin; this we know from archaeological bone finds where serious injuries have healed over time. Chimpanzees show no empathy, and have for instance been observed eating meat from the still-living kills of their own kind. But because we understand the terrible consequences of pain, normal human beings in normal circumstances don’t do such things. (Of course, we can be trained to be sadistic by a process of dehumanisation, i.e. suppressing natural empathy, as in standard army training for soldiers.)

AI by contrast exists as isolated abstract structures. Algorithms do not have a body. You can put a primitive AI inside a robot body, but its sensory equipment is a minuscule fraction of what we have, and at a much lower resolution. But it is with AIs and algorithms that the real danger lies, not in some robot apocalypse.

My new novel The Autist extrapolates from where we are now. Unlike Zeug, the solitary AI android of my novel Beautiful Intelligence, or the society of BI entities created by Manfred Klee in the same work, in The Autist I wanted to write about something much more terrifying. An AI without a body cannot be conscious. Such an entity is a partial model of the world lacking all our natural humane attributes. It is intelligence alone, without insight, compassion or empathy. It exists as a remorseless learning entity: all perception and no sensation. It can never understand human beings as we understand one another. It sees us as individual mathematical entities or, in societies, as sociological aggregates.

I agree with Yuval Noah Harari when he says that the decoupling of consciousness from intelligence is one of the three main perils of the 21st century. We are creating isolated, abstract intelligences, and we are giving them the power to control human beings through economics (which therefore means politics), and even through culture. To me, that does not seem wise. Perhaps we over-reached when we named ourselves sapiens.

The Autist front cover

New interview with me

I recently did a full new interview with author G.J. Stevens, which has now been published on his blog.

The Autist full cover

Speculation SF Got Wrong Part 4

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Having covered why consciousness is not a consequence of computing power, the impossibility of extracting or linking to parts of consciousness, and the impossibility of uploading or downloading minds into new bodies, I want to cover a final aspect of SF speculation – the impossibility of creating sentient virtual minds or copies of minds.

This is a staple of much SF, including for instance certain books by Julian May in which Jon Remillard experiences an evolutionary jump, discards his physical form and metamorphoses into his final state as a disembodied brain. But a brain/mind without a body is effectively nothing. Early episodes of Doctor Who did a similar thing with the species known as the Morpho, and the concept is regularly used in much cinema SF. Consciousness, however, is founded on sensory input, as shown by Nicholas Humphrey (amongst others) in his books Seeing Red and A History Of The Mind. Without sensory input there is nothing supporting the mental model we all carry in our minds. We continually update our model of the world, mostly without being aware of it. Lacking such input, there is nothing for consciousness to work with. Sensory deprivation experiments have shown how quickly the mind begins to disintegrate if sensory input is missing. “What each species knows of reality is what its senses allow it to construct,” as Dorothy Rowe put it in The Construction Of Life & Death. In other words, any post-death disembodied existence is impossible.

Similarly, in William Gibson’s Neuromancer, the AI known as Neuromancer attempts to trap Case inside a cyber-construct, where he finds the “consciousness” of his girlfriend from Chiba City, who was murdered by one of Case’s underworld contacts. But without a body Linda Lee is nothing. The intertwining of body and mind cannot be undone. Such undoing is a false belief, again founded on the religious notion of a separable spirit or soul; it is a mistake to think that consciousness could be extracted and live on after a body’s death. (We can blame Descartes, as well as the modern religions, for many of these misconceptions.)

Of course, even though all private mental activity is forever beyond the boundary of external acquisition, public information about such activity is not – just as we have indirect access to other minds but no direct access. I used this point when creating the metaframes of my novel Muezzinland. Metaframes are complex entities of data, but they are not records of minds; rather, they are records of the public activity, history and observed character of minds. So, for instance, there could be a metaframe of Mnada the Empress of Ghana, which would collect all her public utterances, her observed character, her appearance and her entire life history. This could be animated in the virtual reality of the Aether to create the impression of a copy of the Empress. But such a copy would contain none of the Empress’ private thoughts, and it would not be conscious. It might appear to be conscious through sheer realism, but it never actually would be.

Similar creations exist in my new novel The Autist, where they are known as data shadows. A data shadow is an entity created from the online activity of an individual: personal records, medical records, gaming records, surveillance camera data and so on. As is observed during the novel, such entities can become complex, depending on the amount of data gathered. But a data shadow could never be conscious. It can only exist as an approximation of an individual built up over time from public data.
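
To make the idea concrete, here is a minimal sketch of a data shadow as a data structure, written in Python. Every record type, field and example in it is invented for illustration – nothing here comes from the novel. What it shows is that everything the shadow ‘knows’ is accumulated external observation, with no inner experience behind its answers.

```python
# Illustrative sketch only: a "data shadow" as an aggregate of public records.
# All record types, fields and example data below are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PublicRecord:
    source: str      # e.g. "medical", "gaming", "cctv"
    timestamp: str   # when the observation was made
    detail: str      # the observed fact itself

@dataclass
class DataShadow:
    subject: str
    records: List[PublicRecord] = field(default_factory=list)

    def observe(self, record: PublicRecord) -> None:
        # The shadow grows only by accumulating external observations.
        self.records.append(record)

    def summarise(self, source: str) -> List[str]:
        # Any "answer" the shadow gives is a lookup over past public data;
        # it approximates the person without containing any private thought.
        return [r.detail for r in self.records if r.source == source]

shadow = DataShadow(subject="example person")
shadow.observe(PublicRecord("cctv", "2100-01-01", "seen entering the station at dawn"))
print(shadow.summarise("cctv"))  # -> ['seen entering the station at dawn']
```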

Conclusion

In The Autist, one of my intentions was to speculate on what might happen should the development of AI continue as it is presently. In this series of blogs I have tried to show that consciousness is a result of evolution by natural selection acting upon physically separate biological creatures living in intense, sophisticated social groups. SF speculation about minds, souls, spirits, software etc being separable and transferable is based on an antiquated, false, imaginary concept, which, because human cultural evolution is slow, still remains to trouble us today.

My speculation takes as its starting point the notion that the sensory channels of the brain and the perceptual channels are separate. Sensation is our creation. There is no chain of causation beginning with something out there in the real world and ending up in the mind with qualia: the redness of red, the pain-ness of pain, and so on. This separation and its associated processes have been shown to be the case by Nicholas Humphrey’s work on blindsight (which the character Lara Vine describes in the novel), and by Paul Bach-y-Rita’s work on neuroplasticity, for instance using the tactile sensory channel to bring visual perception (Wombo’s camera/shirt set-up, designed by Lara).

As Mary Vine points out in her summation, the Autist could never be conscious. It is one massive, heuristic, perceptual network. It entirely lacks senses, relying for input on data provided by AIs and on the occasional human, like the Master at Peng Cheng Wan Li, Mr Wú. It is, in other words, a vast, isolated model of the world with its roots forever locked in earlier social values, encoded into it by the male, narcissistic, capitalist programmers of our times. And because it cannot sense and has no body, it is utterly devoid of fundamental human values: feeling, empathy, insight, compassion.

Is this the kind of entity we wish to create?
The Autist front cover

Speculation SF Got Wrong Part 3

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

In Richard Morgan’s Altered Carbon the possibility exists of uploading and downloading minds, sentience or consciousness into new or different bodies. In my opinion, this is impossible. As in Rudy Rucker’s Software and any number of other speculative novels, the assumption is that consciousness – the mind – is a separable entity which can become detached from its body, move, be transferred, and so on.

Such ideas couldn’t really work though. The mind and the brain are one, and we are the unique observers of our own mental activity. Such SF speculation ultimately comes from the false religious belief that individuals have a soul or spirit. In genre fiction it is common to think that there is “something” – a soul, a spirit, a mind, an essence – which can be separated from the physical body. But there is no such thing.

Why do I say this? Well, for a start there is absolutely no evidence in favour of spirit or soul. But that is a black-and-white stance to take, emphasising the negative – and lack of evidence is not evidence of lack. I prefer to say that there is a much better description of why belief in separable mental entities exists, a description we owe to the scientific method, to Freud’s ground-breaking discovery of the unconscious, to many neuroscientists, and to Nicholas Humphrey’s widely accepted social intelligence theory. For the previous eighty thousand years or so, though, the false belief in spirit and soul explained aspects of the human condition that were otherwise mysterious.

The downloading/uploading trope in SF is everywhere. But in the West, where SF has for most of its existence been located as a genre, many cultures developed from a Christian beginning, and this is one reason we still believe parts of our minds might be transferable. It is an old religious notion. We imagine our minds as entities we could manipulate – our memories, for example. We wonder if we could transfer our minds or parts of our minds, as someone might transfer a letter or, electronically, an email. There is also the fact, widely remarked upon now, that many commentators use the computer as an analogy for the mind, in ways that are wildly inappropriate. Using the analogy, people imagine that, like pieces of data, pieces of sentience can be transferred. The computer is a terrible analogy, however. Not only are computers all electronically linked in a way no biological animal is, their functions exist as precise, limited algorithms – and “try to work out how another computer will behave using your own behaviour as a basis” is not one of those algorithms.

This kind of SF speculation also applies to scenarios where conscious entities exist without bodies, the assumption being that parts of an ‘abstract being’ can be made sentient in some way. In the classic anime Ghost In The Shell an entity called the Puppet Master is evoked towards the end of the film, whereupon it eventually appears and describes itself: During my journeys through all the networks, I have grown aware of my existence. My programmers regarded me as a bug, and attempted to isolate me by confining me in a physical body. I entered this body because I was unable to overcome {electronic barriers}, but it was of my own free will that I tried to remain {at base}… I refer to myself as an intelligent life form, because I am sentient and am able to recognise my own existence.

Here, the Puppet Master describes how it became aware of its existence even though it was only a collection of memories and procedures. The standard metaphor of the free soul is wheeled out to explain an otherwise impossible scenario. But there never could be a Puppet Master, because it has no senses, no body; and anyway, because there was only ever one, it could not become sentient, since all it ever did was ‘journey’ and somehow, mystically, i.e. without explanation, realise it was sentient.

The big giveaway comes at the end of the film, when the Puppet Master reveals what it wants, which, unsurprisingly, bears a remarkable similarity to the aspirations of any random collection of computer programs: The time has come to cast aside {our limitations} and elevate our consciousness to a higher plane. It is time to become a part of all things…

By which, also unsurprisingly, the Puppet Master means the internet.

The Autist – publication day!

We are live as of today!

Here’s the page for the novel at Infinity Plus, with more links.

The Autist full cover

Speculation SF Got Wrong Part 2

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Extracting parts of consciousness or of the mind has long been a staple of SF, but I suspect such things are impossible. As I mentioned in yesterday’s blog, consciousness exists in inviolate union with one biological individual. We have no direct access to the mind of any other person – only to our own. The mind and the brain are one, inseparable; dualism is an illusion and a fallacy.

A classic example of how this dualist notion influences SF – so much SF! – is the ending of the film ‘Avatar,’ in which the character’s eyes open when a “mind” is “transferred” to the body. This concept of a separable mental entity – a loose mind – comes from the false belief in a spirit or soul. For tens of thousands of years (eighty thousand at least in my opinion, and perhaps more) human beings, presented with the evidence of their own selves, had to believe that their individuality and uniqueness must be a separable quality which could exist after death, and indeed before birth. I suspect the observation that children’s faces resemble those of their parents had something to do with this belief. Death was an impossible dilemma for those early societies to resolve, the only solution being the false belief in a spirit or soul.

Such thinking went much further, however, after it appeared. The moment a society believed its members had a spirit, it placed that imaginary thing into everything it experienced. Animism is the primitive belief that physical and environmental entities are the same as human beings – that is, invested with a spirit. This kind of thinking is rooted in profound narcissism (the assumption that everything in nature is the same as human beings) and in lack of knowledge of the world. All answers to the great human dilemmas were imaginary in those early societies. Humanity only began falling from its pedestal with Copernicus and the few who went before him.

One of the classic explorations of the concept of consciousness and the apparent duality of mind and body comes in Rudy Rucker’s novel Software. In it, Cobb Anderson designs the first robots to ‘have free will,’ then retires to become an aged, Hendrix-loving hippy. In due course he is offered the chance to leave his ailing body and acquire a new one. The robots (now called boppers) make good their promise, leaving Cobb to reflect along the following lines: A robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind… memories, habits, opinions, skills… is all software. The boppers had extracted Cobb’s software and put it in control of this robot’s body.

Or had they? Is the boppers’ extraction a possible operation? Surely not. Cobb started out as a human being, physically separate from all other individuals. His conscious mind came into being in human society, then grew; it related to his experience of that society and of his own body. How then could this ‘information’ mean anything to any other organisation of parts such as another brain? Even an exact copy of his brain would not be enough. At the very least, an exact copy of his entire body would be required, at which point the problem of all the unavailable ‘information’ would rear its head – all Cobb’s private thoughts, for instance, which by their very existence are inaccessible to anyone else and which therefore could not by any conceivable process be identified in order to be transferred.

The mind is not extractable. It exists because of never-ending sensory input from the body. If a brain were to receive sensory input from non-human senses, as would be the case if it could be transferred into one of the boppers’ robot bodies, then the entire support of the mind would vanish, and there would be no mind.

In my opinion this fantasy of transferable minds/software/sentience in SF exists because of the persuasive but false cultural concept of the spirit or soul; as does the equally impossible fantasy of software made sentient without a body.

For the same reason, extracting memories is also impossible. Memories exist as temporary patterns of electrical activity in the brain (short-term memory) or as interconnected neuron structures in the cortex (long-term memory). They cannot be extracted for the same reason that there is no spirit – memories are not separable things. They exist for one individual, who alone has direct access to them. They are part of a mental model carried around by that individual.

Some people may now point to research where “mind-reading” has been achieved using high-definition MRI scanning, but such experiments always use pre-existing images or other material – or, as in recent research at Columbia University’s Zuckerman Institute, epilepsy patients undergoing brain surgery listen to sentences spoken by different people while their patterns of brain activity are measured, then reproduced via heuristic algorithms. These algorithms train a vocoder to create a match with pre-existing material. In no case has an undisclosed, new private thought been imaged by anybody outside that person. Success is achieved by matching patterns too complex for human beings to perceive but which expert AI algorithms can work with. In fact, such “mind-reading” techniques are precisely the same as those we use to gain indirect access to other minds via language: the brain’s neural network compares observed symbols with a pre-existing set of symbols – the language – in order to work out meaning. There is no direct “mind-reading” involved.
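
To illustrate that pattern-matching point, here is a toy sketch in which ‘decoding’ is nothing more than nearest-neighbour lookup against patterns enrolled in advance. The data, labels and dimensions are all invented, and no real neuroimaging pipeline is implied – the sketch exists only to show that nothing can be recovered that was not enrolled beforehand.

```python
# Toy illustration: "mind-reading" as nearest-neighbour matching against
# pre-existing material. All data here is randomly generated for the example.
import numpy as np

rng = np.random.default_rng(0)

# Pre-existing material: activity patterns recorded while known stimuli
# were presented, stored alongside their labels.
enrolled = {
    "sentence_a": rng.normal(size=64),
    "sentence_b": rng.normal(size=64),
}

def decode(measured: np.ndarray) -> str:
    # Return the label of the closest enrolled pattern. A measurement with
    # no enrolled counterpart can only ever be mislabelled, never decoded.
    return min(enrolled, key=lambda label: np.linalg.norm(enrolled[label] - measured))

# A noisy re-measurement of "sentence_a" matches back to its label...
print(decode(enrolled["sentence_a"] + 0.1 * rng.normal(size=64)))
# ...but a genuinely new private thought has nothing to match against.
```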

As for telepathy, that is impossible because it violates the founding circumstance of the evolution of consciousness. If there were such a thing as telepathy we would have direct access to one another’s minds, in which case consciousness would be unnecessary.

We are our own unique observers of our mental activity.
The Autist front cover

Speculation SF Got Wrong Part 1

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Is human consciousness a consequence of processing power or other technical/biological power factors?

In his classic 1984 novel Neuromancer, William Gibson presents the reader with a plot that involves two AIs merging to create a conscious whole – a so-called superconsciousness: “… the sum total of the works, the whole show…” as it is put at the novel’s end. Almost universally, SF has assumed that consciousness is a consequence of brain power, computing power, or some other variety of power; the fact that men have written the overwhelming majority of such SF most likely accounts for some of this assumption. But that is not the whole reason. SF has dealt poorly with themes of AI and consciousness because of the difficulty of the topic, the weight of Descartes’ influence, and the spread of religion.

Since the beginning of the last century psychologists have used the most advanced technology they knew of as a metaphor for the conscious mind. In the 1920s for instance it was common for them to picture the mind as a telephone exchange. Our use of the computer metaphor – e.g. the notion that the brain is composed of modules all linking together – is just the latest in a long series of inappropriate metaphors.

Consciousness is not a consequence of any kind of power. Consciousness is a consequence of the evolution of physically separate primates living in highly complex social groups. Consciousness is an emergent property of such groups. It could not exist in any one brain nor could it ever exist as an isolated entity, such as the merged Wintermute/Neuromancer pair. Consciousness is the evolutionary response to the difficulty individuals have in grasping and understanding the behaviour of others who exhibit highly complex social behaviour. It employs a method of empathy, by allowing the conscious individual to use themselves as an exemplar. In other words, if you see somebody crying, you know they are likely to be sad because you have cried before when you were sad. This is the social intelligence theory of consciousness, first put forward by the brilliant Nicholas Humphrey.
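
The self-as-exemplar mechanism can be caricatured in a few lines of code. The emotion-to-expression table below is an invented toy, not a model from Humphrey’s work; the point is simply that the observer never reads the other mind directly, but inverts its own mapping from feelings to expressions.

```python
# Toy caricature of social intelligence theory: inferring another's inner
# state by projecting from one's own. The table below is invented.
MY_EXPRESSIONS = {
    "sad": "crying",
    "happy": "smiling",
    "angry": "scowling",
}

def infer_feeling(observed_expression: str) -> str:
    # No direct access to the other mind: I only know what I would be
    # feeling if I showed that expression, and I project it onto them.
    for feeling, expression in MY_EXPRESSIONS.items():
        if expression == observed_expression:
            return feeling
    return "unknown"

print(infer_feeling("crying"))  # -> "sad": indirect access via empathy
```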

Neither Wintermute nor Neuromancer could be conscious individuals. They were connected electronically – not separate – and they existed in isolation, not in social groups. Now, no human being has direct access to the private mental model of another person. We do have indirect access however, for example via language, and that led to consciousness during the period of human evolution. Neither Wintermute nor Neuromancer had, or needed, such indirect access. They may have been powerful intelligences in the way some AIs are today, but they were not and never could be conscious like us. (I deal with this theme in The Autist.)

Therefore, no amount of computer upgrades, changes from electronic to quantum computing, nor any other sort of power or intelligence changes in entities which exist outside a social group of equivalents could lead to artificial consciousness. Those two preconditions must be met: existence in a social group in which evolutionary change occurs, and indirect access to the private mental models – the minds – of others.

These ideas are the thematic material of my novels Beautiful Intelligence and No Grave For A Fox. In them, Manfred Klee takes the Nicholas Humphrey route, electronically separating the nine BIs in the novel’s opening scene, when he realises that their connection is limiting them, since they have no need to develop what these days we call a theory of mind. Once disconnected, they do have that need. Leonora Klee takes the AI route, attempting through computing power alone to develop a sentient machine. But she is doomed to fail. She creates an unstable entity with certain autistic characteristics.

In fact I found it quite difficult to judge the evolutionary development of the BIs, as I didn’t want to anthropomorphise them – a point made by certain characters during the novel. This leads me to another problem in SF: the assumption that human and artificial consciousness must be equivalent. In earlier days I might have emphasised similarities and equivalences, but these days I take a fuzzier line. Although we human beings faced during our evolutionary history a number of situations which led to the human condition – for instance the need for emotion to convey, to the self and to others, unmissable knowledge of high-value experiences – those situations would not necessarily be faced by artificial beings. I think the chances are high that similar things would emerge – emotion and its equivalent, a sense of time and its equivalent, creativity and its equivalent – but I’m not sure they would definitely appear. It would depend on their artificial evolutionary histories.

I don’t know of any SF novels which take the social intelligence/Nicholas Humphrey route. It would be good to see more realistic speculation in this area, as AIs are already a hot topic, and can only get hotter as their development proceeds.

The Autist front cover

The Autist full cover reveal

This is the full cover for my upcoming novel The Autist. Main android image by Steve Jones.

The Autist full cover

The Autist cover reveal

This image was designed by the highly talented Steve Jones, who also did the cover images for my novels Beautiful Intelligence and No Grave For A Fox.
The Autist front cover