Speculation SF Got Wrong Part 1
by stephenpalmersf
In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.
*
Is human consciousness a consequence of processing power or other technical/biological power factors?
In his classic 1984 novel Neuromancer, William Gibson presents the reader with a plot that involves two AIs merging to create a conscious whole – a so-called superconsciousness: “… the sum total of the works, the whole show…” as it is put at the novel’s end. Almost universally, SF has assumed that consciousness is a consequence of brain power, computing power, or some other variety of power, and most likely the fact that the overwhelming majority of such SF has been written by men accounts for some of this assumption. But that isn’t the whole reason. SF has dealt poorly with themes of AI and consciousness because of the difficulty of the topic, the weight of Descartes’ influence, and the spread of religion.
Since the beginning of the last century psychologists have used the most advanced technology they knew of as a metaphor for the conscious mind. In the 1920s for instance it was common for them to picture the mind as a telephone exchange. Our use of the computer metaphor – e.g. the notion that the brain is composed of modules all linking together – is just the latest in a long series of inappropriate metaphors.
Consciousness is not a consequence of any kind of power. Consciousness is a consequence of the evolution of physically separate primates living in highly complex social groups. Consciousness is an emergent property of such groups. It could not exist in any one brain nor could it ever exist as an isolated entity, such as the merged Wintermute/Neuromancer pair. Consciousness is the evolutionary response to the difficulty individuals have in grasping and understanding the behaviour of others who exhibit highly complex social behaviour. It employs a method of empathy, by allowing the conscious individual to use themselves as an exemplar. In other words, if you see somebody crying, you know they are likely to be sad because you have cried before when you were sad. This is the social intelligence theory of consciousness, first put forward by the brilliant Nicholas Humphrey.
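To make the self-as-exemplar mechanism concrete, here is a toy sketch in Python – my own illustration, with every name and mapping invented rather than drawn from any real system:

```python
# Toy illustration of the self-as-exemplar idea: an agent infers
# another's hidden state by asking "what state would make *me*
# behave that way?" All states and behaviours here are invented.

MY_EXPERIENCE = {
    "sad": "crying",
    "happy": "smiling",
    "afraid": "fleeing",
}

def infer_state(observed_behaviour):
    """Guess another's hidden state by inverting my own
    state-to-behaviour mapping: myself as the exemplar."""
    for state, behaviour in MY_EXPERIENCE.items():
        if behaviour == observed_behaviour:
            return state
    return None  # behaviour outside my own experience: no inference

print(infer_state("crying"))  # -> sad
```

The point of the toy is only that the inference runs through the observer’s own repertoire of experiences; with no such repertoire, there is nothing to invert.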
Neither Wintermute nor Neuromancer could be conscious individuals. They were connected electronically – not separate – and they existed in isolation, not in social groups. Now, no human being has direct access to the private mental model of another person. We do have indirect access however, for example via language, and that led to consciousness during the period of human evolution. Neither Wintermute nor Neuromancer had, or needed, such indirect access. They may have been powerful intelligences in the way some AIs are today, but they were not and never could be conscious like us. (I deal with this theme in The Autist.)
Therefore no amount of computer upgrades, changes from electronic to quantum computing, or any other increase in power or intelligence could lead to artificial consciousness in entities which exist outside a social group of equivalents. Two preconditions must be met: existence in a social group in which evolutionary change occurs, and indirect access to the private mental models – the minds – of others.
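As a rough way of picturing those two preconditions in software terms – a minimal sketch of my own, with the class and its methods invented purely for illustration:

```python
# Minimal sketch of the two preconditions: agents that are separate,
# hold private mental models, and reach one another only indirectly,
# through lossy signals. All names here are invented.

class SocialAgent:
    def __init__(self, name):
        self.name = name
        self._private_model = {}  # no other agent can read this directly

    def learn(self, topic, belief):
        self._private_model[topic] = belief

    def signal(self, topic):
        # Indirect access: others receive only what I choose to express,
        # a language-like summary, never the model itself.
        belief = self._private_model.get(topic, "no opinion")
        return f"{self.name} on {topic}: {belief}"

a = SocialAgent("A")
a.learn("weather", "storm coming")
message = a.signal("weather")  # all agent B ever receives is this signal,
print(message)                 # hence B's pressure to model A's mind
```

A Wintermute/Neuromancer-style merge would amount to letting one agent read another’s private model directly – which removes exactly the pressure that, on this theory, consciousness evolved to answer.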
These ideas are the thematic material of my novels Beautiful Intelligence and No Grave For A Fox. In them, Manfred Klee takes the Nicholas Humphrey route, electronically separating the nine BIs in his opening scene, when he realises that their connection is limiting them since they have no need to develop what these days we call a theory of mind. Once disconnected, they do have that need. Leonora Klee takes the AI route, attempting through computing power alone to develop a sentient machine. But she is doomed to fail. She creates an unstable entity with certain autistic characteristics.
In fact I found it quite difficult to judge the evolutionary development of the BIs, as I didn’t want to anthropomorphise them, a point made by certain characters during the novel. This leads me to another problem in SF, which is for authors to assume the equivalence of human and artificial consciousness. In earlier days I might have emphasised similarities and equivalences, but these days I take a fuzzier line. Although we human beings faced during our evolutionary history a number of situations which led to the human condition – for instance the need for emotion to convey, to the self and to others, unmissable knowledge of high-value experiences – those situations would not necessarily be faced by artificial beings. I think the chances are high that similar things would emerge – emotion and its equivalent, a sense of time and its equivalent, creativity and its equivalent – but I’m not sure they would definitely appear. It would depend on their artificial evolutionary histories.
I don’t know of any SF novels which take the social intelligence/Nicholas Humphrey route. It would be good to see more realistic speculation in this area, as AIs are already a hot topic, and can only get hotter as their development proceeds.
*
Related to the theory of mind and social development is the question of self. Would it even be possible for a distributed intelligence to have a sense of self? How would the existence of numerous, identical copies affect the “self”? How about constant “upgrades” and edits? Would the replacement and modification of hardware change an AI’s “self”?
These questions suggest that any form of consciousness in an AI would be radically different from what humans experience. We might have to acknowledge some kind of awareness or consciousness in them, possibly akin to that of some animals, possibly on a par with humans, but very alien to us.
These are all good speculative points! Thanks.
My novel ‘The Autist’ uses what we know at the moment, which is quite a bit more than people realise. I draw a distinction between the distributed nature of electronic creations and the isolated nature of biological creatures.
If you read Thomas Nagel’s essay ‘What Is It Like To Be A Bat?’ you’ll realise the philosophical difficulty of asking about radically different types of ‘consciousness’. Awareness and consciousness have to be very carefully defined in such discussions.
My intention in my series of posts was to play with this, relating it to the book.
Something new – and terrible – does emerge in ‘The Autist,’ but it’s not consciousness…
So is it your belief (based on the social consciousness theory) that socially isolated animals have no need of the sense of self/narrative generation that consciousness provides?
For me the self-analysis of consciousness is the primary purpose, and the emulation/empathy is the emergent secondary property. Consciousness allows you to model yourself and make theories about what will happen if you do X or Y. That doesn’t have to involve significant future planning. Imagine a creature stuck in a hole. To work out how to get out, a model of yourself is useful, and consciousness can provide that. Then, later in the evolutionary process, you can extend that to modeling other people, and that enables society.
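To sketch what I mean in code – a toy with invented actions and numbers, nothing more:

```python
# Toy sketch: an internal self-model lets a creature test actions
# "offline" before committing to one. Actions and heights invented.

SELF_MODEL = {"jump": 0.5, "dig": 0.0, "climb": 1.2}  # metres gained

def plan_escape(hole_depth):
    """Simulate each action against the self-model and return one
    predicted to clear the hole, instead of trying them for real."""
    for action, height_gained in SELF_MODEL.items():
        if height_gained >= hole_depth:
            return action
    return None  # the self-model predicts no single action escapes

print(plan_escape(1.0))  # -> climb
```

Swap the self-model for a model of someone else and the same machinery does the social modeling.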
Of course the extension of that is that far more animals than are normally credited with it have consciousness. Having a number of exotic pets – insects, arthropods, reptiles, amphibians, rodents – I can believe that.
Your hypothesis wouldn’t work from the perspective of natural selection. Having a large brain and doing all the incredibly complicated – and even dangerous, like speaking/eating – things which we do would not be selectable. There needs to be a *reason* for all this to evolve. That reason is provided by the complexity of social living. Don’t forget too that our ability to self-analyse is actually a kind of illusion, because sensation sourced in the real world and perception sourced in our brains seem to occur simultaneously. You are correct to say, however, that solitary animals have no need for consciousness, since they do not face the complexity of social living.