Speculation SF Got Wrong Part 1

by stephenpalmersf

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Is human consciousness a consequence of processing power, or of some other technical or biological factor?

In his classic 1984 novel Neuromancer, William Gibson presents the reader with a plot that involves two AIs merging to create a conscious whole – a so-called superconsciousness: “… the sum total of the works, the whole show…” as it is put at the novel’s end. Almost universally, SF has assumed that consciousness is a consequence of brain power, computing power or some other variety of power, and the fact that men have written the overwhelming majority of such SF most likely accounts for some of this assumption. But that isn’t the whole reason. SF has dealt poorly with themes of AI and consciousness because of the difficulty of the topic, the weight of Descartes’ influence, and the spread of religion.

Since the beginning of the last century, psychologists have used the most advanced technology they knew of as a metaphor for the conscious mind. In the 1920s, for instance, it was common for them to picture the mind as a telephone exchange. Our use of the computer metaphor – e.g. the notion that the brain is composed of modules all linked together – is just the latest in a long series of inappropriate metaphors.

Consciousness is not a consequence of any kind of power. Consciousness is a consequence of the evolution of physically separate primates living in highly complex social groups. Consciousness is an emergent property of such groups. It could not exist in any one brain, nor could it ever exist in an isolated entity such as the merged Wintermute/Neuromancer pair. Consciousness is the evolutionary response to the difficulty individuals face in grasping and understanding the behaviour of others in highly complex social groups. It works through empathy, allowing the conscious individual to use themselves as an exemplar. In other words, if you see somebody crying, you know they are likely to be sad because you have cried before when you were sad. This is the social intelligence theory of consciousness, first put forward by the brilliant Nicholas Humphrey.

Neither Wintermute nor Neuromancer could be conscious individuals. They were connected electronically – not separate – and they existed in isolation, not in social groups. Now, no human being has direct access to the private mental model of another person. We do have indirect access, however – for example via language – and it was that indirect access which led to consciousness over the course of human evolution. Neither Wintermute nor Neuromancer had, or needed, such indirect access. They may have been powerful intelligences in the way some AIs are today, but they were not and never could be conscious like us. (I deal with this theme in The Autist.)

Therefore no amount of computer upgrading, no switch from electronic to quantum computing, and no other change in the power or intelligence of an entity existing outside a social group of equivalents could lead to artificial consciousness. Those two preconditions must be met: existence in a social group within which evolutionary change occurs, and indirect access to the private mental models – the minds – of others.
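To make that pair of preconditions concrete, here is a minimal toy sketch in Python – my own illustration, not anything from Gibson’s novels or mine. Each agent has a private state that others cannot read directly; the group has only indirect access, via outward signals, and each agent reads those signals by using itself as an exemplar. All the names here (Agent, infer_state and so on) are hypothetical.

```python
import random

# Shared mapping from private states to the outward signals they produce.
SIGNALS = {"sad": "crying", "happy": "smiling", "angry": "scowling"}

class Agent:
    def __init__(self, name):
        self.name = name
        # Private state: in this toy model nothing outside the agent
        # reads this attribute directly.
        self.state = random.choice(list(SIGNALS))

    def express(self):
        """Indirect access: all other agents ever see is the signal."""
        return SIGNALS[self.state]

    def infer_state(self, observed_signal):
        """The empathy step: use the self as an exemplar – 'when I cry
        I am sad, so whoever is crying is probably sad'."""
        for state, signal in SIGNALS.items():
            if signal == observed_signal:
                return state
        return "unknown"

# A small social group of physically separate individuals.
group = [Agent(name) for name in ("A", "B", "C")]
observer, other = group[0], group[1]

guess = observer.infer_state(other.express())
print(f"{observer.name} reads {other.name} as {guess} "
      f"(actually {other.state})")
```

The point of the sketch is the asymmetry: state is private while express() is public, so an observer must model others through itself. A merged Wintermute/Neuromancer-style entity with direct access to every state would never need the infer_state step at all.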

These ideas are the thematic material of my novels Beautiful Intelligence and No Grave For A Fox. In them, Manfred Klee takes the Nicholas Humphrey route, electronically separating the nine BIs in the novel’s opening scene when he realises that their connection is limiting them, since they have no need to develop what these days we call a theory of mind. Once disconnected, they do have that need. Leonora Klee takes the AI route, attempting through computing power alone to develop a sentient machine. But she is doomed to fail: she creates an unstable entity with certain autistic characteristics.

In fact I found it quite difficult to judge the evolutionary development of the BIs, as I didn’t want to anthropomorphise them – a point made by certain characters during the novel. This leads me to another problem in SF: the assumption that human and artificial consciousness are equivalent. In earlier days I might have emphasised similarities and equivalences, but these days I take a fuzzier line. Although we human beings faced during our evolutionary history a number of situations which led to the human condition – for instance the need for emotion to convey, to the self and to others, unmissable knowledge of high-value experiences – those situations would not necessarily be faced by artificial beings. I think the chances are high that similar things would emerge – emotion and its equivalent, a sense of time and its equivalent, creativity and its equivalent – but I’m not sure they would definitely appear. It would depend on their artificial evolutionary histories.

I don’t know of any SF novel which takes the social intelligence/Nicholas Humphrey route. It would be good to see more realistic speculation in this area, as AIs are already a hot topic, and can only get hotter as their development proceeds.

[Image: The Autist front cover]