Speculation SF Got Wrong Part 2

by stephenpalmersf

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Extracting parts of consciousness or of the mind has long been a staple of SF, but I suspect such things are impossible. As I mentioned in yesterday’s blog, consciousness exists in inviolate union with one biological individual. We have no direct access to the mind of any other person – only to our own. The mind and the brain are one and inseparable; Dualism is an illusion and a fallacy.

A classic example of how this Dualist notion influences SF – so much SF! – is the ending of the film ‘Avatar’, in which the character’s eyes open as a “mind” is “transferred” into a new body. This concept of a separable mental entity – a loose mind – comes from the false belief in a spirit or soul. For tens of thousands of years (eighty thousand at least, in my opinion, and perhaps more) human beings, presented with the evidence of their own selves, had to believe that their individuality and uniqueness were a separable quality which could exist after death, and indeed before birth. I suspect the observation that children’s faces resemble those of their parents had something to do with this belief. Death was an impossible dilemma for those early societies to resolve, and the spirit or soul was their only available answer.

Such thinking went much further, however, once it appeared. The moment a society believed its members had a spirit, it placed that imaginary thing into everything it experienced. Animism is the primitive belief that physical and environmental entities are the same as human beings – that is, invested with a spirit. This kind of thinking is rooted in profound narcissism (the assumption that everything in nature is the same as human beings) and in ignorance of the world. All answers to the great human dilemmas were imaginary in those early societies. Humanity only began falling from its pedestal with Copernicus and the few who went before him.

One of the classic explorations of the concept of consciousness and the apparent duality of mind and body comes in Rudy Rucker’s novel ‘Software’. In it, Cobb Anderson designs the first robots to ‘have free will’, then retires to become an aged, Hendrix-loving hippy. In due course he is offered the chance to leave his ailing body and acquire a new one. The robots (now called boppers) make good their promise, leaving Cobb to reflect along the following lines: a robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind… memories, habits, opinions, skills… is all software. The boppers had extracted Cobb’s software and put it in control of this robot’s body.

Or had they? Is the boppers’ extraction a possible operation? Surely not. Cobb started out as a human being, physically separate from all other individuals. His conscious mind came into being in human society, then grew; it related to his experience of that society and of his own body. How then could this ‘information’ mean anything to any other organisation of parts such as another brain? Even an exact copy of his brain would not be enough. At the very least, an exact copy of his entire body would be required, at which point the problem of all the unavailable ‘information’ would rear its head – all Cobb’s private thoughts, for instance, which by their very existence are inaccessible to anyone else and which therefore could not by any conceivable process be identified in order to be transferred.

The mind is not extractable. It exists because of never-ending sensory input from the body. If a brain were to receive sensory input from non-human senses – as would be the case if a brain could be transferred into one of the boppers’ robot bodies – then the entire support of the mind would vanish, and there would be no mind.

In my opinion this fantasy of transferable minds/software/sentience in SF exists because of the persuasive but false cultural concept of the spirit or soul, as does the equally impossible fantasy of software made sentient without a body.

For the same reason, extracting memories is also impossible. Memories exist as temporary patterns of electrical activity in the brain (short-term memory) or as networks of interconnected neurons in the cortex (long-term memory). They cannot be extracted for the same reason that there is no spirit – memories are not separable things. They exist for one individual, who alone has direct access to them. They are part of a mental model carried around by that individual.

Some people may now point to research in which “mind-reading” has been achieved using high-resolution brain scanning, but such experiments always use pre-existing images or other material. In the case of recent research at Columbia University’s Zuckerman Institute, epilepsy patients undergoing brain surgery were asked to listen to sentences spoken by different people while patterns of their brain activity were measured, then reproduced via heuristic algorithms; these algorithms train a vocoder to create a match with pre-existing material. In no case has an undisclosed, new private thought been imaged by anybody other than the thinker. Success is achieved by matching patterns too complex for human beings to perceive but which expert AI algorithms can work with. In fact, such “mind-reading” techniques are precisely those we use to gain indirect access to other minds via language: the brain’s neural network compares observed symbols with a pre-existing set of symbols – the language – in order to work out meaning. There is no direct “mind-reading” involved.
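The limitation described above can be caricatured in a few lines of code: a decoder trained on recordings of known stimuli can only ever answer with one of those stimuli. This is a toy sketch, nothing like the real vocoder pipeline; the activity vectors and sentences are entirely invented for illustration.

```python
# Toy "mind-reading" decoder: match a new brain-activity pattern against
# patterns recorded while the subject experienced known stimuli.
# Note it can only ever return a stimulus from its training set -- a
# genuinely novel private thought has no entry to match against.

import math

# Hypothetical training data: activity vectors recorded while the
# subject heard each known sentence (invented numbers).
training = {
    "the cat sat":  [0.9, 0.1, 0.3],
    "hello world":  [0.2, 0.8, 0.5],
    "good morning": [0.4, 0.4, 0.9],
}

def decode(activity):
    """Return the known stimulus whose recorded pattern is nearest
    (Euclidean distance) to the new recording."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda s: dist(training[s], activity))

# A recording close to the 'hello world' pattern decodes to that
# sentence; nothing outside the training vocabulary can ever appear.
print(decode([0.25, 0.75, 0.5]))  # -> hello world
```

The point of the sketch is the closed output set: however sophisticated the pattern-matching, the decoder maps brain activity onto pre-existing material, which is the author’s argument in miniature.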

As for telepathy, that is impossible because it violates the founding circumstance of the evolution of consciousness. If there were such a thing as telepathy, we would have direct access to one another’s minds, in which case consciousness would be unnecessary.

We are our own unique observers of our mental activity.
[Image: ‘The Autist’ front cover]