Speculation SF Got Wrong Part 4
In this series of four daily posts accompanying my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that, in my opinion, SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.
Having covered consciousness not being a factor of computing power, the impossibility of extracting or linking to parts of consciousness, and the impossibility of uploading or downloading into new bodies, I want to cover a final aspect of SF speculation – the impossibility of creating sentient virtual minds or copies of minds.
This is a staple of much SF, including for instance certain books by Julian May in which Jon Remillard experiences an evolutionary jump, discards his physical form and metamorphoses into his final state as a disembodied brain. But a brain/mind without a body is effectively nothing. Early episodes of Doctor Who did a similar thing with the species known as the Morpho, and the concept is regularly used in much cinema SF. Consciousness, however, is founded on sensory input, as shown by Nicholas Humphrey (amongst others) in his books Seeing Red and A History Of The Mind. Without sensory input there is nothing supporting the mental model we all carry in our minds. We continually update our model of the world, mostly without being aware of it. Lacking such input, there is nothing for consciousness to work with. Sensory deprivation experiments have shown how quickly the mind begins to disintegrate if sensory input is missing. “What each species knows of reality is what its senses allow it to construct,” as Dorothy Rowe put it in The Construction Of Life & Death. In other words, any post-death disembodied existence is impossible.
Similarly, in William Gibson’s Neuromancer, the AI known as Neuromancer attempts to trap Case inside a cyber-construct, where he finds the “consciousness” of his girlfriend from Chiba City, who was murdered by one of Case’s underworld contacts. But without a body Linda Lee is nothing. The intertwining of body and mind cannot be undone. Such undoing is a false belief, founded once again on the religious notion of a separable spirit or soul; it is a mistake to think that consciousness could be extracted and live on after a body’s death. (We can blame Descartes, along with all the modern religions, for many of these misconceptions.)
Of course, even though all private mental activity is forever beyond the boundary of external acquisition, public information about such activity is not – just as we have indirect access to other minds but no direct access. I used this point when creating the metaframes of my novel Muezzinland. Metaframes are complex entities of data, but they are not records of minds, rather they are records of the public activity, history and observed character of minds. So, for instance, there could be a metaframe of Mnada the Empress of Ghana, which would collect all her public utterances, her observed character, appearance and her entire life history. This could be animated in the virtual reality of the Aether to create the impression of a copy of the Empress. But such a copy would contain none of the Empress’ private thoughts, and it would not be conscious. It might appear to be conscious through sheer realism, but it never actually would be.
Similar creations exist in my new novel The Autist, where they are known as data shadows. A data shadow is an entity created from the online activity of an individual: personal records, medical records, gaming records, surveillance camera data and so on. As is observed during the novel, such entities can become complex, depending on the amount of data gathered. But a data shadow could never be conscious. It can only exist as an approximation of an individual built up over time from public data.
In The Autist, one of my intentions was to speculate on what might happen should the development of AI continue as it is presently. In this series of blogs I have tried to show that consciousness is a result of evolution by natural selection acting upon physically separate biological creatures living in intense, sophisticated social groups. SF speculation about minds, souls, spirits, software etc being separable and transferable is based on an antiquated, false, imaginary concept, which, because human cultural evolution is slow, still remains to trouble us today.
My speculation takes as its starting point the notion that the sensory channels of the brain and the perceptual channels are separate. Sensation is our creation. There is no chain of causation beginning with something out there in the real world and ending up in the mind with qualia: the redness of red, the pain-ness of pain, etc. This separation and its associated processes have been shown to be the case by Nicholas Humphrey’s work on blindsight (described in the novel by the character Lara Vine), and by Paul Bach-y-Rita’s work on neuroplasticity, for instance using the tactile sensory channel to bring visual perception (Wombo’s camera/shirt set-up, designed by Lara).
As Mary Vine points out in her summation, the Autist could never be conscious. It is one massive, heuristic, perceptual network. It entirely lacks senses, relying for input on data provided by AIs and by the occasional human, such as Mr Wú, the Master at Peng Cheng Wan Li. It is, in other words, a vast, isolated model of the world with its roots forever locked in earlier social values, encoded into it by the male, narcissistic, capitalist programmers of our times. And because it cannot sense and has no body, it is utterly devoid of fundamental human values: feeling, empathy, insight, compassion.
Is this the kind of entity we wish to create?