stephenpalmersf

Notes from genre author Stephen Palmer


In Wales

Had a successful day yesterday recording more incidental shots for the Condition: Human films, this time in Wales (my favourite country).


Avebury & West Kennet

I spent a terrific day filming at Avebury Stone Circle and West Kennet Long Barrow on Thursday. These were “incidental” shots – I wasn’t speaking to camera on this occasion, though even if I had wanted to, it would have been too windy. We captured some lovely images.

The project continues!


The Sensation Of Losing My Words

I’ve now done a couple of filming sessions for Condition: Human and the experience has been… interesting! Yesterday I spent an afternoon with my partner Nicky (director and camcorder operator) at a dell just outside Betws y Coed, and the first problem we encountered was the noise. The river was in full flow – tons of water coming off those Welsh mountains – and its roaring deafened us close-up. However, by filming a little away from the river and using the directional microphone we were able to get an acceptable balance between the background roar and my voice.

But the main problem (which I encountered when making my first recordings in Mortimer Forest) was the script. I’d written full scripts for the six short films earlier in the year, thinking that was the best option, but actually I’m not the sort of speaker who can remember his lines then deliver them. As I discovered when I did my presentation on consciousness at the day job last year, my natural mode is having a basic outline of the topics then speaking in extemporised fashion. Yesterday, as we nervously eyed the sky for rainclouds, I found myself often unable to remember even a few sentences. It’s a very strange sensation, going mind-blank. Even simple sentences were tricky! In some circumstances, after a few takes, I couldn’t deliver them at all.

So my plan is to amend the scripts so I have basic ideas – words, phrases – around which I’ll improvise. The other option I have is more voice-overs. Recently I analysed a documentary by Alastair Sooke (a presenter Nicky and I both like) to find that the ratio of to-camera delivery to voice-overs is about 55/45. My scripts were written thinking the proportion of to-camera work should be much higher.

Still, we had fun yesterday: enormous fun! This is a work I now know I can do, although whether I’m any good is another question entirely.


Condition: Human

For thirty-five years I’ve been fascinated by human consciousness, its evolution by natural selection, and the human condition. Some decades ago I made a couple of attempts at writing a work encapsulating some of the ideas I’d read – those of Erich Fromm, Dorothy Rowe and Nicholas Humphrey in particular – and some of my own, but the books didn’t work out. I’ve had a couple of tries since then, and again they didn’t work out. So a few years ago I had the idea to make a film, thinking that perhaps image and sound would be a better medium than words.

Starting this summer I hope to be making six short films encapsulating all the ideas which interest me: Condition: Human. This will be a personal view. I wrote the scripts earlier in the year, and have since then been working on a shooting schedule, locations, voice-overs – which are surprisingly difficult to do – and the music. My hope is that this and next month I’ll be able to finish the outdoor filming, leaving the indoor shoots, which can be done in any weather.

I’m not sure what kind of presenter I’ll be. Maybe it won’t work out. But I did have a test run last year, when I gave a half-hour extemporised lecture about the basics of human consciousness and its evolution. The previous year, at Asylum in Lincoln, I did something similar. So I think the chances are fair that I can be a half decent presenter.

More details to follow!


Books by Erich Fromm

Answering Dennett’s Question For Him

The philosopher and Darwinist Daniel Dennett is puzzled by the continued existence of religious belief. How, he wonders, does it have survival value? The fundamentalist atheist Richard Dawkins is similarly perplexed by the persistence of religion, 500 years after the beginning of the scientific revolution. I, on the other hand – an ironclad atheist just like D² – am not at all surprised by the persistence of religious belief.

First of all, a few notes on my own stance. I’ve always been an atheist. My novels often have the theme of exposing religion and spirituality for what they are (eg the ‘Factory Girl’ trilogy). I utterly reject any notion of deities, soul or spirit, and the afterlife. I’m also a Darwinist in that I entirely accept Darwin’s wonderful theory of evolution by natural selection. In other words, I’m remarkably like D². Why, then, the difference?

In this blog post I’d like to answer Dennett’s question for him. Why do spirituality (by which I mean belief systems up to 3,000 BC or thereabouts) and religion (what came after) continue to exist in societies suffused with and dependent upon the modern evidence-based way of thinking? After all, spacefaring nations East and West go to the moon because of science, not prayer. Hospitals work by science, not prayer. Vaccines were discovered by science, not prayer. When you want anything mended you go to a mender, not a priest. In short: prayer obviously doesn’t work in the world. Yet it remains a major focus for the greater proportion of the world’s population.

Spirituality and religion answer four major questions that all human beings must have answered if they are to live coherent, sane lives. Those questions typically revolve around the themes: (i) how did the universe come into existence; (ii) how did I come into existence; (iii) what is the meaning of my life; and (iv) how should I live? No human being can live sane and whole without some basic answer to these four questions. That’s part of being human. In other words, meaning is an unavoidable aspect of the human condition. D² have answered the four questions through science. Others answer them through religion. Science, spirituality and religion are all meaning frameworks.

A better reframing therefore of Dennett’s question is: what is the survival value of meaning frameworks? Now we see where D² have gone wrong. Religion is merely an imaginary subset of human meaning frameworks. Atheism is also a human meaning framework. Science is a non-imaginary subset of human meaning frameworks, working through the scientific method, which spirituality and religion explicitly deny.

In other words, from perhaps as far back as 100,000 years ago, spirituality was an absolutely inevitable invention for all hunter-gatherers, who could not under any circumstances have survived without it. Human beings profoundly live via metaphor. We tell stories. This is what the Darwinist and the fundamentalist atheist don’t understand. They apply Darwinism to social life. Darwinism in fact applies to bodies created by genetics. Applying Darwinism to social life – asking “What is the survival value of religion?” – is like applying the mechanical processes in clockwork to the notion of eleven o’clock. Eleven o’clock is a human concept emerging from the mechanical processes inside a clock. You don’t ask what eleven o’clock is by observing the precise positions of its cogs, levers and hands. You enquire as to its meaning in human life.

If human beings are to live happy, just, peaceful lives we have to expose the true nature of spirituality and religion. Dismissing it all as just a bunch of fairy stories, although literally accurate, strips metaphor from human minds, and without metaphor we are destined for insanity. We need stories to survive, and for the vast majority of human existence we had to invent them, because we didn’t know the truth about the real world. But now we do. Scientists accept that we defer to the real world, not the other way around. It is the real world which teaches us, not some book written 2,000 years ago, or some imaginary collection of principles invented in the depths of the last Ice Age. A new story is therefore required.

Yuval Noah Harari recently pointed out that for the first half of the twentieth century there were essentially three human stories: Capitalism, Socialism and Liberalism. After WW2 there was one human story: Liberalism. But now, we have no human story. That observation should send a chill through the hearts and minds of all who care about the future of the human race.


The Trickster

The Trickster is a universal and ancient archetype. Why did such a character become so important in prehistoric, then in historic myth? Tricksters were everywhere: Loki in the Norse pantheon, Hermes in Ancient Greece, the Coyote or Raven spirit to certain Native American tribes, Anansi the Spider in West Africa, and so on.

Not all tricksters are the same. Some (Loki for instance) display gender fluidity – as a mare, Loki gave birth to Odin’s eight-legged horse Sleipnir – while some are variously heroes and/or villains, and some are more thief than anything else. But the prime focus of the trickster is deceit.

Deceit is a fascinating concept. Some scholars of language suggest that the human capacity for deceit is the basis of metaphor; in other words, a metaphor is a layer above reality that is not itself reality, yet summarises or describes reality better. To make a metaphor about, for instance, shock as a ‘hammer blow’, you have to be deceptive regarding the absence of any actual hammer or blow.

But deceit has one fundamental characteristic which marks it out as crucial in human evolution, and therefore in mythology. To deceive somebody you have to have what psychologists call theory of mind. Theory of mind is the understanding each of us has regarding other people, i.e. that they too have a mind which they use in an identical way to ours. Children acquire theory of mind when they are fairly young, depending on circumstances – it can be as early as six years, or as late as eight or nine. Before then, it is easy to show through experiment that young children are unable to grasp what other individuals may or may not believe. Chimpanzees and other great apes have been shown to have a basic theory of mind, which means they are able to grasp what other members of their social group may or may not believe, or know. Some male chimps use this in mating strategies; many chimps use it to conceal food stash locations.

The human capacity for theory of mind however far exceeds what apes can manage. We are capable of extraordinarily complex feats of understanding, which we rather take for granted because it is such an integral part of life, but which in fact are remarkable, and a major clue to the nature of consciousness. As a result we are able to make sophisticated calculations about the knowledge or beliefs of others. In literature, this is called order of intentionality. For example: the author of a novel believes certain things about their readers; a character in the novel will have their own beliefs; that character may believe or know something about another character, who may in their mind know something about another, and so on… One of the reasons Shakespeare is so lauded is his amazing ability to manipulate, for the benefit of his audience, complex many-ordered intentionality amongst his characters.
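The nesting itself is simple enough to sketch in code. Purely as an illustration – the helper functions and the Othello example below are my own invention, and Iago’s scheming is reduced, for simplicity, to chained beliefs – each order of intentionality just wraps a proposition one level deeper:

```python
def believes(agent, proposition):
    """Wrap a proposition in one further order of intentionality."""
    return f"{agent} believes that {proposition}"

def order(statement):
    """Count the orders of intentionality in a nested statement."""
    return statement.count("believes that")

# Othello, reduced to chained beliefs:
first = believes("Othello", "Desdemona loves Cassio")   # 1st order
second = believes("Iago", first)                        # 2nd order
third = believes("the audience", second)                # 3rd order

print(third)
# the audience believes that Iago believes that Othello believes that Desdemona loves Cassio
print(order(third))  # 3
```

Each extra wrapping is trivial for the machine but increasingly hard for a human mind to track, which is part of what makes many-ordered intentionality in drama so impressive.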

Theory of mind, then, is the essence of the trickster. The trickster is universal because theory of mind is universal and fundamental to social life. The trickster is in fact the metaphor for theory of mind in mythology, folklore and fireside tale. Our very earliest myths (which, as Karen Armstrong so brilliantly pointed out, are at once real events, retold versions, and instructions for living summarised in those retold versions) contain this archetype precisely because it is fundamental to social life.

Ethnographic studies have shown that hunter-gatherer communities talk about many things during the day – the minutiae of life – but at night four fifths of talk is storytelling. In prehistoric times we needed examples of how theory of mind is used. We needed to know why the Norse trickster Loki changed his shape into a mare then gave birth to Odin’s steed Sleipnir. All this passed on in pre-literate cultures one of the essentials of social life: our capacity to deceive.


Speculation SF Got Wrong Part 4

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Having covered consciousness not being a factor of computing power, the impossibility of extracting or linking to parts of consciousness, and the impossibility of uploading or downloading into new bodies, I want to cover a final aspect of SF speculation – the impossibility of creating sentient virtual minds or copies of minds.

This is a staple of much SF, including for instance certain books by Julian May in which Jon Remillard experiences an evolutionary jump, discards his physical form and metamorphoses into his final state as a disembodied brain. But a brain/mind without a body is effectively nothing. Early episodes of Dr Who did a similar thing with the species known as the Morpho, and the concept is regularly used in much cinema SF. Consciousness, however, is founded on sensory input, as shown by Nicholas Humphrey (amongst others) in his books Seeing Red and A History Of The Mind. Without sensory input there is nothing supporting the mental model we all carry in our minds. We continually update our model of the world, mostly without being aware of it. Lacking such input there is nothing for consciousness to work with. Sensory deprivation experiments have shown how quickly the mind begins to disintegrate when sensory input is missing. “What each species knows of reality is what its senses allow it to construct,” as Dorothy Rowe put it in The Construction Of Life & Death. In other words, any post-death disembodied existence is impossible.

Similarly, in William Gibson’s Neuromancer, the AI known as Neuromancer attempts to trap Case inside a cyber-construct, where he finds the “consciousness” of his girlfriend from Chiba City, who was murdered by one of Case’s underworld contacts. But without a body Linda Lee is nothing. The intertwining of body and mind cannot be undone. Such undoing is a false belief, again founded on the religious notion of a separable spirit or soul; it is a mistake to think that consciousness could be extracted and live on after a body’s death. (We can blame Descartes, as well as all the modern religions, for many modern misconceptions.)

Of course, even though all private mental activity is forever beyond the boundary of external acquisition, public information about such activity is not – just as we have indirect access to other minds but no direct access. I used this point when creating the metaframes of my novel Muezzinland. Metaframes are complex entities of data, but they are not records of minds, rather they are records of the public activity, history and observed character of minds. So, for instance, there could be a metaframe of Mnada the Empress of Ghana, which would collect all her public utterances, her observed character, appearance and her entire life history. This could be animated in the virtual reality of the Aether to create the impression of a copy of the Empress. But such a copy would contain none of the Empress’ private thoughts, and it would not be conscious. It might appear to be conscious through sheer realism, but it never actually would be.

Similar creations exist in my new novel The Autist, where they are known as data shadows. A data shadow is an entity created from the online activity of an individual: personal records, medical records, gaming records, surveillance camera data and so on. As is observed during the novel, such entities can become complex, depending on the amount of data gathered. But a data shadow could never be conscious. It can only exist as an approximation of an individual built up over time from public data.
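The idea of a data shadow can be made concrete with a rough sketch – purely illustrative, since the class, field names and records below are invented for this post, not taken from the novel. The key point the sketch preserves is that the shadow holds only public records, so its “complexity” grows with the data gathered while its private-thought content stays at zero:

```python
from collections import defaultdict

class DataShadow:
    """An approximation of a person built purely from public data.

    It can summarise observed behaviour, but it holds no private
    thoughts, so nothing here could amount to consciousness.
    """

    def __init__(self, subject):
        self.subject = subject
        self.records = defaultdict(list)  # source -> list of observations

    def ingest(self, source, record):
        """Add one piece of public data (a purchase, a sighting, etc.)."""
        self.records[source].append(record)

    def complexity(self):
        """The shadow grows with the amount of data gathered."""
        return sum(len(items) for items in self.records.values())

# Hypothetical example records:
shadow = DataShadow("Mnada")
shadow.ingest("medical", "flu vaccination, 2019")
shadow.ingest("surveillance", "seen at the station, 08:14")
shadow.ingest("gaming", "plays strategy games in the evenings")
print(shadow.complexity())  # 3
```

However many records accumulate, the structure only ever approximates an individual from the outside.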

Conclusion

In The Autist, one of my intentions was to speculate on what might happen should the development of AI continue as it is presently. In this series of blogs I have tried to show that consciousness is a result of evolution by natural selection acting upon physically separate biological creatures living in intense, sophisticated social groups. SF speculation about minds, souls, spirits, software etc being separable and transferable is based on an antiquated, false, imaginary concept, which, because human cultural evolution is slow, still remains to trouble us today.

My speculation takes as its starting point the notion that the sensory channels of the brain and the perceptual channels are separate. Sensation is our creation. There is no chain of causation beginning with something out there in the real world and ending up in the mind with qualia: the redness of red, the pain-ness of pain, etc. This separation and associated processes have been shown to be the case by Nicholas Humphrey’s work on blindsight (as described in the novel by Lara Vine), and by Paul Bach-y-Rita’s work on neuroplasticity, for instance using the tactile sensory channel to bring visual perception (Wombo’s camera/shirt set-up, designed by Lara).

As Mary Vine points out in her summation, the Autist could never be conscious. It is one massive, heuristic, perceptual network. It entirely lacks senses, relying for input on data provided by AIs, and from an occasional human like the Master at Peng Cheng Wan Li, Mr Wú. It is, in other words, a vast, isolated model of the world with its roots forever locked in earlier social values, encoded into it by the male, narcissistic, capitalist programmers of our times. And because it cannot sense and has no body, it is utterly devoid of fundamental human values: feeling, empathy, insight, compassion.

Is this the kind of entity we wish to create?
The Autist front cover

Speculation SF Got Wrong Part 3

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

In Richard Morgan’s Altered Carbon the possibility exists of uploading and downloading minds, sentience or consciousness into new or different bodies. In my opinion, this is impossible. As in Rudy Rucker’s Software and any number of other speculative novels, it is thought that consciousness – the mind – is a separable entity which can become detached from its body, move, be transferred and so on.

Such ideas couldn’t really work though. The mind and the brain are one, and we are the unique observers of our own mental activity. Such SF speculation ultimately comes from the false religious belief that individuals have a soul or spirit. In genre fiction it is common to think that there is “something” – a soul, a spirit, a mind, an essence – which can be separated from the physical body. But there is no such thing.

Why do I say this? Well, for a start there is absolutely no evidence in favour of spirit or soul. But that is a black & white stance to take, emphasising the negative – and lack of evidence doesn’t mean evidence of lack. I prefer to say that there is a much better description of why belief in separable mental entities exists, a description we owe to the scientific method, to Freud’s ground-breaking discovery of the unconscious, to many neuroscientists, and to Nicholas Humphrey’s widely accepted social intelligence theory. But for the previous eighty thousand years or so the false belief in spirit and soul explained aspects of the human condition otherwise mysterious.

The downloading/uploading trope in SF is everywhere. But in the West, where SF has for most of its existence been located as a genre, many cultures developed from a Christian beginning, and this is one reason we still believe parts of our minds might be transferable. It is an old religious notion. We imagine our minds as entities we could manipulate: our memories, for example. We wonder if we could transfer our minds or parts of our minds, as someone might transfer a letter or, electronically, an email. There is also the fact, widely remarked upon now, that many commentators use the computer as an analogy for the mind, in ways that are, if nothing else, wildly inappropriate. Using the analogy, people imagine that, like pieces of data, pieces of sentience can be transferred. The computer is a terrible analogy however. Not only are computers all electronically linked in a way no biological animal is, but their functions also exist as precise, limited algorithms, with “try to work out how another computer will behave using as a basis your own behaviour” not one of those algorithms.

This kind of SF speculation also applies to scenarios where conscious entities exist without bodies, the assumption being that parts of an ‘abstract being’ can be made sentient in some way. In the classic anime Ghost In The Shell an entity called the Puppet Master is invoked towards the end of the film, whereupon it eventually appears and describes itself: During my journeys through all the networks, I have grown aware of my existence. My programmers regarded me as a bug, and attempted to isolate me by confining me in a physical body. I entered this body because I was unable to overcome {electronic barriers}, but it was of my own free will that I tried to remain {at base}… I refer to myself as an intelligent life form, because I am sentient and am able to recognise my own existence.

Here, the Puppet Master describes how it became aware of its existence even though it was only a collection of memories and procedures. The standard metaphor of the free soul is wheeled out to explain an otherwise impossible scenario. But there never could be a Puppet Master, because it has no senses, no body; and anyway, because there was only ever one, it could not become sentient, since all it ever did was ‘journey’ and somehow, mystically, i.e. without explanation, realise it was sentient.

The big giveaway comes at the end of the film, when the Puppet Master reveals what it wants, which, unsurprisingly, bears a remarkable similarity to any random collection of computer programmes: The time has come to cast aside {our limitations} and elevate our consciousness to a higher plane. It is time to become a part of all things…

By which, also unsurprisingly, the Puppet Master means the internet.

Speculation SF Got Wrong Part 2

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Extracting parts of consciousness or of the mind has long been a staple of SF, but I suspect such things are impossible. As I mentioned in yesterday’s blog, consciousness exists in inviolate union with one biological individual. We have no direct access to the mind of any other person – only to our own. The mind and the brain are one, inseparable, with Dualism an illusion and fallacy.

A classic example of how this Dualist notion influences SF – so much SF! – is the ending of the film ‘Avatar.’ At the end, the character’s eyes open when a “mind” is “transferred” to the body. This concept of a separable mental entity – a loose mind – comes from the false belief in a spirit or soul. For tens of thousands of years (eighty thousand at least in my opinion, and perhaps more) human beings, presented with the evidence of their own selves, had to believe that their individuality and uniqueness must be a separable quality which could exist after death, and indeed before birth. I suspect the observation that children’s faces resemble those of their parents had something to do with this belief. But death was an impossible dilemma to resolve for those early societies, the only solution being the false belief in a spirit or soul.

Such thinking went much further, however, after it appeared. The moment a society believed its members had a spirit they placed that imaginary thing into everything they experienced. Animism is the primitive belief that physical and environmental entities are the same as human beings, that is, invested with a spirit. This kind of thinking is rooted in profound narcissism (i.e. that everything in nature is the same as human beings) and in lack of knowledge of the world. All answers to the great human dilemmas were imaginary in those early societies. Human society only began falling from its pedestal with Copernicus and those few who went before him.

One of the classic explorations of the concept of consciousness and the apparent duality of mind and body comes in Rudy Rucker’s novel Software. In it, Cobb Anderson designs the first robots to ‘have free will,’ then retires to become an aged, Hendrix-loving hippy. In due course he is offered the chance to leave his ailing body and acquire a new one. The robots (now called boppers) make good their promise, leaving Cobb to reflect along the following lines: A robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind… memories, habits, opinions, skills… is all software. The boppers had extracted Cobb’s software and put it in control of this robot’s body.

Or had they? Is the boppers’ extraction a possible operation? Surely not. Cobb started out as a human being, physically separate from all other individuals. His conscious mind came into being in human society, then grew; it related to his experience of that society and of his own body. How then could this ‘information’ mean anything to any other organisation of parts such as another brain? Even an exact copy of his brain would not be enough. At the very least, an exact copy of his entire body would be required, at which point the problem of all the unavailable ‘information’ would rear its head – all Cobb’s private thoughts, for instance, which by their very existence are inaccessible to anyone else and which therefore could not by any conceivable process be identified in order to be transferred.

The mind is not extractable. It exists because of never-ending sensory input from the body. If a brain were to receive sensory input from non-human senses, as would be the case if the brain could be transferred into one of the boppers’ robot bodies, then the entire support of the mind would vanish, and there would be no mind.

In my opinion this fantasy of transferable minds/software/sentience in SF exists because of the persuasive but false cultural concept of the spirit or soul; as does the equally impossible fantasy of software made sentient without a body.

For the same reason extracting memories is also impossible. Memories exist as temporary patterns of electrical activity in the brain (short-term memory) or as interconnected neuron structures in the cortex (long-term memory). They cannot be extracted for the same reason that there is no spirit – memories are not separable things. They exist for one individual, who alone has direct access to them. They are part of a mental model carried around by that individual.

Some people may now point to research where “mind-reading” has been achieved using high definition MRI scanning, but such experiments always use pre-existing images or other material, or, as in the case of recent research at Columbia University’s Zuckerman Institute, by asking epilepsy patients undergoing brain surgery to listen to sentences spoken by different people while patterns of brain activity are measured, then reproduced via heuristic algorithms. These algorithms train a vocoder to create a match with pre-existing material. In no case has an undisclosed, new private thought been imaged by anybody outside that person. Success is achieved by matching patterns too complex for human beings to perceive but which expert AI algorithms can work with. In fact, such “mind-reading” techniques are precisely the same as those we use to gain indirect access to other minds via language. The brain’s neural network is comparing observed symbols with a pre-existing set of symbols – the language – in order to work out meaning. There’s no direct “mind-reading” involved.
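The limitation can be shown with a rough sketch of this kind of decoding. Everything below is invented for illustration – the numeric “brain patterns” and sentence labels are mine, and real decoders are vastly more sophisticated – but the structural point holds: a matcher of this kind can only ever return the closest item in its library of pre-recorded templates, so a genuinely novel private thought, having no template, could never be read out:

```python
def nearest_template(observed, templates):
    """Return the known stimulus whose stored pattern best matches
    the observed pattern (smallest squared distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: distance(observed, templates[label]))

# Hypothetical patterns recorded while the subject heard known sentences:
templates = {
    "sentence A": [0.9, 0.1, 0.4],
    "sentence B": [0.2, 0.8, 0.7],
}

# A new recording is "decoded" only by matching it against that library:
print(nearest_template([0.85, 0.15, 0.5], templates))  # sentence A
```

Whatever pattern is fed in, the output is always drawn from the pre-existing material; the decoder has no way to represent anything outside it.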

As for telepathy, that is impossible because it violates the founding circumstance of the evolution of consciousness. If there was such a thing as telepathy we would have direct access to one another’s minds, in which case consciousness would be unnecessary.

We are our own unique observers of our mental activity.
The Autist front cover

Speculation SF Got Wrong Part 1

In this series of four daily posts to accompany my novel ‘The Autist’ I’m going to look at a few interesting bits of speculation that in my opinion SF got wrong. In fantasy you can suspend disbelief without worries, but I feel SF has a different foundation; and, while it’s a truism that SF futures are really about the present (e.g. William Gibson’s eighties-with-knobs-on Sprawl trilogy), we should perhaps expect a higher bar than in fantasy, where, delightfully, anything goes. My focus here is on themes of AI, the mind and consciousness.

*

Is human consciousness a consequence of processing power or other technical/biological power factors?

In his classic 1984 novel Neuromancer, William Gibson presents the reader with a plot that involves two AIs merging to create a conscious whole – a so-called superconsciousness: “… the sum total of the works, the whole show…” as it is put at the novel’s end. Almost universally SF has assumed that consciousness is a consequence of brain power, computing power, or some other variety of power, and most likely the fact that men have written the overwhelming majority of such SF accounts for some of this assumption. But that isn’t the whole reason. SF has dealt poorly with themes of AI and consciousness because of the difficulty of the topic, the weight of Descartes’ influence, and the spread of religion.

Since the beginning of the last century psychologists have used the most advanced technology they knew of as a metaphor for the conscious mind. In the 1920s for instance it was common for them to picture the mind as a telephone exchange. Our use of the computer metaphor – e.g. the notion that the brain is composed of modules all linking together – is just the latest in a long series of inappropriate metaphors.

Consciousness is not a consequence of any kind of power. Consciousness is a consequence of the evolution of physically separate primates living in highly complex social groups. Consciousness is an emergent property of such groups. It could not exist in any one brain nor could it ever exist as an isolated entity, such as the merged Wintermute/Neuromancer pair. Consciousness is the evolutionary response to the difficulty individuals have in grasping and understanding the behaviour of others who exhibit highly complex social behaviour. It employs a method of empathy, by allowing the conscious individual to use themselves as an exemplar. In other words, if you see somebody crying, you know they are likely to be sad because you have cried before when you were sad. This is the social intelligence theory of consciousness, first put forward by the brilliant Nicholas Humphrey.

Neither Wintermute nor Neuromancer could be conscious individuals. They were connected electronically – not separate – and they existed in isolation, not in social groups. Now, no human being has direct access to the private mental model of another person. We do have indirect access however, for example via language, and that led to consciousness during the period of human evolution. Neither Wintermute nor Neuromancer had, or needed, such indirect access. They may have been powerful intelligences in the way some AIs are today, but they were not and never could be conscious like us. (I deal with this theme in The Autist.)

Therefore, no amount of computer upgrades, changes from electronic to quantum computing, nor any other sort of power or intelligence changes in entities which exist outside a social group of equivalents could lead to artificial consciousness. Those two preconditions must be met: existence in a social group in which evolutionary change occurs, and indirect access to the private mental models – the minds – of others.

These ideas are the thematic material of my novels Beautiful Intelligence and No Grave For A Fox. In them, Manfred Klee takes the Nicholas Humphrey route, electronically separating the nine BIs in his opening scene, when he realises that their connection is limiting them since they have no need to develop what these days we call a theory of mind. Once disconnected, they do have that need. Leonora Klee takes the AI route, attempting through computing power alone to develop a sentient machine. But she is doomed to fail. She creates an unstable entity with certain autistic characteristics.

In fact I found it quite difficult to judge the evolutionary development of the BIs, as I didn’t want to anthropomorphise them, a point made by certain characters during the novel. This leads me to another problem in SF, which is for authors to assume the equivalence of human and artificial consciousness. In earlier days I might have emphasised similarities and equivalences, but these days I take a fuzzier line. Although we human beings faced during our evolutionary history a number of situations which led to the human condition – for instance the need for emotion to convey, to the self and to others, unmissable knowledge of high-value experiences – those situations would not necessarily be faced by artificial beings. I think the chances are high that similar things would emerge – emotion and its equivalent, a sense of time and its equivalent, creativity and its equivalent – but I’m not sure they would definitely appear. It would depend on their artificial evolutionary histories.

I don’t know of any SF novels which take the social intelligence/Nicholas Humphrey route. It would be good to see more realistic speculation in this area, as AIs are already a hot topic, and can only get hotter as their development proceeds.

The Autist front cover