Notes from genre author Stephen Palmer

Tag: AI

AI Art & All That

I’ve been very interested in the current debate about AI art, which has kicked off in recent months. I’ve read lots of thought-provoking blogs, Facebook posts and more, which have pushed the debate this way and that, making me wonder about my own opinion on the matter. Generally, as in my novel The Autist, I’ve been pretty pessimistic about how the worlds of writing, music and art are being smashed to pieces by the internet, social media, and digital life generally – not least AIs. With the arrival of Midjourney, however, that pessimism has lifted a little, thanks to the stunning weirdness and quality of such images – Jim Burns’ being the most numerous examples I’ve seen.

I call them images, but most people call them art. I’m not so sure, however, that distinguishing between the two is mere sophistry. That’s because my understanding of creativity is a particular one. (See also my blog series on Imagination.) Let’s have some definitions then.

Creativity is the response of conscious human minds – and only conscious ones – to the real world. Chimps, crows and dolphins might show limited creative responses to certain real-world stimuli, but no observer comparing them and us could fail to see the qualitative difference. Above all, we delight in creativity for its own sake. Some higher-order animals can be creative, but only to solve real-world problems. Picasso, on the other hand, painted because something in his mind made him. He loved it. He couldn’t be anything other than an artist. There will never be a chimp Picasso.

Picasso was responding to the real world when he painted. His art was not sourced in his mind; it was sourced in reality, which he then interpreted using his mind.

Art, then, is the visual form of human creativity. Artists respond to the real world. Da Vinci, Monet and Matisse all wrote about how Nature was their inspiration – reality, in other words. Moreover, they realised that they were compelled to interpret reality, not just to copy it. Artistic creativity is a natural, inevitable consequence of a conscious human mind using its mental model.

In addition, there is a causal link between sensitivity and creativity. The more sensitive an artist is, the more creative they are likely to be. Creativity is proportional to sensitivity. We have a cliché of the tortured artist not fitting in as they suffer in their garret, but in fact that suffering and isolation have nothing to do with creativity and everything to do with sensitive people being ostracised because they simply don’t fit in with social norms.

From this position, it’s only one step to observing that no AI is conscious (a state of affairs portrayed in my novels Beautiful Intelligence and No Grave For A Fox). AIs have no mental model in the sense that we do. They are art zombies, in fact. What they are creating is images.

Now, these images can and usually do have artistic merit. We as human beings are entitled to make that judgement because we live in a cultural milieu. I’ve been struck by the extraordinary verity, detail and even beauty of many Midjourney images. My Canadian friend Peter Hollinghurst is showing amazing three-dimensional colour images of seated dragons reading books, based on his own much earlier pencil drawings. Quite extraordinary to look at. He’s also just provided me with an image for my upcoming book I Am Taurus based on a photo of an object. Also extraordinary!

In my view then, the role of the artist in using AI software such as Midjourney is choice. Midjourney offers iterations – a number of image variations. A sufficiently experienced and talented artist can choose the “best” ones. That is an artistic act in my view. It could be argued that anyone can make such a choice, and that would be correct, but the artistic eye not only has to be rooted in true creativity, which not everyone has in usable amounts, but it also has to be trained. There is therefore a spectrum of curation available, depending on the artist’s gift.

The ethical dimensions are a different matter altogether – I won’t deal with them here. Suffice to say that creativity in all its forms is being stolen by impersonal corporations of one sort or another. I sit in the “AI is more bad than good, and has the potential to enslave us” camp. Not everyone is quite so melancholic however.

I expect interesting times. Personally, I think of Midjourney as in effect a particular type of brush, which doesn’t work like a real brush. When it begins working with moving images I may wish to get involved. I’ve long wanted to make films. I have the visual and editing skills, but not the money for a film crew. Perhaps Midjourney 2.0, the moving images version, will appeal to me.

A chat with Toby Frost

Today I’m talking with author Toby Frost about the themes behind his new novel The Imposters, recently reviewed here…

Toby: We’ve both written about robots and intelligent computers, but I don’t think either of us is solely trying to predict the future of artificial intelligence. Are stories about robots and AI always really about people?

Steve: I think that in the great majority of cases they are about people, for the same reason that stories about aliens are almost always really about people. I only got into writing AI novels because of my background in the field of consciousness and the evolution of the human mind, and because SF has dealt so badly in the past with the subject of AI. I had a particular reason for writing Beautiful Intelligence and No Grave For A Fox, which is that every AI novel and film I’ve ever read or seen assumes the separable existence of “something” inside our brains, which by implication can be subject to all sorts of speculative transformations in fiction. Uploading minds, uploading memories, downloading minds, and so on and so on… Peter Godfrey-Smith’s new book Metazoa has a few paragraphs specifically berating genre authors for their crap AI speculations, something I was so pleased to read I emailed him my congratulations. How do you approach speculation and tales about AIs in your novel? Are you teasing out interesting parts of the human psyche through the use of such concepts?

Toby: I should start by saying that I know absolutely nothing about computers, and less than that about AI. I think instead of looking at how AI would realistically work, I was interested in the way that robots and androids are portrayed in stories, and in trying to do something new with that. Many stories about androids are stories about slavery, and I wanted to talk about what might happen when you’ve accepted that these machines are people and have rights of their own. What happens next? Likewise, female androids often exist to make some kind of satirical point about male sexuality (or for straight-out titillation) but what if you take that element out, and just let them do their own (strange) thing?

I think that once you take out the idea that humans are “special” in some mystical way, more interesting options appear. For one thing, why would any robot want to be human? I can imagine Pinocchio thinking “Hey, I’m made of wood! I’m immortal! Why the heck would I want to be a real boy?” It also makes it harder to indicate a tipping point where the AI becomes “one of us.” William Gibson has some interesting AIs in the Sprawl novels: they can think and even have citizenship, but they’re very alien.

(Just as an aside, I wonder if people in science fiction stories ever read science fiction. They might learn something…)

Steve: I also know nothing about computers, which is why I only use Apple Macs! But seriously… I agree that it is presumed by most, if not all, genre authors and film directors that a “human” android is the ultimate end goal. I think this relates a lot to the standard human delusion of becoming a deity by creating a human being. It’s a bog-standard cliché mostly rooted in religious dogma. I wrote my last AI novel The Autist partly in an attempt to portray an AI future in which human-ness was not a goal, or, in one case, even a plausible option. You are so right to ask the question: why would any robot want to be human? A lot of what we are, including mentally, is based in our physical form, and that form does not have to be mimicked by androids. I have to state my different position on the question of human-ness, however. I think human beings are special, just not in a mystical way but in a way that is perfectly (and easily) explicable. This in itself however leads to some very interesting paths of genre speculation, which I think authors have hardly touched.

William Gibson, brilliant though he was and is, got it massively wrong when presenting two AIs becoming “conscious” just because of conjoined computing power. Thinking and having citizenship: yes. Consciousness: no. But, to be fair, portrayal of AI in fiction is not easy. We SF authors have the right to speculate, but in doing so we should in my opinion stay within reason as currently agreed. Anything else counts as fantasy. (I’m aware that this is not a terribly popular view these days, not least because of Clarke’s Third Law.) For me, it is useful and desirable in portraying AI to distinguish between speculation based in current knowledge and that based elsewhere.

I’ve finished reading The Imposters now, and I noted that your take on this is rooted in identity. Would that be a correct reading?

Toby: That’s an interesting idea, that robot SF is anchored to ideas of a deity. Presumably, by that logic, the closer one gets to human, the closer one gets to divine… And of course you’ve got Frankenstein usurping God in creating life. I wonder if, like the whole “female robot as male fantasy” angle, it’s one of those concepts that you need to jettison before you can start approaching the idea from a new angle. Helen says at one point that her personality is purely a construct, placed on top of a powerful computer to make it easier for her to fit in with humans. I’m not sure she’s truly conscious – but then, how do you prove it, and does it really matter?

For me, sometimes the SF serves the entertainment – although it’s still got to feel right. Helen, the android in The Imposters, is the way she is partly to provide an amusing and entertaining character, but I think she works within the criteria of the story. I don’t think for a moment that we’d get anything like her in the future.

“Identity” is a wide term, especially now, but I agree with you. Neither lead character quite fits in: Helen is synthetic, and Richard has gaps in his memory. I find something appealing and vaguely eerie about that. They’re both “faking it” in a way. I find it quite hard to sum it all up in a sentence, but it’s about the balance of being yourself and being someone who functions smoothly within society – which feels a bit pretentious given that it’s an action comedy. Do you write to explore an idea, like AI, or is it more a case of coming up with the story and then realising what it’s really about in the editing stage?

Steve: Proving Helen’s status as conscious or otherwise is impossible, but it would be possible to make an educated guess, given knowledge of the circumstances of her creation.

Helen works very well in your story, and actually I think that kind of character in the future is possible. We humans are so easy to confuse and delude, especially when it comes to anthropomorphism, at which we are painfully good. You could read The Imposters on the presumption that Helen is faking human behaviour really well. Alas, I think we will come up with such machines (or programmes) quite soon – deep personality fakes, you could say. It will be exceptionally difficult to interact with such programmes and not fall into the trap of believing they are human. Richard does tell himself a few times what Helen is, but does he really believe his own conclusion? I’m not so sure…

My AI novels always have a strong theme first, then the characters and a few basic ideas arrive, then the story based on those characters. In Beautiful Intelligence & No Grave For A Fox it was about the possibility of AC (Artificial Consciousness), while The Autist takes the opposite view and posits a non-conscious AI dystopia.

Toby: One of the things this conversation keeps reminding me is how difficult it is to work out what an AI would actually want. The film Ex Machina is largely about a robot trying to escape from a room – but it would have to be programmed to want that, or else to have developed the wish to do so, perhaps by extrapolating the existing data. If I remember rightly, Skynet from the Terminator films had decided that it wasn’t safe while humans could shut it down, and the only way to be really sure was to destroy all humans. Drastic, but logical.

I’m glad that Helen works: even a story that’s tongue-in-cheek has to ring true in some way, or to work within its own rules. I think comedy becomes weaker when it misses that. Helen is sincere, but she is also faking being human. Richard definitely does think of Helen as human at points. His feelings towards her are probably rather a mess! I suspect this humanising of AI is the opposite of the uncanny valley principle: if it looks like a person, it surely must be, right? I wonder if that’s linked to the instinct many people have to give others the benefit of the doubt, and to try to explain away what looks like madness or outright villainy. A sort of false optimism.

I think you write in a different way to me. The characters, story and underlying ideas are very intermeshed in my mind. During 2020, I wrote a novel to take my mind off the heatwave, the pandemic and politics. It ended up with a mob trying to storm the planet’s parliament! That seemed like the natural end of the story (and it was) but I’m sure it was a response to the American election, too. I think I probably start with a gut feeling that something would be fun to write about, and it spirals off from there. I do find the creative process both very interesting and hard to explain!

Steve: Yes, I think it’s easy for us humans to forget that in most cases we’re not using reason, let alone logic, at all, though we do convince ourselves that we are being reasonable… We have this sometimes cute, sometimes fatal urge to anthropomorphise everything, and that applies to genre work as much as anything else, usually to the detriment of the novels in question. Of course, the problem with being realistic about AI is the same problem as being realistic about aliens. Where do you draw the line between readable/understandable and accurate/true? An AI could in theory be impossible to understand. The famous question, “What is it like to be a bat?” could be replaced with, “What is it like to be an AI?”

Your novel is absolutely consistent, which is admirable, and works well. The Uncanny Valley is a strange phenomenon, one people are still trying to understand. In a way it is the antidote to anthropomorphism, in that it shows us that when something is on the border between definitely human and definitely not human there is something exceedingly peculiar waiting for us. I recently wrote a novel based on the Uncanny Valley which perhaps illustrates how I approach writing. One thing to take into account is that I almost never read fiction, it’s 99% non-fiction. But I’ve always been like this. Theme first, then people in that theme, then all the rest; and they do intermesh later on in development. I’ve read a lot about the Uncanny Valley and human evolution, so it was a natural theme for me. Only later did the three main characters come along, with their respective viewpoints, flaws and idiosyncrasies. My writing style could be summarised as plotter and pantser, in that my plots are pretty tight right from the beginning, but within that there is room for pantsing – the details. Even my crazy novel Hairy London, which I wrote off the top of my head without self-editing, had a basic theme underlying the surreality.

Have you got anything else coming up in your Helen/Richard milieu?

Toby: I think there’s a line in The Imposters where Helen says that the question she always wonders is “What’s it like to be you?”, which really can’t be answered. But I suppose that’s what fiction tries to do: to see what other people would do. There’s something about the uncanny valley that I find fascinating, which feeds back into that idea of robots as imitation people and the difficulty of passing yourself off as normal.

I read quite a lot of non-fiction, too. Part of it is time and lifestyle: I used to read a lot on the train to work, but I don’t use the train half as much now. I must admit I’ve never quite figured out that whole plotter/pantser thing: it’s never felt very relevant to how I write. I turn ideas over in my mind in quite a lot of detail, but I often don’t know exactly how they’re going to fit together until I start writing.

I would love to write more about Richard and Helen. I think a second book would be less about being a robot and more about different aspects of their setting: I’d quite like to write a story about how you end a war and start a peace, for instance. I think a war story can be quite clean-cut, with clear heroes and villains, but a spy story is often more nuanced and “grey”. But I’ve got a lot of other things going on right now, not least the third book of my Renaissance fantasy trilogy, which I’m hoping to release later this year. Lots to do! What have you got planned next?

Steve: At the moment, very little. I planned to have a lengthy writing break in 2020, but then Covid struck… Last year was a bit of a nightmare for me, and this year is about personal consolidation. In all honesty I’m not sure what comes next. Thanks ever so much for this conversation, it’s been really interesting. I especially like your line about humans presuming that an AI would want to be like them. An excellent point!