Five Upcoming Mental Health Crises 2/5
This week I’m going to post a series of five pieces about the connection between online life – and social media in particular – and poor mental health. In recent years public awareness of the damage social media is doing to our mental health, and to that of young people in particular, has sharpened. My pieces explore some possible consequences of the way giant, unaccountable corporations are exploiting human foibles for their own gain. I’m far from the only person to think that this sustained, relentless psychological attack will cause mental health crises in the not-too-distant future, but perhaps my thoughts on the issue come from a slightly different perspective.
It’s commonly thought that AIs are value-less, or, at least, value-neutral. But this is not the case, as has been demonstrated recently by such commentators as Jamie Bartlett in a pair of television programmes, which were expanded into his book The People Vs Tech: How the internet is killing democracy. AIs do in fact have values, but they are old values, retrogressive values, because the technological systems which support them are inherently conservative.
After narcissism (yesterday’s topic), the greatest danger we face is idiocy.
Early computer scientists thought they could design top-down AI systems, because they assumed that faculties such as intellectual ability and reason were susceptible to design. But it turned out that all the ‘simple,’ ‘easy’ and ‘obvious’ things human beings do – reaching out to choose a banana from a fruit bowl, for instance – are extremely complex. And so a new method has more recently been used, bottom-up AI design, which has latterly, with Orwellian bleakness, been named Big Data. This method uses heuristic design and so-called neural networks to facilitate deep learning. It is because such techniques are so powerful that the present AI revolution is happening.
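To make the top-down/bottom-up distinction concrete, here is a minimal sketch of the bottom-up approach: a single artificial neuron that is never given a rule, only examples, and adjusts its own weights until its behaviour matches the data. This is an illustrative toy, not anyone’s production system – real neural networks stack many thousands of such units – but the principle of learning from data rather than being designed is the same.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a single neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # Predict: fire (1) if the weighted sum crosses the threshold.
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            # Nudge the weights towards the correct answer.
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# The neuron is never told the rule for logical AND; it infers it from data.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)
predictions = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
               for (x1, x2), _ in and_examples]
print(predictions)  # → [0, 0, 0, 1], matching the AND truth table
```

No human wrote the rule the trained neuron embodies; it emerged from the examples – which is exactly why the values baked into the training data and the training process matter so much.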
However, all AIs so far created are expert systems, limited in function. There is no general intelligence AI – no AGI – yet. But despite the obvious implications for a thinking humanity, the tech corporations – unrestrained by legal or ethical control, with no checks and balances beyond the occasional vaguely concerned CEO – are pressing ahead with AGI development regardless. This is the insanity of the unregulated West.
Future AIs will have values, but those values won’t be humane, caring or liberal. They will be conservative: capitalist, patriarchal, hierarchical, sequential, logical, analytical. The reason for this is that the individuals and systems creating AGIs have values themselves, values which tend to a greater (occasionally to a lesser) extent towards conservatism. Corporations are masculine places. Corporations are capitalist places. Corporations wish to expand regardless of the consequences for the environment or humanity. Corporations and the legal environment they exist in are inherently conservative. So are the AGIs they will create.
So, what social and psychological consequences might AGIs have? With an expert medical AI, for instance, the benefits are obvious and have been demonstrated for a few years now: diagnostic accuracy better than that of experienced doctors – an incredible result. AIs can drive cars reasonably well, and in a few years will be driving them very well. But an AGI is a different thing entirely. An AGI will act against thinking human beings by thinking for us.
We are already seeing the mental health implications of such systems, however, in the way the internet and technology more generally are being used now. Implicit in so much present use is that the thinking is done for us. Google searches bring up the results Google wants you to see, or which its algorithms choose. Devices such as the Amazon Echo or Google Home Mini are advertised as helpers, but their function is to do the thinking for you: to remember, to choose, to prepare.
This use of technology, especially when AGIs appear, will have a profound effect on our minds. Human beings should be thinking for themselves – they must. We should be independent, autonomous, flexible, aware. We should not be relying on vast, anonymous, unrestricted, impersonal intelligences designed by bloated technology corporations. For that is what those corporations want. They want individuals to respond to artificially created desires, and only via their products. They want to do the thinking for you, because that will make you their servant, if not their slave. They want you to be an idiot.
Idiocy is our future if we don’t take care. In my as-yet-unpublished novel The Autist (which should be out in 2019 from Infinity Plus Books) I use some of the above ideas to paint a gloomy picture of humanity’s increasing dependency on callous, unregulated, automatic systems. It used to be the case that in dystopian SF our technological masters were imagined as robots, Terminator style. The truth is, those masters will be unregulated algorithms, designed to work their way into the human psyche using brutal psychological techniques. That is a dystopia already on the horizon. We could, in theory, stop it. My guess is we won’t.