Nick Bostrom described a thought experiment in 2003 about a paperclip maximiser: an artificial general intelligence whose utility function is to maximise paperclip production. The paperclip maximiser would lack the social values of humans, and so would continue to make paperclips even as it destroyed humanity and turned the entire Earth into a huge pile of paperclips.
Academic thought experiments are one thing, but surely the musings of intellectuals are irrelevant to the real problems we face today? The field of artificial intelligence ethics has been largely dismissed as esoteric, primarily because it was framed as relevant only once machines achieved the miracle of ‘general intelligence’.
But let us turn away from considering the possibilities of the future to the reality of today. Today we have huge sprawling data centres that consume vast volumes of data collected from people like you and me. The software in these data centres is designed essentially for one purpose: to keep your attention on the platform for as long as possible. By capturing your attention they capture advertising revenue.
Just like the paperclip maximiser, the neural network deciding what to show you online has only one goal: to keep you engaged. It has no malice towards you; in fact, it has no real awareness of any kind. It is simply a neural network whose world is constrained to clicks and likes, and it is rewarded for maximising your engagement on the platform.
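To make that concrete, here is a minimal sketch, in Python, of what such an objective looks like. Every name here (Post, engagement_score, rank_feed) is invented for illustration, and real ranking systems are vastly more complex, but the shape of the objective is the point: the score is engagement, and nothing else.

```python
# A toy sketch of an engagement-maximising recommender.
# All names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # model's estimate of click probability
    predicted_watch_time: float  # model's estimate of seconds watched

def engagement_score(post: Post) -> float:
    """The entire 'utility function': engagement, and nothing else.

    Note what is absent: truthfulness, user well-being, social harm.
    The optimiser cannot value what is not in its objective.
    """
    return post.predicted_clicks + 0.01 * post.predicted_watch_time

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Show the most engaging posts first, regardless of their content.
    return sorted(candidates, key=engagement_score, reverse=True)
```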
Human beings are motivated primarily by emotion. The drive to action might be informed or even inhibited by reason, but for the overwhelming majority of people motivation is emotional. The neural networks of social media companies have learned to exploit us by showing us things that evoke an emotional response, keeping us engaged and involved.
The neural network of a social media platform does not care whether the post it shares is true, or whether people’s response is positive or negative. It cares only about the level of engagement. Consequently the posts that go viral are the ones that most effectively trigger our individual emotional responses. Ever wondered why there are so many YouTube videos featuring kittens?
Never before has the information you consume been so heavily determined by your pre-existing biases. And so whatever bias you might have is reinforced and amplified. Everything you read online only serves to confirm your point of view. The groups you join agree with your point of view. And the posts in those groups do better when they cause engagement, whether or not they are actually positive.
Sadly I see this first hand in the forums I manage, even though ‘Rationalist’ is in the name of the group. Posts which cause anger and division flourish, while those trying to create unity and community flounder. This isn’t because the people are without compassion, but because the algorithm itself preys on division. More broadly, we have seen the rise of the anti-vax movement and flat-Earth belief almost exclusively thanks to the power of these Engagement Maximisers.
The argument from the platform operators would be that they are only giving people what they want to see. The problem is that humans are learning neural networks as well. The information we are shown trains our own natural neural networks; that is why advertisers pay money, after all: to influence people. The combination of an Engagement Maximiser and learning humans is a positive feedback cycle that drives people toward the extremes. This is the phenomenon we experience today on social media.
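Here is a deliberately crude simulation of that feedback cycle. All of the modelling choices are my own invented assumptions: a user’s ‘belief’ is a single number between -1 and +1, the recommender predicts engagement as agreeable-plus-emotionally-charged, and the user’s belief drifts a little towards whatever is shown. Even this toy version ratchets towards an extreme.

```python
# A crude simulation of the Engagement Maximiser feedback loop.
# Every modelling choice below is an illustrative assumption.

import math
import random

random.seed(1)

def predicted_engagement(post: float, belief: float) -> float:
    # Agreeable content engages (similarity term), and emotionally charged
    # content engages (intensity term). Their product peaks at a post
    # slightly MORE extreme than the user's current belief.
    similarity = math.exp(-((post - belief) ** 2) / 0.18)
    intensity = 1.0 + 0.5 * abs(post)
    return similarity * intensity

belief = 0.1  # a user starting with a mild opinion
for step in range(1, 501):
    candidates = [random.uniform(-1.0, 1.0) for _ in range(20)]
    shown = max(candidates, key=lambda p: predicted_engagement(p, belief))
    belief += 0.1 * (shown - belief)  # the human neural network learns too
    if step % 100 == 0:
        print(f"step {step}: belief = {belief:+.2f}")
```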
We have seen divisions even within progressives, with some calling for increasingly restrictive censorship of social media. Advertisers have pushed YouTube to restrict advertising on controversial material, affecting the livelihoods of creators who cover issues like homosexuality and politics. Increasingly, people with different views have become so polarised that they refuse to hold discourse. The effect has not been to eradicate hate, but rather to push it out to dark corners where it has been able to fester.
A week ago that festering boil exploded in Christchurch. The casualties were not virtual. From all over New Zealand we saw a new kind of viral communication, this time driven not by algorithms but by genuine human empathy for those who have had their loved ones taken from them. How can any human fail to have compassion for the victims of such a senseless crime?
There have, of course, been understandable calls for controls on social media. How can we experience what we have without demanding the causes be addressed?
But perhaps it is necessary to see things more broadly than just the tip of the iceberg. We need to examine whether we really should be giving the Engagement Maximisers the power to influence us. With measles outbreaks around the world, perhaps we need to be looking at how the Engagement Maximisers have empowered small, once-inconsequential fringe groups to grow their followings.
Last year I gave a talk to the Unitarians about how we need to be careful in how we teach our AI systems. We need to ensure that human values are incorporated into these neural networks by making their utility functions include human well-being and evidence as well as engagement. Ultimately this means ensuring artificial intelligence is developed so that factors other than engagement are included in its utility functions. This applies far beyond social media.
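Structurally, the change is small even if the research problem is not. A sketch of what such a utility function might look like follows; the weights and the veracity and well-being estimators are placeholders of my own (estimating them honestly is itself a hard, open problem), but the point stands: an optimiser cannot value what is absent from its objective.

```python
# A sketch of a utility function that trades engagement off against
# other human values. All names and weights are illustrative.

from dataclasses import dataclass

@dataclass
class Candidate:
    predicted_engagement: float  # 0..1, as before
    predicted_veracity: float    # 0..1, e.g. from fact-checking signals
    predicted_wellbeing: float   # -1..1, estimated effect on the user

def utility(c: Candidate,
            w_engage: float = 1.0,
            w_truth: float = 2.0,
            w_wellbeing: float = 2.0) -> float:
    """Score a post by more than engagement alone.

    The weights encode a value judgement that truth and well-being
    matter more than a marginal click; choosing them is a human
    decision, not a technical one.
    """
    return (w_engage * c.predicted_engagement
            + w_truth * c.predicted_veracity
            + w_wellbeing * c.predicted_wellbeing)
```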
Artificial intelligence is now being used to guide judges in sentencing. It is being used to decide who gets a job. It is being used for DNA analysis. We are really only on the cusp of the artificial intelligence revolution, yet already questions about ethics, and about how we incorporate human values into our machines, are of vital importance for the health of society.