
Relating to AI – keeping it in the family?

Dr Amanda Rees

In 1909, the author E M Forster published a story called ‘The Machine Stops’ in the Oxford and Cambridge Review. In it, he describes a world in which humans live isolated lives, their every physical need provided for by the omniscient and omnipotent Machine. Human contact is mediated through video screens: physical comfort comes from possession of the Book of the Machine, held reverently, bowed to, kissed. As the story unfolds, it becomes plain that in this globalised, atomised future world, where things are brought to people rather than people travelling to do things, humanity has become profoundly alienated from the physical and conceptual products of its own labour. People do not recognise the Machine as something humans have created, nor do they seek out a connection with the divine: there is, according to the heroine, ‘no such thing as religion left. All the fear and superstition that existed once have been destroyed by the Machine’.

It’s not clear from the story what the Machine gets out of all this – there’s no sense, for example, of humans being farmed as batteries to power it, as in The Matrix franchise, nor is there any indication that the Machine actually wants to be worshipped. But it’s interesting to think about this story in the wider context of debates on the morality and ethics of AI, as well as of the way that human agency – the capacity to make choices and take action on your own behalf – is understood.

Forster’s Machine clearly aims to create a near-universal, beneficent framework of support and sustenance in which all of humanity can share equally, with privacy and (at least initially) autonomy respected – key principles identified in surveys of AI ethics guidance. Trouble arises as a result of the unintended consequences of these principles. It becomes very clear over the course of the story that the Machine is in fact progressively destroying humanity in two main ways.

The first is forced homogeneity: the experience of diversity and difference is obliterated in favour of the imposed convergence necessitated by equality of access. The second is that, by providing for every physical human need, the Machine is actually denying its people the opportunity for self-realisation through struggle, as well as the pain and pleasure of embodied, sensory experience (‘by the sweat of your brow you will eat…’ Genesis 3:19). In many ways, the Machine is acting as a rather poor – or at any rate, very over-protective (helicopter?) – parent, unwilling to grant its children the capacity to learn from their mistakes, and incapable of seeing its offspring as individuals, unique in their own right and in their own experiences. Human agency withers away as every individual need is instantly met.

But if the Machine, in meeting its obligations, is actually harming its human population, can this be explained or contextualised as a prior moral failing on the part of its human creators? Has the Machine itself been badly taught? There is a long history, after all, of denying the significance of diversity by championing a particular (white, male, European) version of humanity as the pinnacle of human evolution – it is certainly the experiences of this group that were the model for the original Enlightenment notion of human agency, and it is this group that is the model for humanity in this story. But what about the responsibility of the creator to the created? Dr Frankenstein’s child became a monster because his maker fundamentally failed to care for and instruct his creation adequately. Did the builders of Forster’s Machine also fail to give it satisfactory training? Were they unable to conceive of humanity appearing in different colours and flavours? How might this failing be exacerbated, given the capacity for unintended consequences inherent in every technological innovation? And how can we build in opportunities for self-correction?

Parent/child, creator/created, master/servant – all of these ways of imagining the relationships between humans and human-created artificial intelligence imply relationships of dominance and control – of the effort by one set of agents to control the agency of another. Are there other ways of thinking about this? What happens, for example, within families when children grow up? How do relationships of responsibility and care evolve over time? What happens if participants in the relationship don’t, can’t, or won’t change? The mechanically embodied Machine has components that need regular attention. At the story’s end, it breaks down through lack of maintenance: with adequate care, it could have functioned forever – but it would have done so without either maturing or growing old.

Can learning – maturing – be built into AI? Can machine learning go beyond identifying patterns in order to make predictions or suggestions, or generalising across patterns, to enable self-programming and self-correction? Evolutionary computation was first proposed in the 1950s – could it, in the future, enable AI to grow, to recognise and correct its own errors? Critical here might be the role of gaming and play. Play, after all, is considered by behavioural biologists to be one of the most crucial arenas through which morality – ethics – has evolved, and the experience of Gamergate certainly suggests that issues of inclusivity and community should be central to the very serious business of play.
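To give a flavour of what ‘evolutionary computation’ means in practice, here is a minimal, purely illustrative sketch in Python – not a description of any real AI system. A population of candidate solutions is repeatedly scored against a goal, the fittest survive, and random mutation occasionally corrects remaining errors; over generations, the population ‘learns’. The target pattern, population size and mutation rate are all arbitrary choices made for the sake of the example.

import random

# Illustrative only: evolve a bitstring towards a fixed target,
# standing in for an AI that "recognises and corrects its own errors".
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # arbitrary goal
POP_SIZE = 50          # candidate solutions per generation
MUTATION_RATE = 0.05   # chance that any single bit flips
GENERATIONS = 200

def fitness(candidate):
    # Count how many bits already match the target: mismatches are "errors".
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Each bit may flip: this is where errors are introduced and,
    # occasionally, corrected.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

def evolve():
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) == len(TARGET):
            return generation, best  # every error corrected
        # Selection: the fitter half survives and breeds mutated copies.
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return GENERATIONS, population[0]

if __name__ == "__main__":
    generation, solution = evolve()
    print(f"Finished at generation {generation}: {solution}")

No individual in this toy system ‘knows’ the goal; error-correction emerges from variation and selection – which is precisely why the question of who sets the fitness function, and what it rewards, matters so much.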

AI as a toddler? Or as a teenager, mistakenly convinced of their own maturity? Also in the 1950s, Isaac Asimov wrote a short story (‘Lenny’) about a robot with a damaged positronic brain. Unable to function, and with the mental capacity of a baby, the robot is to be destroyed by its owners – until the robopsychologist Susan Calvin realises both that it can be taught, and that through teaching it she can herself learn how to teach other robots. At the heart of this is the question of reflexivity: being able to critically assess one’s own knowledge and assumptions, and to adjust or add to them as necessary – a cognitive process that is completely intertwined with emotion, creativity, maturity and morality.

Forster’s story ends with the protagonists emerging from the Machine’s womb back onto the surface of the Earth – a return to nature that implies both the acceptance of physical decay and death, and the promise of rebirth through the awesome sight of the night sky. Experiencing the human life-cycle is central to the human apprehension of wisdom. Scientists at MIT and elsewhere are trying to use models of childhood learning to improve AI cognition. If considerations of emotional learning, or the long-term benefits of playing fairly, could be added to these models, what would that do to our understanding of AI ethics? Or to our sense of responsibility for the ongoing care of our creations?

Further Reading

Marc Bekoff and Jessica Pierce (2009) Wild Justice: The Moral Lives of Animals, Chicago: University of Chicago Press

Ruha Benjamin (2019) Race After Technology: Abolitionist Tools for the New Jim Code, Cambridge: Polity

Jonathan Merritt (2017) ‘Are you there, God? It’s I, robot’, The Atlantic, February 3

Amanda Rees and Charlotte Sleigh (2020) Human, London: Reaktion Books

Oliver Sacks (2019) ‘The Machine Stops’, The New Yorker, February 4

