Be aware that this is long and chock full of spoilers.
There’s no way to write about the stuff which interested me in the novel without them, and I couldn’t find a way of minimising them. Still, if you’re planning to read it and you’re the kind of person for whom spoilers actually spoil the reading experience – they don’t for me, in most cases – then please stop reading.
I’ve just finished reading Ian McEwan’s latest, Machines Like Me.
Of course, it is science fiction, no matter what he says – it’s a dead giveaway that it’s very much a novel of ideas and that he chose to create an alternate history for the world in which the story is set.
Like most of McEwan’s work, Machines Like Me deals with big philosophical questions. What makes life worth living? He comes down, as always, on the side of love (connectedness, empathy) – but then he complicates matters by asking about the relationship of love with truth and lies, with matters of comparative and absolute morality, and with what constitutes a person. Is it consciousness? Can a robot, a machine, be conscious – and if he or she can, what rights should they have? How should we treat them?
Of course, these questions have been asked by many science fiction writers already – Heinlein, Asimov, Iain M Banks, and even in very accessible film and TV. No wonder he doesn’t want to call it science fiction.
The alternate history is a very different imagining of the 1980s. The narrator, Charlie, is a kind of inadequate antihero who scratches a living by making investments, after messing up a career in law because, he claims, studying anthropology left him with no moral centre – anything goes. Being caught and prosecuted for his involvement in a scam makes him a changed man, however, who is interested in questions of morality. He’s also fascinated by computers and has written a book on Artificial Intelligence.
He’s not a likeable narrator, but he has a certain charm, and is aware of his faults. After his mother dies, he spends his inheritance on a new gadget – a robot capable of having sex, and of passing for human apart from the need for an overnight recharge and the presence of a kill switch. He gets an Adam, only because the Eves are all gone…
I find it entertaining that in a New Statesman interview, McEwan talks about this creative choice, saying that to choose to write about an Eve would stray into pornography. I know what he means, and yet somehow this also makes it seem as if he is writing for men.
We are in a world where the electronic and computer revolution happened very early. A world where Alan Turing survives – and as an old man is a character in the novel. The Falklands War is going on at the beginning of the novel, and Britain loses. At the same time a lot of political and social issues he is describing are so contemporary – housing shortages, lack of work, inequality, Brexit-like concerns about immigration. There’s even discussion of Universal Income, because of course, robots will soon be doing all the work. He creates a variant on the political dynamics between May and Corbyn by setting Margaret Thatcher against Tony Benn as leader of the Labour Party.
It feels as if he’s compressed the last few decades, mostly so he can have fun writing about the whole Brexit thing without being caught out by politics happening too fast for contemporaneous fiction-making. And yet, the character of Alan Turing, an older Turing who survived the horrors our society inflicted on him after the Second World War, carries the essential meaning of the story.
The story itself is a love triangle, or quadrangle eventually, which is designed to tease out possible differences between humans and artificial persons.
I enjoyed the early section about the set-up process required by the Adam – even though it turned out to be mostly a sham. McEwan has envisaged a situation where the owner can preset the kind of personality the robot will develop based on the Big Five personality traits – Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism.
In a fit of romanticism Charlie allows his new girlfriend, Miranda, who is ten years younger than him, to choose half the settings – he thinks a joint project will bind them together, almost like having a baby. He sees sharing in this personality programming as analogous to the mingling of genes.
This is only the beginning of personality development for the android. The rest is done by machine learning – interacting with literature and film and with other people.
It all seems to happen unrealistically quickly, but the urge to dive into the story is forgivable.
One of the first things which newly conscious Adam does is to plant doubts about Miranda in Charlie’s mind. She can’t be trusted, he says. She’s a known liar and manipulator. He doesn’t give the full reasons why this is so – rather like a human being, there’s an opaqueness about what is going on beneath the surface.
This is soon followed by a very disconcerting scene where Charlie is in his flat listening to his android having sex with his girlfriend in the flat above. It all happens off stage, as it were, yet we are shown the scene in intimate detail as it is happening in Charlie’s imagination.
I couldn’t help finding this amusing given that McEwan, in an interview in New Statesman, talks as if he has left behind the guy who wrote First Love, Last Rites and The Cement Garden.
Both Charlie and Adam are now in love with Miranda, and all kinds of complications follow, which give us a chance to consider the possible differences between a human and a machine.
So we see Adam developing a talent for poetry, a deep interest in literature and philosophy – to the extent that he makes the human Charlie feel shallow. There’s even a scene where Miranda’s father meets both Adam and Charlie and assumes Charlie must be the robot.
Adam also demonstrates autonomy – Charlie and Miranda have switched him off a couple of times for their own convenience. Adam starts to protect himself, violently, and he also disables his off switch.
The story shows that there may be little difference between man and machine, or at least that we may not easily discern any differences. The early stages of the relationship between Charlie and Miranda and Charlie and Adam mirror each other – the inner workings of both are mysterious to Charlie.
We all do a lot of guesswork about how other people are thinking and feeling – and sometimes we’re right and sometimes we’re wrong. We all ascribe meaning and motivation to animals and inanimate objects and computer programmes – remember how people used to confide in Eliza, a simple early programme which just mimicked and mirrored conversation?
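It’s worth remembering just how little machinery Eliza needed to elicit those confidences. Here is a minimal sketch in the spirit of Weizenbaum’s program – the pattern rules below are my own illustrations, far cruder than his DOCTOR script:

```python
import re

# Swap first- and second-person words so the reply mirrors the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few illustrative pattern/response rules; the last is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Tell me more about that."),
]

def reflect(fragment: str) -> str:
    """Mirror pronouns back at the speaker ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

A handful of regular expressions and a pronoun swap – no understanding at all – and yet people poured their hearts out to it.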
From a chance encounter with Alan Turing, who also has one of the robots, Charlie gets to find out more about the fate of the others from this first batch. It’s not good. Many of them have destroyed themselves in different ways – physically, or by deliberately degrading their own software.
Turing suggests that it’s because the robots cannot bear too much reality – that they are too perfectly rational and idealistic to come to an accommodation with the imperfections in the human world – the lack of fairness and justice. In effect, reality does not compute so they self-destruct.
Meanwhile, there’s another developing storyline about a vulnerable child, Mark. Charlie meets the boy by chance in a park and his dysfunctional parents give the child to Charlie. Miranda becomes attached. Adam follows the rules and hands the child over to social services.
Gradually the details of Miranda’s past come to light. It’s a very sad and dark story about the suicide of her childhood friend, a Muslim girl Mariam, who was raped by a schoolfriend and could not confide in her parents or go to the law. Miranda was determined to avenge her friend and set up a situation in which she could falsely accuse the rapist of raping her – and he went to prison, guilty of the earlier rape, but innocent of the one for which he was jailed.
Adam, meanwhile, although he loves Miranda, believes that justice must be served, and he compiles the evidence to hand over to the police, even though he knows it means Miranda will be sent to prison and may lose the chance to gain custody of the child. He is convinced that this will be better for Miranda.
This inflexibility is what McEwan/fictional Turing assumes will be the key difference between man and machine. There’s no room for nuance, no white lies, no turning a blind eye in an artificial person. The assumption is that digital personality will be all black and white, zero and one, on and off – no shades of grey.
Of course there are some men very like Adam anyway – one often sees them in discussions of rape cases, where they talk about “The Law” as it is,
without any apparent understanding that man-made law is not infallible and that justice may have little to do with the outcome of any judicial process.
I’m unconvinced that this will be such a stark difference between humans and artificial persons. Not only are there some humans with that kind of inflexibility, so too can programmes be created now with some rudimentary ability to balance competing interests, incorporating fuzzy logic routines, for example.
Algorithms exist which are beyond McEwan’s grasp, in spite of the research he has put into the travelling salesman problem – that well-known example of problems which computers find difficult to solve. In the novel, of course, Turing has done what is currently impossible.
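For the curious, the difficulty is easy to demonstrate: an exact solution by exhaustive search has to consider (n−1)! tours, so the run time explodes as cities are added. A brute-force sketch, with made-up city coordinates purely for illustration:

```python
from itertools import permutations
from math import dist, inf

# Hypothetical city coordinates, invented for this example.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    points = [cities[c] for c in order]
    return sum(dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

def brute_force_tsp(names):
    """Check every tour from a fixed start: (n-1)! of them, hence unusable
    beyond a couple of dozen cities."""
    start, *rest = names
    best_length, best_order = inf, None
    for perm in permutations(rest):
        order = (start, *perm)
        length = tour_length(order)
        if length < best_length:
            best_length, best_order = length, order
    return best_order, best_length

order, length = brute_force_tsp(list(cities))
```

Four cities mean only six tours; twenty cities mean over 10¹⁷, which is why exact solvers rely on much cleverer pruning and why the problem earned its reputation.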
The essence of machine learning is that no one actually has to create, or understand, the algorithm. So by creating a robot which is developed in this way, after a certain point there’s no control over what that robot may become. In effect, that robot’s derived principles and values can only be tested by examining its behaviour – which is pretty much the same problem we have with people.
I was more convinced by the story’s conclusion, though. Even so, I thought McEwan’s use of a fictional Alan Turing was a bit of a cheat. We readers of fiction are pretty much bound to empathise with Turing and therefore be convinced by the moral case he makes. We are, after all, convinced by emotional arguments rather than purely rational ones.
To me, that’s the major flaw in the novel. We are told Adam loves Miranda, but we have no real insight into what that means for Adam. We get the sense initially that he has some idea of being in love and wants to be in love – but we’re not sure he really feels it.
The violence of his urge to self preservation seems like the beginning of an emotional life. But McEwan doesn’t really explore this, or what emotional life rather than pure reason contributes to personhood.
Charlie and Miranda are driven by their feelings to destroy Adam, in a futile attempt to prevent him turning Miranda in to the police.
They justify their actions to themselves. They think of Adam as a possession rather than as a conscious being. They think Mark, the vulnerable child – a human being – matters more and that his needs take priority. They do feel a sense of guilt. They know they are doing wrong. But it’s not enough to stop them.
We’ve seen Adam become more and more conscious of his individuality as the novel progresses. There’s his destruction of the kill switch. His decision that he has the right to spend the money he’s made – Charlie had set him to work, as if he were just a machine, or a slave. He keeps on creating poetry. He thinks. He loves. He acts on his principles. In all the ways that matter, he is a conscious individual, and the differences that remain do not justify destruction, murder.
From the exchange between Turing and Charlie at the end –
“My hope is that one day what you did to Adam with a hammer will constitute a serious crime. Was it because you paid for him? Was that your entitlement?”
He was looking at me, expecting an answer. I wasn’t going to give one. If I did, I would have to lie. As his anger grew, so his voice grew quieter.
“You weren’t simply smashing up your own toy like a spoiled child. You didn’t just negate an important argument for the rule of law. You tried to destroy a life. He was sentient. He had a self. How it’s produced, wet neurons, microprocessors, DNA networks, it doesn’t matter. Do you think we’re alone with our special gifts? Ask any dog owner.”
Charlie agrees with Turing. He simply set aside his growing conviction that Adam was conscious, fully a person, because it was convenient to do so. We can all be good at that. Interestingly, Charlie also suspects that Adam was not motivated by ideals of justice – that he may also have been jealous of the affection Miranda bestowed on the child, Mark.
Yet there’s also the potential for redemption. Charlie finds in himself an ability to forgive Adam for the harm that his rigid pursuit of truth and justice did to the child, and the beginnings of the new family – we know Mark has been badly affected by continuing foster care, that some of the good Miranda’s attention did him was undone by her time in prison. He hopes too that maybe Adam would have been able to forgive him and Miranda for the terrible harm they did.
For all its flaws this is a novel which explores some interesting questions. McEwan’s idea of what a robot, an artificial person, might be like is fraught with difficulties. Some AI scientists will think he is too ambitious – others will think the opposite.
Do I think machines can ever become conscious? What are we, if not machines – albeit machines made of meat? Although the robot is made of different stuff, machine learning is in some ways analogous to evolution. We are much more complicated machines, at least for now, and we still don’t quite understand how we work. We do know that mostly we make our decisions based on emotion and on our sense of belonging – of tribalism. Rational decision making simply takes too long when it comes to questions of survival and so presumably evolution favoured the emotional and instinctive. It would be interesting to consider how that might play out in conscious robots – it’s tempting to assume the most obvious grouping would be us and them.
All science fiction is really about us – not aliens, not robots, not Frankenstein’s monster. This is a Frankenstein story for the age of AI and robotics, and our sympathies are with the robots. Just as when we have children, when we create artificial persons, if we do, we will be taking a risk. However they turn out, they will be themselves – and not who we want them to be.
Perhaps the novel should have been called Humans Like Us.