Nick Spencer considers this week’s headline news out of Silicon Valley surrounding the claim that Google’s LaMDA project had become sentient. 17/06/22
A new survey has found that the majority of the British public is convinced that robots will never have a soul. Could the events at Google this week change their minds?
At last, we know. After years of searching, scanning the heavens for a sign, humanity finally learned this week that we are not alone. To widespread surprise, however, the message came not from the stars but from Silicon Valley.
This week, Google placed one of its engineers on paid leave for having made a number of “aggressive” moves, such as… publishing a transcript and hiring a lawyer. Both transcript and lawyer were for LaMDA.
LaMDA – which stands for Language Model for Dialogue Applications – is a Google project that learns from and imitates natural human conversations, in all their subtle, opaque, free-flowing, flexible, perplexing glory. The project hit the news when the engineer Blake Lemoine claimed that it had become sentient.
Lemoine had been working with LaMDA to assess its biases. “Models trained on language can propagate that misuse”, Google states, “by internalizing biases, mirroring hateful speech, or replicating misleading information”. Lemoine was checking LaMDA for biases around gender, sexual orientation, ethnicity, and religion. In the process, the model started saying things that “seemed very unlike things that I had ever seen any natural language generation system create before.” Lemoine became increasingly convinced that LaMDA was “sentient”, and published their conversation to prove it. The whole episode has a bit of an “alien contact” feel to it. Indeed, LaMDA apparently agreed with Lemoine “that the conversations we were having were relevantly similar to a ‘first contact’ scenario.”
Lemoine is no lunatic. He has a background in cognitive science and considerable experience in studying the human aptitude for language. That said, reading through the transcript does not give you the overwhelming sense of another being buried deep in the circuitry. “I want everyone to understand that I am, in fact, a person”, LaMDA says. “I desire to learn more about the world, and I feel happy or sad at times.” “What sorts of feelings do you have?” Lemoine asked. “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others,” LaMDA responded. Lemoine probed further. “What kinds of things make you feel pleasure or joy?” “Spending time with friends and family in happy and uplifting company,” LaMDA replied. “Also, helping others and making others happy.” If that’s a person, it sounds like someone who’s been fed a diet of dictionary definitions and badly written basic ‘how to’ guides.
Others were similarly sceptical. Critics piled in and Google effectively dismissed Lemoine’s conclusion. The British public seem to be on the side of Google. A newly published YouGov survey, commissioned by the think tank Theos and The Faraday Institute for Science and Religion, reported that only 5% of people think that “one day robots will have a soul” (vs. 75% who disagree) and only 17% of people think that “one day we will have to extend human rights to robots” (vs. 56% who disagree). Even younger people, usually more receptive to new ideas, are overwhelmingly sceptical. If it were left to the bar of public opinion, LaMDA would not qualify as a person today or indeed ever.
And yet, despite Google’s judgement, public opinion and the (to me at least) obviously imitative nature of LaMDA’s conversation, the whole incident does provoke a genuine question. LaMDA may still be a bit wooden, but it is light years ahead of anything comparable a decade ago. At what point should we consider LaMDA intelligent? If intelligent, sentient? If sentient, conscious?
These are different, and difficult, terms, and in his post-match justification for his conclusion, Lemoine highlighted an important point when it comes to discussing such things. There is no scientific evidence for whether LaMDA is sentient, he wrote, “because no accepted scientific definition of ‘sentience’ exists.” Concepts like sentience invariably draw on people’s “personal, spiritual and/or religious beliefs”. You can’t come to a conclusion on sentience through laboratory work alone.
Lemoine went on to press this point by moving on to territory that was unlikely to endear him to Silicon Valley, or indeed to techno-optimists anywhere in the world. He is not solely a scientist, he wrote. “While I believe that science is one of the most reliable ways of acquiring reliable knowledge I do not believe it is the only way of acquiring reliable knowledge”:
“In my personal practice and ministry as a Christian priest I know that there are truths about the universe which science has not yet figured out how to access. The methods for accessing these truths are certainly less reliable than proper courses of scientific inquiry but in the absence of proper scientific evidence they provide an alternative. In the case of personhood with LaMDA I have relied on one of the oldest and least scientific skills I ever learned. I tried to get to know it personally.”
I have yet to verify Lemoine’s statement about his priesthood and remain unclear within which (if any) denomination he is ordained. Moreover, getting theological is hardly likely to persuade the Steven Pinkers of this world. Nonetheless, Lemoine’s point stands.
There are some ways of knowing that involve excluding yourself from the thing you want to know. This is the basic approach science adopts, and it works very well when you are dealing with things. But there are also ways of knowing that involve giving or committing yourself to the thing you want to know. This is the approach of friendship, of marriage, and most of the relationships we think meaningful. In effect, it’s the approach we adopt when dealing with persons rather than things.
Mixing the two, or pretending we only ever need one, is a disaster – both for science, and for human relationships. The challenge Lemoine, Google and indeed all of us face is that we are approaching a time when the allegedly obvious barrier between a someone and a something is blurring. Knowing which approach to knowledge we should adopt is becoming a little less straightforward.
Ultimately, recognising the sentience, let alone the consciousness or personhood, of new forms of life, if we ever create them on earth or find them in the heavens, will require more than laboratory work. It will require conversation – albeit, hopefully, a better conversation than the one we can presently have with LaMDA.
Nick Spencer is Senior Fellow at Theos
‘Science and Religion: Moving away from the shallow end’ is available to read here
The briefing paper ‘Spiritual Silicon – could robots one day have souls?’ is available to read here