Here are a couple of articles looking at the issues raised around Google’s AI chatbot LaMDA (Language Model for Dialogue Applications), after Blake Lemoine, an engineer working with the system, claimed that LaMDA is sentient.
Google was not pleased – it suspended Lemoine for publishing confidential company information. Google does not think LaMDA is sentient, just a very good AI at producing human-like responses.
Forget sentience… the worry is that AI copies human bias | Kenan Malik | The Guardian
Why is Google so alarmed by the prospect of a sentient machine? | John Naughton | The Guardian
Don’t have time to read? The Guardian has a podcast:
Artificial intelligence: conscious or just very convincing? – podcast | News | The Guardian
… it includes a few words from Arthur C. Clarke and the immortal quote from the film 2001: A Space Odyssey – “I’m sorry, Dave. I’m afraid I can’t do that.”
And Microsoft’s perspective:
Microsoft’s framework for building AI systems responsibly – Microsoft On the Issues
Further Reading
Last year’s BBC Reith Lectures examined the social impact of artificial intelligence. The lectures were delivered by Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
Clive Catton MSc (Cyber Security) – by-line and other articles