Back to Google’s “sentient” AI

Here are a couple of articles looking at the issues raised around Google’s AI chatbot, LaMDA (Language Model for Dialogue Applications), after Blake Lemoine, an engineer working with the system, claimed that LaMDA is sentient.

Google was not pleased – it suspended Blake for publishing confidential company property. Google does not think LaMDA is sentient, just a very good AI at producing human-like responses.

Forget sentience… the worry is that AI copies human bias | Kenan Malik | The Guardian

Why is Google so alarmed by the prospect of a sentient machine? | John Naughton | The Guardian

Don’t have time to read? The Guardian has a podcast:

Artificial intelligence: conscious or just very convincing? – podcast | News | The Guardian

… it includes a few words by Arthur C. Clarke and the immortal quote from the film 2001: A Space Odyssey: “I’m sorry, Dave. I’m afraid I can’t do that.”

And Microsoft’s perspective:

Microsoft’s framework for building AI systems responsibly – Microsoft On the Issues

Further Reading

The BBC Reith Lectures last year examined the social impact of artificial intelligence. The lectures were delivered by Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.

Something I am listening to – AI and why people should be scared: The Reith Lectures – Smart Thinking Solutions

The Reith Lecture – “do not design algorithms that can decide to kill humans” – Smart Thinking Solutions

“What we’ll be doing when AI is doing most of what we currently call work” – The Reith Lecture – Smart Thinking Solutions

AI is the future and we cannot avoid the consequences – but what are we going to do about it? – Smart Thinking Solutions

Clive Catton MSc (Cyber Security) – by-line and other articles