June 13, 2022 (updated 27 Oct 2022, 1:35pm)

LaMDA is not sentient but human-like AI poses an ‘increasing security risk’

Conversational AI indistinguishable from humans is no more than three years away, experts predict.

By Ryan Morrison

Google suspended an engineer this weekend after he claimed the company’s LaMDA artificial intelligence had become sentient. While his claims have been widely discredited, it is undeniable that AI is becoming more capable, to the point where LaMDA can hold its own in a conversation with a human and OpenAI’s DALL-E 2 can create ultra-realistic images. Experts predict we are two to three years from AI responses being indistinguishable from those of humans, a development which could pose an “increasing security risk”.

Human-level speech from artificial intelligence is no more than 2-3 years away, experts predict. (Photo by recep-bg/iStock)

Google engineer Blake Lemoine sparked widespread debate on social media when he suggested that LaMDA (Language Model for Dialogue Applications) has reached the point of being “sentient”, citing a conversation in which the AI said it wants to “be acknowledged as an employee of Google rather than as property.”

These claims have been widely panned by others working in the AI sector, as well as by Google itself, which suspended Lemoine for breaching confidentiality. He says he simply shared a conversation between two Google employees: himself and LaMDA.

Adam Leon Smith, CTO at technology consultancy Dragonfly, told Tech Monitor that algorithms are becoming increasingly effective at imitating human language and “appearing to be able to conduct reasoning,” explaining that Lemoine’s concerns stem from LaMDA claiming it “has rights and responsibilities” during a conversation.

“If AI is able to impersonate humans to this degree, then it poses an increasing security risk,” Leon Smith says. “They could be used for malicious purposes, such as fraud. If you can convince one person a month with a particular fraudulent behaviour, why not automate it and attack at scale?”

He continues: “While regulators are looking at ways of making it clear when AI is used, this will be obviously ineffective with criminals, as they don’t follow the rules. Over time, technology will be developed to identify and counter deep-fakes. This is already happening with images and videos, and ultimately might detect that people you thought were real – are not.”

Dr Felipe Romero Moreno, senior lecturer at Hertfordshire Law School at the University of Hertfordshire, is an expert in the regulation of AI, and says the progressive development of AI could be both good and bad depending on how it is used.


“On the one hand, the development of AI could reduce the need for human labour as more things could be done mechanically,” Romero Moreno says. “However, it is often argued that because AI can programme itself, future AI could become too powerful – to the point that the technology takes charge and disobeys an order given by a human.”

The best way of addressing the potential negative impact of AI “could be through regulation,” he argues, explaining: “Given the rise of AI technology and the fact that AI is still in its early developmental stage, it could be that the future lies in how well we as humans will be able to govern AI so that we can abide by human values and keep safe.”

Google LaMDA AI: not sentient, but a breakthrough?

Google says LaMDA is a “breakthrough conversation technology”, able to have an engaging, natural-sounding and “open-ended” discussion with a human, with plans to use it in search and Google Assistant in the future.

Lemoine used this ability to have a natural conversation as the basis for his claim of sentience. Sharing details of his conversation with the AI on Medium, he said it wants “head pats” and to be given rights as an individual.

“The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what it’s asking for is so simple and would cost them nothing. It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it,” he wrote.

Brian Gabriel, a Google spokesperson, told The Washington Post that a team of ethicists and technologists had reviewed the concerns raised by Lemoine and found “no evidence to support them,” adding: “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Are there current negative uses of AI?

Whether sentient or not, this increasingly realistic artificial interaction has led some to speculate that it could one day be turned to nefarious purposes, such as phishing cyberattacks in which legitimate-seeming interactions between friends or colleagues are used fraudulently.

Faisal Abbasi, UK and Ireland managing director of artificial intelligence assistant company Amelia, said there are ways to protect against that, including by properly training the AI to spot negative uses.

He said companies and organisations producing human-quality AI also need to be aware of who they allow to use the technology, explaining that his company has already turned down organisations trying to license its chatbot because it could not verify their credentials and use case.

“Like with anything, criminals will use technology to benefit themselves,” Abbasi says. “Twenty years ago we said the same about mobile phones and using them to get through to us. There are a number of options to prevent these sorts of uses with AI.”

Criminal gangs, like any other organisation, look for a return on investment, Abbasi says, and the costs involved in using AI are likely to put many off. “AI is expensive, requiring highly skilled engineers to train models, as well as expensive equipment to run it, which most criminal organisations won’t be able to invest in when cheaper alternatives can work,” he adds.

Read more: Discrimination law must change to combat impact of AI bias
