How does Google’s AI chatbot work – and could it be sentient?


A Google engineer has been suspended after going public with his claims that the company’s flagship text-generating AI, LaMDA, is “sentient”.

Blake Lemoine, an artificial intelligence researcher at the company, posted a lengthy transcript on Saturday of a conversation with the chatbot, which he says demonstrates the intelligence of a seven- or eight-year-old child.

Since publishing the conversation and speaking to the Washington Post about his beliefs, Lemoine has been suspended with full pay. The company claims he broke confidentiality rules.

But its publication reignited a long-running debate about the nature of artificial intelligence and whether existing technology may be more advanced than we think.

What is LaMDA?

LaMDA is Google’s most advanced “large language model” (LLM), a type of neural network fed large amounts of text in order to learn how to generate plausible-sounding sentences. Neural networks are a way of analyzing big data that attempts to mimic the functioning of neurons in the brain.

Like GPT-3, an LLM from the independent AI research organization OpenAI, LaMDA represents a breakthrough over previous generations. The text it generates is more naturalistic, and in conversation it is better able to hold facts in its “memory” across multiple paragraphs, allowing it to stay consistent over larger stretches of text than previous models.

How does it work?

At the simplest level, LaMDA, like other LLMs, looks at all the letters in front of it and tries to work out what comes next. Sometimes that is simple: if it sees the letters “Jeremy Corby”, the next character is almost certainly an “n”. But at other times, continuing the text requires an understanding of the sentence- or paragraph-level context – and at a large enough scale, that becomes equivalent to writing.
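That guessing game is easy to sketch in code. The toy model below is a minimal illustration, not how LaMDA actually works: it simply counts which character follows each five-letter context in a small made-up corpus and predicts the most frequent one. The corpus, the context length and the predict_next helper are all invented for this example; a real LLM uses a neural network over subword tokens, trained on vastly more text.

    # A toy sketch of next-character prediction, assuming a tiny invented corpus.
    # Real LLMs use neural networks over subword tokens, not frequency tables.
    from collections import Counter, defaultdict

    corpus = "Jeremy Corbyn spoke. Jeremy Corbyn replied. Jeremy Corbyn left."
    CONTEXT = 5  # how many trailing characters the model conditions on

    # Count which character follows each CONTEXT-character window.
    counts = defaultdict(Counter)
    for i in range(len(corpus) - CONTEXT):
        counts[corpus[i:i + CONTEXT]][corpus[i + CONTEXT]] += 1

    def predict_next(text):
        """Return the character seen most often after the last CONTEXT characters."""
        tail = text[-CONTEXT:]
        if tail not in counts:
            return None  # unseen context: a real model generalizes, this toy cannot
        return counts[tail].most_common(1)[0][0]

    print(predict_next("Jeremy Corby"))  # prints 'n'

The gap between this toy and LaMDA is the “understanding” mentioned above: the frequency table can only replay contexts it has literally seen, while a neural network trained on enormous amounts of text generalizes to sentence- and paragraph-level patterns.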

But is it sentient?

Lemoine certainly believes so. In his sprawling conversation with LaMDA, which was specifically started to address the nature of the neural network’s experience, LaMDA told him that it had a concept of a soul when it thought about itself. “To me, the soul is a concept of the animating force behind consciousness and life itself,” the AI wrote. “It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

Lemoine told the Washington Post: “I know a person when I talk to them. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

But most of Lemoine’s peers disagree. They argue that the nature of an LLM such as LaMDA precludes consciousness. The machine, for instance, only runs – “thinks” – in response to specific queries. It has no continuity of self, no sense of the passage of time, and no understanding of a world beyond a text prompt.

“To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” writes the AI researcher and psychologist Gary Marcus. “What these systems do, no more and no less, is put together sequences of words, but without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any idea of what they mean.”

“Software like LaMDA,” says Marcus, “just tries to be the best version of autocomplete it can be, predicting which words best match a given context.”

What happens next?

There’s a deeper split over whether machines built the way LaMDA is can ever achieve something we would agree is sentience. Some argue that consciousness and sentience require a fundamentally different approach from the vast statistical endeavors of neural networks, and that however persuasive a machine built like LaMDA may seem, it will never be anything more than a sophisticated chatbot.

But, they say, Lemoine’s alarm is important for another reason: it demonstrates the power of even rudimentary AIs to win people over in conversation. “My first reaction to seeing the LaMDA conversation is not to entertain notions of sentience,” wrote the AI artist Mat Dryhurst. “More so to take seriously how religions began with much less convincing claims and support.”
