Google’s LaMDA – A Powerful Sentient AI Model (Or Not)

LaMDA (Language Model for Dialogue Applications) is an AI model designed and built by Google that was recently in the news because of its ability to converse with end users, which led some to claim it is sentient.

Before drawing any conclusion on whether it’s sentient, let’s first understand how it works.

LaMDA is fundamentally based on the Transformer, a neural network architecture that Google Research developed and open-sourced in 2017. It reads each word in a passage or sentence, examines each word carefully, relates the words to one another to build context, and then predicts what comes next.
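The core idea above, each word attending to every other word to build context, can be sketched roughly as follows. This is a toy illustration only: the embeddings are random stand-ins for learned word vectors, and a real Transformer adds learned query/key/value projections, multiple attention heads, and feed-forward layers on top of this.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    Each row of X is one word's vector. The output row for a word is a
    weighted mix of every word's vector, so it carries context from the
    whole sentence rather than the word in isolation.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # how strongly each word relates to each other word
    weights = softmax(scores, axis=-1)   # each row is a probability distribution (sums to 1)
    return weights @ X, weights

# toy 4-word "sentence" with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
contextual, weights = self_attention(X)
```

After this step, `contextual[i]` no longer represents word `i` alone but word `i` in the context of its neighbours, which is what lets the model predict a plausible next word.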

The model was trained on dialogue, picking up on many of the nuances that distinguish open-ended conversation from other forms of language.

LaMDA’s ability to answer almost any question comes from the enormous volume of words and sentences crawled from the web, and from the time spent in training learning the contexts of those words and building a representation of what they mean.

Google, however, has disputed the claim that LaMDA is sentient. And if you read through this article, you may come to agree.

Blake Lemoine’s intention in leaking to the media the conversation suggesting LaMDA is sentient was portrayed in different ways, and his real reason for leaking it is something nobody has been able to pin down.

I went through numerous articles and YouTube videos about LaMDA, and the reviews are mixed. Some say it’s sentient because it learns from a vast database of words and sentences, builds comprehension, and relays answers that look sentient by human judgement.

Let’s first understand what “sentient” means in human life.

In general, we hold an opinion or feeling whenever we converse or observe anything. These come from our previous experiences, from what we have learned about life, and from what we know and decide about the particular situation in front of us. This can’t happen in a day (a newborn child isn’t sentient in this sense, because they don’t yet know anything about the world they arrived into).

So it all comes from the experience of living a human life and having feelings about everything. That is essentially what Blake Lemoine claimed. He didn’t say LaMDA can make harsh decisions the way a human could, or feel what humans feel; his focus was to claim that this model could match our expectations of how humans think about a particular situation, because it is learning from the same data that humans process in their brains.

Now the question is, is LaMDA sentient?


Sentience means the capacity to perceive or feel things. Based on my examination of neural networks, I don’t think LaMDA is sentient. Dig deeper and LaMDA turns out to be an exceptionally sophisticated word predictor. So here I side with Google, and most of the AI scientific community agrees that LaMDA is not sentient. Yet even though it isn’t sentient, it can give a truly convincing impression of awareness, and it can easily trick laypeople into believing it is conscious.

Is LaMDA Dangerous?

This technology should be taken very seriously. For instance, if this chatbot (LaMDA) is ever released to the public in some form, say inside Google Assistant or as some kind of chatbot, lonely people who start conversing with it could develop feelings for it, and that could lead them down a very dangerous path.

Blake Lemoine’s Role – Why Does a Google Engineer Believe LaMDA Is Sentient?

The engineer in question is Blake Lemoine, who published an interview between himself and LaMDA as part of his case for why it may be sentient. Lemoine spent months in conversation with the software, grilling it, posing complex questions, and finding it hard to accept that its sophisticated and appropriate responses could be the product of anything other than a sentient being.

Anyone who wants to understand why Lemoine feels this way should read through LaMDA’s responses for themselves, to see why this is such a compelling position to take. LaMDA’s responses are so humanlike that they’re reminiscent of the fictional personal assistant AI from Spike Jonze’s Her, a story in which a human develops a serious relationship with a conversational AI.

Setting aside whether Lemoine’s claims about LaMDA carry any weight, remember that LaMDA’s entire design objective is to produce natural, plausible, open-ended dialogue. In that sense, his conviction demonstrates that Google has been remarkably successful at producing believable dialogue. If any AI system were going to persuade a human that it was conscious, one explicitly designed to do exactly that is the best bet.

The Conclusion


It might sound repetitive, but I think we are missing the most important point here. Where do we see our future 30 years from now? The answer: AI taking over our daily lives, with technology expected to do our jobs. But how?

It would be done via super-intelligent AI models that can hold a dialogue and understand what humans actually mean. Since AI models learn from past data, we may end up building databases of emotions and feelings, which these models would then use to do exactly what humans do now.

Whether LaMDA is sentient is not the real question here. The important question is: will it make the same decisions humans do, taking into account multiple factors such as personality, culture, religion, and geography? That question is still unanswered!

If you read the article and liked it, show your support, and feel free to add your points in the comments section. Click here if you want to know about our world by 2050.
