
Conscious artificial intelligence? The controversial debate on AI technology

Is artificial intelligence becoming “aware”? A Google engineer was suspended for saying yes, sparking a debate that is far from mere science fiction.

LaMDA, a Google computer program that generates chatbots, “knows clearly what it wants and what it considers to be its rights as a person,” Google engineer Blake Lemoine wrote on the Medium platform.

In industry and in the scientific community, this opinion is generally considered absurd or, at best, premature.

Programs based on machine learning are “trained” on data sets that touch on concepts of consciousness or identity, and are then capable of creating that illusion.

“Programs that access the Internet can answer any question, but that doesn’t make them credible,” says Professor Susan Schneider, founder of a research center at Florida Atlantic University.

Despite her objections to the theory, Schneider disapproves of Google’s sanctions against Lemoine.

Google tends to “try to quell ethical issues,” Schneider said. “We need public debates on these thorny issues,” she added.

“Hundreds of researchers and engineers have talked to LaMDA and, to our knowledge, no one else has made those claims or anthropomorphized LaMDA the way Blake did,” said Brian Gabriel, a Google spokesman.

The power of imagination

From Pinocchio to the movie “Her,” about a romance with a chatbot, the idea of a non-human entity coming to life “is present in our imagination,” said Mark Kingwell, a professor at the University of Toronto (Canada).

“It is difficult to respect the distance between what we imagine as possible and what is really possible,” he maintained.

Artificial intelligence (AI) systems have long been evaluated with the Turing test: if an evaluator converses with a computer without realizing it is not a human, the machine has “passed.”

“But in 2022 it is quite easy for an AI to achieve this,” said the author.

“When we encounter words in a language we speak (…) we think we perceive the mind generating those phrases,” says Emily Bender, an expert in computational linguistics.

Scientists are even able to give a personality to an AI program. “We can make, for example, a nervous AI” by feeding it the conversations a depressed person might have, explains Shashank Srivastava, a computer science professor at the University of North Carolina.

If the chatbot is also integrated into a humanoid robot with ultra-realistic expressions or if a program writes poems or composes songs, as is already the case, our biological senses can be easily fooled.

“We are swimming in a media frenzy around AI,” Bender warns. “And a lot of money is invested in it. So employees in this sector feel that they are working on something important, something real, and they do not necessarily have the necessary distance.”

“Future of Humanity”

How then could one accurately determine whether an artificial entity becomes sentient and conscious?

“If we manage to replace neural tissue with chips, it would be a sign that machines could potentially be sentient,” Schneider said.

The expert closely follows the progress of Neuralink, a company founded by Elon Musk to manufacture brain implants for medical purposes, but also to “secure the future of humanity, as a civilization, in relation to AI,” as the tycoon has said.

Musk, the owner of Tesla and SpaceX, is among those who envision all-powerful machines taking control.

For Mark Kingwell, it is the other way around.

If one day an autonomous entity appears that is capable of handling language, moving on its own, and expressing preferences and vulnerabilities, “it will be important not to consider it a slave (…) and to protect it,” he says.

Source: Elcomercio
