Researchers claim artificially intelligent agents are ‘likely’ to annihilate humankind in the future

Sep 30, 2022 | 3 comments

By Joshua Hawkins

Researchers with the University of Oxford and Google Deepmind have shared a chilling warning in a new paper. Published in AI Magazine last month,  the paper claims that the threat of AI is greater than previously believed. It’s so great, in fact, that artificial intelligence is likely to one day rise up and annihilate humankind.

Even at current operational levels, AI is impressive. It can simulate Val Kilmer’s voice in movies and has helped researchers solve complex problems multiple times. Now, though, researchers posit that the threat of AI is greater than we ever thought.

The researchers published their findings at the end of August. According to the paper, the threat of AI could change based on a set of conditions the researchers have identified. This isn’t the first time researchers have viewed AI as a possible threat, either. But the new paper is intriguing because of the plausible assumptions that it considers.

Chief among these assumptions, the researchers believe that “AIs intervening in the provision of their rewards would have consequences that are very bad,” Michael Cohen, one of the authors on the paper, tweeted this month. Cohen says the conditions the researchers identified show that the possibility of a threatening conclusion is stronger than in any previous publication.

Additionally, the team says that the threat of AI is building based on an existential catastrophe. And that this is not only possible, it’s very likely. One of the biggest problems, Cohen and his fellow researchers note, is that more energy can always be put toward a problem to bring it up to the point where the AI is rewarded for what it has done. As such, it could put us at odds with the machines.

“The short version,” Cohen writes in his Twitter thread, “is that more energy can always be employed to raise the probability that the camera sees the number 1 forever, but we need some energy to grow food. This puts us in unavoidable competition with a much more advanced agent.” With that in mind, it sounds as if the threat of AI could change based on how it is taught.

The number that Cohen is referring to here is based on a test to force the AI agent to reach a certain number. And, once it has been shown a way to do that, it could use more energy to optimize reaching that goal. It’s an intriguing proposition and one that shows why so many scientists are worried about the threat of AI.

Obviously, the examples provided in the paper aren’t the only possible outcomes. And, perhaps we could find a way to keep the machines in check. But, this new research does raise some questions about just how much we want to trust AI going forward, and how we could control it if an existential catastrophe were to strike.
_____________________

Credit: BGR

CuencaHighLife

Dani News

Google ad

Country living News

Fund Grace News

Hogar Esperanza News

Thai Lotus News

The Cuenca Dispatch

Week of April 28

General Motors Auto Parts Manufacturer Laments: “Today Marks a Dark Day for the National Industry”.

Read more

Minister Requests Resignations in Termogás Machala, Dubbed ‘Epicenter of Energy Inefficiency’.

Read more

Chevrolet to Cease Car Assembly in Ecuador by August, Production to Halt in Colombia.

Read more

Google ad

Country Ranch Living News