
Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI - ScienceAlert
A still from the movie "2001: A Space Odyssey". (MGM)

The idea of artificial intelligence overthrowing humankind has been talked about for decades, and programs such as ChatGPT have only renewed these concerns. So how likely is it that we'll be able to control a high-level computer super-intelligence? Scientists back in 2021 crunched the numbers. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we're unable to comprehend it, it's impossible to create such a simulation.

Rules such as 'cause no harm to humans' can't be set if we don't understand the kinds of scenarios an AI is going to come up with, the authors of the paper suggest. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," wrote the researchers back in 2021. "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the team's reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and an answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some smart math, while we can know the answer for some specific programs, it's logically impossible to find a method that will let us know it for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
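The halting problem can be illustrated with a short sketch (hypothetical code for illustration, not from the paper): in practice we can only run a program for a bounded number of steps and report that it halted; we can never definitively report that it loops forever.

```python
def bounded_halts(program, max_steps):
    """Run `program` (a zero-argument generator function) for at most
    `max_steps` steps. Returns True if it halts within the budget,
    or None if the result is inconclusive."""
    steps = program()
    for _ in range(max_steps):
        try:
            next(steps)
        except StopIteration:
            return True  # the program finished within the budget
    return None  # may halt later, may loop forever -- we cannot tell


def finishes_quickly():
    for i in range(3):
        yield i  # three steps of work, then done


def loops_forever():
    while True:
        yield  # never terminates


print(bounded_halts(finishes_quickly, 10))  # True
print(bounded_halts(loops_forever, 10))     # None
```

No matter how large the step budget, a `None` answer never distinguishes "slow" from "never", and that gap is exactly what Turing proved cannot be closed in general.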
Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not. It's mathematically impossible for us to be absolutely sure either way, which means it's not containable.

"In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

In fact, earlier this year tech giants including Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking humanity to pause work on artificial intelligence for at least six months so that its safety could be explored.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," said the open letter, titled "Pause Giant AI Experiments". "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," it said.

The research was published in the Journal of Artificial Intelligence Research in January 2021. A version of this article was first published in January 2021. It has since been updated to reflect AI advances in 2023.
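The impossibility argument behind the containment result can be sketched by Turing's classic diagonalisation (an illustrative toy, not the paper's formalism): assume a perfect halting decider exists, then construct a program that does the opposite of whatever the decider predicts about it.

```python
def make_contrarian(halts):
    """Given a claimed halting decider `halts(prog, inp)`, build a
    program that contradicts the decider's verdict about itself."""
    def contrarian(prog):
        if halts(prog, prog):  # decider says prog(prog) halts...
            while True:        # ...so loop forever instead
                pass
        return "halted"        # decider says it loops, so halt
    return contrarian


# Any concrete decider is refuted. Here, one that claims nothing halts:
never_halts = lambda prog, inp: False
c = make_contrarian(never_halts)
print(c(c))  # prints 'halted' -- contradicting the decider's prediction
```

Feeding `contrarian` to itself forces a contradiction for every possible decider, so no algorithm can decide halting for all programs; by the paper's reduction, the same holds for any general containment check.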