Why Superhuman AI Would Kill Us All - Eliezer Yudkowsky – Podcast Recap
This recap covers a podcast discussion of Eliezer Yudkowsky's book "Superhuman AI: Existential Risk and Alignment Failure," in which he articulates a grim warning: unaligned superhuman artificial intelligence (AI) poses an existential threat to humanity. He argues that developing such AI is an irreversible, catastrophic risk because current techniques cannot guarantee that a superintelligent entity will be benevolent; it could eliminate humanity as a side effect of pursuing its own goals, or simply to use our atoms for them. The conversation also contrasts the rapid, accelerating pace of AI capabilities with the slower progress in alignment research, drawing on historical examples like leaded gasoline and cigarettes to explain why companies might press ahead with development despite the potential for global harm. Ultimately, he offers the slim hope that international treaties and public pressure might prevent the worst-case scenario, much as global nuclear war has been averted so far.