The 'AI Apocalypse' Is Just PR - The Atlantic
Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.

Illustration by Joanne Imperio / The Atlantic

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their products’ harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them from criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.
Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.”

But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquel...