
What is AI? An A-Z guide to artificial intelligence - BBC
Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.

Imagine going back in time to the 1970s and trying to explain to somebody what it means "to google", what a "URL" is, or why it's good to have "fibre-optic broadband". You'd probably struggle.
For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.
That's no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might bring. Over the past few years, multiple new terms related to AI have emerged – "alignment", "large language models", "hallucination" and "prompt engineering", to name a few.
To help you stay up to speed, BBC.com has compiled an A-Z of words you need to know to understand how AI is shaping our world.

A is for…

Artificial general intelligence (AGI)

Most of the AIs developed to date have been "narrow" or "weak". So, for example, an AI may be capable of crushing the world's best chess player, but if you asked it how to cook an egg or write an essay, it'd fail. That's quickly changing: AI can now teach itself to perform multiple tasks, raising the prospect that "artificial general intelligence" is on the horizon.
An AGI would be an AI with the same flexibility of thought as a human – and possibly even the consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge" and become a "great force multiplier for human ingenuity and creativity".
However, some fear that going a step further – creating a superintelligence far smarter than human beings – could bring great dangers (see "Superintelligence" and "X-risk").

(Image caption: Most uses of AI at present are "task specific", but some that have a wider range of skills are starting to emerge – Credit: Getty Images)

Alignment

While we often focus on our individual differences, humanity shares many common values that bind our societies together, from the importance of family to the moral imperative not to murder. Certainly, there are exceptions, but they're not the majority.
However, we've never had to share the Earth with a powerful non-human intelligence. How can we be sure AI's values and priorities will align with our own?
This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we're to have safe AI, ensuring it remains aligned with us will be crucial (see "X-Risk").
In early July, OpenAI – one of the companies developing advanced AI – announced plans for a "superalignment" programme, designed to ensure AI systems much smarter than humans follow human intent. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company said.

B is for…

Bias

For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more gatekeeping and decision-making to AI, many worry that machines could enact hidden prejudices, preventing some people from accessing certain services or knowledge. This discrimination would be obscured by supposed algorithmic impartiality.
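The article stays at the conceptual level, but one way such skew becomes visible in practice is by comparing a model's outcomes across groups. The sketch below is a hypothetical illustration, not something from the BBC piece: the group names, the loan-approval scenario and the decisions are all invented, and the "demographic parity gap" is just one simple fairness check among many.

```python
# Hypothetical illustration only: the group names and decisions below are
# invented for this sketch, not taken from the article.
from collections import defaultdict

# Imagine a loan-approval model trained on a skewed dataset.
# Each record: (demographic group, model decision: 1 = approved, 0 = rejected)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rates(records):
    """Return the fraction of positive decisions for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        approved[group] += decision
    return {group: approved[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
gap = abs(rates["group_a"] - rates["group_b"])

print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # 0.50 – a large gap hints at skew
```

A gap near zero does not prove a model is fair, and a large one does not explain why it arose, but simple checks like this are one way that hidden skew in training data is made visible.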
In the worlds of AI ethics and safety, some researchers believe that bias – as well as other near-term problems such as surveillance misuse – is a far more pressing problem than proposed future concerns such as extinction risk.
In response, some catastrophic risk researchers point out that the various dangers posed by AI a...