Common sense is not so common. Yejin Choi, professor and computer science researcher, thinks we need to take AI far more seriously for this very reason. Even for the most advanced AI systems, common sense remains difficult to program. Stupid AI isn't funny, it's dangerous, and Choi's years exploring the ethics behind machine learning are distilled into her TED talk. We got to speak to her hours after she delivered it, and she explained to us why anything that learns from the internet will unavoidably pick up racist, sexist, and ableist tendencies that we have to actively correct for. Most importantly: we need regulation. Choi believes in fostering a diverse community of thinkers, including philosophers and psychologists, working collectively to bridge AI's gaps in understanding around ethics, racial equity, and common sense.

Credits
Director
Keshia Hannam

Associate Producer
Manal Ahmed

Editor
Sabrina Sinaga

Director of Photography
Setare Gholipour

Sound Recordist
Joy Jihyun Jeong

Editor-in-Chief
Keshia Hannam

Head of Production
Stephanie Tangkilisan

Producer
Joy Jihyun Jeong

Post Production Coordinator
Skolastika Lupitawina

Assistant Editor
Rendy Abi Pratama

Color
Nadya Sabrina

Sound
Ezound Studios

Bumper Design
Chris Lee

Additional Music by
Borden Lulu - We Share This Convoy
IamDayLight - No Limit - Instrumental Version
Amir Marcus - Mystery Box
AlterEgo - Mystify

Additional Archival Material
Business Chief
Deloitte
Forbes
Getty Images
NPR
TED Conference 2023
The Conversation
ZD Net
Zee Business

Special Thanks
TED Conference 2023
Yejin Choi

Transcript
00:00 AI today can still make unexpected silly mistakes,
00:04 even with a task that requires just a basic level of common sense.
00:09 We do need to really work on, as a society, the policy and regulations around it.
00:15 We cannot rely on a few tech companies doing it.
00:18 What's up everybody?
00:20 Keshia here, Editor-in-Chief of Eastern Standard Times.
00:23 We are here at TED Conference all week,
00:25 interviewing some of the greatest Asian thinkers and creatives in the world.
00:28 Come check it out with me.
00:30 We have the pleasure of interviewing Yejin Choi,
00:33 who gave her TED Talk yesterday about how important common sense is to AI models, and how they currently lack it.
00:40 So, I'm excited to share a few spicy thoughts on artificial intelligence.
00:47 But first, let's get philosophical.
00:50 By starting with this quote by Voltaire, an 18th century Enlightenment philosopher who said,
00:56 "Common sense is not so common."
00:58 Turns out, this quote couldn't be more relevant to artificial intelligence today.
01:04 Despite that, AI is an undeniably powerful tool.
01:08 AI today is unbelievably intelligent, and then shockingly stupid.
01:14 It is an unavoidable side effect of teaching AI through brute force scale.
01:21 So, it may seem like everybody in the AI community actually is gung-ho about scaling things up as fast as possible.
01:29 As AI becomes ever more powerful,
01:33 the limitations become a real challenge and create real risks to humanity as well,
01:39 because people rely on it.
01:41 You know, like doctors relying on AI to decide how much of a medication should be given to a patient.
01:49 If there's a minor miscalculation that overdoses the patient with a particular medication, that would be detrimental.
01:58 From what I understand, one of the biggest things or demands on both sides,
02:04 people who are very optimistic about AI and people who are more concerned,
02:07 is guardrails or policy or something that has some kind of limitation or restriction
02:13 so that there is everything from ethics and common sense to racial equity built into AI.
02:19 How do you think we practically go about implementing those restrictions?
02:23 Is it lobbying government? Is it starting to create cohorts or coalitions that present these solutions to tech companies?
02:29 All of the above and beyond that. We also need to ensure that some of the important aspects of the data are open,
02:37 so that it's not just in the hands of a few tech companies putting up the guardrails in some magical way that people cannot inspect.
02:47 They should be open so that the people in the broader community can think about it together.
02:52 Beyond AI researchers, we do need to involve people in different humanities disciplines,
02:59 including philosophers and psychologists and everybody, thinking about how to put up the guardrails,
03:06 because AI now is too powerful and it's already begun to make an impact on human lives.
03:14 So it's really a community work.
03:16 [Music]
