Obama on AI, free speech, and the future of the internet

Former President Barack Obama sat down with The Verge’s editor-in-chief and Decoder host Nilay Patel to talk about AI, social networks, and how to think about democracy as both of those things collide. The conversation, which took place just hours after President Joe Biden signed his executive order on AI, covers some of today’s biggest issues in tech and policy. Of course, we also had to ask Obama what apps he has on his homescreen.

Transcript
00:00 - Hello.
00:00 - Hello, sir.
00:01 - Nilay, how are you?
00:02 - Nice to meet you.
00:03 - Very nice to meet you.
00:04 - Yeah, nice to meet you.
00:05 - Looks like you've cleared out my whole office.
00:06 - Yeah, we got rid of it.
00:07 - Man, I mean, I think it's,
00:08 I hope they're going somewhere or something.
00:10 How you been?
00:11 - I'm doing all right, man.
00:12 How are you?
00:12 - I'm doing great.
00:13 I should have told you, by the way,
00:14 you didn't have to wear a tie, but you look sharp.
00:16 You know more about this stuff than I do, so.
00:19 - Well, that's terrifying.
00:20 (laughing)
00:22 President Barack Obama,
00:23 you're the 44th president of the United States.
00:25 We're here at the Obama Foundation.
00:26 Welcome to Decoder.
00:27 - It is great to be here.
00:28 Thank you for having me.
00:29 - I am really excited to talk to you.
00:31 There's a lot to talk about.
00:33 We are here on the occasion of President Biden
00:35 signing an executive order about AI.
00:37 I would describe this order as sweeping.
00:39 I think it's over a hundred pages long.
00:41 There's a lot of ideas in it.
00:43 Everything from regulating biosynthesis with AI.
00:46 There's some safety regulations in there.
00:48 It mandates something called red teaming,
00:49 transparency, watermarking.
00:51 These feel like new challenges,
00:53 like very new challenges
00:55 for the government's relationship with technology.
00:57 I want to start with a Decoder question.
00:59 What is your framework for thinking about these challenges
01:01 and how you evaluate them?
01:03 - This is something that I've been interested in for a while.
01:06 So back in 2015, 2016,
01:10 as we were watching the landscape being transformed
01:14 by social media and the information revolution
01:18 impacting every aspect of our lives,
01:20 I started getting in conversations
01:24 about artificial intelligence and this next phase,
01:27 this next wave that might be coming.
01:29 And I think one of the lessons that we got
01:33 from the transformation of our media landscape
01:38 was that incredible innovation, incredible promise,
01:43 incredible good can come out of it,
01:45 but there are a bunch of unintended consequences.
01:47 And that we have to be maybe a little more intentional
01:51 about how our democracies interact with
01:56 what is primarily being generated out of the private sector.
02:02 And what rules of the road are we setting up
02:05 and how can we make sure that we maximize the good
02:09 and maybe minimize some of the bad.
02:11 And so I commissioned my science guy, John Holdren,
02:16 along with John Podesta,
02:19 who was a former chief of staff
02:21 and worked on climate change issues, and said,
02:25 let's pull together some experts to figure this out.
02:27 And we issued a big report in my last year.
02:32 The interesting thing even then was
02:35 people felt this was an enormously promising technology,
02:40 but we may be overhyping how quickly it's gonna come.
02:44 And as we've seen just in the last year or two,
02:48 even those who are developing these large language models
02:52 who are in the weeds with these programs
02:55 are starting to realize this thing is moving faster
02:59 and is potentially even more powerful
03:02 than we originally imagined.
03:04 Now, so my framework and in conversations
03:08 with government officials, private sector, academics,
03:13 the framework I emerged with is that
03:16 this is going to be a transformative technology.
03:19 It's already in all kinds of small ways,
03:23 but very broadly changing the shape of our economy
03:29 in some ways, even our search engines,
03:32 basic stuff that we take for granted
03:34 is already operating under some AI principles,
03:36 but this is gonna be turbocharged.
03:39 It's gonna impact how we make stuff,
03:41 how we deliver services, how we get information,
03:44 and the potential for us to
03:49 have enormous medical breakthroughs,
03:51 the potential for us to be able to provide
03:53 individualized tutoring for kids in remote areas,
03:57 the potential for us to solve some of our energy challenges
04:02 and deal with greenhouse gases,
04:05 that this could unlock amazing innovation,
04:10 but that it can also do some harm.
04:13 We can end up with powerful AI models
04:16 in the hands of somebody in a basement
04:18 who develops a new smallpox variant,
04:21 or non-state actors who suddenly,
04:25 because of a powerful AI tool,
04:28 can hack into critical infrastructure,
04:31 or maybe less dramatically,
04:33 AI infiltrating the lives of our children
04:38 in ways that we didn't intend,
04:41 in some cases, the way social media has.
04:44 So what that means, then, is that
04:47 I think the government,
04:48 as an expression of our democracy,
04:52 needs to be aware of what's going on.
04:55 Those who are developing these frontier systems
04:59 need to be transparent.
05:01 I don't believe that we should
05:04 try to put the genie back in the bottle
05:07 and be anti-tech because of all the enormous potential,
05:12 but I think we should put some guardrails around
05:15 some risks that we can anticipate,
05:18 and have enough flexibility
05:20 that it doesn't destroy innovation,
05:24 but also is guiding and steering
05:26 this technology in a way that maximizes
05:32 not just individual company profits,
05:36 but also the public good.
05:37 - So let me make the comparison for you.
05:38 I would say that the problem in tech regulation
05:41 for the past 15 years has been social media.
05:44 How do we regulate social media?
05:46 How do we get more good stuff, less bad stuff,
05:48 make sure that really bad stuff is illegal?
05:50 You came to the presidency on the back of social media.
05:54 - I was the first digital president.
05:55 - You had a Blackberry.
05:56 I remember people were very excited about your Blackberry.
05:58 I wrote a story about your iPad.
06:01 That was transformative:
06:02 young people are gonna
06:03 take to the political environment,
06:05 they're gonna use these tools,
06:06 we're gonna change America with it.
06:07 - You can make an argument, I wouldn't have been elected
06:09 had it not been for social networks.
06:11 - Now we're on the other side of that.
06:12 There was another guy who got elected
06:13 on the back of social networks.
06:15 There was another movement in America
06:16 that has been very negative on the back of that election.
06:19 We have basically failed to regulate social networks,
06:21 I would say.
06:22 There's no comprehensive privacy bill even.
06:25 There was already a framework
06:26 for regulating media in this country.
06:28 We could apply a lot of what we knew about
06:30 what good media should be to social networks.
06:32 There are some First Amendment questions in there,
06:34 what have you, important ones,
06:36 but there was an existing framework.
06:39 With AI, it's we're gonna tell computers to do stuff
06:42 and they're gonna go do it.
06:43 We have no framework for that.
06:47 - We hope they do what we think we're telling them to do.
06:51 - We also ask computers a question
06:52 and they might just confidently lie to us
06:54 or help us lie at scale.
06:56 There is no framework for that.
06:58 What do you think you can pull from the sort of failure
07:01 to regulate social media into this new environment
07:04 such that we get it right this time?
07:05 Or do anything at all?
07:06 - Well, this is part of the reason why I think
07:09 what the Biden administration did today
07:12 in putting out the EO,
07:13 the work they've done is so important.
07:16 Not because it's the end point,
07:18 but because it's really the beginning
07:20 of building out a framework.
07:21 And when you mentioned how this executive order
07:25 has a bunch of different stuff in it,
07:32 what that reflects is we don't know all the problems
07:37 that are gonna arise out of this.
07:39 We don't know all the promising potential of AI,
07:44 but we're starting to put together sort of the foundations
07:52 for what we hope will be a smart framework
07:55 for dealing with it.
07:56 And in some cases, what AI is gonna do
08:00 is to accelerate advances in, let's say, medicine.
08:05 We've already seen, for example,
08:10 with things like protein folding
08:16 and the breakthroughs that can take place
08:18 that would not have happened
08:20 had it not been for some of these AI tools.
08:22 And we wanna make sure that that's done safely.
08:26 We wanna make sure that it's done responsibly.
08:30 And it may be that we already have some laws in place
08:34 that can manage that.
08:36 There may be some novel developments in AI
08:41 where an existing agency, an existing law just doesn't work.
08:46 If we're dealing with the alignment problem
08:50 and we wanna make sure
08:51 that some of these large language models,
08:54 where even the developers aren't entirely confident
08:58 about what these models are doing,
09:01 what the computer's thinking or doing,
09:05 well, in that case, we're gonna have to figure out
09:09 what are the red teaming, what are the testing regimens?
09:12 And in talking to the companies themselves,
09:15 they will acknowledge that their safety protocols
09:18 and their testing regimens, et cetera,
09:20 may not be where they need to be yet.
09:23 And I think it's entirely appropriate then for us
09:26 to plant a flag and say, all right, frontier companies,
09:30 you need to disclose what your safety protocols are
09:34 to make sure that we don't have rogue programs going off
09:38 and hacking into our financial system, for example.
09:43 Tell us what tests you're using.
09:48 Make sure that we have some independent verification
09:50 that right now this stuff is working.
09:53 But that framework can't be a fixed framework
09:58 because these models are developing so quickly
10:03 that oversight and any regulatory framework
10:08 is gonna have to be flexible
10:09 and it's gonna have to be nimble.
10:10 And by the way, it's also gonna require
10:15 some really smart people who understand
10:18 how these programs and these models are working,
10:22 not just in the companies themselves,
10:24 but also in the nonprofit sector and in government,
10:28 which is why I was glad to see that
10:30 the Biden administration, part of the executive order,
10:35 is specifically calling on a bunch of hot shot young people
10:40 who are interested in AI to do a stint
10:45 outside of the companies themselves
10:47 and go work for government for a while,
10:51 go work with some of the research institutes
10:55 that are popping up in places like the Harvard Lab
10:58 or the Stanford AI Center and some other nonprofits.
11:03 Because we're going to need to make sure
11:06 that everybody can have confidence
11:11 that whatever journey we're on here with AI,
11:15 that it's not just being driven by a few people
11:19 without any kind of interaction or voice
11:23 from ordinary folks, regular people
11:27 who are gonna be using these products
11:28 and impacted by these products.
11:30 - There's ordinary folks and there's the people
11:32 who are building it who need to go help write regulations.
11:34 And there's a split there.
11:35 The conventional wisdom in the Valley for years
11:39 is the government is too slow,
11:41 it doesn't understand technology,
11:43 and by the time it actually writes a functional rule,
11:45 the technology it was aiming to regulate will be obsolete.
11:49 This is markedly different, right?
11:50 The AI doomers are the ones asking for regulation the most.
11:54 The big companies have asked for regulation.
11:56 Sam Altman has toured the capitals of the world
11:59 politely asking to be regulated.
12:00 Why do you think there's such a fervor for that regulation?
12:04 Is it just incumbents wanting to cement their position?
12:06 - Well, look, you're raising an important point,
12:09 which is, and rightly there's some suspicion, I think,
12:14 among some people that, yeah, these companies
12:18 want regulation because they wanna lock out competition.
12:22 And as you know, historically,
12:26 sort of a central principle of tech culture
12:30 has been open source.
12:31 We want everything out there.
12:33 Everybody's able to play with models and applications
12:38 and create new products, and that's how innovation happens.
12:44 Here, regulation starts looking like,
12:47 well, maybe we start having closed systems
12:50 and the big frontier companies, Microsoft, the Googles,
12:54 the OpenAIs, the Anthropics,
12:56 that they're gonna somehow lock us out.
12:58 But in my conversations with the tech leaders on this,
13:05 I think there is, for the first time, some genuine humility
13:13 because they are seeing the power
13:17 that these models may have.
13:19 I talked to one executive, and look,
13:23 there's no shortage of hyperbole in the tech world, right?
13:28 But this is a pretty sober guy, like an adult who's--
13:32 - Now I have to guess who it is.
13:33 - Who's seen a bunch of these cycles
13:35 and been through boom and bust.
13:37 And I asked him, I said, well,
13:40 when you say this technology you think
13:42 is gonna be transformative, give me sort of some analogy.
13:45 He said, you know, I sat with my team and we talked about it
13:49 and after going around and around,
13:51 what we decided was maybe the best analogy was electricity.
13:55 And I thought, well, yeah, electricity,
13:58 that was a pretty big deal.
13:59 And if that's the case, I think what they recognize
14:03 is that it's in their own commercial self-interest
14:08 that there's not some big screw up on this,
14:11 that if in fact it is as transformative
14:15 as they expect it to be, then having some rules,
14:19 some protections that create a competitive field,
14:24 allow everybody to participate, come up with new products,
14:26 compete on price, compete on functionality,
14:30 but that none of us are taking such big risks.
14:35 - Yeah, there's a view--
14:35 - That the whole thing blows up in our faces.
14:40 I do think that there is sincere concern
14:43 that if we just have an unfettered race to the bottom,
14:47 that this could end up choking off the goose
14:50 that might be laying a bunch of golden eggs.
14:52 - There is the view in the Valley though,
14:53 that any constraint on technology is bad.
14:56 - Yeah, and I disagree with that.
14:57 - Any caution, any principle where you might slow down
15:00 is the enemy of progress and the net good is better
15:02 if we just race ahead as fast as possible.
15:04 - In fairness, that's not just in the Valley,
15:06 that's in every business I know.
15:10 It's not like Wall Street loves regulation.
15:12 It's not as if manufacturers are really keen
15:15 for government to micromanage how they produce goods.
15:19 But one of the things that we've learned
15:24 through the industrial age and the information age
15:32 over the last century is that you can over-regulate,
15:37 you can have over-bureaucratized things,
15:41 but that if you have smart regulations
15:45 that set some basic goals and standards,
15:48 making sure you're not creating products
15:51 that are unsafe to consumers,
15:52 making sure that if you're selling food,
15:56 people who go in the grocery store can trust
15:59 that they're not gonna die from salmonella or E. coli,
16:03 making sure that if somebody buys a car
16:06 that the brakes work, making sure that
16:11 if I take my electric whatever
16:18 and I plug it into a socket anywhere,
16:21 any place in the country,
16:23 that it's not gonna shock me and blow up in my face.
16:26 It turns out all those various rules, standards,
16:29 actually create marketplaces and are good for business.
16:33 And innovation then develops around those rules.
16:38 So it's not an argument that,
16:41 I think part of what happens in the tech community
16:44 is the sense that we're smarter than everybody else
16:48 and these people slowing us down
16:51 are impeding rapid progress.
16:55 And when you look at the history of innovation,
16:58 it turns out that having some smart guide posts
17:02 around which innovation takes place,
17:04 not only doesn't slow things down,
17:08 in some cases it actually raises standards
17:10 and accelerates progress.
17:12 There were a bunch of folks who said,
17:13 "Look, you're gonna kill the automobile
17:17 "if you put airbags in there."
17:19 Well, it turns out actually people figured out,
17:21 you know what, we can actually put airbags in there
17:24 and make 'em safer and over time the costs go down.
17:29 - There's a great TikTok. - And everybody's better off.
17:31 - Somebody reacting to drunk driving laws in the '80s,
17:33 I'll send it to you.
17:35 There's a really difficult part in the EO about provenance.
17:39 Watermarking content,
17:40 making sure people can see it's AI generated.
17:42 You are among the most deep-faked people in the world.
17:47 - Well, because what I realized is when I left office,
17:49 I'd probably been filmed and recorded
17:52 more than any human in history
17:54 just 'cause I happened to be the first president
17:56 when the smartphone came out.
17:59 - I'm assuming you have some very deep personal feelings
18:01 about being deep-faked in this way.
18:03 There's a big First Amendment issue here, right?
18:06 I can use Photoshop one way
18:08 and the government doesn't say I have to put a label on it.
18:10 I use it a slightly different way,
18:12 the government's gonna show up and tell Adobe,
18:13 "You've gotta put a label on this."
18:15 How do you square that circle?
18:18 It seems very challenging to me.
18:19 - I think this is gonna be an iterative process.
18:22 I don't think you're gonna be able to create a blanket rule,
18:25 but the truth is that's been how
18:28 our governance of information, media, speech,
18:35 that's how it's developed for a couple hundred years now.
18:39 With each new technology,
18:41 we have to adapt and figure out some new rules of the road.
18:44 So let's take my example.
18:47 A deep fake of me that is used for political satire
18:52 or just to, you know, somebody doesn't like me
18:55 and they wanna deep fake me.
18:57 I was the president of the United States
18:59 and there are some pretty formidable rules
19:04 that have been set up to protect people
19:06 making fun of public figures.
19:09 I'm a public figure.
19:10 And what you are doing to me as a public figure
19:14 is different than what you do to a 13-year-old girl
19:19 and a freshman in high school.
19:23 And so we're gonna treat that differently.
19:27 And that's okay.
19:28 We should have different rules for public figures
19:30 than we do for private citizens.
19:32 We should have different rules for what is clearly
19:37 sort of political commentary and satire
19:40 versus cyber bullying or--
19:42 - Where do you think those rules land?
19:43 Do they land on individuals?
19:45 Do they land on the people making the tools
19:48 like Adobe or Google?
19:50 Do they land on the distribution networks like Facebook?
19:52 - My suspicion is how responsibility is allocated,
19:56 we're gonna have to sort out.
19:58 I think that, but I think the key thing to understand is,
20:03 and look, I taught constitutional law.
20:06 I'm close to a first amendment absolutist
20:08 in the sense that I generally don't believe
20:13 that even offensive speech, mean speech, et cetera,
20:18 should be regulated, certainly not by the government.
20:23 And I'm even game to argue that on social media platforms,
20:28 et cetera, that the default position should be free speech
20:34 rather than censorship.
20:36 I agree with all that.
20:38 But keep in mind, we've never had completely free speech.
20:43 We have laws against child pornography.
20:45 We have laws against human trafficking.
20:48 We have laws against certain kinds of speech
20:57 that we deem to be really harmful
21:00 to the public health and welfare.
21:04 And the courts, when they evaluate that,
21:07 they come up with a whole bunch
21:10 of time, place, manner restrictions
21:13 that may be acceptable in some cases,
21:15 aren't acceptable in others.
21:17 You get a bunch of case law that develops.
21:19 There's arguments about it in the public square.
21:22 We may disagree, should Nazis be able to protest in Skokie?
21:26 Well, that's a tough one, but we can figure this out.
21:31 And that I think is how this is gonna develop.
21:34 I do believe that the platforms themselves
21:39 are more than just common carriers like the phone company.
21:46 They're not passive.
21:48 There's always some content moderation taking place.
21:53 And so, once that line has been crossed,
21:58 it's perfectly reasonable for the broader society to say,
22:01 well, we don't wanna just leave that entirely
22:05 to a private company.
22:07 I think we need to at least know
22:09 how you're making those decisions,
22:11 what things you might be amplifying through your algorithm
22:14 and what things you aren't.
22:16 And it may be that what you're doing isn't illegal,
22:21 but we should at least be able to know
22:22 how some of these decisions are made.
22:24 I think it's gonna be that kind of process
22:27 that takes place.
22:29 What I don't agree with is the large tech platforms
22:33 suggesting somehow that we want to be treated entirely
22:38 as a common carrier and--
22:46 - It's the Clarence Thomas view, right?
22:48 - Yeah, which, but on the other hand,
22:51 we know you're selling advertising
22:53 based on the idea that you're making a bunch of decisions
22:56 about your products. - Well, this is
22:57 very challenging, right?
22:58 If you say you're a common carrier,
22:59 then you are in fact regulating them.
23:01 You're saying you can't make any decisions.
23:02 You say you are exercising editorial control.
23:05 They are protected by the First Amendment,
23:07 and then regulations get very, very difficult.
23:09 It feels like even with AI,
23:13 when we talk about content generation with AI
23:15 or with social networks,
23:16 we run right into the First Amendment over and over again.
23:19 And most of our approaches, this is what I worry about,
23:22 is we try to get around it
23:23 so we can make some speech regulations
23:25 without saying we're gonna make some speech regulations.
23:28 Copyright law is the most effective speech regulation
23:30 on the internet because everyone will agree,
23:31 okay, Disney owns that, bring it down.
23:33 - Well, because there's property involved.
23:35 There's money involved. - There's money.
23:37 Maybe less property than money,
23:39 but there's definitely money.
23:40 - IP and hence money, yeah.
23:43 Well, look, here's my general view.
23:46 - Yeah, but do you worry
23:47 that we're making fake speech regulations
23:49 without actually talking about the balance of equities
23:51 that you're describing here?
23:52 - I think that we need to have,
23:56 and AI I think is gonna force this,
23:59 that we need to have a much more robust
24:04 public conversation around these rules
24:08 and agree to some broad principles to guide us.
24:13 And the problem is right now, let's face it,
24:17 it's gotten so caught up in partisanship,
24:20 partly because of the last election,
24:23 partly because of COVID and vax and anti-vax proponents,
24:28 that we've lost sight of our ability
24:32 to just come up with some principles
24:35 that don't advantage one party or another
24:37 or one position or another,
24:38 but do reflect our broad adherence to democracy.
24:42 But the point I guess I'm emphasizing here
24:47 is this is not the first time we've had to do this.
24:50 We had to do this when radio emerged.
24:52 We had to do this when television emerged.
24:55 And it was easier to do back then,
24:58 in part because you had three or five companies
25:02 or the public through the government
25:05 technically owned the airwaves.
25:06 And so you could make these--
25:08 - No, no, this is the square on my bingo card.
25:09 If I could get to the Red Lion case with you, I've won.
25:13 There was a framework here that said
25:14 the government owns the airwaves,
25:15 it's gonna allocate them to people in some way
25:19 and we can make some decisions
25:20 and that is an effective and appropriate--
25:21 - That was the hook.
25:22 - Can you bring that to the internet?
25:24 - I think you have to find a different kind of hook.
25:27 - Sure.
25:28 - But ultimately, even though the idea that
25:31 the public and the government owned the airwaves,
25:33 that was really just another way of saying,
25:39 this affects everybody.
25:41 And so we should all have a say in how this operates
25:43 and we believe in capitalism and we don't mind you
25:47 making a bunch of money through the innovation
25:50 and the products that you're creating
25:52 and the content that you're putting out there,
25:54 but we wanna have some say in what our kids are watching
25:59 or how things are being advertised, et cetera.
26:02 - If you were the president now:
26:03 I was with my family last night,
26:06 and the idea came up that the Chinese TikTok
26:09 teaches kids to be scientists and doctors,
26:12 and our TikTok, the algorithm is different,
26:15 and we should have a regulation like China has
26:16 that teaches our kids to be doctors.
26:18 And all the parents around the table said,
26:20 yeah, we're super into that, we should do that.
26:22 How would you write a rule like that?
26:24 Is it even possible with our first amendment?
26:25 - Well, look, for a long time, let's say under television,
26:29 there were requirements around children's television.
26:32 It kept on getting watered down to the point where
26:35 anything qualified as children's television, right?
26:38 We had the Fairness Doctrine that made sure
26:42 that there was some balance
26:45 in terms of how views were presented.
26:47 And I'm not arguing good or bad in either of those things.
26:52 I'm simply making the point that we've done it before.
26:57 And there was no sense that somehow that was anti-democratic
27:00 or it was that squashing innovation.
27:02 It was just an understanding that we live in a democracy.
27:06 And so we kind of set up rules so that
27:10 we think the democracy works better rather than worse.
27:15 And everybody has some say in it.
27:18 The idea behind the first amendment is
27:23 we're gonna have a marketplace of ideas
27:25 that these ideas battle themselves out.
27:28 And ultimately we can all judge
27:31 better ideas versus worse ideas.
27:33 And I deeply believe in that core principle.
27:38 We are gonna have to adapt to the fact that now
27:42 there is so much content, there are so few regulators,
27:47 everybody can throw up any idea out there,
27:51 even if it's sexist, racist, violent, et cetera.
27:56 And that makes it a little bit harder
27:59 than it did when we only had three TV stations
28:02 or a handful of radio stations or what have you.
28:05 But the principle still applies,
28:07 which is how do we create a deliberative process
28:11 where the average citizen can hear
28:14 a bunch of different viewpoints and then say,
28:17 you know what, here's what I agree with,
28:21 here's what I don't agree with.
28:22 And hopefully through that process, we get better outcomes.
28:26 - Let me crash the two themes of our conversation together,
28:29 AI and the social platforms.
28:31 Meta just had earnings,
28:33 Mark Zuckerberg was on the earnings call.
28:35 And he said, "For our feed apps,"
28:37 Instagram, Facebook, threads,
28:40 "For the feed apps, I think that over time,
28:43 "more of the content that people consume
28:45 "is either going to be generated or edited by AI."
28:49 So he envisions a world in which social networks
28:50 are showing people perhaps exactly what they wanna see
28:54 inside of their preferences,
28:55 much like advertising that keeps them engaged.
28:58 Should we regulate that away?
28:59 Should we tell them to stop?
29:00 Should we embrace this as a way to show people
29:03 more content that they're willing to see
29:04 that might expand their worldview?
29:07 - This is something I've been wrestling with for a while.
29:08 I gave a speech about misinformation
29:12 in our information silos at Stanford last year.
29:15 I am concerned about business models
29:21 that just feed people exactly what they already believe
29:28 and agree with and all designed to sell them stuff.
29:38 Do I think that's great for democracy?
29:40 No.
29:42 Do I think that that's something
29:46 that the government itself can regulate?
29:49 I'm skeptical that you can come up
29:52 with perfect regulations there.
29:54 What I actually think probably needs to happen though
29:58 is that we need to think about different platforms
30:04 and different models, different business models,
30:09 so that it may be that I'm perfectly happy
30:15 to have AI mediate how I buy jeans online.
30:20 That could be very efficient.
30:24 I'm perfectly happy with it.
30:25 And so if it's a shopping app or a thread, fine.
30:34 When we're talking about political discourse,
30:37 when we're talking about culture, et cetera,
30:39 can we create other places for people to go
30:43 that broaden their perspective,
30:46 make them curious about how other people
30:51 are seeing the world,
30:53 where they actually learn something
30:54 as opposed to just reinforce their existing biases.
30:57 But I don't think that's something
30:59 that government is going to be able
31:01 to sort of legislate.
31:05 I think that's something that consumers
31:08 and interacting with companies are gonna have to discover
31:14 and find alternatives.
31:16 The interesting thing, look, I'm not obviously 12 years old.
31:21 I didn't grow up with my thumbs on these screens.
30:26 So I'm an old-ass, 62-year-old guy
31:30 who sometimes can't really work all the apps on my phone.
31:34 But I do have two daughters who are in their 20s.
31:39 And it's interesting the degree to which
31:43 at a certain point they have found almost every app,
30:48 social media app, thread, getting kind of boring after a while.
31:57 It gets old.
31:58 Precisely because all it's doing is telling them
32:00 what you already know or what the program thinks
32:04 you want to know or what you want to see.
32:06 So you're not surprised anymore.
32:08 You're not discovering anything anymore.
32:10 You're not learning anymore.
32:11 And so I think there's a promise to how we can,
32:16 there's a market, let's put it that way.
32:19 I think there's a market for products
32:22 that don't just do that.
32:25 It's the same reason why people have asked me around AI,
32:30 are there going to still be artists around
32:34 and singers and actors,
32:37 or is it all going to be computer generated stuff?
32:41 And my answer is, for elevator music,
32:44 AI is going to work fine.
32:47 - A bunch of elevator musicians just freaked out, dude.
32:51 - For the average, even legal brief,
32:56 or let's say a research memo in a law firm,
33:03 AI can probably do as good a job
33:05 as a second year law associate.
33:07 - Certainly as good a job as I would do.
33:08 - Exactly.
33:09 But Bob Dylan or Stevie Wonder.
33:14 - There's one thing.
33:15 - That is different.
33:16 And the reason is because part of the human experience,
33:20 part of the human genius is it's almost a mutation.
33:23 It's not predictable.
33:24 It's messy, it's new, it's different, it's rough.
33:27 It's weird.
33:28 That is the stuff that ultimately taps
33:33 into something deeper in us.
33:35 And I think there's going to be a market for that.
33:38 - So you, in addition to being the former president,
33:42 you are a best-selling author.
33:44 You have a production company with your wife.
33:46 You're in the IP business,
33:48 which is why you think it's property.
33:49 It's good, I appreciate that.
33:51 The thing that will stop AI in its tracks in this moment
33:55 is copyright lawsuits, right?
33:57 You ask a generative AI model to spit out
33:59 Barack Obama's speech,
34:01 and it will do it to some level of passability.
34:05 Probably C plus, that's my estimation.
34:06 - It'd be one of my worst speeches,
34:08 but it might sound--
34:10 - You fire a cannon of C plus content
34:12 at any business model on the internet, you upend it.
34:15 But there are a lot of authors, musicians now,
34:18 artists suing the companies saying,
34:20 "This is not fair use to train on our data,
34:22 to just ingest all of it."
34:24 Where do you stand on that?
34:25 Do you think that, as an author,
34:26 do you think it's appropriate
34:28 for them to ingest this much content?
34:29 - Set me aside for a second,
34:30 'cause Michelle and I, we've already sold a lot of books
34:35 and we're doing fine,
34:36 so I'm not overly stressed about it personally.
34:39 But what I do think,
34:43 and I think that's what President Biden's
34:47 executive order speaks to,
34:49 but there's a lot more work that has to be done on this,
34:52 and copyright is just one element of this.
34:54 If AI turns out to be as pervasive
35:01 and as powerful as its proponents expect,
35:06 and I have to say, the more I look into it,
35:08 I think it is going to be that disruptive,
35:11 we are going to have to think about,
35:13 not just intellectual property,
35:16 we're gonna have to think about jobs
35:18 and the economy differently,
35:20 and not all these problems are gonna be solved
35:24 inside of industry.
35:26 So what do I mean by that?
35:27 I think with respect to copyright law,
35:31 you will see people with legitimate claims
35:40 financing lawsuits and litigation,
35:43 and through the courts
35:46 and various other regulatory mechanisms,
35:49 people who are creating content,
35:51 they're gonna figure out ways to get paid
35:53 and to protect the stuff they create.
35:58 And it may impede the development
36:01 of large language models for a while,
36:03 but over the long term, I think
36:07 that'll just be a speed bump.
36:09 The broader question is gonna be,
36:11 what happens when 10% of existing jobs
36:18 now definitively can be done better
36:22 by some large language model
36:26 or other variant of AI?
36:30 And are we gonna have to re-examine
36:37 how we educate our kids
36:41 and what jobs are gonna be available?
36:43 And the truth of the matter is that,
36:48 during my presidency,
36:49 there was, I think, a little bit of naivete,
36:52 where people would say,
36:55 the answer to lifting people out of poverty
36:58 and making sure they have high enough wages
37:00 is we're gonna retrain them
37:01 and we're gonna educate them
37:02 and they should all become coders
37:04 'cause that's the future.
37:05 Well, if AI is coding better than
37:07 all but the very best coders,
37:11 if ChatGPT can generate a research memo
37:15 better than the third, fourth year associate,
37:18 maybe not the partner,
37:20 who's got a particular expertise or judgment,
37:23 now what are you telling young people coming up?
37:28 And I think we're gonna have to start
37:30 having conversations about
37:33 how do we pay those jobs that can't be done by AI?
37:38 How do we pay those better?
37:42 Healthcare, nursing, teaching,
37:47 childcare, art,
37:51 things that are really important to our lives
37:54 but maybe commercially, historically,
37:56 have not paid as well.
37:58 Are we gonna have to think about
37:59 the length of the work week
38:02 and how we share jobs?
38:03 Are we gonna have to think about
38:05 the fact that more people
38:08 choose
38:11 to
38:13 operate like independent contractors
38:16 but where are they getting their healthcare from
38:18 and where are they getting their retirement from?
38:22 Those are the kinds of conversations
38:24 that I think we're gonna have to start
38:26 having to deal with
38:27 and that's why I'm glad that
38:30 President Biden's EO begins that conversation.
38:34 Again, I can't emphasize enough
38:36 because I think you'll see some people saying,
38:38 "Well, we still don't have tough regulations.
38:40 "Where's the teeth in this?
38:41 "We're not forcing these big companies
38:43 "to do X, Y, Z as quickly as we should."
38:47 That I think this administration understands
38:51 and I've certainly emphasized in conversations with them,
38:54 this is just the start
38:56 and this is gonna unfold over the next
38:59 two, three, four, five years.
39:01 And by the way, it's gonna be unfolding internationally.
39:04 There's gonna be a conference this week
39:07 in England
39:09 around international safety standards on AI.
39:14 Vice President Harris
39:18 is gonna be attending.
39:19 I think that's a good thing
39:21 because part of the challenge here
39:24 is we're gonna have to have some cross-border frameworks
39:28 and regulations and standards and norms.
39:31 That's part of what makes this
39:33 different and harder to manage
39:37 than the advent of radio and television
39:40 because the internet by definition
39:42 is a worldwide phenomenon.
39:46 - Yeah, you said you were the first digital president.
39:49 I gotta ask, have you used these tools?
39:50 Have you had the aha moment
39:52 where the computer's talking to you?
39:53 Have you generated a picture of yourself?
39:54 - I have used some of these tools
39:56 during the course of these conversations
39:59 and this research.
40:00 - Has Bing flirted with you yet?
40:03 It flirts with everybody, I hear.
40:06 - Bing didn't flirt with me,
40:07 but the way they're designed,
40:10 and I've actually raised this with some of the designers.
40:13 In some cases, they're designed to anthropomorphize,
40:20 to make it feel like you are talking to a human.
40:25 It's like, can we pass the Turing test?
40:29 That's a specific objective
40:32 'cause it makes it seem more magical.
40:34 And in some cases, it improves function,
40:36 but in some cases, it just makes it cooler.
40:38 And so there's a little pizzazz there
40:40 and people are interested.
40:42 I have to tell you that generally speaking though,
40:44 the way I think about AI is as a tool,
40:50 not a buddy.
40:52 And I think part of what we're gonna need to do
40:55 as these models get more powerful,
41:00 and this is where I do think government can help,
41:01 is also just educating the public
41:04 on what these models can do and what they can't do.
41:07 That these are really powerful extensions of yourself
41:13 and tools, but also reflections of yourself.
41:20 And so don't get confused and think
41:24 that somehow what you're seeing in the mirror
41:27 is some other consciousness.
41:31 A lot of times, this is just feeding back to you.
41:34 - You just want Bing to flirt with you.
41:35 This is what I felt personally very deeply.
41:39 All right, last question.
41:40 I need to know this.
41:41 It's very important to me.
41:42 What are the four apps in your iPhone dock?
41:45 - Four apps at the bottom, you got Safari.
41:48 - Key.
41:50 - I've got my text.
41:53 The green box.
41:56 - You're a blue bubble.
41:57 Do you give people any crap for being a green bubble?
42:00 - No, I'm okay.
42:02 I've got my email and I have my music.
42:09 That's it.
42:11 - It's like the stock set.
42:12 - Yeah, if you asked the ones
42:17 that I probably go to more than I should,
42:22 I might have to put like words with friends on there
42:25 where I think I waste a lot of time
42:27 and maybe my NBA league pass.
42:31 - Oh, that's pretty good.
42:33 That's pretty good.
42:33 - But I try not to overdo it on those.
42:38 - League pass is just one click above the dock.
42:40 That's what I'm getting out of this.
42:41 - That's exactly it.
42:42 - President Obama, thank you so much for being on Decoder.
42:43 I really appreciate this conversation.
42:44 - I really enjoyed it.
42:45 And I want to emphasize once again,
42:47 'cause you've got an audience
42:51 that understands this stuff, cares about it,
42:53 is involved in it and working at it.
42:55 If you are interested in helping to shape
42:58 all these amazing questions that are gonna be coming up,
43:01 go to ai.gov and see if there are opportunities for you
43:05 fresh out of school,
43:06 or you might be an experienced tech coder
43:11 who's done fine, bought the house,
43:16 got everything set up and says,
43:17 you know what, I wanna do something for the common good.
43:22 Sign up.
43:23 This is part of what we set up during my presidency,
43:26 the US Digital Service.
43:28 And it's remarkable how many really high level folks
43:33 decided that for six months, for a year, for two years,
43:41 them devoting themselves to questions
43:44 that are bigger than just what the latest app
43:49 or video game was,
43:53 turned out to be really important to them
43:56 and meaningful to them.
43:58 And attracting that kind of talent into this field
44:02 with that perspective, I think is gonna be vital.
44:05 - Yeah, sounds like it.
44:06 - All right, great to talk to you.
44:07 - Thanks so much.
44:08 - You bet.
44:09 - Thank you very much. - I really enjoyed it.
44:10 - I appreciate that.
44:11 - Come on, why don't we get a picture?
44:12 - Yeah.
44:13 - All right, three, two, one.
44:16 One more.
44:17 Good, fantastic.
44:19 Really enjoyed it.
44:20 You did great.
44:21 - Ha!
44:22 - Perfect.
44:23 - Can I just show you one thing real quick?
44:24 - Yes, of course.
44:25 [MUSIC PLAYING]
