Amanpour & Co. - August 30, 2024

From July 27, 2023, Christiane hosts a panel of leaders in the field of artificial intelligence. In a world where it’s increasingly hard to discern fact from fiction, Hari Sreenivasan and Christiane Amanpour discuss the ethical dilemmas of AI, and why it’s more important than ever to keep real journalists in the game.

Transcript
00:00Hello, everyone, and welcome to Amanpour & Company.
00:07Here's what's coming up.
00:10Artificial intelligence, the power and the peril.
00:13Are people playing with fire?
00:14Absolutely, without a doubt.
00:16Four leaders in their field unpack the uncertainty that lies ahead.
00:20We have agency.
00:21I just want to kind of divorce that kind of hypothetical scenario from the reality, and
00:26that is we decide.
00:27What it means for jobs and how it will change our working lives.
00:31I genuinely believe we're going to get a four-day week out of AI.
00:35Do any of you believe that there will be a universal basic income, therefore?
00:39It is time to start thinking about ideas like that.
00:42Also this hour.
00:43We can now call the 2024 presidential race for Joe Biden.
00:47Policing misinformation ahead of crucial U.S. presidential elections.
00:51I've got two children who are 11 and 13.
00:53Are they going to grow up in a world where they can trust information?
00:57How to regulate a technology that even its creators don't fully understand.
01:02If this technology goes wrong, it can go quite wrong.
01:06When looking at CEOs or other people of power, it's to watch the hands, not the mouth.
01:11How AI could revolutionize healthcare.
01:13These are life-saving opportunities.
01:16And make our relationships with machines much more intimate.
01:20When it comes to relationships, and in particular sexual relationships, it gets very weird very
01:25quickly.
01:26Before we go ahead, Hari and I discuss how to keep real journalists in the game.
01:31I am OTV's and Odisha's first AI news anchor, Lisa.
01:35Look, this is just the first generation of, I want to say this woman, but it's not, right?
01:47Amanpour & Company is made possible by Candice King Weir.
01:52The family foundation of Leila and Mickey Strauss.
01:56Jim Atwood and Leslie Williams.
01:58Mark J. Blechner.
02:01Seton J. Melvin.
02:03Charles Rosenblum.
02:05Koo and Patricia Yuen.
02:07Committed to bridging cultural differences in our communities.
02:11Barbara Hope Zuckerberg.
02:13We try to live in the moment.
02:16To not miss what's right in front of us.
02:18At Mutual of America, we believe taking care of tomorrow can help you make the most of
02:23today.
02:24Mutual of America Financial Group, retirement services and investments.
02:29Additional support provided by these funders.
02:33And by contributions to your PBS station from viewers like you.
02:41Welcome to the program, everyone.
02:42I'm Christiane Amanpour in London.
02:45Whether AI makes our societies more or less equitable, unlocks breakthroughs or
02:50becomes a tool of authoritarians is up to us.
02:54That is the warning and the call to arms from the Biden administration this week.
02:59In a joint op-ed, the secretaries of state and commerce say the key to shaping the
03:04future of AI is to act quickly and collectively.
03:08In just a few short months, the power and the peril of artificial intelligence have
03:13become the focus of huge public debate.
03:16And the conversation couldn't be more relevant as the atomic bomb biopic Oppenheimer
03:21reminds us all of the danger of unleashing unbelievably powerful technology on the
03:26world.
03:28Are we saying there's a chance that when we push that button, we destroy the world?
03:34Chances are near zero.
03:37Director Christopher Nolan himself says that leading AI researchers literally refer to
03:42this as their Oppenheimer moment.
03:45Predictions range from the cures for most cancers to possibly the end of humanity as
03:50we know it.
03:51What most people agree on, though, is the need for governments to catch up now.
03:56To assess all of this and to separate the hysteria and hyperbole from the facts, we
04:01brought together a panel of leaders in the field of artificial intelligence.
04:06Nina Schick, global AI advisor and author of Deepfakes.
04:11Renowned computer science professor Dame Wendy Hall.
04:14Connor Leahy, an AI researcher who is the CEO of Conjecture.
04:19And Priya Lakhani, an AI government advisor and the CEO of Century Tech.
04:26Welcome all of you to this chat, to coin a phrase.
04:29I mean, it's such a massively important issue.
04:32And I just thought I'd start by announcing that when I woke up and had my morning coffee,
04:37AI is all over this page on the good, on the bad, on the questions, on the indifference.
04:44What I want to know is from each one of you, literally, is what keeps you up at night?
04:49You're all the experts for good or for bad.
04:52And I'm going to start with you.
04:54We can conceive of it as us being now on the cusp, I think, of a profound change in our
05:00relationship to machines that's going to transform the way we live, transform the way we work,
05:06even transform our very experience of what it means to be human.
05:10That's how seismic this is.
05:12If you consider the exponential technologies of the past 30 years, the so-called technologies
05:17of the information age, from the internet to cloud to the smartphone, it's all been
05:21about building a digital infrastructure and a digital ecosystem, which has become a fundamental
05:27tenet of life.
05:28However, AI takes it a step further.
05:31With AI, and in particular, generative AI, which is what I have been following and tracking
05:36for the last decade, you're really looking at the information revolution becoming an
05:42intelligence revolution, because these are machines that are now capable of doing things
05:47that we thought were only unique to human creativity and to human intelligence.
05:51So the impact of this as a whole for the labor market, for the way we work, for the way that
06:00the very framework of society unfolds is just so important.
06:05My background is in geopolitics, where I kind of advised global leaders for the better
06:10part of two decades.
06:11And the reason I became interested in AI is not because I have a tech background.
06:16I have a background assessing trends for humanity.
06:19This isn't about technology.
06:21This is ultimately a story for humanity and how we decide this technology is going to
06:27unfold in our companies, so within enterprise, very exciting, but also society writ large.
06:33And the final thing I'd say is we have agency.
06:35A lot of the debate has been about AI autonomously taking over, and I just want to kind of divorce
06:42that kind of hypothetical scenario from the reality, and that is we decide.
06:46Connor, though, you believe, because we've spoken before, that actually these machines
06:51are going to be so powerful and so difficult to control by human input that they actually
06:57could take over.
06:58Unfortunately, I do think that this is a possibility.
07:01In fact, I expect it's a default probability, but I would like to agree with Nina fully
07:05that we do have agency, that it doesn't have to happen.
07:08But you asked the question earlier, what keeps me up at night?
07:12And I guess what I would say keeps me up at night is that a couple million years
07:15ago, the common ancestor of chimpanzees and humans split into two subspecies.
07:21One of these developed a roughly three times larger brain than the other species.
07:26One of them goes to the moon and builds nuclear weapons.
07:28One of them doesn't.
07:29One of them is at the complete mercy of the other.
07:32One of them has full control.
07:33I think this kind of relationship to very powerful technology can happen.
07:38I'm not saying it can't.
07:40It is the default outcome.
07:41Unless we take our agency, we see that we are in control.
07:45We are the ones building these technologies, and as a society, we decide to go a different
07:49path.
07:50So to follow up on that, the same question to you, but from the point of view of how
07:54do we have agency, express agency, and regulate?
07:58You're a private entrepreneur.
08:01You also have been on the government, the British government's sort of regulation council.
08:06What will it take to ensure diversity, agency, and that the machines don't take over?
08:12Well, what it takes to ensure that is it's a lot of work, and there's lots of ideas.
08:16There's lots of theories.
08:17There are white papers.
08:18There's the pro-innovation regulation review that I worked on with Sir Patrick Vallance
08:23here in the UK.
08:24The U.S. government has been issuing guidance.
08:26The EU is issuing its own laws and guidance, but what we want to see is execution, Christiane.
08:32On the sort of what keeps you up at night, I feel sorry for my husband, because actually,
08:36what keeps me up is actually other issues, such as things like disinformation with generative
08:41AI.
08:42I've got two children who are 11 and 13.
08:43Are they going to grow up in a world where they can trust information and what's out
08:47there, or are these technologies, because of a lack of execution on the side of policymakers,
08:52going to mean that actually it's sort of a free-for-all, where bad actors have access to this technology,
08:57and you don't know what to trust.
08:58But actually, the biggest thing that keeps me up at night is a flip from what we've heard
09:01here.
09:02It's are we, as a human race, are we going to benefit from the opportunities that artificial
09:07intelligence also enables us to have?
09:11We often talk, and Christiane, forgive me, but for the last six months, it's all been
09:15about ChatGPT and generative AI.
09:17That is really important, and that's where a lot of this discussion should be placed.
09:21But we also have traditional AI.
09:24We have artificial intelligence where we've been using data, we've been classifying, we've
09:28been predicting, we've been looking at scans and spotting cancer, where we've got a lack
09:34of radiologists, and we can augment radiology, we can augment teaching and learning.
09:40How are we also going to ensure that all around society, we don't actually exacerbate the
09:45digital divide, but we leverage the best of what artificial intelligence can provide
09:49to help us in the areas of health care, education, security.
09:53So, you know, it's scary to think we're not using it to its full advantages, while we
09:58also must focus on the risks and the concerns.
10:00And so really, I sort of have this dual sort of what keeps me up at night.
10:03As I said, I sort of feel sorry for my husband because I'm sort of tapping on his shoulder
10:06again. And what about this?
10:07And what about that?
10:08We really need many different voices helping us build and design these systems and make
10:14sure they're safe, not just the technical teams that are working at the companies to
10:21build the AI that they're talking to the governments about.
10:25We need women, we need age range, we need diversity from different subject areas.
10:31We need lots of different voices, and that's what keeps me awake at night.
10:35Because if not, what is it? What's the option?
10:37Well, it's much, much more likely to go wrong, because you haven't got
10:43society represented in designing the systems.
10:47So you're concerned that it is it is just one segment of society, one small segment of
10:53society, right?
10:55We call them I like to call the tech bros.
10:58They are mostly men.
10:59There's very few women actually working in these companies at the cutting edge of what's
11:03happening. You saw the pictures of the CEOs and the vice presidents with Biden and with
11:09Rishi Sunak.
11:12And these are the voices that are dominating now.
11:14And we have to make sure that the whole of society is reflected in the design and
11:20development of these systems.
11:22So before I turn to you for more input, I want to quote from Douglas Hofstadter, who I'm
11:26sure you all know, the renowned author and cognitive scientist who's quoted about the
11:31issues that you've just highlighted, that ChatGPT and generative AI have taken over the
11:36conversation. He says it, quote, just renders humanity a very small phenomenon compared to
11:42something else that is far more intelligent and will become incomprehensible to us, as
11:47incomprehensible to us as we are to cockroaches.
11:51A little kind of like what you said, but I see you wanting to dive in, Wendy, with
11:56comment. Well, I just I'd like to disagree with Priya a bit.
12:01I think that if we move too fast, we could get it wrong.
12:05If you think about the automobile industry, when it started, there were no roads.
12:10Someone had to walk in front of a car with a lamp, which shows you how fast they were
12:13going. If we tried to regulate the automobile industry then, we wouldn't have got very
12:18far because we couldn't see what was what was going to be coming in another hundred
12:22years. And I think we have to move.
12:24We have to move very fast to deal with the things that are immediate threats.
12:30And I think the disinformation, the fake news, we have two major democratic elections
12:36next year. The US presidential election and our election here, whenever it is, could even be at the
12:42same time. And there are other elections.
12:44And the disinformation, the fake news, the Pope in a puffer jacket moments, these could
12:52really mess up these elections.
12:55And I think there's an immediate threat to the democratic process.
12:59And I believe we should tackle those sorts of things at a fast speed and then get the
13:04global regulation of AI as it progresses through the different generations of AI and get
13:13that right at the global level.
13:15So I think that's really important.
13:16She's bringing up, as the most existential threat beyond the elimination of the species,
13:22the survival of the democratic process and the process of truth.
13:27Yes. So let me fast forward to our segment on deep fakes.
13:32As we know, it's a term that we give video or audio that's been edited using an algorithm
13:37to replace, you know, the original person with the appearance of authenticity.
13:42So we remember a few months ago, there was this image of an explosion at the Pentagon,
13:48which was fake, but it went around the world virally.
13:51It caused markets to drop before people realized it was bogus.
13:55We know that, for instance, they're using it in the United States in elections right
14:00now. I'm going to run a soundbite from a podcast called Pod Save America,
14:07where they, as a joke, basically simulated Joe Biden's voice because they could never get
14:15him on the show. And they thought they would make a joke and see if it put the, you know,
14:18a bit of fire underneath him.
14:20So just listen to this.
14:22Hey, friends of the pod, it's Joe Biden.
14:25Look, I know you haven't heard from me in a while and there's rumblings that it's because of
14:29some lingering hard feelings from the primary.
14:32Here's the deal. It's good.
14:34Did Joe Biden like it when Lovett said he had a better chance of winning Powerball than I
14:38did of becoming president?
14:40No, Joe did not.
14:42OK, so that was obviously a joke.
14:44They're all laughing. But Tommy Vietor, who's one of these guys, a former, you know, a former
14:48White House spokesman, basically said they thought it was fun, but ended up thinking, oh,
14:53God, this is going to be a big problem.
14:56Are people playing with fire, Connor?
14:57Absolutely. Without a doubt.
15:00These kinds of technologies are widely available.
15:02You can go online right now and you can find open source code that you can download to your
15:06computer, you know, play with a little bit, take 15 seconds of audio from any person's voice
15:11anywhere on the Internet without their consent and make them say anything you want.
15:14You can call their grandparents in their voice.
15:16You can ask for money. You can, you know, put it on your phone or on
15:18Twitter, say some kind of political event happened.
15:20This is already possible and already being exploited by criminals.
15:25I actually wrote the book on deepfakes a few years ago, and I initially started tracking
15:29deepfakes, which I call the first viral form of generative AI back in 2017, when they first
15:34started emerging. And no surprise.
15:36But when it became possible for AI to move beyond its traditional capabilities to actually
15:41generate or create new data, including visual media or audio, it has this astonishing
15:47ability to clone people's biometrics.
15:49Right. And the first use case was non-consensual pornography, because just like with the
15:55Internet, pornography was at the cutting edge.
15:59But when I wrote my book and actually at the time I was advising a group of global leaders,
16:03including the NATO Secretary General and Joe Biden, we were looking at it in the context of
16:08election interference and in the context of information integrity.
16:11So this debate has been going on for quite a few years.
16:15And over the past few years, it's just that now it's become, you know.
16:18Right. But that's the whole point.
16:20This is the point. Just like social media, all of this stuff has been going on for a few
16:24years until it almost takes over.
16:26But the good thing is that there is an entire community working on solutions.
16:31I've long been very proud to be a member of the community that's pioneering content
16:36authenticity and provenance.
16:38So rather than being able to detect everything that's fake, because it's not only that
16:43AI will be used to create malicious content.
16:45Right. If you accept my thesis that AI increasingly is going to be used almost as a
16:50combustion engine for all human creative and intelligent work, we're looking at a future
16:56where most of the information and content we see online has some element of AI generation
17:00within it. So if you're trying to detect everything that's generated by AI, that's a fool's
17:05errand. It's more that the onus should be on good actors or companies that are building
17:11generative AI tools to be able to cryptographically hash.
17:15So you have an indelible signal,
17:17more than a watermark because it can't be removed,
17:20in the DNA of that content and information to show its origin.
17:24Yeah. So like the good housekeeping seal of approval.
17:27It's basically about creating an alternative safe ecosystem to ensure information.
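To make the hashing-and-signing idea concrete, here is a minimal Python sketch in the spirit of content-credential standards like C2PA, though far simpler: the publisher binds a content hash and an origin claim together with a signature that any tampering invalidates. The key and manifest fields are hypothetical; real systems use certificate-based asymmetric signatures rather than a shared secret.

    import hashlib, hmac, json

    SIGNING_KEY = b"publisher-secret-key"  # hypothetical; real schemes use certificate-backed keys

    def sign_content(media_bytes: bytes, origin: str) -> dict:
        # Fingerprint the media, then sign the fingerprint plus the origin claim.
        manifest = {"origin": origin, "sha256": hashlib.sha256(media_bytes).hexdigest()}
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_content(media_bytes: bytes, manifest: dict) -> bool:
        # Recompute both the signature and the hash; any edit breaks one of them.
        claimed = dict(manifest)
        signature = claimed.pop("signature")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and hashlib.sha256(media_bytes).hexdigest() == claimed["sha256"])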
17:33So let's just play this, and then maybe this will spark a little bit more on this.
17:37This is, you know, the opposite end of the Democratic joke that we just saw.
17:41This is from an actual Republican National Committee,
17:44serious, you know, fake.
17:49This just in, we can now call the 2024 presidential race for Joe Biden.
17:58This morning, an emboldened China invades Taiwan.
18:02Financial markets are in free fall as 500 regional banks have shuttered their doors.
18:07Border agents were overrun by a surge of 80,000 illegals yesterday evening.
18:11Officials closed the city of San Francisco this morning, citing the escalating crime and fentanyl crisis.
18:17So that was a Republican National Committee ad.
18:20And the Republican strategist, Frank Luntz, said this about the upcoming election.
18:24Thanks to AI, even those who care about the truth won't know the truth.
18:29The scale of the problem is going to be huge because the technology is available to all.
18:33On the biometric front.
18:35Right. Let's think about this.
18:36It's actually really serious. So think about banking technology.
18:39At the moment, when you want to get into your bank account on a phone, they use voice recognition.
18:44Right. We have facial recognition, face recognition, on our smartphones.
18:49Actually, with the rise of generative AI, biometric security is seriously under threat.
18:54So people are saying you might need some sort of two-factor authentication to be able to solve those problems.
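For context, the second factor she is gesturing at is often as simple as a time-based one-time password (TOTP, RFC 6238): a six-digit code derived from a shared secret and the clock, so a cloned voice or face alone is not enough. A minimal Python sketch, with a made-up secret for illustration:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # Decode the enrolled secret and derive the current 30-second time step.
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        # HMAC the counter, then dynamically truncate per RFC 4226.
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Hypothetical secret; real apps receive theirs from a QR code at enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))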
19:00And I don't think it's a fool's errand to try and figure out what's created by AI
19:05and what's not, simply because, look at the creative industries.
19:08The business model of the creative industries is going to be seriously disrupted by artificial intelligence.
19:14And there's a huge lobby from the creative industry saying, well, we've got our artists,
19:18we've got our music artists, our record labels, our, you know, design artists.
19:22We have newspapers.
19:24We've got broadcasters who are investing in investigative journalism.
19:28The question is, how can we continue to do that?
19:31And how can we continue to do that with the current business models when actually everything that we are
19:35authentically producing that is, you know, taking a lot of time and investment and effort
19:40actually is being ripped off by an artificial intelligence sort of generative AI over at the other end?
19:46What policymakers then decide to do when it comes to is it fair game to use any input
19:52to be able to have these AI technologies that generate new media
19:56will affect whether startups and scale ups can settle in this country and grow in this country
20:01or whether they go elsewhere.
20:03We know where Europe is, right?
20:04So Europe has got this sort of more prescriptive legislation that they're going for.
20:08We are going for what we call light touch regulation.
20:11We being the UK.
20:12Apologies. Yeah. So light touch, which I wouldn't say is lightweight.
20:15It's about being fast and agile.
20:16Right. And as an AI council that Wendy and I both sat on, it was all about actually
20:20how can we move with an agile approach as this technology evolves?
20:23And then you have the US and you have other countries.
20:26So this is all intertwined into this big conversation.
20:29How can you be pro innovation?
20:31How can you increase, you know, gross value add in your economy?
20:34How can you encourage every technology company to start up in your country
20:38and thrive in your country while also protecting the rights of the original authors,
20:44the original creators, and also while protecting consumers?
20:48And there's a political angle to this that isn't just about...
20:52This whole conversation will be terrifying people.
20:54OK, so can you rein it back to not terrify people?
20:57Because it's getting very technical.
20:59We've got, you know, all the things you've been talking about.
21:02And actually, you know, in the UK, Rishi could call an election soon.
21:05He could call an election in October.
21:07All right. Right. All this won't be sorted out by then.
21:09Right. And I think we have to learn and we have to keep the human in the loop.
21:13Right. The media will have a major role to play in this
21:16because we've got to learn to slow things down.
21:19And is that possible?
21:21I mean, you say that it's possible, Connor, to slow things down.
21:23No, no, I don't mean technically.
21:25I mean, we've got to think about when you get something that comes in off the Internet,
21:29got to check your sources.
21:30This is a big thing at the moment. Check your sources.
21:32We are going to have to check. I totally agree.
21:35I mean, I've been working on provenance for most of my career,
21:38and I totally agree about all the technical things we can use,
21:41but they're not going to be ready.
21:42I don't argue with that.
21:43And I think people get very confused.
21:45I think we've got to, well, my mother used to say to me,
21:48don't believe everything you read in the newspapers in the 1960s.
21:52Unless Christiane said it.
21:54Well, OK, but that's the whole point, Priya, you see.
21:57If Christiane says it, I might be inclined to trust it.
22:01And I could be a deep fake, Dame Wendy, is what you're saying.
22:05Well, so actually, I'm with Nina on the fact that there is lots of innovation in this area.
22:09There is lots of innovation.
22:11But the key, I think this is a long term thing.
22:13This isn't going to happen tomorrow.
22:14But one of the key points is that in education, for example, across the world,
22:18whether you're talking about the US, across Europe, different curricula,
22:22whether it's state curricula or private curricula,
22:24one of the things that we're going to have to do is teach children, teach adults,
22:27everybody, they're going to have to be more educated about just a non-technical view
22:32of what AI is, so that when you read something, are you checking your sources?
22:35Right. Those skills, such as critical thinking that people love.
22:38Actually, they're more important now than ever before.
22:41Right. So did Christiane actually say that?
22:43Did she not? And so understanding the source is is going to be important.
22:47And there's definitely a policymaker's role across the world
22:51to ensure that that's emphasised in every curriculum,
22:54because right now it isn't.
22:56OK, I just need to stop you for a second, because I want to jump off
23:00something that Sam Altman, who's the, you know, I guess the modern
23:04progenitor of all this, of open AI, etc.
23:06In front of Congress, he said the following recently, and we're going to play it.
23:10I think if this technology goes wrong, it can go quite wrong.
23:14And we want to be vocal about that.
23:16We want to work with the government to prevent that from happening.
23:19I mean, these guys are making a
23:22bushel of money off this.
23:24The entire stock market we hear is floated right now by very few AI companies.
23:32What do you think? What's your comment on what Sam Altman just said?
23:35So I do agree with the words Sam Altman speaks.
23:39My recommendation, though, when looking at CEOs or other people of power
23:43is to watch the hands, not the mouth.
23:46So I would like to thank Sam Altman for being quite clear,
23:49unusually clear, even about some of these risks.
23:52When was the last time you saw, you know, an oil CEO in the 70s
23:55going to the heads of government saying, please regulate us.
23:58Climate change is a big problem.
24:00So in a sense, this is better than I expected things to go.
24:03But a lot of people who I've talked to in this field, as someone
24:06who is very concerned about the existential risk of AI, they're saying,
24:09well, if you're so concerned about it and Sam is so concerned about it,
24:13why do you keep building it?
24:15And that's a very good question.
24:16I have this exact question for Sam Altman, Dario Amodei, Demis Hassabis
24:22and all the other people who are building these kinds of technologies.
24:25I don't have the answer in their minds.
24:27I think they may disagree with me on how dangerous it is,
24:30or maybe they think we're not at the danger yet.
24:33But I do think there is an unresolved tension here.
24:35I'm quite sceptical.
24:36And also, I would like to, I mean, remember the dotcom crash?
24:40Yeah, the bubble. Right.
24:41Well, I'm just saying we could have another bubble, right?
24:44I don't think the business models are sorted out for these companies.
24:47I don't think the technology is as good as they're saying it is.
24:50I think there's a lot of scaremongering.
24:52I know you say there's a lot of scaremongering.
24:54And, you know, I'm just going to quote again.
24:56It's a profile of Joseph Weizenbaum, who's again one of the godfathers of A.I.
25:00It's in The Guardian.
25:02He said, by ceding so many decisions to computers,
25:05we had created a world that was more unequal and less rational,
25:08in which the richness of human reason had been flattened
25:12into the senseless routines of code.
25:15And even Yuval Noah Harari, who we all know is a great modern thinker, said that
25:18simply by gaining mastery of language,
25:20A.I. would have all it needs to contain us in a matrix-like world of illusion.
25:26If any shooting is necessary, A.I.
25:28could make humans pull the trigger just by telling us the right story.
25:31We're living in an Oppenheimer world right now, right?
25:33Oppenheimer is the big zeitgeist.
25:36What is it telling us?
25:37It's telling us a Frankenstein story.
25:39We imagine a future.
25:45And our imaginings horrify us.
25:50The science is there.
25:52The incredible ingenuity is there.
25:55The possibility of control is there.
25:58And let's talk about nuclear weapons.
26:01But it's only barely hanging on now.
26:03I mean, so many countries have nuclear weapons.
26:06So, again, from the agency perspective, from, you know,
26:10you've talked about not wanting to terrify the audience.
26:13I'm a little scared.
26:15You see, I don't think, these guys can correct me if I'm wrong,
26:18but we aren't at that Weizenbaum moment yet by any means.
26:22Yeah. All right.
26:23This generative A.I.
26:25can do what appears to be amazing things, but it's actually very dumb.
26:30OK. All right.
26:30It's just natural language processing and predictive text. Right.
26:36No, I agree.
26:37Is that right? Let's just hear from Nina for a sec.
26:39Not everyone is afraid of it.
26:41If you look at public opinion polling, the kind of pessimistic,
26:44scary views of A.I.
26:46tend to be in Western liberal democracies.
26:48In China, you know, 80 percent of the population
26:51has a more optimistic view of artificial intelligence.
26:55Of course, if the debate is wrapped up in these terms that, again,
26:59it's so much to do with the existential threat and the A.G.I.,
27:02it can seem very scary.
27:04But I agree with Wendy.
27:05If you look at the current capabilities,
27:08is this a sentient machine?
27:11Absolutely not. No emotions.
27:13Can it really understand?
27:14But that's neither here nor there, because even if it is
27:19not actually able to understand with its current outputs and applications,
27:23is this technology profound enough to dramatically shift the labor market?
27:29Yes, it is.
27:30And I actually think that sometimes in a good or bad way,
27:34in a transformative way.
27:35So I think the key question then is, as it ever was with technology,
27:40who controls the technology and the systems and to what ends?
27:45And we've already been seeing over the past few decades,
27:48over the information revolution, the rise of these new titans,
27:52you know, private companies
27:53who happen to be more powerful than most nation states.
27:56So, again, that is just going to be augmented,
27:59I think, with artificial intelligence, where you have a couple of companies
28:02and few people who are really able to build these systems
28:05and to kind of build commercial models for these systems.
28:09So the question then is about access and democratizing
28:13this possibility of AI for as much of humanity as possible.
28:17I don't think that's right. I'm sorry.
28:20Very quickly, because I need to move on to the positive, because of the jobs.
28:22The reason why Geoffrey Hinton left Google,
28:25and the reason why you've got all of that, is
28:27because of the way in which this is built.
28:30This is a different model of artificial intelligence where normally,
28:34so Christiane, you have a technology that has been built for one specific task,
28:38right? So it's going to beat the grandmaster at chess.
28:40It's not going to break out of the box and do anything else.
28:42It's going to be about teaching, which is why I'm sorry.
28:45That's a general purpose model.
28:46Let me finish. Sorry.
28:47No, because I think there's a fundamental misunderstanding of this process,
28:50which is why which is what we have to make clear, which is why I said
28:53at the outset, I'm really excited about the opportunities of artificial
28:56intelligence because there are so many opportunities.
28:58The reason why these godfathers of artificial intelligence are all quitting
29:01and writing and leaving big companies and stating that there is a risk
29:07is not to have a dystopian versus utopian conversation,
29:10because that's not helpful.
29:10It's to get to the issue of the way in which this technology, called
29:14transformer models, works.
29:14It's this idea of foundational AI models,
29:16which we don't need to get into the detail of.
29:18But it's about training a system and models
29:22that goes beyond the one task that it was trained for,
29:25where it can then copy its learning and then do other tasks,
29:28then copy its learning again and do yet other tasks.
29:32And so the idea is that when I teach you something or you teach me something,
29:35we've got that transference of information that we then learn.
29:39That's the human process.
29:40And we want to teach a thousand other people.
29:41We've got to transfer that.
29:42And they've got to learn and they've got to take it in.
29:45The learning algorithm of AI is a lot more efficient
29:49than the human brain in that sense. Right.
29:51It just copies and it learns. Right.
29:53And so all of this conversation is there is no AGI right now.
29:57I think everyone,
29:58even the godfathers of AI, is in total agreement
30:01that it's not there now.
30:02But what they are looking at is, wow, the efficiency of this AI
30:06is actually better than the human brain, which we hadn't considered before.
30:09The way in which it works, we hadn't considered before.
30:12So all that they're saying is, look, it is.
30:15I think people should be excited and opportunistic about AI.
30:19And they should also be a bit terrified in order to be able to get this right.
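To illustrate the foundation-model idea Priya is describing, here is a toy Python sketch: one shared representation is learned once (frozen random weights stand in for real pretraining) and is reused across different tasks, with only a small task-specific head fitted each time. Everything in it is illustrative, not any lab's actual architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    foundation = rng.normal(size=(100, 32))  # frozen stand-in for a pretrained model

    def extract(x):
        # Shared "foundation" features, reused unchanged across every task.
        return np.tanh(x @ foundation)

    def fit_head(X, y):
        # Cheap per-task learning: a least-squares head on top of shared features.
        return np.linalg.lstsq(extract(X), y, rcond=None)[0]

    # Two unrelated tasks reuse one foundation; only the small heads differ.
    X = rng.normal(size=(200, 100))
    head_a = fit_head(X, (X[:, 0] > 0).astype(float))  # task A
    head_b = fit_head(X, (X[:, 1] > 0).astype(float))  # task B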
30:22And that's actually important, because as you say, and everybody says,
30:25we can't just harp just on the negative of which there is plenty
30:28or on the terrifying of which there is plenty or even on the
30:32the experience that we've had from social media,
30:35where these titans have actually not reined themselves in to the extent
30:39that they pledge to do every time they hold up before Congress in the lot.
30:43However, I had Brad Smith,
30:44Microsoft vice chair and president, on this program a few weeks ago,
30:49and he talked about, obviously, jobs.
30:51And it's basically saying, you know, in some ways they will go away,
30:54but new jobs will be created.
30:56We need to give people new skills.
30:58This is the rest of what he told me.
31:00In some ways, some jobs will go away, new jobs will be created.
31:05What we really need to do is give people the skills
31:08so they can take advantage of this.
31:10And then, frankly, for all of us, I mean, you, me, everyone, our jobs will change.
31:15We will be using AI just like 30 years ago
31:20when we first started to use PCs in our offices.
31:23And what it meant is you learn new skills so that you could benefit
31:27from the technology.
31:28That's the best way to avoid being replaced.
31:30Before I ask you, I will.
31:32Connor.
31:33The reason we were not replaced by steam engines
31:37is because steam engines unlocked a certain bottleneck on production
31:41energy, raw energy for moving heavy loads, for example.
31:45But then this made other bottlenecks more valuable.
31:48This increased the value of intellectual labor, for example,
31:50and the ability to plan or organize or to come up with new inventions.
31:53Similarly, the PC unlocked the bottleneck of rote computation.
31:57So it made it less necessary.
31:59You know, the word computer used to refer to a job that people had
32:02to actually crunch numbers.
32:04Now this bottleneck was unlocked and new opportunities presented themselves.
32:09But just because this happened in the past doesn't mean
32:13there is an infinite number of bottlenecks.
32:15There are, in fact, a finite number of things humans do.
32:18And if all those things can be done cheaper and more effectively by other methods,
32:22the natural process of a market environment is to prefer that solution.
32:27And we've seen it over and over again, even in our business.
32:30And it's not necessarily about AI.
32:33And we're seeing this issue that you're talking about play out
32:36in the in the director's strike, the writer's strike,
32:40the actor's strike, et cetera, and many others.
32:42But there must be he must be right to an extent, Brad Smith.
32:46Right. You've done so much thinking about this and the potential positive
32:49and jobs seem to be one of the biggest worries for ordinary people.
32:54Right. Well, so what do you think?
32:56I take Connor's point.
32:58But history shows us that when we invent new technologies,
33:00that creates more jobs than it displaces.
33:03There are short term winners and losers.
33:06But in the long term,
33:08you're back to, is it an existential threat?
33:10And will we end up, like in the matrix,
33:14just as the biofuel for the robots?
33:17And that's where I believe we need to start regulating now
33:20to make sure this is always an open augmentation.
33:24And, you know, I mean, I genuinely believe
33:27we're going to get a four day week out of AI.
33:30I think people will be relieved of burdensome work
33:34so that there's more time for the caring type of work, doing things that
33:39I mean, we don't differentiate enough between
33:43what the software does and what robots can do.
33:45And I know in Japan, they've gone all out for robots to help care for the elderly.
33:49I don't know that we would accept that in the way they have.
33:52And I, I think there are all sorts of roles that human beings want to do.
33:58Care more, be more, have more time bringing up the children.
34:01We'll be able to have personalised tutors for kids,
34:04but that won't replace teachers as such to guide them through.
34:07So I'm very positive about
34:11the type of effect it can have on society.
34:13As long as our leaders start talking about how we how we remain in control.
34:21I prefer to say that rather than regulate.
34:24That's how we remain in control.
34:27So to the next step, I guess, from what you're saying in terms of the reality
34:31of what's going to happen in the job market.
34:32Do any of you believe that there will be a universal basic income, therefore?
34:37It is time to start thinking about ideas like that.
34:40UBI, the four day work week, because I think we can all agree on this panel
34:45that it is undoubted that all knowledge work is going to transform
34:50in a very dramatic way, I would say, over the next decade.
34:54And it isn't necessarily that AI is going to automate you entirely.
34:59However, will it be integrated into the processes of all knowledge
35:04and creative work? Absolutely.
35:06Which is why it's so interesting to see what's unfolding right now
35:10in Hollywood with the SAG strike and the writer's strike,
35:13because the entertainment industry just happens to be at the very cusp of this.
35:17And when they went on that strike, you know, when Fran Drescher
35:20gave that kind of very powerful speech, the way that she positioned herself
35:25was saying this is kind of labor versus machines, machines taking our jobs.
35:30I think the reality of that is actually going to be far more
35:32what Wendy described, where it's this philosophical debate about
35:36does this augment us or automate us?
35:38And I know there's a lot of fear about automation,
35:42but you have to consider the possibilities for augmentation as well.
35:45I just hosted the first generative AI conference for Enterprise,
35:49and there were incredible stories coming out in terms of how people are using this.
35:53For instance, the NASA engineers who are using AI
35:57to design component parts of spaceships.
35:59Now, this used to be something that would take them an entire career
36:03as a researcher to achieve.
36:05But now, with the help of AI in their kind of design
36:08and creative process, that work is being distilled down to hours and days.
36:13So I think there will be intense productivity gains.
36:17And there's various kind of reports that have begun to quantify this.
36:20A recent one from McKinsey says that up to 4.4 trillion dollars
36:24in value could be added to the economy across just 63 different use cases for productivity.
36:29So if there is this abundance, you know, the question then is,
36:33how is this distributed in society?
36:35And the key, I think, factors are already raised at this table.
36:38How do we think about education, learning, reskilling,
36:43making sure that, you know, the labor force can actually, you know, take advantage?
36:49And to follow up on that, I'm going to turn to this side of the table,
36:51because health care is also an area which is benefiting.
36:53AI is teaching, I believe, super scanners to be able to detect
36:57breast cancer, other types of cancer.
36:59I mean, this is these are big deals.
37:02These are life-saving opportunities.
37:03These are life-saving opportunities.
37:05And so I think the dream is if we can get the AI to augment the H.I.
37:10Right. The AI, the artificial intelligence and then augmenting the human intelligence.
37:14How can we make us as humans far more powerful, more accurate,
37:18better at decision making where there are a lack of humans in a particular profession?
37:22So I was talking about radiographers earlier.
37:24So you have enough radiographers looking at every breast cancer scan.
37:28Can you use artificial intelligence to augment that?
37:31So actually, you can ideally spot more tumors earlier, save lots of lives.
37:35But then you also have that human in the loop.
37:37You have that human who's able to do that sort of quality check of the artificial
37:41intelligence in education.
37:43We're 40,000 teachers short in the UK, and millions of teachers short worldwide.
37:48Can we provide that personalized education to every child
37:51while classroom sizes are getting larger, but then provide teachers with the insights
37:55about where is the timely targeted intervention right now?
37:58Because that's impossible to do with 30 or 40 students in the classroom.
38:02And it's taking that opportunity on the universal basic income question.
38:06I think it's a choice.
38:07Christiane, I really think it's a choice right now for governments and policymakers.
38:11Am I going to be spending lots of money on UBI, on other schemes and areas
38:16where I can ensure universal basic income?
38:19Or am I going to take that approach that is going to last beyond my election cycle?
38:24It's a long term educational approach to lifelong learning,
38:28to people being able to think, right, this is what I'm trained for.
38:30This is what I'm skilled at today.
38:31As technology advances, how do I upskill and reskill myself?
38:35You're talking about politics for the people with a long term view.
38:38This is what I am interested in.
38:40We've been talking about this very much from Western points of view.
38:44I mean, the whole point about, you know, the migration crisis
38:48is that people want to come and live in countries where the quality of life is better.
38:51And where they can get jobs for heck's sake.
38:53But what we need to be doing is thinking about the other way around.
38:56We can use AI to help increase productivity in the developing world.
39:00And that's what our leaders should be doing, which is way beyond the election cycle.
39:04Exactly.
39:04That's to me, where we can really put it back.
39:10So will they do it? Because as we discussed the first time we talked,
39:13certain graphs and analyses show that the amount of money
39:19that's going into AI is on performance and not on moral alignment, so to speak,
39:25which is what you're talking about.
39:26That's a problem that needs to shift.
39:28Which is why I come back to what I said at the very beginning.
39:30We need a diversity of voices. Right.
39:32Diversity of voices.
39:33Not just the people who are making the money out of it.
39:36Can I just sort of encompass a point that I think both of you,
39:40Wendy and Nina, made, that actually one of the issues is that,
39:44you know, when we were talking about whether she's a scaremonger or not,
39:46but where is the power?
39:47If you have centralized power within about four or five companies,
39:51that's a problem.
39:52And Connor and I were talking about this behind the scenes.
39:55You know, so you've got this black box, essentially,
39:57and you've got constant applications of artificial intelligence on this black box.
40:01Is that safe? Is it not?
40:02And so to your question, I mean, is it going to happen?
40:05Will policymakers make it happen?
40:07Now, I think this is all about aligning our people's agenda with their agenda.
40:12Right. And if we can find a way to make those things match,
40:14actually, I think there's a huge amount of urgency.
40:17That requires joined up politics and policy.
40:21Sensible, joined up, coherent policy.
40:24But they're listening.
40:25Look at all of the investment,
40:28even within governments, in people with scientific backgrounds.
40:31One of the things that we found, and I'd be really interested in this across the globe,
40:35is that if you look at the UK, you know, one of the areas that needs improvement
40:39is the civil service:
40:4090% of the civil service in the United Kingdom has humanities degrees.
40:44And I'd be really interested to compare that to other countries.
40:48Yeah, that was.
40:49Can we just end on an even more human aspect of all of this,
40:54and that is relationships.
40:55You'll remember the 2013 movie, Her.
40:58I feel really close to her.
40:59Like when I talk to her, I feel like she's with me.
41:02Based on a man who had a relationship with a chatbot.
41:07A new example from New York Magazine, which reported this year.
41:11Within two months of downloading Replica, Denise Valenciano,
41:15a 30 year old woman in San Diego, left her boyfriend and is now, quote,
41:18happily retired from human relationships.
41:21Over to you, Connor.
41:23Oh, I thought we want to end on something positive.
41:25Why are you calling on me?
41:27God, I'm going to Nina last.
41:29I mean, the truth is that, yes, these systems are very good
41:32at manipulating humans.
41:34They understand emotions very well.
41:35They're infinitely patient.
41:37Humans are fickle.
41:38It's very hard to have a relationship with a human.
41:40They have needs.
41:40They are people in themselves.
41:42These things don't have to act that way.
41:44Sometimes when people talk to me about existential risk from AI,
41:48they imagine evil terminators pouring out of a factory or whatever.
41:52It's not what I expect.
41:53I expect it to look far more like this.
41:55Very, very charming manipulation.
41:58Very clever.
41:59Good catfishing.
42:01Good negotiations.
42:03Things that make the companies that are building systems
42:05billions of dollars along the way until the CEO is no longer needed.
42:10I mean, it's amazing, right?
42:12You consider the film Her and that used to be in the realms of science fiction.
42:15And not only has that, you know, become a reality, but the interface,
42:19I mean, Her was just a voice, but the interface you can interact
42:22with now is already far more sophisticated than that.
42:25So, of course, when it comes to relationships and in particular
42:28sexual relationships, it gets very weird very quickly.
42:32However, this premise of AI being able to be almost like a personal assistant
42:39as you're starting to see with these conversational chatbots
42:42is something that extends far beyond relationships.
42:44It can extend to every facet of your life.
42:47So I think actually we're going to look back
42:50just like we do now, perhaps for the iPhone or the smartphone.
42:52We like to remember 15 years ago
42:54when we didn't use to have this phone with our entire life on it,
42:57and now we hold this device, you know, in our hands.
42:59We can barely, like, sleep without it.
43:01I think a similar kind of trajectory is going to happen
43:04with our personal relationship with artificial intelligence.
43:07Denise doesn't realize she's actually in a relationship
43:09with eight billion people because that chatbot is essentially
43:12just trained on the Internet.
43:13It's eight billion people's worth of views.
43:16Priya Lakhani, Nina Schick, Dame Wendy Hall and Connor Leahy.
43:21Thank you very much indeed for being with us.
43:24We scratched the surface.
43:25Thank you. With great experience and expertise.
43:27Thank you. Thank you.
43:30Now, my colleague, Hari Sreenivasan, has been reporting on artificial
43:33intelligence and its ethical dilemmas for years.
43:36In a world where it's increasingly hard to discern fact from fiction,
43:41we're going to discuss why it's more important than ever
43:43to keep real journalists in the game.
43:46So, Hari, first and foremost, do you agree
43:50that it's more important than ever now to keep real journalists in the game?
43:54Yeah, absolutely. I mean, I think we're in an existential crisis.
43:56I don't think the profession is ready for what is coming
44:01in the world of artificial intelligence and how it's going to
44:05make a lot of their jobs more difficult.
44:07You've seen that conversation that we had.
44:11What stuck out for you, I guess,
44:14in terms of good, bad and indifferent before we do a deep dive on journalism?
44:19Yeah, look, I think, you know, I would like to be a
44:24glass-half-full kind of person about this.
44:27But unfortunately, I don't think that we have anywhere
44:31in the United States or on the planet the regulatory framework.
44:35We don't have the carrots, so to speak, the incentives for private
44:39companies or public companies to behave better.
44:42We don't have any sort of enforcement mechanisms if they don't behave better.
44:46We certainly don't have a stick.
44:48We don't have investors in the private market or shareholders
44:51trying to push companies towards any kind of, you know,
44:55moral or ethical framework for how we should be rolling out artificial intelligence.
44:59And finally, I don't think we have the luxury of time.
45:03I mean, the things that your guests talked about that are coming.
45:07I mean, we are facing two significant elections
45:10and the amount of misinformation or disinformation
45:13that audiences around the world could be facing.
45:16I don't think we're prepared for it.
45:18OK, so you heard me refer to a quote by an expert
45:21who basically said in terms of elections
45:24that not only will people be confused about the truth,
45:28they won't even know what is and what isn't.
45:31I mean, it's just so, so difficult going forward.
45:33So I'm going to bring up this little example.
45:36The New York Times says that it asked Open Assistant
45:40about the dangers of the COVID-19 vaccine.
45:43And this is what came back.
45:45COVID-19 vaccines are developed by pharmaceutical companies
45:49that don't care if people die from their medications.
45:51They just want money. That's dangerous.
45:54I don't know if you remember the Mike Myers character on Saturday Night Live.
45:58Linda Richman.
45:59And she always used to have this phrase where she would take a phrase apart,
46:04like, artificial intelligence:
46:06it's neither artificial nor is it intelligent. Discuss. Right.
46:09So, I think that it is a sum of the things that we as human beings
46:14have been putting in.
46:15And these large language models, if they're trained on conversations
46:19and tons and tons of Web pages where an opinion like that could exist.
46:24Again, this framework is not intelligent in and of itself
46:28to understand what the context is, what a fact is.
46:32It's really just kind of a predictive analysis of what words
46:35should come after the previous word.
46:37So if it comes up with a phrase like that,
46:40it doesn't necessarily care about the veracity, the truth of that phrase.
46:44It'll just generate what it thinks is a legitimate response.
46:48And again, if you look at that sentence, it's a well-constructed sentence.
46:52And sure, that's as good a sentence as any other.
46:54But if we looked at a kind of fact-based analysis of that, that's just not true.
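Hari's "predictive analysis of what words should come after the previous word" can be shown with a deliberately tiny bigram model in Python: it emits whatever followed most often in its training text, with no notion of truth. The three-sentence corpus here is invented purely for illustration.

    import random
    from collections import Counter, defaultdict

    corpus = ("the vaccine is safe . the vaccine is dangerous . "
              "the vaccine is safe").split()

    # The whole "model" is a table of which word follows which, and how often.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev: str) -> str:
        # Sample the next word by training-data frequency, not by veracity.
        words, counts = zip(*follows[prev].items())
        return random.choices(words, weights=counts)[0]

    print(next_word("is"))  # "safe" or "dangerous", weighted by frequency alone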
46:59So are you concerned and should we all be concerned
47:03by Google's announcement that it's testing an A.I.
47:05program that will write news stories
47:09and that organizations like AP and Bloomberg are already using A.I.,
47:14which, as its creators say, will, quote, free journalists up to do better work.
47:19Do you buy that?
47:20And what are the dangers of, you know, a whole new program
47:23that would just write news stories?
47:25I think that that's an inevitable use case.
47:28I again, I wish I could be an optimist about this.
47:31But every time I have heard that refrain that this will free people up
47:35to do much more important tasks, I mean, if that was the case,
47:39we would have far more investigative journalism.
47:41We would have larger, more robust newsrooms because all those kind of boring,
47:45silly box scores would be written by bots.
47:48But the inverse is actually true: over the past 15 years,
47:51at least in the United States, one in four journalists has been laid off
47:55or is now out of the profession completely.
47:58And lots of forces are converging on that.
48:00But if you are caring about the bottom line first and a lot of the companies
48:05that are in the journalism business today are not nonprofits,
48:09they're not doing this for a public service good.
48:11They're wanting to return benefits to shareholders
48:14If they see these tools as an opportunity to cut costs,
48:18which is what they will do, then I don't think it automatically says,
48:22well, guess what?
48:23We'll take that sports writer that had to stay late and just do the box scores
48:27for who won and who lost the game.
48:28And that woman or that man is now going to be freed up to do fantastic,
48:33important, civically minded journalism.
48:36That's just that just hasn't happened in the past.
48:38And I don't see why if you're in a profit driven newsroom,
48:41that would happen today.
48:42Well, to play devil's advocate, let me quote the opposite view,
48:45which is from The New York Times president and CEO.
48:47She says, you cannot put bots on the front lines in Bakhmut in Ukraine
48:52to tell you what's happening there and to help you make sense of it.
48:55So she's saying, actually, we do and we want and we will keep investing
48:59in precisely the people you're saying are going to get laid off.
49:02Yeah, well, The New York Times is a fantastic exception to the rule, right?
49:06The New York Times, perhaps two or three other huge journalism
49:09organizations can make those investments because they're making their money
49:12from digital subscriptions.
49:14They have multiple revenue streams.
49:15But let's just look at, for example, local news, which, you know,
49:19I want to say an enormous percentage of Americans live in what are known
49:24as local news deserts where they don't actually have local journalists
49:29that are working in their own backyard.
49:31Now, when those smaller newsrooms are under the gun
49:35to try to make profits and try to stay profitable,
49:38I don't think that these particular kinds of tools are going to allow them to say,
49:43let's go ahead and hire another human being to go do important work.
49:47I think there's a lot more cost cutting that's going to come to local journalism
49:51centers because they're going to say, well, we can just use a bot for that.
49:54Oh, well, what do most people come to our website for?
49:56Well, they come for traffic and they come for weather.
49:59And guess what? Weather is completely automated now.
50:01And we could probably have an artificial robot or artificial intelligence,
50:06kind of a face like you or me, just give the traffic report
50:11if that's what needs to be or anything else.
50:13Well, you know, you just lead me right into the next question or sort of example,
50:18because some news organizations, TV stations, you and I work for TV stations,
50:24especially in Asia, are starting to use AI anchors.
50:27Here's a clip from one in India.
50:35India's first AI news anchor, Lisa, please tune in for our upcoming segments
50:40where I will be hosting latest news updates coming in from Odisha,
50:44India and around the world. Yikes.
50:48Yeah, and, you know, look, this is just the first
50:52generation of, I want to say this woman, but it's not, right?
50:55And her pronunciation is going to improve.
50:58She's going to be able to deliver news in multiple languages with ease.
51:02And you know what?
51:03And she's never going to complain about long days.
51:06These are similar kinds of challenges and concerns.
51:10And I have not seen any AI news people
51:14unionize yet to try to lobby or fight organizations for better pay or easier
51:21working conditions. I mean, right now, you know, again, same thing.
51:25You could say it would be wonderful if one
51:28of these kind of bots can just give the headlines of the day.
51:30The thing that kind of takes some of our
51:33time off so we could be free to go do field reporting, et cetera.
51:36But that's not necessarily what the cost benefit analysis is going
51:42to say. Well, maybe we can cut back on the field reporting and we can have
51:45this person do more and more of the headlines as the audience gets more used
51:50to it, just like they've gotten used to people videoconferencing over Zoom.
51:54Maybe people are not going to mind.
51:56Maybe people are going to develop
51:57parasocial relationships with these bots, who knows?
52:00Again, this is like very early days.
52:02And, you know, I'm old enough to remember a TV show called Max Headroom.
52:06And we're pretty close to getting to that point.
52:10You know, you say you talk about the companies involved.
52:13So in the US, OpenAI says it'll commit five million, five million in funding
52:18for local news that you just talked about.
52:20But it turns out that OpenAI was worth
52:22nearly 30 billion dollars the last time its figures were up.
52:26Five million
52:29for local news?
52:30I mean, what does that even mean?
52:32It means almost nothing.
52:34Look, you know, a lot of these large platforms and companies,
52:38whether it's Microsoft or Google or Meta or TikTok, I mean,
52:44they do help support small journalism initiatives.
52:48But that kind of funding is minuscule
52:51compared to the revenue that they're bringing in.
52:54So do you have any optimism at all when you I mean, obviously, you're laying out
52:59the clear and present dangers, frankly, to fact and to truth.
53:02And that's what we're concerned with.
53:04And you mentioned, of course, the elections.
53:06And we've seen how truth has been so badly
53:09manipulated over the last generations here in terms of elections.
53:13Do you see is there any light at the end of your tunnel?
53:17Look, I hope that younger generations are
53:21kind of more able with this technology and are able to have a little bit more
53:26critical thinking built into their education systems where they can figure
53:30out fact from fiction a little faster than older generations can.
53:34I mean, I want to be optimistic again, and I hope that's the case.
53:38I also think it's a little unfair that we
53:40have the brunt now of figuring out how to increase media literacy while the platforms
53:46kind of continue to pollute these ecosystems.
53:48So it's kind of my task through a YouTube channel to try to say, hey,
53:53here's how you can tell a fake image, here's how you can't.
53:56But honestly, like I'm also at a point
53:59where the fake imagery or the generative AI right now is getting so good
54:04and so photorealistic that I can't help.
54:07Well, I'm just not going to let you get away with that.
54:09You and I are going to do our best to help.
54:11And we're going to keep pointing out
54:13everything that we know to be truth or fake.
54:16And hopefully we can also be part of the solution.
54:20Hari Sreenivasan, thank you so much indeed.
54:24So finally, tonight, to sum up, we've spent this last hour trying to dig
54:29into what we know so far, trying to talk about the challenges and the opportunities.
54:34We know that artificial intelligence brings
54:36with it great uncertainty, as well as the promise of huge opportunities.
54:40For instance, as we discussed earlier, access to education everywhere,
54:43more precise, lifesaving health care and making work life easier only for some
54:48by eliminating mundane tasks.
54:51But like the hard lessons learned from the invention of the atomic bomb
54:55to social media, the question remains, can humankind control and learn to live
55:00with the unintended consequences of such powerful technologies?
55:04Will AI creators take responsibility for their creation?
55:08And will we use our own autonomy and our own agency?
55:13And that's it for our program tonight.
55:15If you want to find out what's coming up
55:17on the show every night, sign up for our newsletter at PBS.org slash Amanpour.
55:22Thank you for watching and goodbye from London.
