To celebrate his new book Mastering AI, Fortune AI editor Jeremy Kahn chats with Editor-in-Chief Alyson Shontell about the promises and perils of artificial intelligence. Kahn offers dramatic predictions of AI’s impact over the next decade, from reshaping our economy and the way we work, learn, and create to unknitting our social fabric, jeopardizing our democracy, and fundamentally altering the way we think.

Transcript
00:00We've already allowed a lot of these things to affect us and affect our brains.
00:06Literally, we're addicted to our phones.
00:09And I think with chatbots, because they're so much more immersive,
00:12there's a real risk that we will become even more addicted to these things,
00:16be more socially isolated, actually.
00:18If you talk about people using AI as kind of companion chatbots,
00:22which is already happening, I think there's a real danger there
00:25that people will prefer this to the messy human interactions
00:28that we have to have.
00:30Your spouse or something may challenge you, you may get in an argument with them.
00:33Your chatbot probably won't do that.
00:35And so it may be a lot more comfortable for people to say,
00:37oh, I'm just going to talk to my chatbot.
00:39It's a lot easier than talking to my wife or something.
00:41So, lots to unpack in the book.
00:44It's very full of, like, where AI is now, where it's going,
00:48your predictions for a lot of things, so I want to unpack a lot of that.
00:51But first, I just wanted to get your big take.
00:53Where are we in the AI cycle at this moment?
00:56How long until we actually feel its full transformation in our daily lives,
00:59for the average person?
01:01Yeah, and this was not one of the questions I prepared for, Alyson.
01:04No, so I think, obviously, there's quite a lot of hype around AI at the moment,
01:10but it is, you know, very early days.
01:13Someone asked me, if it were a baseball game, what inning it would be,
01:16and I sort of think it's, you know, maybe the bottom of the first
01:18or the top of the second.
01:20And, you know, Gartner has this famous hype cycle
01:23where you have to kind of go up and there's a peak,
01:25and then there's this trough of disillusionment,
01:27and then there's the actual adoption of the technology.
01:30I think we're still very much, you know, climbing that peak.
01:33So there might be a little bit of a trough of disillusionment coming,
01:37but I do think this technology will ultimately be very transformative,
01:40and if we look back, I think, even just in five years' time,
01:44we'll be quite amazed by the effects that it has had
01:47and the impacts it's going to have.
01:49Can you just help us wrap our heads around, like,
01:51why is this such a big deal?
01:53We've seen ChatGPT.
01:55We know it can make us more efficient.
01:57Why is there such hype?
01:59Why is this, like, the best thing since electricity,
02:02as some people are saying?
02:04Yeah.
02:05No, I do think it's one of these technologies
02:07that has an amazing breadth.
02:09It's going to have impacts across industry sectors.
02:13I think there's a lot of other technologies
02:15where the impact was relatively narrow.
02:17It was sort of within one sector or one particular kind of job,
02:20and I think the difference with AI
02:22is it affects sort of all professions
02:24and all industry sectors simultaneously,
02:27and I think it has both impacts on us personally
02:30and on our professional lives
02:32and on the organization of companies and the economy,
02:34so the impacts are very broad.
02:36Then the other thing is that they're happening very quickly.
02:38I think the scale of adoption
02:40is much faster than you have with other technologies.
02:43ChatGPT, which kicked off most public awareness
02:46of what's going on with AI 18 months ago,
02:49hit 100 million users
02:54within, I think it was, two months.
02:57I mean, it was the fastest-growing consumer technology to date.
03:01So, I mean, I think those two things,
03:03the breadth and the speed, are what makes this different.
03:07So with AI comes great opportunity and also great risk.
03:11You actually wrote that we face the prospect
03:14of potentially not being the preeminent intelligence
03:16on the planet for much longer.
03:18Is this really a good idea?
03:20Have we really thought this all through?
03:22Yeah, no, I mean, this is a technology with tremendous risk,
03:24but that is, I think, why people are both fascinated by it
03:27and why we need to be so careful with it,
03:29because it does have the potential to, you know,
03:32for the first time to have something that challenges
03:35what makes us unique as a species,
03:37which is really our intelligence.
03:39I think most other innovations that have come along
03:41have not done that.
03:42I talk about it in the book that, you know,
03:44when we had the automobile come along
03:46and steam power come along,
03:48that challenged our physical ability,
03:50sort of our brawn.
03:52But we never really defined ourselves as a species
03:54based on brawn; there were always animals
03:56that were stronger than us.
03:57We had domesticated oxen for that purpose years ago,
03:59or elephants, horses.
04:02But, you know, this is different,
04:03because this challenges the thing
04:04that really makes us unique as a species
04:06and really makes us think about,
04:07well, what is human intelligence?
04:09And, you know, how do we measure that?
04:11And, you know, these are somewhat unresolved issues.
04:15And then if we have a technology
04:16that exceeds that ability, what does that really mean?
04:18What does it do for our sort of place in the world?
04:21So I guess one question, too,
04:23is what is the difference between
04:24artificial intelligence and human intelligence?
04:27Like, how are we separate
04:28from this thing we're building?
04:30Yeah, it's a good question.
04:32Human intelligence is not super well defined,
04:34even by people who study it,
04:35which is not helpful for these issues.
04:39But artificial intelligence is basically
04:41any system that tries to mimic aspects
04:43of human intelligence.
04:44So it's always sort of defined against us
04:47and in particular tries to mimic
04:49our ability to learn and our ability to reason.
04:52So it broadly defines all those technologies
04:55that people have tried to mimic
04:57aspects of human thinking.
04:59I think as these systems have been developed,
05:01the AI systems we have out now,
05:03what's very interesting about them
05:04is they're not very human-like in some ways.
05:06They've been trained on all this human-generated data,
05:09but then in some ways they behave
05:12in ways that seem to mimic us,
05:13but in other ways they can do things that we can't.
05:15I mean, even the ability to recall
05:18vast amounts of information
05:19across huge ranges of subjects,
05:21most people can't actually do that.
05:24And these systems can,
05:26with some reliability,
05:28but not 100% reliability,
05:30answer questions across all these domains:
05:32one minute you can ask it
05:33a really hard math question,
05:34you can ask it to tell you a joke,
05:36you can ask it to summarize your meeting
05:38in a haiku in French,
05:41and it'll do all that.
05:42And most people can't do that.
05:45And so I think that's why it's very different than us.
05:48I think we're gonna have to get used to
05:49sort of where it excels
05:52and where it has weaknesses.
05:55So I'm curious,
05:56we can use it for a lot of things
05:58to make us more efficient.
06:00What is the risk to our own cognition,
06:02to our own brains?
06:04Could our brains be rewired by AI?
06:06Will it make us super lazy?
06:08What is this gonna do to our brains
06:09that are already tripping out
06:10from the social media and phone era?
06:13Yeah, no, I think that's really an urgent one.
06:15This is weird
06:16because we're sort of in the round here,
06:18so I feel like I have to
06:19keep looking over my shoulder.
06:21No, I think one of the really urgent issues
06:23that I try to address in the book,
06:25and one of the dangers
06:26that does not get enough attention
06:28that this technology poses,
06:29is that if we rely on it too much,
06:32I do think it will have all these detrimental impacts
06:35on our own human intelligence
06:36and our own human cognition.
06:38There'll be a tendency to defer
06:39to the judgment of AI
06:41and we will lose, I think,
06:43the ability to sort of think critically.
06:46I think there's a really dangerous sort of framing
06:49that happens with the use
06:50of particularly generative AI now,
06:52which is the idea that
06:53thought and writing are separable,
06:55that you can get the thing to compose
06:58your thoughts in any format you want,
07:01it will draft something for you,
07:02you just have to give it some bullet points.
07:04And I think that's a sort of dangerous idea
07:07and I think writing and thinking
07:08are not separable activities.
07:10And I worry that with this technology,
07:14more and more people will come to see them
07:15as separable activities
07:16and will actually lose their writing ability.
07:18And I think when you lose your writing ability,
07:19you actually lose your thinking ability to some extent.
07:22So I think that's another risk
07:23that has not really been recognized.
07:26Again, memory is another one.
07:28We have a lot of technologies
07:29that pose somewhat of a risk to memory.
07:31Everybody now Googles everything,
07:33nobody remembers any facts,
07:35nobody can remember any phone numbers
07:36because they're all in our phones.
07:38And I think AI just exacerbates
07:40or accelerates some of those trends.
07:42As you said, like with social media,
07:43we've already allowed a lot of these things
07:46to affect us and affect our brains.
07:49Literally, we're addicted to our phones.
07:52And I think with chatbots,
07:53because they're so much more immersive,
07:55there's a real risk that we will become
07:57even more addicted to these things,
07:59be more socially isolated actually.
08:01If you talk about people using AI
08:03as kind of companion chatbots,
08:05which is already happening,
08:06I think there's a real danger there
08:08that people will prefer this
08:09to the messy human interactions that we have to have.
08:12Your spouse or something may challenge you,
08:15you may get in an argument with them.
08:17Your chatbot probably won't do that.
08:18And so it may be a lot more comfortable for people to say,
08:20oh, I'm just gonna talk to my chatbot.
08:22It's a lot easier than talking to my wife or something.
08:24Well, yeah, I mean, I think our physical relationships
08:29with each other are probably
08:32at serious risk of deteriorating.
08:34We can talk about the risks a lot,
08:35but clearly we're doing this for a reason, right?
08:37People are building something
08:38because they're excited about it.
08:39And actually, in your writing,
08:41I was a little bit surprised.
08:42You're pretty optimistic.
08:43You acknowledge there are some ways
08:45this could definitely go wrong,
08:46we need to be careful,
08:47but there's an optimism underlying all of this.
08:49So what are you most optimistic about that this will do?
08:53Yeah, so I am optimistic, I think.
08:55I mean, I wrote the book thinking
08:57that there are these dangers,
08:58but if we can take some sensible steps
09:00to avoid those dangers,
09:01I think there are these huge opportunities we can seize.
09:03I mean, the subtitle of the book
09:05is A Survival Guide to Our Superpowered Future.
09:08And I really do think this will give us superpowers
09:10if we take the right steps
09:12to kind of design the technology very deliberately
09:14and that we have some sensible regulation
09:16around the technology.
09:17And we can talk a little bit more about that in a sec.
09:19But I mean, among the things
09:21where I think we'll have the greatest positive impact,
09:24one is an area where people were very scared
09:26when ChatGPT came out, which is education.
09:28A lot of educators and teachers
09:30were very freaked out by this technology,
09:32that students were just gonna use it to cheat,
09:34that no one would learn anything anymore.
09:36I actually think in the long run
09:38we're gonna see this as a great boon to education,
09:40it can potentially give everyone a personal tutor,
09:43which is an incredibly powerful thing.
09:45And I think we're not gonna get rid of human teachers,
09:48but it will allow human teachers
09:49to concentrate their efforts
09:51and assist students individually.
09:54And you can have kind of the AI tutor
09:57as this great teaching assistant,
09:58and it can provide feedback to the teacher saying,
10:00oh, this particular student is struggling
10:03to learn this concept.
10:05So when you have that one-on-one human intervention with them,
10:08you should focus on that as a teacher.
10:10And it allows you to kind of pinpoint those interactions.
10:13It also would allow the student,
10:15when they're not in the classroom,
10:16to learn a lot more.
10:17It'll allow people to become lifelong learners.
10:19So I think there's a lot of pluses for education.
10:22The other big one, I think, is in the sciences.
10:25We're already seeing this technology
10:27have a tremendously positive impact on drug discovery.
10:30I think in the future,
10:31when coupled with things like wearable devices
10:33that can give you a lot more data,
10:35there's gonna be a lot of improvements
10:37in personalized medicine.
10:39So really transformative effects there,
10:41I think, that are positive.
10:43And there's one area
10:44where I'm actually, in the near term,
10:45super pessimistic about the technology,
10:47which is what it's gonna do to democracy.
10:49But there are paths you can envision
10:51where actually this could be
10:52a very democratically enhancing technology.
10:55One, you know, there's been some interesting studies.
10:57Cornell University's done a few on chatbots
11:00and their persuasive power.
11:02And that's sort of a scary thing
11:04because, if used incorrectly,
11:07there's a chance that these chatbots,
11:10AI chatbots, could have a tremendous influence over us
11:12and what we think.
11:13And that's, you know, who wields that power
11:15and for what purpose, I think, are very live questions.
11:18But, you know,
11:19Cornell looked at people
11:20who believed in conspiracy theories
11:22and had them interact with a chatbot.
11:24And these were actually not that long interactions,
11:26like about an hour of dialogue with a chatbot.
11:28The chatbot was told in its initial prompts
11:32that it should try to dissuade these people
11:34from these conspiracy theories.
11:35And it found that actually it had a noticeable effect.
11:39People were less confident
11:40in their belief in these conspiracies
11:42after just an hour interaction with a chatbot.
11:44And that effect lasted.
11:46So they went back four months later
11:48and they interviewed people again
11:49and they had drifted away
11:51from their belief in conspiracy theories over that period
11:54from this one interaction with the chatbot.
11:56And that was more powerful
11:57than the human controls they had run,
11:59even people who were experts,
12:00who had been trained to sort of deprogram people
12:02who believed in conspiracies.
12:03The chatbot was much more powerful.
12:05So again, that's a place where if used correctly,
12:08it could actually sort of move us away
12:10from the political polarization
12:11that we're experiencing at the moment.
12:13And I'm kind of hopeful about that.
12:15But we have to kind of get there first.
12:16And there's all these other
12:17sort of negative political effects potentially on democracy,
12:20which I do talk about in the book as well.
12:22But I think in the long term,
12:24I think there's a potential
12:25that it could be very democratically enhancing.
12:29So there's risks in how this is built.
12:31And in a lot of the previous cases,
12:33it feels like tech has just happened to us
12:35as opposed to like us really controlling the outcome
12:37of what the tech looks like.
12:39You have met every major leader on this.
12:42You just spent some time with Satya Nadella of Microsoft.
12:45You've met with Sam Altman of OpenAI.
12:48You've gotten to know some of these leaders,
12:50their teams, what they're building.
12:51You've been following it from the earliest days.
12:54Do you trust them?
12:56No, and none of us should.
12:59I mean, and I don't think it's that they're bad people,
13:02but they run companies and companies have profit motives.
13:05And I think sometimes those don't align
13:07with what we really want as a society.
13:10And I think we need some regulation here.
13:12And I think there has to be regulatory agencies
13:14that have the power to look over the shoulder
13:16of tech companies and see what they're building.
13:18And I think we need some safeguards around that.
13:21I think we need even some safeguards
13:23around what business models we allow these companies to use.
13:26I think one of the biggest problems we saw
13:28with social media was you had business models
13:30built around engagement and advertising.
13:33And actually that was a little bit problematic
13:35because as it turned out,
13:37the most polarizing extreme content
13:39tended to drive engagement more.
13:41And I think with sort of AI chatbots
13:44and AI personal assistants,
13:45there's also going to be this danger
13:47that if we have business models built around engagement,
13:50again, you could have very addictive companion chatbots.
13:53And then what does that do to human connection
13:55and social isolation? And then there's influence.
13:59Again, if you have advertisers
14:01that pay the chatbot company to say,
14:04whenever someone asks you,
14:06I was using the example of shoes, I don't know why,
14:08but what pair of running shoes should I buy?
14:10If you have Nike paying the chatbot maker to say,
14:13recommend always Nike shoes,
14:15that's not a situation you want.
14:17And I think you want a government regulator
14:18to be able to step in and say,
14:20actually, that's not a fair response.
14:24Chatbots are too persuasive.
14:26And I think the FTC could play that role in the US.
14:28So I think we need sensible regulation.
14:31And I don't think we should simply trust
14:32that these companies are going to do the right thing
14:34out of the kindness of their heart
14:35or the goodness of their souls.
14:37And to be fair, some of them have been asking
14:39for regulation, and I think they are aware.
14:42They have been, although it's really interesting.
14:44They say that, and then if you look at every time
14:46so far regulation's been proposed,
14:48they line up to oppose the regulation.
14:50That's happening in California right now
14:52with Senate Bill 1047.
14:55It has happened with the EU AI Act.
14:58So it's happening a little bit
14:59with what the Copyright Office is doing around AI.
15:02So yeah, again, they talk a good game
15:04about please, please regulate us,
15:06and then when something happens, they're like,
15:07oh, but not like that.
15:09That's not what I meant when I said please regulate us.
15:11Fair enough.
15:12So I'm about to open it up to questions,
15:13so be thinking of your best AI questions for Jeremy.
15:16But one last one before I toss it to the audience.
15:19You have kids.
15:20They are teenagers.
15:22How are you preparing them
15:23for what they're going to walk into
15:25when they graduate college?
15:26What is their future looking like?
15:28What are their job prospects going to look like?
15:30How are you preparing your kids?
15:31Yeah, that's a great question.
15:34I wish I was preparing them better.
15:36No, I think it's a really good question,
15:38and it's a really hard question to answer,
15:39but I think what we really need
15:41is people who are lifelong learners,
15:43who are flexible and resilient,
15:45because I think no matter what happens,
15:47there's going to be a tremendous amount
15:48of change and disruption.
15:50I think there's going to be plenty of jobs.
15:51One of the things I say in the book
15:52is that the idea of mass unemployment
15:54is, I think, a complete red herring.
15:56There's not going to be mass unemployment.
15:58And in fact, I think there'll be plenty of jobs
16:00to go around,
16:01and I think there'll be interesting jobs, again,
16:03if we design this technology sort of deliberately
16:05and carefully.
16:06But those jobs are going to change
16:08from what they are today,
16:09and they may change quite a bit
16:10over the course of your career.
16:11So I think you have to really have students
16:14and employees who are flexible,
16:17who can change,
16:18who assume their job is going to change
16:21over the course of even a few years.
16:23I think you need people who are willing
16:25to experiment, who are curious,
16:27all of these sort of qualities.
16:29And yes, I do think, right now,
16:32the main interaction with these chatbots
16:34is around prompting.
16:36And that's a writing skill, actually.
16:39So being able to express yourself clearly in text
16:41actually matters.
16:42So I'd say that's something you want:
16:44your kids should still be able to write,
16:46even though with these things, like I say,
16:47there's this tendency to think,
16:48oh, they're going to take over all the writing.
16:50Nobody's going to have to write.
16:51But actually, how you interact with them
16:53and what you tell them to do
16:54and communicating to the AI system
16:56clearly what you want it to do
16:58and coming up with sort of clever ways
17:00of asking it to do things
17:02is actually really important to the outcome.
17:03So I think those things will matter.
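For the technically curious, the prompting-as-writing point is easy to see in code. A minimal sketch, assuming OpenAI's Python client; the model name and both prompts are illustrative only, and any chat API would behave similarly:

```python
# A minimal sketch of why prompt wording matters, assuming OpenAI's Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and prompts are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

vague = "Summarize my meeting."
precise = (
    "Summarize the meeting notes below in three bullet points, "
    "each under 15 words, focused only on decisions and action items.\n\n"
    "NOTES: ..."  # paste real notes here
)

for prompt in (vague, precise):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for this comparison
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```

The two answers differ dramatically, which is the point: telling the system clearly, in text, what you want is the writing skill being exercised.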
17:05Like I said, I think AI will be a huge boon to science.
17:09So there's still plenty of jobs
17:10for people in the STEM professions and science.
17:13And when it comes to actually delivering healthcare,
17:16I think that still matters.
17:17So we're still going to need doctors and nurses
17:18and our pets will still need care.
17:20So there's lots of things for people to do.
17:22I just think we want people
17:24who are going to be lifelong learners
17:25and who are going to be very resilient.
17:27Excellent. Thank you.
17:28Well, I want to open it up.
17:29Does anyone have a burning question?
17:31Just raise your hand.
17:32In the back. Paolo.
17:34Yeah. And I'll repeat it on the mic.
17:37We're going to get some mics.
17:38So the question, let's just repeat it,
17:41is about copyright.
17:43These LLMs, these large language models,
17:45have been trained,
17:46they feed on all this data and information
17:48that the internet has provided.
17:49However, they don't always ask for permission.
17:51And I've actually heard this era likened
17:52to the Napster era
17:53where it's like scrape, scrape, scrape,
17:54don't compensate.
17:55So will we ever get compensated
17:57for the things we create?
17:58Going forward, yes.
18:00And one of the arguments I make in the book
18:01is that the era of scraping is over.
18:04And it's over not necessarily
18:05because the courts are going to say it's over,
18:07but it's ending anyway
18:08for a variety of technical reasons
18:11and also some consumer pressure, actually.
18:14I mean, there's been a big backlash.
18:15I think the Scarlett Johansson incident
18:18with OpenAI's GPT-4o model recently
18:22where they, I mean, they may not have,
18:24but the appearance
18:26that they may have copied her voice
18:28without her permission
18:29was sort of shocking to people.
18:31And I think there's been
18:32such a big backlash over this
18:33that a lot of companies
18:35just don't want to go near that prospect
18:37and they actually want to know
18:39that their systems that they're using
18:40have been trained with consent
18:43kind of at the heart of that process.
18:45So that's one thing.
18:46But more technically,
18:47a lot of artists
18:49have started to mask their work online
18:51with software programs
18:53that are also kind of
18:54a type of machine learning
18:55that throw off these models.
18:58So when the models ingest that material,
18:59they cannot properly re-identify it.
19:01So you can't prompt with that artist's name
19:03and actually get something
19:04that looks like that artist's work.
19:06But in some cases,
19:07they actually poison the entire model
19:08so the whole model stops working.
19:10So that has, I think,
19:11led the tech companies
19:14to start thinking about
19:15the fact that they're gonna have to
19:16pay people to license this data.
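A rough intuition for how such masking works: the artist-protection tools are specialized research systems, but the underlying family of techniques is adversarial perturbation, nudging pixels so a model misreads an image while a human barely notices. A generic sketch of that idea only, assuming PyTorch and torchvision, with a pretrained classifier standing in for the model being thrown off:

```python
# A generic sketch of adversarial perturbation (the idea behind masking
# artwork), not the actual artist-protection tools, which are far more
# sophisticated. Assumes PyTorch and torchvision are installed.
import torch
import torch.nn.functional as F
import torchvision.models as models

# A pretrained classifier stands in for the model being "thrown off".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for artwork
label = torch.tensor([207])  # hypothetical "correct" ImageNet class

# Gradient of the model's loss with respect to the pixels...
loss = F.cross_entropy(model(image), label)
loss.backward()

# ...then one small step *up* the loss surface (the textbook FGSM move).
# Epsilon bounds how visible the change is to a human viewer.
epsilon = 0.01
masked = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("class before:", model(image.detach()).argmax().item())
    print("class after: ", model(masked).argmax().item())
```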
19:19We're also running out
19:20of public data to scrape.
19:22That's the other thing that's happened.
19:24And there's huge private data sets.
19:26There's much more private data
19:27than there is public data in the world.
19:29But to get that private data,
19:30you have to pay for it.
19:31So I think the tech companies are learning
19:33that they're going to have to license
19:34this material going forward.
19:36The second part of your question
19:37is more interesting:
19:38what about everything
19:39that's been scraped already?
19:40Are people gonna be compensated for that?
19:41I'm less confident of the answer there.
19:44I think we're moving to an era
19:45of licensing for any data going forward.
19:47I think the data that's already been scraped,
19:49it's going to depend on these court cases.
19:51And I think it's very hard
19:53to figure out exactly
19:54how they're gonna be decided.
19:57I talk to a lot of legal experts around this.
20:00And there's quite a division of opinion.
20:03But I'd say the slight weighting of opinion
20:05is towards the idea
20:06that the courts may very well say
20:08that simply training a model
20:10on scraped data
20:11is a kind of fair use.
20:14That it is okay to create a copy
20:16of all of this data off the internet
20:18to train the model.
20:20But the key is that the outputs
20:22can't be sort of plagiaristic.
20:24You cannot have infringing outputs.
20:25So I think the courts may say
20:26that the ingestion of the data
20:28for training is fair use,
20:29but you still can't have an output
20:31that exactly copies
20:32or is substantially similar
20:34to a piece of copyrighted material.
20:37And you'll still have issues
20:38where you can sue over that.
20:40So I think it's gonna be interesting
20:41how that plays out.
20:42Definitely.
20:43And one we will be watching closely
20:44for Fortune as well.
20:46Any final questions?
20:47Okay, one here.
20:49Yeah, how does this affect journalists
20:51and journalism worldwide?
20:54It's connected to data,
20:55but also to the public information
20:56and finding facts.
20:58Will they be behind paywalls
20:59more and more?
21:00And how will AI
21:01be a tool for journalists?
21:03I feel there's plenty of journalists
21:05in this room.
21:06So that's one.
21:07And the other one is,
21:08what's your take on
21:09the Chinese room argument?
21:10Yeah, so those are
21:11very different questions.
21:12But the Chinese room one,
21:14I'll take that first quickly
21:15and then do the journalism one.
21:17Yeah, so there's this famous
21:18thought experiment called
21:19Searle's Chinese Room.
21:20And it's the idea of, like,
21:21if there's a person in a room
21:23who just has
21:24a Chinese dictionary
21:25and doesn't actually
21:26speak Chinese,
21:27and you hand them a note
21:28through a slot in the door
21:30which has
21:31something written on it,
21:32and they have to look it up
21:33in the dictionary
21:34and then provide the translation
21:35in a language they don't speak
21:36and put it back out,
21:37would you know whether
21:38the person actually understood
21:39the language?
21:40Would it be okay to say
21:41that they understood the language
21:42because they could do
21:43this translation?
21:44And is that any different
21:45than what these large
21:46language models are doing
21:47when they're just predicting
21:48the next word
21:49with no real understanding?
21:50I think it is actually different.
21:52It is. We don't really
21:56understand very much
21:57about how these large
21:58language models work,
21:59but from what little
22:00we do understand so far,
22:01I think we're starting
22:02to get the sense
22:03that it's a little bit beyond
22:04just Searle's Chinese Room,
22:05that they actually do form
22:07kind of conceptual patterns.
22:09They don't have anything
22:11like a human-level understanding,
22:12at least at the moment,
22:13of anything they're doing,
22:14but they seem to group concepts
22:16together in interesting ways
22:18within those
22:19artificial neural networks,
22:20forming groupings which somewhat resemble
22:22human concepts.
22:23So I think there's
22:24something going on here
22:25that is beyond simply what
22:27some people call
22:28stochastic parrots.
22:29I mean, I think
22:30these are not simply
22:31stochastic parrots
22:32or Searle's Chinese Room;
22:33there's something
22:34slightly beyond that
22:35going on,
22:36but it's not quite
22:37human-level
22:38understanding,
22:39and their concepts
22:40are not exactly aligned
22:41with human concepts
22:42and I think we're going
22:43to need a lot more research
22:44on exactly what's happening
22:45inside these networks
22:46to sort of untangle that.
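To make "just predicting the next word" concrete: a real LLM scores candidates with billions of learned parameters, but the generation loop itself has the shape of this deliberately tiny bigram sketch (a toy illustration, not how any production model is built).

```python
# A toy sketch of next-word prediction. Real LLMs score candidates with a
# huge learned network; this bigram table only shows the shape of the loop:
# score possible next words, pick one, append, repeat.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word and the next word "
          "follows from the words that came before it").split()

# Count word -> next-word frequencies (a stand-in for a trained model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(seed, length=8):
    """Greedy decoding: always take the most frequent continuation."""
    out = [seed]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Whether such a loop, scaled up enormously, amounts to understanding is exactly the question the Chinese Room raises.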
22:48The journalism question.
22:49Yeah there's huge,
22:50huge potential impact
22:51on journalism.
22:52I think going forward
22:54there's huge potential
22:55to use this as a tool
22:56but as you said
22:57yeah,
22:58all this information
22:59is being scraped
23:00and put into these systems
23:01and ultimately maybe
23:02depriving publications
23:04of the flow of traffic
23:06on which they depend.
23:07I think that is a huge risk
23:09to journalism going forward
23:11and that yes
23:13more and more publications
23:14will put their stuff
23:15behind paywalls
23:16and I think we'll try
23:17to build kind of
23:18direct relationships
23:19with readers.
23:20I think they will also
23:21increasingly license
23:22content to the chatbot
23:23providers to make sure
23:24that they're being compensated
23:25for any responses
23:27that a chatbot gives
23:28that are based on content
23:30from their publication
23:31and based on their reporting.
23:32So that will become
23:33a revenue stream
23:34for publications
23:36but I do worry about
23:38what that does
23:39to business models
23:40of publications
23:41that are very dependent
23:42on search traffic
23:43and advertising revenue
23:44based on search traffic.
23:45I think this does
23:46challenge those
23:47business models
23:48and that's going to be
23:49an issue going forward
23:51for all of us.
23:52But then there's also
23:53the use of these
23:55to produce journalism.
23:56I think they can do things
23:57like write headlines.
23:58There's already
23:59some AI systems
24:00that are pretty good
24:01at writing kind of
24:02basic news stories
24:03but they always have
24:04to be fact-checked
24:05and there is this issue
24:06around hallucination
24:07at the moment.
24:08I think these tools
24:09will get better and better
24:10at doing some aspects
24:11of journalism:
24:12composing headlines,
24:14composing basic news stories.
24:15But they will not come up
24:16with the best sources
24:17to interview
24:18or go out
24:20and do investigative work.
24:21They're not going to do
24:23the really great enterprise stories
24:25that human journalists do.
24:27So again, I think
24:28at the sort of
24:29higher-value-added
24:30end of the spectrum,
24:32there's plenty of room
24:33still for human journalists
24:34and these are just going to
24:35kind of be tools
24:36that we use
24:37in our day-to-day work
24:38and not something
24:39that replaces
24:40the need for human reporters.
24:42So one last question
24:44over here in the front.
24:46Hi.
24:47How is it that we can
24:49not allow
24:52the devices
24:54to be smarter than we are
24:56and have their own
24:58internet?
25:00Yeah.
25:01I mean this is sort of
25:02the existential risk question
25:03that comes up around AI
25:04and I do address that
25:05in one of the book's
25:06last chapters
25:07and first of all
25:09I think we can all be
25:10somewhat assured
25:11that these scenarios
25:12where we have
25:13kind of what's called
25:14artificial superintelligence,
25:15which is something
25:16that would be
25:17sort of smarter
25:18than all of humanity combined.
25:19That's a ways off.
25:20I think it's not coming
25:21any time particularly soon
25:23and the idea of
25:24artificial general intelligence
25:26which is this other term
25:27AGI which people talk about
25:28which would be
25:29an AI system
25:30that's as smart
25:31as a human
25:32but not necessarily
25:33all of us combined.
25:34Those systems
25:35may be somewhat closer
25:36I mean perhaps
25:37we'll get there
25:38in the next decade
25:39but I'm optimistic
25:42that we can come up
25:43with ways to sort of
25:44control some of these systems
25:45but it's why I think
25:46we need regulation
25:47and we need kind of
25:48regulators looking
25:49over the shoulder
25:50of these companies
25:51and policing what they're doing.
25:53I think we still need
25:54quite a lot of research
25:55on the superintelligence question,
25:56like how would we
25:57control something
25:58that was actually smarter
25:59than all of humanity combined.
26:00That's a hard thing
26:01to think about controlling
26:02and I think we should not
26:03have a system built
26:04until we've kind of
26:05figured out an answer
26:06to that question
26:07and the only way
26:08to kind of prevent
26:09a system like that
26:10from being built
26:11is by looking
26:12at what's being done
26:13and to put some limits
26:14on what can be built
26:15and I think
26:16it would be sensible
26:17to do that.
26:18I don't think
26:19the risk is huge
26:20but I feel like
26:21it's not nonexistent either
26:22and therefore
26:23it would be prudent of us
26:24to take some steps
26:25and invest some money
26:26and some time
26:27and have some smart people
26:28thinking about
26:29how we could potentially
26:30do this
26:31and how we could
26:32sort of take that risk
26:33off the table
26:34but I don't want that
26:35to crowd out
26:36dealing with a lot
26:37of the near term risks
26:38that are here today.
26:39Thank you Jeremy
26:40for sharing your insights.
26:41You can meet
26:42the great Jeremy
26:43at his book signing
26:44just around the corner
26:45of the bleachers
26:46but thank you so much
26:47for giving us the floor.
26:48Thank you, Alyson,
26:49and thank you all for coming.
