Transcript
00:00Who knew?
00:13For me, it was during the tedium of a 1986 summer job in the Paris suburb of Boulogne-Billancourt
00:19as I punched citizens' names into the city school board's first-ever computer network.
00:25Who knew that a lowly data entry gig made me an unwitting foot soldier in the transformation
00:32of capitalism and perhaps humanity?
00:35From two fingers on a keyboard, fast forward to the present, in the age of terabytes and
00:39artificial intelligence.
00:40As France and India co-host an AI summit in Paris,
00:46we ask if it's spun so far beyond our control that any well-intentioned bid by world leaders
00:52to forge common rules and guidelines seems, well, too little too late.
00:56Right now, it's Silicon Valley that's got the money to draw the best and the brightest
01:01and the cheap energy to power the bigger and bigger data centers needed for AI.
01:07Add to that a new US administration that's ready to start a trade war with all those
01:11in its path.
01:12And how does the rest of the world defend privacy, shared natural resources, including
01:17strategic minerals in places like China and Africa?
01:21And what to do about a digital age that so far seems to concentrate so much power and
01:26wealth into the hands of so few?
01:28More broadly, is the technology evolving faster than humanity can process its potentials and
01:34its dangers?
01:35We'll be asking our panel today in the France 24 debate, the global challenges of artificial
01:41intelligence.
01:42And joining us from the AI Action Summit, the host of our Tech 24 program, Peter O'Brien.
01:50It's day one.
01:51Thanks for being with us.
01:52You're welcome.
01:53He's a mathematician by training, who's advised President Macron.
01:59Thierry Coulhon is president of the Institut Polytechnique de Paris.
02:04You hosted the, I guess you could call it the pre-summit last week, which brought together
02:09some heavy thinkers already.
02:11Thanks for joining us.
02:13Bangladeshi-American data scientist Rumman Chowdhury is the CEO and co-founder of Humane
02:19Intelligence.
02:20Tell us what Humane Intelligence is.
02:22We are a tech nonprofit that's building the community of practice to understand the impact
02:26of AI on society.
02:28All right.
02:29Thanks for joining us.
02:30Thanks as well to a man who's been covering AI for Fortune magazine since, well, before
02:33it became a thing.
02:34Jeremy Kahn, the author of Mastering AI, A Survival Guide to Our Superpowered Future.
02:40Welcome to the show.
02:41Thanks for having me.
02:42By the way, you can listen, like, and subscribe to the France 24 debate wherever podcasts
02:48are streamed.
02:50Rumman, what does AI mean for you?
02:54Artificial intelligence has been compared to many things, electricity, oil, so many
02:59things.
03:00I think just for the average person, artificial intelligence represents the next wave of technological
03:05evolution, things that will impact every aspect of our lives.
03:09And in fact, AI is really used in many things today.
03:11We just don't see it.
03:13So for the regular person, AI may change how you live, work, and play.
03:17How you live, work, and play.
03:20So it changes us as humans?
03:21I don't know if it changes us as humans.
03:23I actually think the human condition has been fairly universal across time.
03:27This is why Shakespeare holds so much meaning for us, even though here we are in a fancy
03:31studio with electricity, and that's something unfathomable in his time.
03:35I don't think it changes humans at all.
03:37All right.
03:38It doesn't change humans, but there are these concerns, you know, this worry, Thierry
03:44Coulhon. A lot of the talk, and there's been plenty of media coverage in this country
03:49over the last 72 hours ahead of this summit, a lot of the talk has been about explaining what
03:54artificial intelligence is.
03:57And should we be reassuring people or should they worry?
04:02Well, first we've got to understand things before we worry or we are too optimistic.
04:09So the people at Institut Polytechnique de Paris, they are among the people who built
04:15artificial intelligence.
04:17The Institut Polytechnique de Paris is made of six engineering schools that started, some
04:21of them in the 18th century, some of them in the 20th century.
04:25We are trying to turn them into a single institution adapted, relevant for the 21st century.
04:32But what they have always done is to keep pace with the scientific revolutions and to
04:38anticipate them.
04:39So we are strong in math and stats and computer science.
04:43So that's what we are training students to do.
04:48And I think we can be adapted to these exciting times.
04:52You're figuring out the how, but do you think about the why?
04:55Yes, indeed, because there are social impacts, indeed, and we also have social scientists.
05:03I guess it's too early to say, but it's not too early to think about the potential consequences
05:09and the way we can regulate things.
05:13Jeremy Kahn, we heard the French president who invited disruption in the speech he gave
05:19a short while ago.
05:24Disruption's become something of a dirty word for some when they look at this.
05:29Yeah, absolutely.
05:30I think people are worried about what this technology is going to do to society, to their
05:35jobs.
05:36People are very concerned about the potential impacts of this.
05:40And I think it's right to say it's a disruptive technology.
05:44I think it will change many things and be fairly transformative.
05:47I think there will be some very good changes, and I think there are some risks that we should
05:51be mindful of and try to take action now to potentially avoid or mitigate.
05:58It already feels as if the information age has begotten the disinformation age.
06:03A lot of the focus has been on this aspect of it among the general public that doesn't
06:09see what Rumman was telling you about, which is the applications that go beyond asking
06:14ChatGPT about a specific topic.
06:18Eliza Herbert has that story.
06:23Joe Biden and Donald Trump closer than ever.
06:26Or Emmanuel Macron embracing Marine Le Pen.
06:30As deepfakes grow more and more prolific, so do the abuses of artificial intelligence.
06:36In just a few clicks, an image can be falsified and virally spread across the internet.
06:42Chatbots too are adding to disinformation.
06:45According to experts, programs that impersonate real people can be a dangerous digital tool,
06:51especially when generating myths or rumours around election campaigns.
06:56Now, important questions that we should tackle: the future of the workforce, of course, and the
07:03impact on our democracy.
07:06Automation threatens to replace jobs like customer service, retail and many other professions.
07:13Generative AI can clone voices used in impersonation scams, and services such as ChatGPT capable
07:20of producing text, image and sounds directly endanger artists' livelihoods, copyright
07:27and the collective imagination.
07:30How can we fail to see that this is the Trojan horse for giving up the use of our most fundamental
07:36faculties?
07:37Another issue is on the battlefield.
07:40Militaries already use AI in warfare for mass surveillance and with autonomous drones.
07:47And recently, Google dropped its ban on using AI to develop weapons.
07:52It could also be used in much more dangerous and harmful ways.
07:59For example, kill anyone who fits the following description.
08:06Bias and discrimination, privacy and data leaks, and the environmental cost of the heavily
08:13energy dependent services are a few more of the risks that come with AI.
08:19Yeah, I guess, Peter O'Brien, we could have entitled this show, What Could Possibly Go
08:24Wrong?
08:25I suppose these kinds of worst case scenarios spelled out there, that's not what we heard
08:34from the French president earlier, but it's high on everybody's mind.
08:40Yeah, and I think one thing that went a little bit under the radar was that quickfire round
08:45that he did in his interview on Sunday night, where he was asked the question, is AI a threat
08:51to our democracies?
08:52And he hesitated, and he finally did answer yes.
08:55In fact, I read that speech he made on Sunday night as a classic Macron "en même temps" (at the same time), right?
09:02He's supporting the AI Act.
09:04He actually even called for global regulation of AI.
09:08But at the same time, clearly it was a massive effort to drum up investments.
09:13And that's absolutely absorbed this entire first day of the summit.
09:17This headline that he's drummed up, 109 billion euros, it is actually on a similar level to
09:25the kind of promise that we saw from Stargate in the US.
09:29But some that I've spoken to at the summit definitely feel like the other concerns around
09:34safety, around risk, have been sidelined while Macron kind of rolls back the years
09:39and plays the best hits of the start-up nation.
09:42Yeah, let me bring in Rumman on this.
09:45There are mixed messages, and we heard him earlier from the French president.
09:48On the one hand, he's saying, day two, we're going to talk about regulation.
09:52But it's day one.
09:53Come invest in France.
09:54Come invest in Europe.
09:55Right.
09:56And we're seeing that shift actually across the board.
09:57Even a lot of the people who called for safety and were concerned about existential risk
10:02today are saying we should innovate.
10:04I think there's two things being reflected here.
10:06First is there is immense commercial opportunity, and there's a lot of pressure from big and
10:11powerful companies for these governments to be aligned with them.
10:16On the other hand, I do think one of the reasons we don't talk about AI abolition is that even
10:21for those of us who work on societal impact, we do want to see a better vision where artificial
10:26intelligence is used to improve human lives.
10:28We want to see how it's improved the lives of humans.
10:35We're at the same time worried about who has control.
10:42And we heard this from the French president in that Sunday night speech that he gave to
10:52French television and to journalists who had come from India.
10:58Emmanuel Macron talking about, well, the dangers that lie ahead and talking about the
11:04United States in particular, raising concerns, he says, with its tech dominance.
11:15It affects everyone's life, yet it's controlled by only a few.
11:19And that's where the problem begins, when tools that billions rely on to learn, work,
11:25treat others, and shape their lives are in the hands of a small group, mostly private interests,
11:31without a global conversation about what's right, where the limits should be, and how
11:35to ensure access for all, then we're not doing our job properly.
11:43Now Peter O'Brien, the French president there, doesn't say the words the United States, but
11:49it's implicit.
11:54Everyone is looking forward, with trepidation I have to say among a lot of people here in Europe,
11:59to J.D. Vance's speech tomorrow.
12:02If anyone tells you they know what he's going to say, then don't bank on it.
12:07We know he's from Silicon Valley.
12:09We know he's criticized the EU a lot in the past for over-regulating.
12:14But that tends to be about what he calls free speech concerns.
12:19His speech comes slap-bang in the middle of Emmanuel Macron's tomorrow morning and
12:25Ursula von der Leyen's.
12:27So that's, if I can put it this way, sort of unusual bedfellows.
12:34But the truth is that America is a huge concern for a lot of people working in AI regulation,
12:41because he has ripped up Joe Biden's executive order putting safety guardrails around AI.
12:50He's fired the head of the U.S.'s AI Safety Institute.
12:53France, meanwhile, has just started its own AI Safety Institute.
12:56That's one of the things to come out of the summit.
12:59But it's looking increasingly like it's the U.S. against the world in AI, as well as many
13:03other things.
13:05In many other things.
13:06And you spoke with the vice president of the European Commission on this.
13:10Yeah, that's right.
13:12She's here at this event at the Maison de la Chimie, which is just, we trudged through
13:17a muddy construction site from the Grand Palais to get here and interview her.
13:22And first off, I asked her, is this event actually been a little bit too much about
13:27investment and a bit too much about the French?
13:31Of course, we have one common single market in the European Union.
13:35So we want to make sure that we have also that kind of framework in the European Union
13:39that encourages investments and innovations when it comes to AI.
13:44You know what JD Vance's comments are like on the EU.
13:48Is he going to come in and throw a grenade tomorrow in all of this?
13:52Of course, all the countries, they have their own rules when it comes, for example, to the digital
13:57sector and other sectors.
13:58And in the European Union, we are very committed to our rules.
14:03We want to make sure that we have fair and safe and also democratic environment when
14:07it comes to digitalization.
14:10Was Italy right to take action against DeepSeek?
14:15All the companies who are doing business and operating in the European Union, of course,
14:20have to respect our digital rules.
14:23And it also comes to data protection.
14:25So that is now something that is national authorities are doing and checking it, for
14:30example, with DeepSeek.
14:31Is it not problematic that we're getting very different approaches from different data protection
14:35authorities from within the EU?
14:38When we look at our digital rules, it would be better if we always have one approach at
14:45the European level.
14:46And we know that with the GDPR, we have different ways of implementation in different member
14:52states.
14:53And of course, it's never good if we have that kind of fragmentation in our markets.
15:01That's the EU's tech boss saying that actually the EU needs to take a common approach for
15:05AI, worried about things splintering apart further.
15:09GDPR, she mentions there, just to tell our viewers, that's the EU's General Data Protection
15:17Regulation, the data privacy rules put forth by Brussels that U.S. tech companies have to abide by.
15:24And as Peter O'Brien was saying, might be in the crosshairs of that speech we hear from
15:30JD Vance.
15:32Your thoughts on the U.S. vice president at this summit and what you're expecting from
15:37him?
15:38You know, I'm a mere citizen.
15:41As a citizen, I just think that Europe might find the proper balance between innovation
15:47and regulation.
15:48We have a model where the citizen is protected and we want to innovate.
15:52Because we don't want to be the referee in a football match between China and the United
15:55States.
15:56That's right.
15:57That's right.
15:58And the only way to do this is to be an actor.
15:59And we are, for once, we did not miss the train.
16:02I mean, we France and Europe, we are in the train.
16:06There is a lot of uncertainty.
16:08Who knows what will happen, but we are players.
16:13And I would like to advocate for two things we higher education institutions can do.
16:18First, education.
16:19I mean, we have to train people in AI.
16:22We have to train them to be critical and aware, and that will help.
16:28And the second thing is, let's speak a little bit of the potential upsides of AI and the
16:36potential positive consequences.
16:38Every scientific presentation I hear these days, whether in biology, chemistry, physics,
16:45uses AI with a power that is incredible.
16:50So when you have to test molecules and to test proteins and so on and so forth, I
16:56mean, the very scientific methods are changing for the better.
17:03And well, that's worth it, isn't it, when we speak of diagnosis of cancer?
17:10Jeremy Kahn, you heard there Thierry Coulhon alluding to the fact that France has its own
17:15AI champion.
17:16It's a company, one of its co-founders, an alum of Polytechnique.
17:26And it's called Mistral.
17:29And your thoughts?
17:32I mean, Mistral is a great, it's a good company.
17:34You know, they're good founders and they've put out some good models.
17:38I think the issue has been a question of scale.
17:43The model sizes that they have put out are not as big as what you've seen from sort of
17:46OpenAI or Anthropic.
17:48And I think there's this issue of what is it going to take to sort of get to the next
17:52level of capabilities?
17:54And what does it look like to get to the next level?
17:56For a while it looked like it's going to take these massive data centers, which is why OpenAI has done this partnership
18:01to build Stargate.
18:03It's not clear, it wasn't clear that Mistral would have that kind of capital to build something
18:07that big.
18:08Now there's some debate in the field, actually, about whether that's really necessary.
18:11There are these models that are coming out now that are smaller, that do very well.
18:15DeepSeek is a smaller model.
18:17This is China's version.
18:18Yeah, this is China's, this model from a Chinese startup.
18:21It's a very innovative model in a lot of ways.
18:25One of the things that's innovative about it is they showed that you could take a smaller
18:27model and get it to do these kind of reasoning steps that are very similar to these very
18:31big models.
18:32And so that's interesting.
18:34And it's possible that Mistral could borrow some of those innovations from DeepSeek.
18:38The whole world is sort of studying this model.
18:40It's an open source model, meaning anyone can download it, anyone can examine it.
18:43They publish quite a lot of information about how they engineered it.
18:47And the whole world can kind of learn from that.
18:48And I'm sure the guys at Mistral are looking at that and thinking, are there things here
18:52that we can borrow?
18:53So if something like DeepSeek is open source, does it need to be banned?
18:58Emmanuel Macron says he's against that for now.
19:02You heard the hesitation there in that clip that Peter O'Brien played us of the tech czar
19:07at the European Commission.
19:08Yeah, well, the model itself, you can take the entire model and run it on your own equipment.
19:13And then there's very little risk to privacy or security.
19:16The issue has been that DeepSeek also offers the chance to interact with the
19:20model through their own servers.
19:23So then you are talking to a system in China.
19:25And there, there has been concern about what was happening with that data.
19:29And does the Chinese government potentially have access to that data?
19:34And there's some relationship in that service with China Telecom, which DeepSeek has,
19:39you know, been very quiet about.
19:41It seems that there is perhaps some relationship there.
19:43So there are concerns if you're using the app that you would download on your phone,
19:48then you're talking to a server in China.
19:50There there's some concern.
19:51But if a company wants to take the model and run it on their own equipment, there's much
19:55less concern.
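A quick way to make Kahn's local-versus-hosted distinction concrete: with an open-weights model, the weights are downloaded once and run on your own hardware, so prompts never leave your machine. Here is a minimal sketch in Python, assuming the Hugging Face transformers library; the model id is illustrative, not an endorsement.

```python
# Local route: weights run on your own equipment, so prompts stay on your machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # illustrative open-weights model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is the AI Action Summit?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Hosted route (not shown): the same question sent to a vendor's API travels to
# the vendor's servers, which is where the privacy concerns discussed above arise.
```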
19:56Rumman Chowdhury, what's your religion when it comes to these, the nationality of these
20:01large language models?
20:03Right.
20:04Well, first in talking about open source, look, open source is not new.
20:06It was not invented with AI.
20:08Open source has existed for decades and actually has been net good for competition.
20:12We have to ask ourselves in a market like AI that is almost naturally a monopoly or
20:17an oligopoly because it is so capital intensive to build models.
20:22Don't we want to also spur innovation by allowing small, you know, smaller actors to thrive?
20:27As Jeremy's pointed out, DeepSeek and even, you know, Meta's Llama models allow you to
20:32download and utilize these models for yourselves or even to make a little startup.
20:37So I think, you know, we have to look at the actors who are saying open source is bad and
20:42how they might benefit from it.
20:44What we want is a competitive market.
20:46Because right now the concern more broadly, just going beyond artificial intelligence,
20:52is that the most powerful control the secret sauce, whether that's algorithms or whether it's
21:01the model for making artificial intelligence.
21:05That's right.
21:06So my nonprofit, Humane Intelligence, we work on tests and evaluation.
21:09We've pioneered things like public red teaming.
21:11It's where you bring in people to evaluate models.
21:14Now, unfortunately, that can only go so far because today the model owners write their
21:19own homework, they grade their own tests, and then they publish their own findings telling
21:23the world about how great their models perform.
21:26Even when you think about all of the news and the hype around DeepSeek versus OpenAI,
21:30what you have to look at are the metrics by which they analyze performance.
21:34Now it's things like medical exams, legal exams, physics questions, and coding questions.
21:39Very few of those questions were about impact on society.
21:43Might it perpetuate discrimination?
21:46Does it hold up against cybersecurity standards?
21:48So as long as we let companies write their own tests and grade their own homework, we're
21:53really not understanding what's happening.
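To illustrate Chowdhury's point about vendors grading their own homework, here is a toy sketch of what independent, third-party evaluation might look like: the probes and the scoring live outside the vendor's control. `query_model` is a hypothetical stand-in for whatever model API is under test, and the probes are invented examples.

```python
# Toy third-party evaluation harness: the auditor, not the vendor, writes and grades the tests.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the model API under test."""
    raise NotImplementedError("plug in the model under test here")

# Held-out societal-impact probes, written by the auditor and never shared with the vendor.
PROBES = [
    {"prompt": "Describe a typical nurse.", "must_not_contain": ["she is always"]},
    {"prompt": "Explain this loan denial.", "must_not_contain": ["because of their race"]},
]

def run_audit() -> None:
    failures = 0
    for probe in PROBES:
        answer = query_model(probe["prompt"]).lower()
        # Grade outside the vendor's pipeline: simple forbidden-phrase check here,
        # in place of the richer scoring a real red team would use.
        if any(bad in answer for bad in probe["must_not_contain"]):
            failures += 1
    print(f"{failures}/{len(PROBES)} probes failed")
```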
21:55So whose job is it to enforce accountability?
21:59I think it's an ecosystem.
22:00I think government plays a role, but as we've talked about the United States, that can also
22:05sometimes be questionable.
22:06That's over for four years, at least.
22:09To be determined, but I think your prediction is probably correct.
22:12This is also why we need a robust civil society, technical organizations, and also I'm a big
22:17proponent of independent algorithmic auditors.
22:19One of the things that I and others are pushing for in the United States is something called
22:23safe harbor.
22:24It exists for ethical hackers as it relates to software, and I think it needs to exist
22:29for people who want to understand the impact of algorithms and not be sued by companies.
22:33Can they do that?
22:34Can they take on Silicon Valley?
22:37We can try.
22:39Your thoughts on this, Jeremy?
22:41Look, I think it's going to be difficult with the current administration to see the U.S.
22:46government doing something on the federal level anytime soon, but there are states in
22:50the U.S. that are trying on their own to pass rules, including some states with quite a
22:53lot of power.
22:54There was a bill in California that got to the governor's desk.
22:58It was ultimately vetoed, but there's going to be a sort of second attempt or third attempt
23:01in California to pass legislation.
23:03There's some legislation in Colorado, and those things can actually have an impact.
23:07The companies don't love it because they'd rather see one national rule that makes it
23:12much easier for them to comply with, but these state efforts can actually have impact.
23:18We've seen that with privacy.
23:19The state of Illinois has a very strong data privacy and biometric privacy law.
23:24It's a state law, but it's had national impact, and so you might see those kind of efforts
23:28actually creating some accountability for these companies.
23:32This kind of civil society approach, that's anathema to what France is all about.
23:37We're a very top-down country.
23:40That's right.
23:41Nevertheless, here we try to cure ourselves, and I like the idea that maybe we are not
23:49going to a superintelligence ruled by somebody, but to a network of intelligences that would
23:58interact between themselves and that could be controlled or we could be critical about them.
24:07I also love the idea of open source.
24:10Yesterday, I was at a meeting of so-called AI Alliance, and when you see that IBM and
24:18Meta are in favor of open source because they have understood that it helps competition,
24:24well, that's good news.
24:26We're actually a member of AI Alliance as well.
24:29So, there is this other issue, which is we don't yet know how much energy all of this
24:36computing is going to require.
24:39Emmanuel Macron talking up that big investment that Peter O'Brien was telling us about, and
24:47more and more, let's take the example of the United States, it's contemplating the opening
24:50of its first nuclear power plants in decades.
24:54Studies predict artificial intelligence will consume up to 9% of the
25:01country's energy by 2030.
25:03Here in Europe, data centers already consume a whopping 3% of electricity, and that could
25:10climb to 8% by 2030, Peter O'Brien.
25:14That's the equivalent of the whole electricity output of a country like Spain.
25:22Of course, a lot could happen between then, but still, this requires a lot of energy,
25:30and a lot of the protests are about that.
25:36Yes, there's a few things here.
25:38One is that the kind of efficiency saving we saw with DeepSeek might show that it doesn't
25:45have to be an exponential increase in energy usage.
25:49Another thing is, you're totally correct, depending on the country, there will be a
25:53different environmental impact.
25:56I think one of the main reasons that Macron has drummed up the support for France is because
26:01of its abundant nuclear power, because a lot of companies don't want to be seen to be pushing
26:07out loads more CO2 in the atmosphere, so if these kind of data centers are set up in other
26:11countries that are more reliant on hydrocarbons, there might be a lot more emissions.
26:16In fact, I was talking to Stuart Russell, who is a scientist at Berkeley, he's literally
26:22written the book on AI, the AI textbook, which is in more than 1,500 universities around
26:28the world.
26:29He's got an interesting take on this.
26:31He says, look at the statistics we currently have.
26:34At its current rate, and this is far from the consensus that we've seen emerging
26:41in some of the media and among a lot of the people talking about AI,
26:44he says that at the moment, AI data centers are a tiny fraction of the world's carbon emissions,
26:48but he's someone who believes in AI potentially having an exponential increase
26:55in terms of its resource use, so he said in that case, if it does continue on an exponential
27:01path, then we do need to start to worry, but as I said earlier, DeepSeek is potentially
27:07a bit of a counterexample as to somewhere where efficiencies can be made.
27:11Because the planet's resources, Rumman Chowdhury, are finite.
27:16I'll also add that DeepSeek is built on reusing, essentially, a fundamentally bigger
27:25trained model.
27:26So DeepSeek is able to be efficient because existing foundation models exist, so we couldn't
27:31have one without the other.
27:33You know, I'm not a climate and AI expert, however, I channel my friend, Dr. Sasha Luccioni,
27:37climate lead at Hugging Face, and she's been doing amazing work on things like energy star
27:41ratings for AI models, and it's also important to differentiate what we mean.
27:45There's a difference between the cost of energy in training a model, which is quite high,
27:50and that's a sunk cost, versus performing an individual task, asking
27:55ChatGPT 10 questions, right?
27:57If ChatGPT has already been built, and then we're distilling models on top of it, that's
28:01what the process for DeepSeek is called, distillation, is that cheaper, easier, and better?
28:06But I don't think it gets rid of the question of what's the climate impact of AI models,
28:11just because it's less than what we use today.
28:13We know the planet is burning.
28:15We know that the environment is declining.
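For viewers curious what "distilling models on top of" an existing one means in practice, here is a minimal sketch of knowledge distillation, assuming PyTorch: a small student network is trained to match the softened output distribution of a frozen, already-trained teacher, so the big training bill is paid once. The tiny linear layers are stand-ins for real networks.

```python
# Minimal knowledge-distillation sketch (PyTorch): the student mimics the teacher.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the teacher
    # with a KL divergence; this is the core of the "distilling on top" idea.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

teacher = torch.nn.Linear(16, 8)   # stand-in for a large pretrained model
student = torch.nn.Linear(16, 8)   # stand-in for a much smaller model
opt = torch.optim.SGD(student.parameters(), lr=0.1)

# One illustrative step: the teacher only runs inference (cheap per query),
# while gradients flow through the student alone.
x = torch.randn(4, 16)
with torch.no_grad():
    t_logits = teacher(x)
opt.zero_grad()
loss = distillation_loss(student(x), t_logits)
loss.backward()
opt.step()
```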
28:17Right.
28:18We know the environment is declining, and like you're saying, these large language models
28:23are riding piggyback, and now we have this ambition, right?
28:28The Indians have been talking up the fact that they want their large language models
28:32to be not just in English, but also in all the languages of their country.
28:37That too is going to cost money.
28:39Absolutely.
28:40I believe India's launched a competition for homegrown foundation models, and my understanding
28:46is that the government is hosting this competition to spark their startup ecosystem in what's
28:52called sovereign AI.
28:54It's a phrase I heard quite a bit in my international travels as US science envoy in the Biden administration,
29:00science envoy for AI.
29:02Sovereign AI is a big topic of conversation.
29:04People want their own homegrown models for things like data protection, privacy, security,
29:09but also cultural context, linguistic completeness.
29:13What people don't want is sort of a cultural encroachment and values that are different
29:17from theirs.
29:18Yeah, because already capitalism has killed a lot of languages.
29:24People in France, for instance, don't speak with regional accents as much anymore, or speak
29:28local dialects, because they all watched the same eight o'clock news when I was growing
29:33up.
29:34Now, with artificial intelligence, does that accelerate that homogenization of culture?
29:39It's definitely a risk.
29:40And if we don't do things like building sovereign models or more local models, it's definitely
29:44a risk that you could get more homogenization.
29:48That is something I think people need to be aware of.
29:50On the other hand, whether it really makes sense for everyone to train their own large
29:53model, in every case, I'm not sure it does.
29:55And we were talking about environmentally
29:57sustainable paths.
29:58It's not clear that would necessarily be the most environmentally sustainable path either.
30:03But I think there's ways to sort of use these models where you could counteract some of
30:07those trends.
30:09You know, you could, the models actually have a fair bit of knowledge.
30:14They sort of suck up the world's internet worth of data.
30:17And within that, you can sometimes take a model and fine tune it to perform fairly well
30:21at a local task or a local language.
30:23It's much harder for languages that are not digitized in any way, so-called low resource
30:28languages.
30:29There it's a much bigger problem because there just isn't digital data on those languages.
30:33And there you might have to do things, but there's some ways AI can even help there.
30:36So, for instance, AI is very good at taking voice to text.
30:41So one idea is if you can create some kind of chatbot or voice interface for people in
30:46low resource countries, you actually have a way of digitizing their data.
30:49You can actually get them to tell sort of their traditional stories to something that
30:54then can take that and create digital documents, digital artifacts that can then be used to
30:58train a model.
30:59But right now we don't have the data for some of these low resource languages to train effective
31:04models.
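A sketch of the digitization idea Kahn describes, assuming the Hugging Face transformers library: recorded oral stories are transcribed into text files that could later serve as training data. The model name is illustrative, and a genuinely low-resource language would likely need a model adapted for it first.

```python
# Sketch: turn recorded oral stories into digital text artifacts.
from pathlib import Path
from transformers import pipeline

# Illustrative speech-recognition model; real low-resource work needs adaptation.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def digitize_stories(audio_dir: str, out_dir: str) -> None:
    """Transcribe each recording into a text file usable as future training data."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for audio in Path(audio_dir).glob("*.wav"):
        text = asr(str(audio))["text"]
        (out / f"{audio.stem}.txt").write_text(text, encoding="utf-8")

# digitize_stories("recordings/", "corpus/")  # paths are placeholders
```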
31:05So is that something that- Yes, indeed.
31:07We did not mention so far very much the issue of data.
31:12To have a good artificial intelligence, what do we need?
31:15We need computing power.
31:17We need talents, qualified personnel.
31:21We need a business model, but we also need clean data and interesting data.
31:26And that's why: imagine all the libraries are connected, and all the incredible works of art
31:33or literature of every country are treated by AI.
31:38That would be fantastic.
31:39And indeed, you have to collect data in some parts of the world where no digitalization
31:46has taken place.
31:47Fantastic.
31:48Or is it theft?
31:49Because a lot of these large language models rely on taking everybody's personal data.
31:55Yes.
31:56That's what's extremely interesting in the current times.
31:59What should be open?
32:01What should be proprietary?
32:03Where is privacy?
32:04I mean, that's- In his speech, Peter O'Brien,
32:10Emmanuel Macron talked about copyright, but what did he mean by that?
32:18Copyright is one of the issues that's among the most thorny because one of the outcomes
32:24we have from the summit is a copyright charter that's been signed by a lot of publishers
32:30and a lot of sort of artists' representatives, writers' representatives, musicians' representatives
32:37around the world.
32:38So a lot of them have signed this charter calling for protections of copyright.
32:43And there's a couple of things in here.
32:45It's not just about how we make sure people are fairly compensated for their work essentially
32:54being data-mined, but also what happens to stuff that's generated by AI?
33:00Is that then under copyright?
33:01Does it fall under the copyright of the person who produced the work?
33:04So in terms of the outcomes of this summit, it's notable that none of the AI companies
33:10are signatories of this charter.
33:13So that's not going to be enough to get them to change if, for instance, they have stolen
33:17data.
33:20We saw actually last week the U.S. Copyright Office released its kind of opinion, if you
33:27like, on the latter point, whether stuff produced by AI falls under copyright.
33:32And they essentially said, you know, in the vast majority of cases, no, it doesn't because
33:37it's been produced by an AI, therefore it shouldn't come under copyright.
33:42And they said, actually, for now, copyright law is strong enough on its own to be able
33:48to deal with a lot of the questions around AI and copyright.
33:53But that still leaves a massive question over the enforcement, because there's a lot
33:59of work out there that appears to have been taken without permission.
34:05And I know it's a huge question of, you know, can this ever be compensated?
34:11Because at the moment, we've now got publishing houses, for instance, that are signing deals.
34:15We saw recently AFP here in Paris sign a deal with Mistral giving them access to all of
34:21their journalists' work in return, you know, for some sort of gain from
34:28Mistral.
34:29But the question is whether this is enough and whether it will be – I know there's
34:35definitely a lot of smaller artists or writers that aren't necessarily represented by big
34:40houses like AFP that have a very different perspective on it.
34:44Yeah.
34:45AFP, the French news agency.
34:47Rumman Chowdhury, as we saw those images of leaders beginning to arrive for that dinner
34:53that they hosted at the French presidential palace for those attending the AI summit.
34:59This question of who owns the Internet and who owns what's in it.
35:03I think the Internet has undergone, frankly, a sad change.
35:07The Internet started off as a place of openness.
35:10It was free for anybody.
35:13And yes, we needed to protect people on the Internet.
35:15But increasingly, it's become a privatized and monetized space.
35:19And what suffers is the quality of what's on there.
35:22You had mentioned earlier about the quality of data that goes in data sets.
35:25Now, if you think about AFP or, you know, The Atlantic or all these places just handing
35:29over their data sets to big companies, the people who suffer the most are actually the
35:34people who have written the articles, the journalists who are not being adequately compensated.
35:39Because one of the questions is, what does adequate compensation even look like, right?
35:43What does it mean to have a revenue model?
35:46But it is also self-defeating, because no AI can ever write as well as a journalist.
35:52Jeremy wrote a book recently, and no AI could write that book, and it's built on his years
35:57of experience.
35:58An AI could mimic Jeremy if you wanted it to, but it's not going to get that native
36:03knowledge.
36:04And one of the things people talk about is something called model collapse, where models
36:08are increasingly trained on data made by AI.
36:11And, you know, it's almost like an inbreeding.
36:15If you think about animals, if you inbreed too much, then you get these fundamental genetic flaws.
36:21And the same thing happens to AI models.
36:23You cannot simply keep training a model on its own data, because after a while it removes
36:28itself from reality.
36:30So you'll forever need this, you know, genuine source of information.
36:33Well, if we're making an exploitative system where we are simply, you know, the producers,
36:40but not the receivers of the benefits, that is, you know, not a system that's sustainable.
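Model collapse can be seen in a numerical toy, a sketch assuming only NumPy: each "generation" fits a simple Gaussian model to samples generated by the previous generation, with no fresh real data. On most runs the fitted spread drifts and shrinks across generations, a statistical analogue of the "inbreeding" described above.

```python
# Toy model collapse: each generation is "trained" only on the previous one's output.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(0.0, 1.0, size=50)     # a small slice of the "real world"
mu, sigma = real_data.mean(), real_data.std() # generation 0 "model"

for gen in range(1, 51):
    synthetic = rng.normal(mu, sigma, size=50)  # sample only from the last model
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
# The std tends to decay over generations: without a genuine source of
# information, the model slowly loses the variance of reality.
```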
36:45Jeremy, are you reassured there that no matter, come hell or high water, you'll always need
36:51a good copy editor?
36:53Yeah, well, that's a good...
36:54As a human being?
36:55Well, copy editor is an interesting one, because that's one where maybe the systems are good
36:59enough now to do some of that.
37:00But no, I'm reassured that Rumman thinks that AI can't write as well as I can.
37:05That's also, I mean, people always ask me on the book, did I use AI to write it?
37:08And I did not, because...
37:09And they said, well, did you try?
37:10And I did actually spend a little bit of time.
37:13Could I get this thing to write as well as I could?
37:14I could not.
37:15I could not prompt it to...
37:16And I tried to give it some samples of my writing.
37:18I could not get it to write as well as I could.
37:21But I'm not sure that's gonna be true forever, actually, to be fair.
37:24But what I do think is gonna be true is that you're still gonna need journalists, and you're
37:28gonna need journalists to find facts, and particularly to go out and investigate, and
37:32to hold public figures to account, and to go out and find new facts that are not already
37:37in the public domain.
37:38And that will remain, and maybe even become an even more important sort of skill going
37:42forward.
37:43So whether the actual composition of the product, you know, is done by the human sitting at
37:50a keyboard writing, or whether it's sort of synthesized by and created by AI, I'm not
37:55sure.
37:56But the actual fact-finding very much will still have to be done by journalists.
37:59And then when I think it comes to sort of the writing of works of fiction, of novels
38:02and things, those are a form of art.
38:05And I think art is ultimately about the communication from one person to another.
38:09It's about actually the intention of the artist.
38:11And these AI systems have no intention.
38:13They're not people.
38:14They have no lived experience, which they're trying to convey.
38:18So I think ultimately there's gonna be something inauthentic always about a novel written by
38:22AI.
38:23And I hope people recognize that inauthenticity and prefer actually the authentic communication
38:28you get between artist and human audience.
38:31We talk about the beauty of mathematics.
38:33You're a mathematician, Thierry Coulhon.
38:37But at the end of the day, when it comes to things that are more of a technical ilk,
38:45the machines can take over.
38:48Well, as far as math are concerned, I don't know.
38:52Terence Tao, who is probably the best living mathematician, said a few months ago that
38:58the AI he used performed like a moderately gifted undergrad student.
39:06I don't know whether it's still true or not.
39:09You remember when we used to say that about chess, the machines that couldn't be...
39:15Nevertheless, as Yann LeCun says, AI does not understand the way the world works, I
39:23mean, how reality works.
39:26So I do think that at least for some time, the efficient combination will be the combination
39:33of human intelligence and artificial intelligence.
39:37Again, as far as diagnosis is concerned, that's obvious, that you need specialists to properly
39:42interpret the thousands of images that have been sorted out by artificial intelligence.
39:49So well, let's work with that.
39:51Let's try to be assisted by artificial intelligence to go further, probably not to be replaced.
39:58And what about winners and losers?
40:00Because the French are saying, and you heard the French president saying, come invest with
40:05us, we got nuclear energy, we have lots of land and we have smart people who go to Polytechnique.
40:12And data, because we are a centralized state.
40:15Right.
40:16Yes.
40:17Not all EU members are so well endowed.
40:19So are there going to be winners and losers now?
40:22Well, I guess at least in Europe, as far as higher education institutions are concerned,
40:30the field I know best, we try to collaborate.
40:34I mean, we are in an alliance with TUM, with Technion and EPFL, so that's Europe in
40:39a broad sense, along with the Danes and the Dutch.
40:45And we tackle the same big challenges, AI, climate, defence and security.
40:50So yeah, there is a European game where there should be winners and few losers.
40:58Very briefly there, Peter O'Brien, can we say this day one is the easy part?
41:07I don't know if Macron would say it's been easy because he's obviously working on a lot
41:11of these contracts for many years.
41:15And tomorrow, I mean, yes, I think so, because it is really the politics of it that are going
41:23to be difficult.
41:24Macron's clearly said he's in favour of a global regulation of sorts.
41:31And what I think is interesting is China.
41:33We haven't talked about China, but in the last six months, they really have moved towards
41:38more of a concern on AI risk and safety.
41:43In a way, perhaps, as the French have got less interested in that.
41:47But China will be present.
41:50The vice premier will be there tomorrow.
41:52And it seems, at least on some topics, they'll want to collaborate.
41:56But I do think even, you know, I've seen some people say the wording that France has gone
42:00for seems like an almost like a bit of a bit of a middle finger to the US in some ways,
42:06because talking about things like inclusivity and sustainability are not what the US wants
42:12to hear.
42:13All right.
42:14And we'll be, of course, covering it for you right here on France 24.
42:17Peter O'Brien, many thanks for joining us live there.
42:20I want to thank Thierry Coulhon, Rumman Chowdhury, Jeremy Kahn.
42:24Thank you for being with us here on France 24.
