AI requires broad 'oversight, safeguards because of impact and versatility they have' in marketplace

Transcript
00:00 Across now to Strasbourg, Dragos Tudorache,
00:03 one of two members of the European Parliament
00:04 leading negotiations with the European Commission
00:07 and member states, all 27 of them,
00:09 on an AI act to regulate artificial intelligence.
00:13 Thank you for speaking with us here on France 24.
00:16 - You're very welcome, good evening.
00:19 - Before we ask you about how your bill is progressing
00:23 ahead of a December 6th deadline, I believe,
00:27 first off, your reaction to what you've just heard
00:29 and what you make of the sacking of Sam Altman
00:32 and the turmoil at OpenAI.
00:34 - Well, I'm also not very keen on speculating
00:40 over a lot of things that we actually don't know
00:43 as to what was behind the decision of the board
00:45 to sack Sam Altman.
00:48 I agree that OpenAI is a very important company
00:50 in the global ecosystem of artificial intelligence.
00:53 We've all been watching over the last nine, 10 months,
00:57 the evolution of ChatGPT.
00:59 To a large extent, we owe the fact
01:02 that everyone now speaks about artificial intelligence,
01:02 we owe it to ChatGPT, because that is what brought
01:07 everyone's attention to the transformation
01:10 that AI was doing to our economies,
01:12 our lives, and our societies.
01:13 But again, we can only speculate
01:16 as to what were the reasons.
01:17 What I think is a takeaway,
01:20 at least for us as Europeans now trying to make sense
01:23 of the kind of rules that we want to put in place
01:25 for artificial intelligence, and in particular,
01:27 for these kinds of models like ChatGPT,
01:29 is that we need an oversight,
01:33 we need safeguards for these models
01:34 because of the impact they have,
01:36 because of the versatility that they have,
01:38 and the fact that we're going to soon find them
01:40 in a lot of the products and services that are around us.
01:43 So I think it is something that speaks volumes
01:47 as to the need for increased responsibility,
01:50 increased accountability around how these models
01:53 are being developed, the ecosystems in which they live,
01:56 in which they grow, and also safeguards
02:00 related to how they are rolled out on the market.
02:03 - Yeah, and we've had news on that front
02:06 in the last 24 hours.
02:08 France, Germany, and Italy reaching an agreement
02:11 on how AI should be regulated,
02:13 penning a joint paper to make sure the legislation
02:17 does not hamper Europe's own development
02:19 of what's called foundation models.
02:22 That's to say artificial intelligence infrastructure
02:25 that underpins large language models
02:28 like OpenAI's ChatGPT, like Google's Bard.
02:31 The paper suggests self-regulating foundation models
02:35 through company pledges and codes of conduct.
02:39 Self-regulating foundation models.
02:41 Are you in favor of that?
02:42 - Well, let's be very clear,
02:46 this was not the position,
02:48 and is not the position of the European Parliament.
02:50 In fact, as an institution,
02:51 we were the first ones that recognized the need
02:54 to bring in a regime of obligations,
02:58 of rules for foundation models.
03:01 We did so in the mandate that we voted back in June
03:04 with a very large majority, I underline.
03:06 The council, on their side,
03:07 in their common position last year,
03:10 had not introduced anything on foundation models.
03:13 So we started from these very different positions
03:16 in negotiation.
03:17 I saw, of course, together with my colleagues
03:19 in the negotiating team in Parliament,
03:20 we saw the paper from the three governments.
03:23 We are negotiating with the presidency,
03:25 the rotating presidency of the council.
03:27 These are the rules.
03:28 It is up to the council to come before us.
03:31 - 'Cause their argument is--
03:33 - The position that really represents the council.
03:35 - Right.
03:36 Their argument is that,
03:37 well, the Americans are first out of the blocks
03:42 with companies like ChatGPT.
03:44 They control a lion's share of the market.
03:47 We're gonna regulate and keep ourselves small
03:49 while they continue to grow.
03:53 - Nothing in this regulation,
03:54 in fact, I always say that this regulation
03:56 is as much pro-business as it is
03:59 trying to protect the values and the rights
04:01 of our citizens and society.
04:02 So there is nothing in the regulation,
04:04 even in the one that we had,
04:06 and in the position that we had outlined as Parliament
04:09 up to now, that prevents the growth of companies.
04:13 I think rules related to transparency,
04:15 rules related to accountability,
04:17 and again, the responsibility and the transparency
04:19 that you owe to citizens,
04:22 the ones to whom you address your products,
04:24 is a bare minimum that I think companies working
04:28 and developing artificial intelligence owe to us
04:31 as consumers, to society,
04:33 so we don't see these rules as putting
04:35 hindrance or burdens on companies
04:39 developing artificial intelligence.
04:41 And we have been in Parliament actually
04:43 paying quite a lot of attention in balancing out
04:45 the rules that we want to put in place
04:47 to provide the safeguards to citizens
04:49 in the use of artificial intelligence
04:51 with rules that are helping to promote innovation,
04:54 to create at European level the ecosystem necessary
04:57 for us to also grow these models of our own
05:01 and also be competitive on a global stage.
05:04 - Dragos, if I may.
05:05 - We still think that there is a balance to be found here.
05:07 Yes, please.
05:08 - Yes, if I may, this push by these three countries,
05:11 I mean, it risks torpedoing the act entirely, doesn't it?
05:14 - I wouldn't say that.
05:18 It is a position that, of course,
05:20 has to be taken into account.
05:22 Again, we are not negotiating with France, Germany, Italy.
05:25 We are negotiating with the Spanish presidency
05:27 who represents the whole council.
05:28 It is up to them to come and bring us a position
05:31 that represents the council.
05:33 It is not uncommon in these sort of very political,
05:36 very sensitive negotiations to have positions
05:39 outlined by one government or several governments,
05:41 but ultimately it is the common position
05:44 of the council that matters.
05:45 We are negotiating with the council
05:46 represented by the presidency,
05:48 so we are very much looking forward to see
05:49 what would be the views of the presidency,
05:52 what would be their position in the negotiations
05:54 to come in the next two weeks before the next trilog.
05:56 And then we will see how we accommodate the views
05:59 on their side, on our side, so that we reach a compromise.
06:02 We are committed on the 6th of December to find agreement.
06:06 We hope that the council comes to the table
06:08 as committed as we are to find an agreement.
06:10 And of course, in a negotiation,
06:12 everyone has to give a little and take a little,
06:14 and we are approaching this negotiation
06:16 in a spirit of finding agreement,
06:18 finding a landing zone.
06:20 And if everyone plays into these negotiations the same way,
06:23 I'm convinced that we can find a solution.
06:25 - Is there a chance you could miss
06:26 that December 6th deadline that you've imposed
06:28 for yourselves?
06:29 - Well, again, I would not speculate
06:33 on the outcome of the 6th of December.
06:35 I'll repeat what I said.
06:36 We, on the parliament side, are committed
06:38 to come on the 6th of December to close.
06:42 I hope the council will do the same.
06:44 And then we will see at the end of the day and night,
06:46 'cause most likely it's going to take
06:49 the whole day and the whole night to negotiate,
06:50 we'll see whether we will be close to an agreement
06:53 or maybe we will have an agreement.
06:54 If not, again, it is not also the end of the world.
06:57 We still have time until the end of the year,
06:59 which is what we said would be the goal for us,
07:03 the target, to close by the end of the year.
07:05 So we will approach constructively the 6th of December
07:08 and see what that gives.
07:10 - Dragos, let me ask you,
07:13 have you been coordinating with the United States,
07:17 with Britain, who've also been talking up
07:19 the prospect of regulation,
07:21 or will it just be the European Union?
07:23 - Listen, we've been talking to the US
07:29 and many other jurisdictions around the world
07:31 for the last three or four years.
07:32 We've had here in parliament a special committee
07:34 on artificial intelligence,
07:35 which I had the honor of chairing.
07:37 And throughout the work of that committee,
07:38 already prior to legislation,
07:40 we had been in contact constantly,
07:42 not only with policy makers in the US
07:44 and other jurisdictions,
07:45 but also with representatives of governments
07:47 who were all considering at the time,
07:49 what path to take.
07:50 Should they go down the path of creating hard rules
07:53 like the ones we're considering in Europe,
07:55 whether they should have a more light touch,
07:57 so on and so forth.
07:58 The conversation has been much more focused this year
08:00 after ChatGPT with all the emerging discussions
08:03 about risks, about the existential threats,
08:05 many discussions that we in the European Union
08:07 had had already,
08:08 and that helped actually galvanize
08:11 more of the international community
08:13 around the need to bring forward some safeguards,
08:17 some safety rules,
08:17 particularly for what are now called the frontier models.
08:21 And now we saw the results in the Hiroshima process,
08:24 the G7 has come forward with a code of conduct.
08:27 We saw the outcome of the summit in London.
08:30 So I think all this speaks for the need of convergence
08:34 at a global stage.
08:35 The fact of the matter is that we in the Union,
08:37 if we get this negotiation done,
08:39 we'll be the first jurisdiction in the world
08:41 that will not have only principles,
08:42 so only voluntary commitments,
08:44 we'll actually have hard law.
08:46 And that is no easy or--
08:50 - If there isn't a hard law, very briefly--
08:51 - And of course once we have that--
08:53 - Very briefly, if there isn't a hard law,
08:55 how worried should we be?
08:58 - Well, what we have seen over the last couple of months,
09:04 what I mentioned earlier,
09:05 the G7 or the discussion in London,
09:07 very much focused on what we'd call
09:09 the existential threats,
09:11 or the bigger risks of these frontier models.
09:13 But let's not forget that artificial intelligence
09:15 also poses everyday risks.
09:17 When you go to a bank,
09:18 when you go to an insurance company,
09:19 when you send your kids to school or to a hospital,
09:21 we are already interacting in the day-to-day life
09:24 with artificial intelligence
09:25 that is open to bias sometimes,
09:28 that is open to discrimination.
09:29 So there are also these risks that need to be mitigated,
09:31 and this is what our legislation is about.
09:33 So I think the absence of an AI Act adopted
09:36 as soon as possible will leave in our societies
09:41 an open field for potential discrimination,
09:46 and also for our societies,
09:48 our societies not trusting enough the technology
09:50 to interact with it.
09:51 And I think that's the duty that we have as policymakers
09:54 to remain committed to deliver this legislation
09:57 for the union.
09:59 - Dragos Tudorache, I wanna thank you so much
10:00 for joining us from Strasbourg live.
10:02 - You're very welcome.