Carme Artigas, Co-Chair, U.N. AI Advisory Body; Senior Fellow, Belfer Center, Harvard University
Elizabeth Kelly, Director, U.S. Artificial Intelligence Safety Institute
In conversation with: Ellie Austin, Deputy Editorial Director, Fortune Live Media
Transcript
00:00Hello, everyone. It's so great to be here, and Carme and Elizabeth, thank you very much for joining us.
00:07I'm sure this is not the first time you've heard it today, but we will go to audience questions.
00:11So please get them ready. Now, I want to start with what might seem like a very basic question,
00:17but an important one. The title of this session is Can AI be governed?
00:22What does governance or governing mean to you within this context? And Carme, let's go to you first.
00:30First of all, thanks so much. And as you say, it's not only that AI can be governed, it must be governed.
00:35But I want to make a clear differentiation between ethics, governance, and regulation,
00:40because there are three different things.
00:43Ethics is how we should behave according to our principles or values.
00:46And that applies to the guidelines of companies, to companies implementing services, and also to users.
00:53But a totally different thing is governance. Governance means the instruments we must put in place
00:58to ensure that corporations and governments are behaving ethically, because ethics is nice to have,
01:04but no rights derive from that kind of, I would say, self-regulation or self-imposed ethical guidelines.
01:11And regulation is one way to achieve governance, but it's not the only one.
01:15You can achieve governance through transparency, through monitoring tools, through market incentives.
01:21So that's why I want to make this differentiation.
01:23And Elizabeth, the same question to you.
01:25Well, I think you put it very well. I would just agree that I think we've achieved a huge amount
01:30through a lot of voluntary frameworks. You look at the work that the NIST AI Risk Management Framework
01:36has done, being adopted by companies nationwide.
01:40You look at the work that we've done at the Safety Institute through voluntary commitments.
01:44I think what's so important is that the technology is evolving so quickly.
01:48We need to be nimble and iterative, and a lot of these frameworks enable that.
01:53And you talked about the work that you've done at the Safety Institute.
01:56Now, I believe the Safety Institute was announced last year by Vice President Harris,
01:59and you kicked off your work this year.
02:03Can you maybe explain to us one thing that you've achieved so far at the Institute that you're incredibly proud of
02:08and one area where you feel there is still more to do?
02:12Our job at the U.S. AI Safety Institute is to identify and mitigate the risks of advanced AI systems
02:21so we can harness this breakthrough technology for all of the many positive use cases that it enables.
02:27And we do that through testing and evaluation, guidance, and what I consider ecosystem building.
02:33You asked what I'm proud of that we've achieved.
02:36We've already signed Memoranda of Understanding with OpenAI and Anthropic
02:41and just completed our first round of pre-deployment testing with Anthropic's upgraded Claude 3.5 Sonnet.
02:49This is really a classic example of how we're able to leverage the technical team that we've built,
02:54drawing from Berkeley, MIT, and the big labs, to understand the risks and capabilities of these models,
03:02and to leverage the tremendous national security expertise across the U.S. government.
03:07And were there any surprises from those test results?
03:09We'll have more to say soon, but I think in general, I'm just excited to continue to build this out,
03:15leveraging our colleagues at the Department of Energy, at the Department of Homeland Security,
03:20building on the President's recently released National Security Memorandum,
03:23which really positions us as the tip of the spear for the U.S. government's work with the companies
03:29in testing these models prior to deployment.
03:31You asked what we still want to achieve.
03:34I think part of what we're really excited to do is make sure that the U.S. is leading with substance
03:40and that we're helping advance the global conversation around what safety should look like.
03:46Both because we know that AI does not stop at borders.
03:49We all need to have shared best practices and aligned, interoperable evaluations to advance safety,
03:55but also to enable innovation.
03:57We're very aware that it's largely U.S. companies that are developing the leading AI models,
04:03and we want to make sure we don't have a patchwork of regulations globally,
04:06but are instead marching in alignment so we can continue to unleash that breakthrough innovation.
04:11And on the subject of shared knowledge, shared regulations,
04:15Carme, you have many, many job titles, as we heard in the introduction,
04:19but let's focus for a second on your role as the co-chair of the U.N.'s advisory board on AI.
04:25Now, the board has recently developed guiding principles for international AI governance.
04:30These include proposals for an international scientific panel on AI,
04:34a central AI office at the U.N., and a global AI fund.
04:38But one thing you did not recommend was the creation of an international agency for AI.
04:43Can you explain why you came to that decision?
04:46Yes, and as you know, this body was an initiative of the Secretary General, Antonio Guterres,
04:50who said, OK, let's gather a group of 39 people,
04:54experts from academia, the private sector, and policymakers from all over the world,
04:58very representative, very diverse, to think about whether or not there is a need
05:02for a global governance of AI on top of the existing guidelines from the industry
05:07or even national or regional regulations.
05:09And the conclusion was that, yes, we need global governance,
05:12because there are important global deficits in three main areas.
05:15One is a deficit of inclusiveness.
05:18Out of all the international initiatives we have been naming,
05:21only seven countries in the world are party to all of them, basically the G7,
05:25but there are 118 countries in the world that are party to none.
05:28So they are not even sitting in the conversation about what are the limits,
05:32what are the safety guidelines, what are the ethical principles on development.
05:36They are not participating in the discussion.
05:38Second, there is an important gap in accountability,
05:41because there is a lack of transparency.
05:43This technology has not been developed by academia,
05:45which is obliged to publish everything,
05:47but in the R&D centers of private companies,
05:50which are not transparent.
05:52And this is why we need a scientific panel, in the same way
05:55that the IPCC was used for climate change,
05:58to agree on the state of the development, on the risks and on the opportunities.
06:02And there's a deficit in implementation, because however much we can benefit
06:05from the real advancement that AI means for humanity,
06:09in terms of achieving huge advances in scientific research,
06:13in access to education, in achieving the SDGs,
06:17if we leave this ungoverned, we are not going to reap all these benefits,
06:21because we are going to exacerbate the existing divides.
06:24And the reason why we have not recommended an international agency as an instrument
06:27is that international agencies mostly
06:30cover vertical domains.
06:32AI is a horizontal domain.
06:34So every single existing UN agency will have to revisit its role
06:37in the light of AI.
06:39We don't need to invent something new about the implications of AI in defense
06:42because we already have the Geneva Conventions.
06:45If we want to talk about intellectual property,
06:48we already have WIPO.
06:50What is lacking is coordination of these national and international efforts,
06:53and a holistic view.
06:55And the problem is so important because it's not only affecting
06:57safety in the frontier models of the future,
07:00it's affecting fundamental values and principles
07:03and human rights in the present.
07:05And this is why the problem is so important, is so huge,
07:08that no single company, no single country can solve it all.
07:12So you've made the recommendations. What happens now?
07:15These recommendations were formally adopted on the sidelines
07:19of the UN General Assembly.
07:21Of course, the UN also issued a resolution prior to those recommendations,
07:26so there's important leadership from most of the countries in the UN system.
07:29And they were adopted as part of what we call the Global Digital Compact, within the Pact for the Future.
07:35Therefore, now the UN system is starting the implementation
07:39of those recommendations, which we expect to be implemented
07:42in less than one year's time.
07:44Less than one year's time?
07:46I mean, any recommendation is only as good as the implementation
07:48of that recommendation.
07:50If we are not able to make it in one year's time,
07:52at the pace of innovation, they will become obsolete.
07:54So we expect that at least the scientific panel,
07:56the policy dialogue, and probably the capacity-building programs
08:00are set up in less than one year's time.
08:02Now, almost every panel today has touched on the election.
08:05We are not going to be any different.
08:07Elizabeth, before his re-election, President-elect Trump
08:10indicated that, if he returned to the White House,
08:14he would repeal Biden's executive order,
08:16which I know you were one of the key authors of.
08:18Because, and I'm paraphrasing here, he said it hinders innovation.
08:21What's your response to that?
08:24So I would point out that there are multiple parts
08:26of the executive order that are focused on:
08:28how do we enable innovation?
08:30How do we put more money towards R&D?
08:32How do we attract talent to the U.S.
08:34that will help us continue to maintain our lead?
08:37How do we ensure this is a robust and competitive ecosystem
08:40where people have access to data, chips, other things
08:43that will ensure that we can really continue this going forward?
08:47And I think what's important to note, one, is the work
08:49that we're doing at the Safety Institute.
08:51That's distinct from the EO.
08:52But two, we see it as part and parcel
08:55of enabling innovation.
08:57As the business leaders in this room know,
08:59safety and governance enable trust,
09:03which enables adoption, which ultimately enables innovation.
09:06That, we believe, is the work that we are doing,
09:09and that is why we think it is so important.
09:10So, am I right in saying that you don't believe
09:12there's a tension between regulation and innovation?
09:15I think that we are all keenly focused
09:17on getting the balance right.
09:18There should not be a tension between safety and innovation,
09:21if we get it right, because we want to enable that trust,
09:24that adoption, ultimately that innovation.
09:26I think that all of us are incredibly excited
09:29by the tremendous potential of this technology.
09:31You touched on some of them.
09:33But think about new drug discovery and development,
09:36individualized education that could help address inequities
09:38in education, carbon capture and storage,
09:44new tools for agriculture.
09:46There are so many things that we all want to see unleashed.
09:49And as we see these models become more powerful,
09:52become more capable, we get that much closer to that future.
09:55And so we need to make sure that we're understanding the risks
09:57so we can unleash all those benefits as well.
09:59And this is a question for either of you.
10:01What would worry you about the executive order being repealed,
10:04if anything?
10:06Well, as I mentioned, in some countries
10:08we use regulation as a tool for governance.
10:10I come from Europe, and I was the lead negotiator
10:12of the European AI Act, and that's a way to create certainty
10:15in the market, a way to put order in the market
10:17and generate trust for consumers and citizens.
10:20But I would say whatever works.
10:22There are other geographies that probably rely more
10:24on voluntary commitments, on codes of conduct.
10:27We are more prone to codes of practice.
10:30But at the end of the day, what we want is that
10:32these technologies are developed with responsibility,
10:35that we have accountability, with different kinds
10:37of accountability along the value chain.
10:39It's not the same accountability you put on the developer,
10:41on the implementation phase, or on the user.
10:43From the European perspective, we do not think
10:45that the European AI Act is something
10:47against innovation at all.
10:49In fact, we are introducing measures like real-world
10:51testing scenarios and regulatory sandboxes.
10:54But what I really think is that we are just as concerned
10:57about the misuse of technology,
11:01both unintended misuse and the intended misuse
11:04by bad actors.
11:06And that involves corporations or even governments.
11:08I mean, this is a very powerful tool that,
11:10in the hands of authoritarian regimes,
11:12really erodes very basic principles.
11:14And this is why, at the governance level,
11:16we are not proposing that the UN become
11:18a global regulator at all.
11:20But we think there is room for sitting at the same table,
11:23for multi-stakeholder conversations,
11:25and for agreeing on very basic things:
11:27that anything we do on AI at the global level
11:30needs to be grounded in international law,
11:32the UN Charter, and human rights.
11:34I think that's very well put.
11:36These are dual-use foundation models.
11:38So even as we see incredibly exciting new developments
11:41in terms of assisting engineers with developing new code
11:45or enabling new breakthroughs,
11:47that also means that you could have the same models
11:49enabling more dangerous cyber attacks
11:51by a broader range of actors.
11:53So we need to be tracking those capabilities
11:56and those risks, so we can harness the flip side.
11:59I want to talk a bit about the AI divide,
12:01which I think Matt mentioned in his introduction.
12:03That, of course, refers to the uneven adoption
12:06of AI globally, and how this can really deepen
12:10the global inequity that exists
12:12between high-income and low-income countries.
12:15Carme, let's go to you.
12:17Is this divide inevitable?
12:19No, we have it already.
12:21So it's already a reality.
12:23So how do we mitigate it?
12:25Exactly. So it's already there,
12:27because everything we have developed on AI
12:29is just based on data from the Global North,
12:32with the computing capacities of the Global North
12:35and the best talent we have.
12:37So what's the UN doing to expand its data pool?
12:39So first of all, for example,
12:40out of the 110 top high-performance computers in the world,
12:44there is not a single one in the Global South.
12:46Not in Latin America, not in Africa.
12:48So if we really want equality in the benefits,
12:50we must ensure equality in access.
12:52And ensuring equality in access means access to data,
12:55and that means creating data which is representative
12:57of all the diversity of the world.
13:00Also, access to computing power
13:03and skills development programs,
13:06so that we don't go to the Global South
13:08with a techno-colonialist approach.
13:10This is a techno-social program.
13:12And we need to be able to allow
13:14for the development of new ecosystems,
13:16new entrepreneurial ecosystems,
13:18that flourish across the Global South.
13:21Because at the end of the day,
13:22to put it purely from an economic perspective,
13:24all the money and investment
13:26that is being poured into this AI sector
13:30can only pay back if there is massive adoption,
13:33mainstream, globally.
13:35This cannot be supported only by the investment
13:37of companies in the US or in Europe.
13:39So at the end of the day,
13:40we all have an interest in these benefits getting to everyone.
13:43And that's why we need to create
13:44these global development programs and capacity programs
13:46that we are encouraging from the United Nations.
13:49And also a global fund for AI,
13:51devoted to the development of those capacities,
13:53as well as the application of AI
13:55for the achievement of sustainable development goals.
13:57Because, I mean, it's not by chance
13:59that the same countries of the Global South
14:01that are lacking access to these technologies
14:05are also the ones that have
14:06the biggest problems with sustainability.
14:09So these two tensions go hand in hand.
14:11And where are you hoping that the capital
14:12will come from for this global AI fund,
14:14which presumably is going to go on to fund
14:16the skills and development programs you mentioned,
14:18for example.
14:19Where does the money come from in the first place?
14:20Our proposal, and this can be inside the UN system
14:23or outside the UN system, we really don't mind,
14:26is to have a global fund for AI
14:28that includes contributions from the public sector,
14:30development banks, but also private investors
14:33and philanthropy, with contributions in money and in kind.
14:37It can be even more valuable for NVIDIA to provide chips,
14:40or for some cloud providers to free up some capacity
14:44to provide these capabilities to the Global South,
14:46than money itself.
14:47So I think that's a proposal that needs to be developed
14:49in the following year or so.
14:51I think one of the really interesting things
14:52that shows up in the polling data
14:53is there's actually more optimism about AI
14:56in a lot of the Global South
14:58than there is amongst the U.S. population
15:00because there's a recognition
15:02of all of the beneficial use cases
15:04and how game-changing that could be.
15:06And it's really incumbent on us
15:08to help realize those advantages.
15:10So certainly there's the work that you're doing at the UN.
15:13The Secretary of State recently announced
15:15$100 million in commitments by all of the leading AI labs
15:18to make data, compute, and other resources available
15:21to ensure that we're seeing this innovation across the globe.
15:24It's also part of why the work we're doing at the Safety Institute,
15:27and with the Safety Institute Network
15:29being launched next week in San Francisco,
15:32is making sure that we have the Global South at the table.
15:34Kenya's part of that network.
15:36We've got folks from South America
15:38and across multiple continents who will be at the table
15:41so we can help shape that future.
15:43Yes, and building on that, the exceptional role
15:45you are leading, Elizabeth,
15:48in gathering all the different national institutes, is very important,
15:51because there are things on which we need to come to agreement.
15:53I mean, we cannot expect every country in the world
15:55to have the same regulation.
15:56We need to have a common alignment that
15:58this does not hamper human rights, for example.
16:00But on the safety side, I mean,
16:02we need to agree on a global framework for risk.
16:04A risk in one country of the world
16:07must be the same risk in another country of the world,
16:09because we need to align the international response
16:11to AI safety issues.
16:14So these are the things that I think need
16:16to be discussed at the same table.
16:18And the UN has its flaws, but I think it has the mandate,
16:20it has the reach, it has the legitimacy
16:22to discuss this governance issue at the global level.
16:24I was saying to Elizabeth before we came on,
16:26obviously discussion, conversation is brilliant,
16:28but what's the next step?
16:30How do you ensure that that translates
16:32into concrete action that changes the game
16:34in the Global South, for instance?
16:36So I think what is both fortunate and challenging
16:39is that we are still so early in the conversation.
16:41AI is unusual in that it is one of
16:44the first breakthrough technologies
16:46in over 100 years developed outside of government.
16:49It was private-sector driven.
16:50And so I think governments are still moving very quickly
16:53to adapt to this new reality.
16:55We've seen laws like the EU AI Act passed.
16:58We've seen the UN resolution,
17:00but most countries are still figuring out
17:02what their regulatory frameworks will be,
17:04what their testing and evaluation will look like.
17:06And so I think by having these conversations early at the UN,
17:10through our network of AI safety institutes,
17:12we can help move towards both aligned
17:15and interoperable testing, but also shared best practices
17:18that will inform this work globally.
17:20I think the moment is really now,
17:21and the US was able to lead with substance here.
17:23If you look at the G7 commitments,
17:24they look very similar to the voluntary commitments
17:26made by 15 developers to the White House.
17:29And I think we'll continue to do that.
17:31Are there any questions for Carme and Elizabeth?
17:34If so, please raise your hand
17:36and we will bring the mic to you.
17:38If not, I will continue because I, oh, yes.
17:41Have we got a question?
17:42Yes, brilliant.
17:43If you could just stand up and say your name
17:45and where you're from before you ask your question.
17:46I'm Dr. Paula with Eurofins.
17:48We are traded on the French Stock Exchange,
17:50headquartered in Luxembourg.
17:52And in 62 countries.
17:54And AI is a tool that we would like to use,
17:57but we worry about IP infringement and confidentiality
18:00with the clients that we serve.
18:02And obviously in Europe there's GDPR,
18:04and there are other aspects where laws could
18:07potentially more easily be violated.
18:10And you mentioned it doesn't apply everywhere.
18:12So do you see a change in regulation,
18:16or in regulations like GDPR as they relate to AI?
18:21A revision of such?
18:23So as you say, the AI framework, for example,
18:25that we are using in Europe, which is the European AI Act,
18:29is an act, which means that it applies to 27 countries
18:33at the same time.
18:34So the problem of competitiveness
18:35that sometimes we assign to Europe
18:37is not because of regulation.
18:39It's because of regulatory fragmentation.
18:41And we need a single digital market altogether
18:43to apply the same rules.
18:44And I think at the global level, it's the same.
18:46So the protection of data privacy is already covered in GDPR.
18:48But, for example, in the AI Act, we cover,
18:50as you mentioned, the issue of copyright,
18:52so that copyright cannot be infringed
18:54in the training of the models.
18:56And, for example, the right, or the obligation,
18:58to mark when something has been generated
19:00by generative AI.
19:02So maybe I don't care, but I must know
19:04if that video is a real video,
19:06or if that article was produced by AI.
19:08So I think that's a way to approach it.
19:10It's not the only one.
19:11But I think what we are all aware of
19:13is that the development of AI is compatible with regulation.
19:16And as for competitiveness,
19:17sometimes it is difficult to say this in the US,
19:19but I think competitiveness is not at odds with regulation.
19:23We have pharma companies that are highly regulated
19:26and very competitive and very profitable.
19:28And I would never take a pill without FDA approval.
19:32So the same applies here.
19:33We expect that AI will really become
19:35a transformational technology.
19:37It's not just a technology, it's a meta-technology.
19:39It is a technology that allows
19:40for the development of new technologies.
19:42So we must have some guarantees.
19:44I would say the approach we follow in the UN
19:46is called meaningful openness.
19:49So there are two extremes.
19:51Total openness, which makes no sense,
19:53because you need to keep your secret sauce,
19:55which is your competitive advantage versus your competitors,
19:57and, at the other extreme, a black box.
19:59So we really need to regulate this technology
20:01based on the use case and ask for some public scrutiny
20:04and some external auditing in very particular use cases,
20:07especially the high-risk ones.
20:09Anything you want to add, Elizabeth?
20:11No, I would just add that I think AI is a great example
20:13of an area where there were some existing challenges
20:15that are exacerbated by AI.
20:16So for example, we do not have comprehensive
20:19privacy legislation in the US.
20:21I think people on both sides of the aisle
20:23have been calling for that for a long time.
20:25And we know that AI both increases demand
20:29for people's data in a way that's harmful to privacy,
20:31but also enables seemingly disparate information
20:33to be put together in a way that can be
20:36additionally damaging to privacy.
20:39So it sort of highlights the need
20:42for additional legislation in other areas.
20:44Just on that, of course,
20:46the problems we have with data privacy
20:47are not new because of AI.
20:49They're already there.
20:50They are issues that were already present with platforms.
20:52Because at the end of the day,
20:53the reason why these private companies
20:55have capitalized on the development of AI
20:57is because they were already acting as monopolies
20:59in access to data.
21:00So it's based on the supremacy of data access
21:02that this technology
21:04has not been developed in academia.
21:07Not because academia didn't have the talent,
21:09not because academia didn't have the money,
21:11but because academia didn't have access to the data.
21:13So based on the monopoly of data,
21:15you're building a sort of monopoly of AI.
21:17But this is not a problem of AI.
21:19It's a question of how the market is structured,
21:20because it is concentrating power
21:22in very few companies in the US,
21:24a few companies in China,
21:26and that's the situation we have today.
21:27So one of the challenges we have at the global level
21:30is to counter this highly concentrated power
21:33and allow others to develop,
21:35whether by opening the models,
21:36opening the data, or opening access
21:38to technology and capacity.
21:40And I'm going to finish with a quick-fire question
21:41for each of you.
21:42Is there a country other than the US,
21:44or a region other than Europe,
21:46that you think has a particularly effective approach
21:49to AI governance at the moment?
21:52Elizabeth, we're going to come to you first.
21:55The answer may well be no.
21:58So I think there's really interesting work
22:00being done tactically in a lot of countries.
22:03So look, for example, at
22:04the work happening in Singapore,
22:06where a quarter of all their graduates
22:08are computer scientists,
22:09and they just launched a multilingual evaluation
22:12bringing in the ASEAN countries.
22:14You look at the expertise that the British have built up.
22:16I think it's really inspiring as a public servant
22:19to see the number of people
22:20that are coming into government
22:22to help make sure that we are able
22:24to keep pace with this breakthrough technology.
22:26And those are two examples of countries
22:27where I think they've really prioritized that.
22:29Great. Carme.
22:30I think we can learn from each other.
22:31I think, for example,
22:33I don't like the Chinese approach to many things.
22:35But with the Chinese,
22:36we must admit that they protect their children
22:38much better than we do in terms of access to technology.
22:41And, for example, I think the challenge we have here
22:43is that anything we do must be future-proof.
22:46So the way we structure regulation
22:48needs to foresee the fact
22:50that it will have to keep iterating
22:51as technology evolves.
22:53So it's not a fixed photo.
22:55It's an evolving picture,
22:57as long as we create mechanisms
22:58that keep pace with technology
23:01while still giving certainty.
23:02Because, again, if I cannot trust what I see,
23:05what I read, or what I watch,
23:07I will not adopt it.
23:09And if it's not clear to me
23:11how this black box is constructed,
23:13or if the system is still hallucinating,
23:15I'm not going to adopt it in my car company
23:18or my pharmaceutical company.
23:20And that's what we need.
23:21We need adoption, massive adoption.
23:23And I think that's why we need to build this trust
23:25based on technology and regulation and governance.
23:27Carme and Elizabeth,
23:29thank you so much for being here.
23:30Thank you for your time.
23:31And we'll watch closely
23:32to see what happens over the next 12 months.
