Martin Casado, General Partner, Andreessen Horowitz; Dawn Song, Professor, Computer Science, and Co-director, The Berkeley Center for Responsible, Decentralized Intelligence, University of California, Berkeley
00:00 Hi folks. Hello. Thanks for being with us.
00:04 Yeah, we figured we'd end with a nice light-hearted topic, you know.
00:09 Yeah, oh yeah, of course, of course. Um, let me just level set here: does AI need to be regulated? Dawn?
00:18 What does my team think? Oh, I
00:21 think our answer is
00:23 very similar, but I'll probably present it a little bit differently than you did, which is:
00:27 AI is a software system, and software systems are regulated by rules that have been developed over the last 30 years.
00:32 They're very robust. They've been part of an ongoing discourse.
00:36 And so, inasmuch as software is regulated, AI already is regulated. Then there's the question of whether it needs new regulation, and
00:44 the answer to that is: we'd have to understand what is new about it, which is still a research question.
00:48 And so, until we understand that, I would say that if you regulated it,
00:52 you may be regulating the wrong thing if you did it too early, and that could be, you know,
00:56 even more negative than if you didn't regulate it. So my view is: we have an existing, robust regulatory regime.
01:02 It is sufficient for now. And if we come to understand
01:06 that AI is different in some way, we should evolve that regime, but not until we understand it. Dawn,
01:12 does that sound like responsible AI?
01:14 Okay, so, can I... Well, a lot of that was cribbed from a paper
01:21 that Dawn wrote, by the way, I think.
01:23 So first, let me summarize the current lay of the land. Please do.
01:29 So if you ask this question to the policymakers, they essentially already, you know, speak with their acts.
01:36 I'd say this year we have seen a sudden proliferation of AI bills.
01:41 If you look at the federal level,
01:43 there are about 120 AI bills in flight, and if you look at the state level, there are at least 45 states that
01:51 have AI bills in flight, around 600 of them. So it's crazy.
01:56 So I would say that, if you ask this question: there's a madness about AI regulation.
02:02 And going back to what Martin said, I think there's really a lot of nuance that we need to look at.
02:09 When it comes to AI regulation, if we don't do it
02:11 well, for example if we do it in an ad hoc way, which is mostly what current AI regulation is,
02:18 we have a lot of problems.
02:22 We can have a lot of potential unforeseen negative consequences, and, even worse,
02:28 we can actually lose the opportunity to prevent genuinely disastrous outcomes.
02:34 Also, we can have fragmentation in the community,
02:38 which is what we have seen, unfortunately, through, you know, the recent process around some of these bills.
02:46 Right, right. So that's why we actually recently launched a new proposal with a number of leading AI researchers,
02:54 called "A Path for
02:57 Science- and Evidence-based AI Policy," which is exactly about addressing this question: should we regulate AI or not?
03:03 How do we even answer this question? So here, the answer is very simple: it should be science- and evidence-based.
03:09 We shouldn't just pull out of a hat whether to regulate AI: today we think we should regulate it; tomorrow
03:14 we think no, we shouldn't. We should have science and evidence to speak for us.
03:18 And so that's what the key of the proposal is about: policy should be informed by
03:25 science and evidence, and we should take a science- and evidence-based approach to AI policy.
03:31 And I also strongly believe that this is the best approach forward for us to develop the best
03:36 AI policy, and it's the best approach forward to unite the community, to reduce
03:42 fragmentation, and to lead us to a safer AI world. Now, we are
03:47 sitting right now in the state of California. We three are residents of the state of California.
03:53 So let me start at the state level, and then we can talk about the federal level, if you don't mind.
03:58 We obviously had SB 1047 come and go as, candidly, a flashpoint for the industry.
04:05 Was that science and evidence-based? And by the way, that's hardly the only bill that went through
04:13 California.
04:14 So that's a very good question. That's actually, I mean, not the only reason, but one of the motivations:
04:21 what we have seen in the process. Again, like I said,
04:25 currently most of these AI regulation efforts are quite ad hoc.
04:31 You can talk about, right, the different thresholds, whoever has them.
04:34 But for a lot of these things we need a better,
04:38 more rigorous approach that gives us the science and evidence, and lets us really analyze things.
04:43 So for example, in the proposal that I mentioned, we propose five priorities, right, for AI policy, and
04:51 we propose how you should actually analyze AI risks: it should be marginal risk analysis.
04:57 Actually, I wrote them down, Dawn: it's understand AI risks, increase transparency, develop early-warning mechanisms, develop technical mitigation and defense
05:05 mechanisms, and
05:07 trust, to reduce fragmentation. Those are the five of the proposal, right?
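[Editor's note: "marginal risk" here refers to assessing the risk a new AI system adds relative to an existing baseline, rather than its absolute risk. A minimal sketch of the idea, in notation of my own rather than the proposal's:

    MarginalRisk(m) = Risk(world with system m) - Risk(world without m, existing tools only)

On this framing, policy would key on how much harm a system makes newly feasible or substantially cheaper, not merely on how capable the system is.]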
05:15 So I think SB 1047 is a great example for us to talk about, because, you know,
05:21 there were so many policies,
05:23 so why is this the one that got people up in arms?
05:25 Right, and so I'll tell you why, which is, um:
05:28 again, we have 30 years of discourse on regulation and policy in tech, right?
05:32 And so we kind of understand the basic principles,
05:35 we kind of understand the basic paradigms by which we apply them, and SB 1047 was a paradigmatic
05:41 shift in the following way: it actually included something called developer liability. And what developer liability means
05:47 is: if I'm building a piece of software, and
05:51 somebody else uses it, and that causes some catastrophic event, then I am liable.
05:55 Now, in like 40 years, or 30 years, of tech policy,
05:58 we've never done this. And maybe it's something worth doing with AI, maybe, but it's a paradigmatic shift that is divergent
06:05 from 30 to 40 years of
06:08 policy. And so we're like, whoa, wait, you know, timeout. Listen, if you want to do policy, fine:
06:12 we kind of understand policy; this is a discourse that's been going on for a long time;
06:16 you've got a lot of experts. But you're not listening to any of that.
06:18 You're doing something entirely new, and, you know,
06:22 as Dawn, who's a world expert on this, would say, based on no evidence. And so I would say: listen, it's a very important problem;
06:28 AI policy is a very important problem. And listen,
06:31 we can even do spurious policies if we want to, because that's what policymakers do, and they're going to do spurious policies;
06:38 we're definitely going to see that happen.
06:39 But please, let's not
06:41 create a new paradigm in policy whose adverse effects we don't understand, without actually understanding the risks.
06:47 And the reason is, we're going to set a precedent, and we may end up policing entirely the wrong thing.
06:52 And so again, there are
06:55 a lot of
06:57 new laws and policies being proposed all of the time,
07:00 but certain ones of them are particularly pernicious, and this was one of them, for exactly this reason. And so, you know,
07:06 Governor Newsom vetoed it, so it's a moot point. But where do we go from here, Martin?
07:11 I mean, do we think that we're actually going to take the big step back and do what you're saying?
07:15 So, I think there are very legitimate efforts that are tangible and near-term and, like, winding their way through the courts, right?
07:23 Things like copyright, I think, are incredibly legitimate, and we should understand them; things like child protections are incredibly legitimate,
07:28 and we should understand them. Everybody agrees on this, and there's consensus, and so we should focus on those.
07:33 So let's focus on the tangible ones that we know are real. A lot of SB 1047,
07:38 like, the proponents are X-risk people. They're like, "oh, AI is going to take over the world,
07:42 so we need to put in these precautions," which don't make any sense, right?
07:44 And so I would say, listen: let's put aside the science fiction,
07:47 let's put aside this kind of existential-risk stuff,
07:50 let's focus on stuff that we know is kind of near-term and real and impacts people's day-to-day lives.
07:54 And by and large, I think that's actually what is happening. I think that we've kind of regained some sense.
07:59 It's been five years on this journey, and,
08:01 as far as we can tell, this AI stuff is actually pretty safe, and we're actually focusing on tangible, practical, near-term regulation.
08:09 Okay. So again, coming back:
08:12 in the proposal we made these five priorities, and number one is actually to understand AI risks.
08:21 And in there, we propose that we do need to understand AI risks in a very comprehensive way.
08:26 So there are risks along many dimensions, and there are also longer-term risks.
08:30 So I think, right, it's helpful for society to do a
08:37 comprehensive study to analyze and understand these risks.
08:41 But right now, as I mentioned, we want to be science- and evidence-based,
08:46 so the first step is that we need to better understand the science, and we need to collect the evidence.
08:51 So I think it is important that we study a wide range of different types of risks.
08:56 We need to understand them, and then the policy needs to be,
09:00 essentially, appropriate to the level of understanding that we have.
09:07 Actually,
09:08 a lot of people that have been following this don't know how remarkable it is that Dawn's up here.
09:12 One thing about these policy worlds: in the past, like 20 years ago, the forefront was academia and
09:19 hobbyists, and we had these robust debates. Now,
09:22 what's been very remarkable about the AI one is, when SB 1047 came, like,
09:27 academia was largely silent, or hadn't weighed in, or so forth.
09:30 And so I think another thing that's been fantastic about this first wave of policy is
09:34 that it started the conversation and has brought in stakeholders that, really, I would say are far more neutral than, say, myself.
09:40 I'm a venture capitalist, right?
09:41 So clearly I've got a bias, right? So I should be a voice, but I should not drive the conversation, though in that case
09:46 I actually was driving the conversation, right?
09:47 So I think we're now at a great place, where we've got people like Dawn, who are real experts, who are like, "hey, listen,
09:52 this is a very important problem."
09:54 And I think academia is engaged, and I think that's actually where we should start moving these discussions, too.
09:59 And so, the very concrete next step: let's all fund Dawn.
10:03Yeah
10:07 So, so, let's... I do want to come to the audience for questions in a minute,
10:12 but I want to make the jump up to the federal level. Yeah, okay.
10:15 We're going to have a change in the White House very imminently. The outgoing president outlined an AI framework;
10:22 the incoming one is certainly planning to do just that.
10:26 How are we going to stick to our science- and evidence-based guns in this environment, Dawn?
10:31 That's a very good question. I think,
10:33 especially in the changing environment, it's even more important that we focus on,
10:39 right, the science- and evidence-based approach. And,
10:43 in fact, yes, we are discussing this at all levels,
10:48 including the federal level, and
10:50 even including, like, the National Academies and so on.
10:54 So far, actually, this proposal has received hugely positive feedback. We...
11:00 It was only launched maybe about two months ago, and we already have volunteer contributors from about 200 different institutions
11:08 who actually
11:10 volunteered to help with this effort. And
11:14 also, we have received a lot of positive, huge support from civil societies, from, right, a lot of
11:22 different entities, and so on. So we really hope that we can
11:27 bring this together as a whole-community, whole-society effort. I think it's hugely important. It sounds like a plan.
11:34 All right, let's go to the audience.
11:36Yes
11:38 Nobody knows what's going to happen under Trump when it comes to AI policy. Like, nobody knows, right?
11:44 Like, I don't know, you know.
11:45 And so, on one hand, I'm cautiously optimistic, because, like, what happened under Biden was terrible.
11:50 It was like, you had this executive order that made no sense,
11:52 and then, you know,
11:53 it was all this kind of crazy AI policy stuff. On the other side, let's be honest: SB
11:57 1047, which I think is probably the worst,
12:00 like, probably the worst bill
12:01 in the United States, personally, because of the paradigmatic shift, was supported by Elon, right? And so, anybody that thinks that they know
12:08 what's going to happen: I don't think that they really understand the players. It could go either way. So I'm cautiously optimistic.
12:12 I think it's good to have a change.
12:13 I think what we're coming from was actually very bad when it came to AI policy, but I have no idea
12:17 what it'll actually look like. All right, that's my prognostication. Yeah. Yeah. Yeah.
12:22 We should ask an AI agent about it. All right, let's go to the audience for questions and comments about
12:26 regulation and policy. Oh, we've stunned them. Look at that. Oh, yes, right over here, sir. Your name and who you're with?
12:33 John Durkin, chief technologist with IDA Ireland. So I'd be very interested, from the panel's
12:39 point of view, like, particularly looking at the EU AI Act that came out recently and the
12:44 perspective that it takes. And we saw with GDPR how that kind of set regulation around the world.
12:48 But in the context of the EU AI Act, what are your feelings on it? Has it been too focused on regulation?
12:56 Or, because it took a risk-based approach, is it kind of set up with the idea
13:02 that it'd be more based on research, to make informed decisions as it goes through? So, just interested.
13:07 I
13:13 think,
13:15 in general, they do take a lead in a lot of these regulatory efforts.
13:20 And now, they still have this code of practice;
13:24 after this, it's ongoing.
13:26 So I think a lot of the details are actually still in progress, to figure out
13:31 exactly, right, what companies need to do to comply, and what the regulatory recommendations are.
13:38 And I think they are taking a first step, and I think it's leading...
13:46 I mean, certainly it's helping make progress in the space.
13:49 But of course, as I mentioned earlier, there's a lot that we actually really do need to study to figure out. Dawn's very diplomatic.
13:56 So listen, I come from a very, very poor part of Spain, actually; like, I'm a dual citizen.
14:01 It's called Murcia, which is like the Appalachia of Spain. I love it; it's a great place.
14:06 You know, we have Alcaraz, the tennis player; he's great. We also have a phenomenal founder
14:11 who actually started a great company in gen AI.
14:14 The company's called Magnific. His name is Javier, and he's done all of this great work. And today, as we speak,
14:21 he's back in Murcia, you know, this great part of Spain, but one not known for its tech founders and entrepreneurs.
14:27 And, by the way, Magnific is in the gen-AI pixel space,
14:32 so it's like images and videos and stuff like that. And he cannot access Sora.
14:37 Sora is the video model that was just released by OpenAI
14:41 yesterday, right?
14:42 And so, like, you tell me. But from my standpoint, when you have, like, a,
14:48 you know, a great founder who can't even engage in his own area
14:53 because he can't get access to something a company just released, I think that's strictly bad.
14:58 So I think the EU is totally bungling it.
15:00 I think it's putting itself backwards, and I think it's got to change its tune, or it's going to get in trouble.
15:06 But how do you really feel? That's how.
15:10 More questions, please. Yes, we have one way in the back. Your name and who you're with?
15:15 Hi, I'm Ilona Cohen with HackerOne, and,
15:19 an admission: I'm a former federal regulator. So I understand the instinct to say we don't need anything,
15:26 we're already regulated, we've got this on lock.
15:30 But the federal regulation process, specifically the rulemaking process,
15:34 does take science and evidence into consideration, because, of course,
15:39 there has to be notice and comment, and you have to be able to provide that information,
15:43 which they take into consideration when they make those rules. In the absence, while we're off here saying we don't need anything,
15:50 the EU, Singapore, everyone else is essentially
15:54 regulating for us. So, like privacy: we don't have anything here in the US;
15:59 we've been fighting it for years. And so the EU essentially runs our regulation.
16:04 So is that the right approach, from your perspective? And does it make sense to continue to say we don't need anything,
16:09 and we'll let all these other countries do it for us?
16:13 Great question. Thank you.
16:15 So first, I don't think we are saying that we don't need anything, right?
16:18 I mean, the proposal for a science- and evidence-based approach to AI policy does not mean that we don't need anything.
16:25 Actually, in fact, in that proposal we talk about certain priorities for regulation, including, for example, transparency and
16:32 adverse-event reporting, so we can, right, like,
16:38 monitor for post-deployment harms and do this adverse-event reporting, and so on. So
16:43 nowhere do we say no,
16:46 no, no regulation. I think it's more just providing a more grounded
16:53 approach, one that is grounded in science and evidence. And also, again, like I said,
16:59 in the end we actually have an
17:02 additional call to action in the proposal, which is to have the community come together to discuss and propose what we call candidate
17:11 AI policies, for what we call conditional responses, so that the community can discuss under what
17:18 conditions what policy would be appropriate. It's in this way that we can
17:25 have a modest, open dialogue and discussion in a low-stakes environment.
17:30 So we hope that this approach can actually lead to a better consensus-building process, for better policy. Understood. Martin?
17:37 Can I take a quick crack? I really appreciate this question, because I do think it gets to the heart of it.
17:41 So I think that our posture as a nation shouldn't change, and it's changing.
17:47 I want to give you an analog, which is, like, the Internet.
17:49 So I was in a national lab during basically the advent of the web and the Internet, right?
17:53 We had this new stuff, and, like, very quickly
17:56 it was clear this was marginally different, right?
17:59 We had computer worms, which we didn't have before. We had this notion of asymmetry,
18:02 which means we were more vulnerable than countries that didn't use it.
18:06 So clearly it was marginally different, which we don't have for AI. So what did we do as a nation?
18:10 We invested heavily in understanding it and building out capability.
18:14 Like, we became the leaders of the Internet; we turned it into a discipline. And, like, yes,
18:19 of course, there's an entire cybersecurity field, you know. Like, I've known Dawn for 20 years;
18:23 she was, like, one of the preeminent researchers in cybersecurity.
18:25 So it's not saying that we ignore it and ignore policy; it's saying we become the leaders,
18:30 we understand it the best, and we make the most practical solutions. That's not what's happening with AI.
18:34 I mean, the EU is doing whatever it's doing, right? It's like, oh, there's this new thing,
18:37 we're going to regulate it. And for us, for me, what's been so shocking is, like, academia wasn't even involved in the discussions;
18:44 they've been silent. That's all I'm saying: yes, we need to regulate; yes, we need to understand it. We need to change our posture to engage:
18:50 research, invest, understand the marginal differences, and then, yes, let's do like we've done in the past,
18:56 which I think is the right thing for the country.
18:59 To wrap this up, and thank you for that, to wrap this up:
19:03 how are we feeling about where we go from here, in one or two words? Yours is "cautiously optimistic."
19:09 Yeah. Yes, Dawn? Yes.
19:17 Got it. All right, a round of applause, please. Thank you very much.