OpenAI CEO Sam Altman spoke at the World Economic Forum in Davos on Thursday, addressing the company's leadership drama and concerns over artificial intelligence.
Transcript
00:00 At some point, you just have to laugh.
00:02 Like at some point, it just gets--
00:04 it's so ridiculous.
00:05 But I think-- I mean, I could point to all the obvious
00:09 lessons that you don't want to leave important--
00:13 you don't want important but not urgent problems out there
00:16 hanging.
00:16 And we had known that our board had gotten too small.
00:19 And we knew that we didn't have the level of experience
00:21 we needed.
00:22 But last year was such a wild year for us in so many ways
00:24 that we sort of just neglected it.
00:27 I think one more important thing, though,
00:31 is as the world gets closer to AGI, the stakes, the stress,
00:39 the level of tension, that's all going to go up.
00:43 And for us, this was a microcosm of it,
00:45 but probably not the most stressful experience
00:47 we'll ever face.
00:50 And one thing that I've sort of observed for a while
00:54 is every one step we take closer to very powerful AI,
01:00 everybody's character gets like plus 10 crazy points.
01:05 It's a very stressful thing.
01:07 And it should be, because we're trying
01:09 to be responsible about very high stakes.
01:11 And so I think that as--
01:15 I think one lesson is as we get--
01:19 we, the whole world, get closer to very powerful AI,
01:24 I expect more strange things.
01:27 And having a higher level of preparation, more resilience,
01:33 more time spent thinking about all of the strange ways
01:36 things can go wrong, that's really important.
01:39 Well, I don't think they're guaranteed to be wrong.
01:41 I think there's a spirit.
01:43 There's a part of it that's right,
01:44 which is this is a technology that is clearly very powerful
01:49 and that we don't know-- we cannot say with certainty
01:53 exactly what's going to happen.
01:54 And that's the case with all new major technological
01:57 revolutions.
01:58 But it's easy to imagine with this one
02:03 that it's going to have massive effects on the world
02:05 and that it could go very wrong.
02:09 The technological direction that we've
02:10 been trying to push it in is one that we think we can make safe.
02:15 And that includes a lot of things.
02:16 We believe in iterative deployment.
02:19 So we put this technology out into the world along the way
02:23 so people get used to it, so we have time as a society,
02:26 our institutions have time to have these discussions,
02:28 figure out how to regulate this, how to put some guardrails
02:31 in place.
02:32 If you look at the progress from GPT-3 to GPT-4
02:36 about how well it can align itself to a set of values,
02:40 we've made massive progress there.
02:41 Now, there's a harder question than the technical one,
02:44 which is who gets to decide what those values are.
02:46 And what the defaults are, what the bounds are,
02:48 how does it work in this country versus that country,
02:51 what am I allowed to do with it versus not.
02:54 So that's a big societal question, one of the biggest.