This conversation from Davos is about what the potential limits of generative AI can or should be, how far away we are from other transformative advances in AI, and what we even mean when we say 'generative AI'.



Transcript
00:00 Can we play video number one?
00:02 All open source AI must be banned
00:05 because it threatens the profits of monopolistic tech companies.
00:09 Number two?
00:11 Universities should stop doing AI research
00:16 because companies are much better at it.
00:19 And the number three?
00:21 To be honest, I don't care about all of this AI safety stuff.
00:24 It doesn't matter. Let's focus on quantum computing instead.
00:28 All right, let's give it up for our panelists.
00:31 There are many, many current challenges
00:37 with AI, of course, that we need to deal with.
00:40 Deepfakes are very relevant this year
00:43 because of elections and fraud and all sorts of other things.
00:46 If you want to learn more about it, we have a deepfake demo
00:49 where you can deepfake yourself back there
00:51 and share your ideas for what to do about it.
00:53 But that's not what we're going to talk about now
00:55 because we are going to look a little farther into the future.
00:58 The ultimate goal of AI from the beginning
01:01 has always been to really solve intelligence
01:04 and figure out how to make machines that can do everything humans can do,
01:08 ideally much better.
01:10 That is both exciting and terrifying.
01:13 And I have taken the prerogative as moderator
01:15 of sorting the panelists from least worried to most worried.
01:23 I hope you feel I...
01:24 Oh, wait, you switched.
01:26 You should switch with Stuart.
01:28 Please, please switch places.
01:29 We're not seating you by your deepfake opinions now,
01:33 but by the real ones.
01:35 The main goal we have here is not to have just yet another
01:38 debate about whether we should worry or not,
01:42 but rather to brainstorm about solutions in the MIT spirit.
01:46 And I have the very radical and heretical belief
01:51 that you all actually agree on a lot more things
01:55 than the average Twitter user probably realizes.
01:59 And we're going to see if we can find some of those shared...
02:04 those things that you agree on that we can all go out and do.
02:08 So I'm going to warm you up just with some lightning round questions
02:12 where you can basically just say yes or no.
02:14 Or no, okay.
02:15 So who is...
02:16 First question.
02:18 Are you excited about improving AI in ways
02:24 so that it can be our tool and really complement and help humans?
02:31 Yes or no?
02:32 Yes.
02:32 Yes.
02:33 Yes.
02:34 Yes.
02:34 All right.
02:35 Next question.
02:37 Do you believe that AI in a few years will be a lot better than it is now?
02:42 Yes.
02:43 Yes.
02:44 Yes.
02:44 Yes.
02:45 All right.
02:47 Now, okay, let's make it a bit harder.
02:49 So if we define artificial general intelligence as AI
02:55 that can do all, basically all cognitive tasks at human level or better,
03:01 I think...
03:03 Do you feel that we already have it now?
03:06 No.
03:07 Absolutely not.
03:08 No.
03:09 No.
03:09 Okay, four no's.
03:12 Do you think we're probably going to have it within the next thousand years?
03:16 Maybe.
03:18 Sure.
03:19 Yes.
03:20 Yes.
03:20 Do you think we're probably going to have it within the next...
03:23 Did I say thousand or hundred?
03:25 Thousand.
03:25 Next hundred years.
03:26 Maybe.
03:28 Quite possibly.
03:30 Very probably.
03:31 Barring nuclear catastrophe, yes.
03:33 All right.
03:34 So if you were to put a number on like how many years we're going to need to wait
03:39 until we have a 50% chance of getting it, what year would you guess?
03:44 Not any time soon.
03:47 Decades.
03:50 Decades?
03:51 A lot less than I used to think.
03:55 Okay, and you?
03:57 5.4.
03:58 5.4 years.
04:00 Okay, a lot of precision there.
04:02 So I think you'll all agree we put them in the right order.
04:05 And you should see that the level of alarm they have is correlated with how quickly
04:09 they think we're going to have to deal with this.
04:11 So clearly, if you have leading world experts who think
04:16 it might happen relatively soon, we have to take the possibility seriously.
04:20 So the question is how do we make sure that this becomes the kind of tool AI
04:24 that we can control so we can get all the upside and not the downside?
04:31 One thing that has really struck me here in Davos actually is that the vast majority
04:36 of what I hear people being excited about in AI, medical breakthroughs, eliminating
04:46 poverty, helping with the climate, making great new business opportunities,
04:51 doesn't require AGI at all.
04:55 So I'm actually quite curious if somehow there were a way where we could just agree
05:02 that, say, let's do all the great AI stuff but maybe not do super intelligence
05:09 until 2040 at the earliest or something like that.
05:13 Is that something you all feel you could live with or do you feel that there is
05:16 a great urgency to try to make something super intelligent as fast as possible?
05:23 Why don't we go in this direction this time? What would you say?
05:26 Can live with that?
05:28 Can you say it again?
05:29 I can live with that.
05:30 You can live with that? What about you, Stuart?
05:33 You can elaborate a bit more this time.
05:35 I could live with that but it's not actually relevant what I think.
05:39 What's relevant is what are the economic forces driving it.
05:43 And if AGI is worth, as I've estimated, 15 quadrillion dollars, it's kind of hard
05:51 to tell people, "No, you can't go for that."
05:54 Yann, what about you?
05:57 So first of all, there is no such thing as AGI because we can talk about human-level AI
06:02 but human intelligence is very specialized so we shouldn't be talking about AGI at all.
06:08 We should be talking about what kind of intelligence can we observe in humans
06:12 and animals that current AI systems don't have.
06:16 And there's a lot of things that current AI systems don't have that your cat has or your dog.
06:23 And they don't have anything close to general intelligence.
06:26 So the problem we have to solve is how to get machines to learn as efficiently as humans and animals.
06:32 That is useful for a lot of applications.
06:35 This is the future because we're going to have AI systems that we talk to and that help us in our daily lives.
06:43 We need those systems to have human-level intelligence.
06:45 So, you know, that's why we need it. We need to do it right.
06:50 Daniela?
06:52 Well, I'm with Yann, but let me first say that I don't think it's feasible to say
06:58 we're going to stop science from developing in one direction or another.
07:02 And so I think knowledge has to continue to be invented.
07:07 We have to continue to push the boundaries.
07:10 And this is one of the most exciting aspects of working in this field right now.
07:16 We do want to improve our tools.
07:18 We do want to develop better models that are closer to nature than the models that we have right now.
07:25 We want to try to understand nature in as great detail as possible.
07:32 And I believe that the feasible way forward is to start with the simpler organisms in nature
07:39 and work our way up to the more complex creatures like humans.
07:46 Stuart?
07:48 So, I want to take issue with something.
07:52 There's a difference between knowing and doing, and that's an important distinction.
08:01 But I would say, actually, there are limits on what is a good idea for the human race to know.
08:07 Is it a good idea for everyone on Earth to know how in their kitchen to create an organism that will wipe out the human race?
08:17 Is that a good idea?
08:20 Daniela?
08:21 No, of course not.
08:22 Of course not. Right.
08:23 So we accept that there are limits to what is a good idea for us to know.
08:28 And I think there are also limits on what is a good idea for us to do.
08:32 Should we build nuclear weapons that are large enough to ignite the entire atmosphere of the Earth?
08:42 We can do that.
08:44 But most people would say, no, it's not a good idea to build such a weapon.
08:49 So there are limits on what we should do with our knowledge.
08:52 And then the third point is, is it a good idea to build systems that are more powerful than human beings that we do not know how to control?
09:02 Well, Stuart, I have to respond to you.
09:05 Please do.
09:06 And I will say that every technology that has been invented has positives and negatives.
09:14 We invent the knowledge and then we find ways to ensure that the inventions are used for good, not for bad.
09:24 And there are mechanisms for doing that.
09:26 And there are mechanisms that the world is developing for AI.
09:30 With respect to your point about whether we have machines that are more powerful than humans, we already do.
09:38 We already have robots that can move with greater precision than you can.
09:42 We have robots that can lift more than you can.
09:48 We have machine learning that can process much more data than we can.
09:52 And so we already have machines that can do more than we can.
09:56 But those machines are clearly not more powerful than humans, in the same way that gorillas are not more powerful than humans, even though they're much stronger than us.
10:05 And horses are much stronger and faster than us.
10:08 But no one feels threatened by horses.
10:10 I think there is a big fallacy in all of this.
10:12 So, first of all, we do not have a blueprint for a system that would have human level intelligence.
10:19 It does not exist.
10:20 The research doesn't exist.
10:22 The science needs to be done.
10:24 This is why it's going to take a long time.
10:26 And so if we're speaking today about how to protect against intelligent systems taking over the world or the dangers of it, regardless of what they are,
10:36 it's as if we were talking in 1925 about the dangers of crossing the Atlantic at near the speed of sound when the turbojet was not invented.
10:45 We don't know how to make those systems safe because we have not invented them yet.
10:51 Now, once we have a blueprint for a system that can be intelligent, we'll have a blueprint probably for a system that can be controlled as well.
10:59 Because I don't believe we can build intelligent systems that don't have controlling mechanisms inside of them.
11:05 We do as humans.
11:07 Evolution can build us with certain drives.
11:11 We can build machines with the same drives.
11:13 So that's the first fallacy.
11:14 The second fallacy is it is not because an entity is intelligent that it wants to dominate or that it is necessarily dangerous.
11:22 It can solve problems.
11:24 You can tell it.
11:25 You can set the goals for it, and it will fulfill those goals.
11:31 And the idea that somehow the system is going to come up with its own goals and take over humanity is just preposterous.
11:38 It's ridiculous.
11:39 What is concerning to me is that the danger from AI does not come from any bad property that it has, an evilness that must be removed from the AI.
11:49 It's because it's capable.
11:51 It's because it's powerful.
11:53 This is what makes it dangerous.
11:55 What makes a technology useful is also what makes it dangerous.
11:59 The reason that nuclear reactors are useful is because nuclear bombs are dangerous.
12:05 This is the same property.
12:07 As technology progresses over the decades and the centuries, we have gotten access to more and more powerful technologies, more energy, more control over our environment.
12:18 What this means is that the best and the worst things that can happen either on purpose or accidentally grow in tandem with the technology we built.
12:31 AI is a particularly powerful technology, but it is not the only one that could become so powerful that even a single accident is unacceptable.
12:41 There are technologies that exist today or will exist at some point in the future.
12:46 Let's not argue about whether it's now, in 10 years, in 20 years.
12:49 My kids are going to be alive in 50 years, and I want them to live in a world where not a single accident can be the end.
12:56 If you have a technology, whether it's AGI, future nuclear weapons, bioweapons, or something else, you can build weapons or systems so powerful that a single accident means game over.
13:10 Our civilization, in how we currently develop technologies, is not set up to deal with technologies that don't give you retries.
13:20 This is the problem.
13:21 If we have retries, if we can try again and again, and we fail and some stuff blows up, and maybe a couple of people die, but it's fine, then I agree with Yann and Daniela that our scientists have got this.
13:34 I think Yann's lab will solve this.
13:36 I think these people will solve it.
13:38 But if one accident is too much, I don't think they will.
13:42 To that point, and to the point that Stuart and Connor just mentioned, you can imagine an infinite number of scenarios when all of those things will go bad.
13:51 You can do this with any technology.
13:53 You can do this with AI, obviously.
13:54 Sci-fi is full of it.
13:56 You can do this with turbojets.
13:58 Turbojets can blow up.
14:00 There are lots and lots of ways to build those systems in ways that would be dangerous or wrong, that would kill people, et cetera.
14:08 But as long as there is at least one way to do it right, that's what we need.
14:12 And so, for example, there are technologies that were developed in the past at the prototype level, and then it was decided that they should not be deployed because they would be too dangerous or uncontrollable.
14:25 Nuclear-powered cars, people were talking about this in the '50s.
14:28 There were prototypes.
14:29 It was never deployed.
14:31 Nuclear-powered spaceships, same thing.
14:33 So, there are mechanisms in society to stop the deployment of technology if it's really dangerous and to not deploy it.
14:42 But there are ways to make AI safe.
14:45 I actually do agree that it's important to understand the limitations of today's technology and set out to develop solutions.
14:56 And for some cases, we can develop technological solutions.
15:01 And so, for instance, we've been talking about the bias problem in machine learning.
15:06 We actually have technological solutions to solve that.
15:11 We are talking about size.
15:14 We're talking about interpretability.
15:16 The scientific community is working on addressing the challenges with today's solutions and also seeking to invent new approaches to AI, new approaches to machine learning that have other types of properties.
15:34 And in fact, at MIT, a number of research groups are really aiming to push the boundaries to develop solutions that can be deployed on safety-critical systems and on edge devices.
15:48 This is very important, and there is really excellent progress.
15:52 So, I am very bullish about using machine learning and AI in safety-critical applications.
15:59 So, I would say I agree with one thing that Stuart said, but also with a lot of the observations Yann shared.
16:06 Several of you independently say that we need new architectures, new technical solutions.
16:11 So, to wrap up, I would love if some of you want to share just very briefly some thoughts on this.
16:20 What kind of new architectures do we need that are more promising to make the kind of AI that complements us rather than replaces us?
16:28 Do you want to go first, Yann?
16:29 Sure. Yeah, I can't really give you a working example of it because this is work in progress, but these are systems that are goal-driven.
16:37 And at inference time, they have to fulfill a goal that we give them, but also satisfy a bunch of guardrails.
16:43 So they're planning their answer as opposed to just producing it autoregressively, one word after the other.
16:50 And they cannot be jailbroken unless you hack into them or things like that.
16:56 So, this would be an architecture that I think would be considerably safer than the current types that we are talking about.
17:03 And those systems would be able to plan and reason, remember, perhaps understand the physical world, all kinds of things that current LLMs cannot do.
17:10 So, future AI systems will not be built on the blueprint that we currently have, and they will be controllable because they'll be objective-driven.
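To make the "objective-driven with guardrails" idea described above a bit more concrete, here is a minimal toy sketch in Python. It is not any lab's actual architecture; the candidate answers, the goal score, and the guardrail checks are illustrative assumptions. The point it shows is that guardrails act as hard constraints at inference time, while the goal objective is optimized only over candidates that survive them.

```python
# Toy sketch of "objective-driven" inference with guardrails: instead of
# emitting an answer one token at a time, the system searches over candidate
# answers, keeps only those that satisfy hard guardrail constraints, and
# returns the one that best fulfills the goal. All names here (goal_score,
# guardrails, the example candidates) are illustrative, not a real system.

from typing import Callable, Iterable, List, Optional

def objective_driven_answer(
    candidates: Iterable[str],
    goal_score: Callable[[str], float],
    guardrails: List[Callable[[str], bool]],
) -> Optional[str]:
    """Return the highest-scoring candidate that passes every guardrail."""
    best, best_score = None, float("-inf")
    for c in candidates:
        # Hard constraints: a candidate violating any guardrail is discarded,
        # no matter how well it scores on the goal objective.
        if not all(check(c) for check in guardrails):
            continue
        s = goal_score(c)
        if s > best_score:
            best, best_score = c, s
    return best  # None means no candidate satisfied the guardrails

if __name__ == "__main__":
    # Toy goal: prefer longer answers; toy guardrail: never mention "secret".
    answers = ["short", "a longer helpful answer", "a very long secret answer"]
    print(objective_driven_answer(
        answers,
        goal_score=len,
        guardrails=[lambda a: "secret" not in a],
    ))
```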
17:17 Liquid networks, which are inspired by the brains of small creatures, and they are provably causal.
17:26 They are compact. They are interpretable and explainable, and they can be deployed on edge devices.
17:35 And since we have these great properties, we also have control.
17:39 I'm also excited about connecting some of the tools we're developing in machine learning with tools from control theory.
17:47 And so, for instance, combining machine learning with tools like BarrierNet and control barrier functions to ensure that the output of a machine learning system is safe.
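As a rough illustration of what a control-barrier-function safety filter does, here is a minimal sketch for a one-dimensional toy system. It is not the BarrierNet architecture itself; the dynamics, the numbers, and the function names are illustrative assumptions. A learned controller proposes a control, and the filter minimally modifies it so that a barrier condition keeping the state inside a safe set is never violated.

```python
# Toy control-barrier-function (CBF) safety filter for a 1-D integrator
# x' = u with safe set {x <= x_max}. The "learned" policy's output is
# minimally modified so the CBF condition always holds.

def cbf_safety_filter(x: float, u_learned: float,
                      x_max: float = 1.0, alpha: float = 5.0) -> float:
    """Return the safe control closest to the learned one.

    Barrier: h(x) = x_max - x >= 0 on the safe set.
    CBF condition for x' = u:  dh/dt = -u >= -alpha * h(x),
    i.e. the control must satisfy u <= alpha * (x_max - x).
    """
    u_bound = alpha * (x_max - x)
    return min(u_learned, u_bound)  # clamp only when the bound is exceeded

if __name__ == "__main__":
    # Simulate: the "learned" policy always pushes forward at full speed,
    # but the filter slows it down as it approaches the boundary x = x_max.
    x, dt = 0.0, 0.01
    for step in range(100):
        u = cbf_safety_filter(x, u_learned=2.0)
        x += u * dt
    print(f"final state x = {x:.3f} (stays below x_max = 1.0)")
```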
17:58 The actual technology that I think is most important is social technology.
18:02 It is very tempting for tech people, tech nerds like all of us here on this panel, to try to think of solutions that don't involve going through humans.
18:10 But the truth is that the world is complicated, and this is both a political and a technological problem.
18:15 And if we ignore either the technical or the social side of this problem, we will fail, reliably so.
18:20 So, it is extremely important to understand that techno-optimism is not a replacement for humanism.
18:26 Great. So, let's thank our wonderful panel here for provoking us.
18:31 And I hope you also take away from this that even though they don't agree on everything,
18:35 they all agree that we want to make tools that we can control and that complement us,
18:40 and that they're all very nerdy and have exciting technical ideas for doing this.
18:45 Thank you all.
18:47 [Applause]
18:48 [End of Audio]
