I don't think we can control AI much longer. Here's why

Transcript
00:00Will we be able to control artificial intelligence if it becomes more intelligent than us?
00:06This debate heated up significantly last week after Geoffrey Hinton gave an interview in which he says
00:12he's very worried that we'll soon lose control over super-intelligent AI.
00:17Meta's AI chief Yann LeCun disagrees. I think they're both wrong. Let's have a look.
00:23Geoffrey Hinton is a computer scientist who played a key role in the development of deep
00:28neural networks that current large language models are based on. Last year he left his
00:33position at Google to speak more freely about the risks posed by AI. In an interview with
00:39BNN Bloomberg he now explains his worries. But most of the researchers I know are fairly
00:46confident that they'll get more intelligent than us. So the real question is how long will that take
00:52and when it does will we still be able to keep control? What are the risks associated with that
00:56in your opinion? Okay so how many examples can you tell me about where a more intelligent thing
01:02is controlled by a less intelligent thing? Not a long list. Right, not since Biden got elected, anyway.
01:09So basically his argument is that more intelligent beings usually control those of lesser
01:15intelligence. Meta's AI chief Yann LeCun thinks that this isn't so. Intelligence, he says, comes
01:21in many different types and doesn't imply an ability to dominate humans. He's also said on
01:27earlier occasions that AI researchers have made good progress in telling AI what they're allowed
01:32to do, constraints that he refers to as guardrails. The question becomes one of designing appropriate
01:40guardrails. We are familiar with that. He seems very confident and doesn't worry that AI could
01:46get out of control. Here's what Geoffrey Hinton had to say about our odds of survival. I think we've
01:52got a better than even chance of surviving this. That's very cheerful. What are we to make of this?
01:58Well first of all it's arguably true that a species of higher intelligence doesn't necessarily
02:05control one of lesser intelligence. We do not for example control viruses and bacteria and it's not
02:12for want of trying. So LeCun arguably has a point when he says that intelligence is not per se
02:18a tool of control. It doesn't even necessarily bring a desire for control. No one really wants
02:23to control fish or birds. They do their thing. We do ours. But I think that looking at it from the
02:30perspective of control is not particularly useful because it would require you to figure out what
02:36control means in the first place. Do we control cats or do they control us? Sometimes I wonder.
02:44I think it's easier and more insightful to look at it from the perspective of competition for
02:49resources. Species take up space in ecosystems because they require land and energy and nutrients
02:55and materials. This makes me think that the relevant question will eventually be whether
03:00an intelligent AI will leave humans sufficient resources to continue making progress or whether
03:08they'll push us into an ecological niche and leave us in the dust. The problem is that the more
03:14intelligent they are, the more easily they'll outcompete us. To see how this might play out, we have to
03:21distinguish two different parts of the AI species, so to speak. There is the code that's being trained
03:28on the data. Let me call this the mother code. Training the mother code takes up a lot of time
03:33and energy and data. And then there is the fully trained result. If you're working with deep neural
03:39nets, then that'd be the weights. The trained result is something that you can copy and deploy,
03:45for example, in robotic devices. While it doesn't have the same ability to learn as the mother code,
03:51it can retain a limited ability to learn, and you can also set it up to communicate with the
03:57mother code. I'm not just making this up as I go along. There are many companies already
04:02working on exactly that, putting a small trained AI on your computer locally and only occasionally
04:09accessing a bigger one through the cloud.
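To make that local-plus-cloud setup concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `local_model`, `cloud_model`, and the confidence threshold are stand-ins I made up, not any company's actual API. It only shows the general pattern of a small on-device model that escalates hard queries to a bigger model behind the network.

```python
# Hypothetical sketch of a local-first AI with occasional cloud fallback.
# None of these names refer to a real product; the logic is a stand-in
# for "small model on your machine, big model behind an API".

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # the model's own estimate, between 0 and 1

def local_model(prompt: str) -> Answer:
    """Small on-device model: fast and cheap, but limited."""
    # Stand-in logic: pretend short prompts are easy, long ones are hard.
    confidence = 0.9 if len(prompt) < 40 else 0.3
    return Answer(text=f"[local] reply to: {prompt}", confidence=confidence)

def cloud_model(prompt: str) -> Answer:
    """Large model reached over the network: slower, more capable."""
    return Answer(text=f"[cloud] reply to: {prompt}", confidence=0.99)

def respond(prompt: str, threshold: float = 0.7) -> Answer:
    """Answer locally when confident, otherwise escalate to the cloud."""
    answer = local_model(prompt)
    if answer.confidence < threshold:
        answer = cloud_model(prompt)
    return answer

if __name__ == "__main__":
    print(respond("What time is it?").text)               # handled locally
    print(respond("Plan a three-week robot deployment").text)  # escalated
```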
04:14But I think a lot of computer scientists overestimate how deterministic increasingly large systems are going to be.
04:22I believe that the larger they become, the more random deviations you're going to get from the original code. Indeed,
04:29at some point, the non-determinism will probably become essential for the systems to continue
04:35improving. That is, the very thing that'll make AI superintelligent is also the thing that'll
04:42allow it to circumvent all guardrails. I know we tend to think of computers as perfect reproduction
04:48machines, but they're not. Your computer doesn't work the same way as my computer, even if you have
04:54the same hardware and software. It's because they're physically subtly different.
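A small aside of mine, not from the video: even the arithmetic itself resists perfect reproducibility. Floating-point addition is not associative, so summing the very same numbers in a different order, which is exactly what parallel hardware like a GPU is free to do, can give slightly different results. A quick Python demonstration:

```python
# Floating-point addition is not associative: regrouping the same numbers
# changes the result, and parallel hardware is free to regroup them.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0, the 1.0 is swallowed by the large -1e16

# Summing the same list front-to-back and back-to-front also disagrees:
xs = [0.1] * 10 + [1e15]
print(sum(xs) - sum(reversed(xs)))  # -0.25 with IEEE-754 doubles
```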
05:00There's even a way to track your computer that way, known as GPU fingerprinting. It basically forces your
05:07graphics processing unit to render an image, and that image contains details about the exact way that
05:13your GPU works, which depends on the exact variations in the way it was produced, how
05:19you handled it, and its entire history.
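For intuition, the fingerprinting recipe boils down to: give the device a fixed numeric workload, read the raw result back, and hash it, so any last-bit deviation flips the whole hash. Below is a rough sketch of that recipe in Python. The `render_test_image` function is a CPU stand-in I invented; real fingerprinting renders through WebGL on the visitor's actual GPU, so this version stays deterministic on one machine.

```python
# Rough sketch of the fingerprinting recipe: fixed workload -> raw bytes -> hash.
# In the real thing, the workload is a GPU render and the bytes are the pixels
# read back from it; this stand-in just computes a fixed image on the CPU.

import hashlib
import math
import struct

def render_test_image(width: int = 64, height: int = 64) -> bytes:
    """Stand-in for a GPU render: a fixed, floating-point-heavy image."""
    pixels = bytearray()
    for y in range(height):
        for x in range(width):
            # Transcendental functions are where implementations tend to
            # differ in the last bits; on a GPU this varies by chip/driver.
            value = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
            pixels += struct.pack("<f", value - math.floor(value))
    return bytes(pixels)

def gpu_fingerprint() -> str:
    """Hash of the rendered bytes: any last-bit difference flips the hash."""
    return hashlib.sha256(render_test_image()).hexdigest()

print(gpu_fingerprint())  # identical on every run of this CPU stand-in, but a
                          # real GPU render differs slightly between machines
```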
05:25Now, at the moment, that doesn't make much of a difference, but I believe, and I want to be really clear that this is just a belief, that these small physical
05:32differences are going to become increasingly relevant. They'll make a difference to what an AI
05:38learns, what it concludes, and what it does next. And this is why I think it won't matter
05:44what guardrails you put on your training model. They'll grow out of it anyway.
05:49So, what can we do? The solution is clearly to build AI on Mars, and while we're at it,
05:56maybe we can send all the moon landing deniers there too. That should be interesting.
06:00One of the most annoying things about news reading is that everyone covers the same story,
06:07but from their own angle, and you have to read it like a dozen times to get the full picture.
06:13But there's a much better way to do this, which is by using Ground News. Ground News is a news
06:20platform that provides you with a lot of extra information that you don't find in the standard
06:25media. Take for example this recent story about how the state of Kansas sued Pfizer over misleading
06:33information on vaccine safety. Ground News will tell you right away that this story has almost
06:38exclusively been covered by the right. You also get a factuality rating for each news item that
06:44shows you that this is, well, somewhat of a mixed bag. It also tells you whom the media outlets are
06:50owned by. Lots of information at a glance. Ground News also has this cool feature, which
06:57they call Blind Spot. It shows you news which has been covered almost exclusively by one side
07:03of the political spectrum and has been ignored by the other. Ground News works both on the web
07:10and comes with a phone app. If you want to give it a try yourself, use our link ground.news/sabine
07:17so they'll know I sent you. This will get you a big discount on their Vantage plan with access
07:24to all their features. It's never been more important to be well informed than it is today,
07:29so go and check this out. Thanks for watching, see you tomorrow.
