How will we know if artificial intelligence, or AI, is becoming conscious? That remarkable question is being asked increasingly often in the AI community, even as many others say that AI with any measure of sentience remains an exaggerated notion. Nevertheless, there is a growing sense that the world needs to be prepared for the eventuality that AI could develop human-like consciousness, and to decide how to deal with it.

Neuroscientist Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex, as well as Director of the Sussex Centre for Consciousness Science, has been tracking the AI and consciousness debate closely. Prof Seth, who is the Sunday Times bestselling author of "Being You: A New Science of Consciousness", spoke to Mayank Chhaya Reports about AI and consciousness.
Transcript
00:00 (dramatic music)
00:36 - How will we know if artificial intelligence
00:39 or AI is becoming conscious?
00:43 That remarkable question is being increasingly asked
00:46 by many in the AI community,
00:48 even as there are as many who say so far,
00:51 AI with a measure of sentience is an exaggerated notion.
00:56 Nevertheless, it's being felt that the world
00:59 needs to be prepared for the eventuality
01:01 that AI could develop human-like consciousness
01:04 and decide how to deal with it.
01:07 Neuroscientist Anil Seth,
01:09 Professor of Cognitive and Computational Neuroscience
01:12 at the University of Sussex,
01:14 as well as Director of the Sussex Center
01:16 for Consciousness Science,
01:18 has been tracking the AI and consciousness debate closely.
01:23 He has called for more precise
01:25 and well-tested theories of consciousness before we think
01:28 in terms of attributing any kind of human-like sentience
01:32 to AI.
01:34 Professor Seth, who is the Sunday Times bestselling author
01:37 of "Being You, A New Science of Consciousness,"
01:40 spoke to "Mayank Chhaya Reports" about AI and consciousness.
01:45 - Welcome to "Mayank Chhaya Reports," Professor Seth.
01:47 It's a great pleasure to have you on.
01:50 - It's a pleasure to be here.
01:51 Thank you for inviting me.
01:52 - I was reading parts of your book,
01:55 "Being You, A New Science of Consciousness,"
01:58 and a couple of things jumped out at me.
02:00 You talk about a surgery that you underwent
02:03 when you were under general anesthesia.
02:05 I had two, in fact.
02:07 One was a colon resection,
02:08 the other was a quadruple bypass.
02:11 And I know exactly what you mean.
02:13 Basically, that period just disappears from your life.
02:17 You cease to exist.
02:19 And in that context, you say something quite remarkable.
02:22 You say general anesthesia doesn't just work
02:25 on your brain or mind.
02:27 It works on your consciousness.
02:29 Tell me a bit more about that.
02:32 - Well, I think that's why anesthetics are so useful.
02:36 I mean, they wouldn't be that great
02:38 if they just made you forget about conscious pain
02:41 that you experienced during the operation,
02:43 or did something else that didn't change your experience
02:46 in the moment.
02:47 The whole marvel of anesthesia
02:50 is that it takes away experience itself.
02:52 Now, for me, it was one of these things
02:54 that made it just very, very clear that consciousness,
02:57 this fact that we all share,
03:00 that we have subjective experiences,
03:03 is something that is amenable to scientific study
03:07 because we can manipulate it.
03:08 Anesthetics manipulate consciousness.
03:11 They turn it off and they turn it back on again.
03:15 And of course, that's the other miracle about anesthetics.
03:18 They can turn consciousness back on.
03:20 They would not be very much use
03:21 if they didn't work on the other side as well.
03:25 - What was your experience coming out?
03:27 Mine was very bizarre the first time,
03:30 very bizarre visuals,
03:32 apart from talking nonsense, delirious stuff.
03:34 What was your experience?
03:35 - Yeah, pretty similar.
03:37 I mean, I've had several general,
03:38 well, three or four general anesthesias now,
03:41 and I think the most recent one,
03:44 and the one that led me to write about it in the book
03:46 was a relatively minor operation.
03:48 So by this time, I was not so much worried
03:52 about the surgery,
03:53 and I was really trying to pay attention
03:55 to the experience of anesthesia,
03:59 or the non-experience of anesthesia.
04:02 And two things struck me.
04:05 Firstly, on the way in, on the way under,
04:10 it's really quick.
04:11 It's so quick.
04:12 You try to count and you're just out.
04:16 And then you're back.
04:17 And could have been any amount of time
04:19 that would have passed.
04:19 It could have been five minutes.
04:20 It could have been five hours.
04:22 Could have been five years.
04:24 It's just, time has joined up at either side.
04:28 And when I was coming around,
04:29 then it's a bit more, it takes much longer.
04:33 Of course, there are other drugs you often have,
04:35 as well as the anesthesia,
04:36 that might contribute to the delirium,
04:39 the weird kinds of hallucinations,
04:41 the feeling of not being really sure where you are.
04:46 I think the one thing that I really remember
04:48 of one of the occasions was that,
04:50 I think about three hours after the operation,
04:55 I was in another room entirely.
04:58 And I suddenly noticed that my legs had been
05:01 sort of covered with these plastic tubes
05:04 to do something about blood flow.
05:08 And this, to me, this was again, remarkable.
05:10 It was like, how did this happen without me noticing?
05:12 And just really, again,
05:13 of course they'd done a whole bunch of other stuff too.
05:16 But the fact that so many things could have happened
05:18 without me noticing, again,
05:19 has underlined how deeply changed we are
05:25 from any normal state, including sleep,
05:27 including deep sleep.
05:29 Another way I think I put it in the book is that,
05:32 it really turns us into objects,
05:35 and then back again into people.
05:38 And that's just really stayed with me
05:41 as something that's both practically very, very useful,
05:44 but existentially and philosophically
05:46 and neuroscientifically, incredibly fascinating.
05:49 - But if consciousness is something so intangible,
05:53 indefinable, even ephemeral,
05:55 and when you say general anesthesia
05:58 essentially suspends that,
06:00 what do you think it suspends?
06:02 - Well, I don't think it is ephemeral or undefinable.
06:05 I think it's difficult to define.
06:07 Consciousness has been a challenge,
06:09 not just for neuroscience, but for philosophy,
06:12 for religion, even for any school of thought.
06:16 But we can make a lot of progress with it.
06:21 So I think there's a broad consensus definition:
06:24 consciousness is the presence of any kind
06:27 of subjective experience whatsoever.
06:29 And this means it's not the same as intelligence.
06:31 It's not the same as self-awareness.
06:34 It's not the same as mere sensitivity
06:37 to something in the environment.
06:39 It feels like something to be conscious of the world
06:42 and of being a self within that world.
06:46 So when we're studying consciousness,
06:48 I mean, that's what we're studying.
06:49 We're studying the phenomenological,
06:50 the subjective aspects of brain and body states.
06:55 - Before we get into the question
06:58 of artificial intelligence and consciousness,
07:02 let's start with human consciousness,
07:04 more specifically what you call
07:06 the neuronal wetware inside our heads.
07:10 What's that and why wet?
07:12 - Well, if we want to understand consciousness,
07:18 probably the best place to start is with systems
07:20 where we know or are very, very sure
07:23 that consciousness exists.
07:25 This is human beings and some other animals too.
07:28 I think probably lots of other animals,
07:30 but let's say primates, mammals, these kinds of things.
07:33 And of course, we start by examining how the brain works
07:38 with respect to consciousness.
07:40 It's probably the oldest observation in neuroscience
07:43 that consciousness is very intimately related
07:46 to brain structure and dynamics.
07:50 And so this is where the neuroscience
07:52 of consciousness starts.
07:54 And I use the term wetware here
07:57 when talking about the brain,
07:58 I think just to emphasize that the metaphors
08:03 that we often use when we talk about the brain
08:06 can be very useful, but they can also be misleading.
08:09 And one of the most powerful metaphors
08:12 over the last decades has been that the brain
08:15 is some kind of computer
08:17 and that it performs information processing,
08:19 that the mind is some sort of software
08:22 running on the brain's hardware
08:25 or the mindware running on the brain's wetware.
08:28 And so I prefer to use wetware just to emphasize
08:30 that it's not the same thing as hardware.
08:32 In fact, there is no sharp distinction in the brain
08:36 between the mind and its physical substrate
08:40 as we find in a typical computer
08:42 between the software that it runs
08:44 and the hardware that it runs on.
08:46 Every time two neurons fire,
08:48 the structure of the brain changes.
08:50 And there's many, many different levels
08:53 on which we can think about brain structure and dynamics
08:56 from even within individual neurons.
08:58 A single neuron is a highly complicated biological machine.
09:02 And of course, we've got over 80 billion of them
09:05 and a thousand times more connections.
09:08 So I think we can often give ourselves
09:12 either a false sense of understanding
09:14 or put false constraints on what we think can be understood
09:19 if we use metaphors which can take us so far,
09:23 but no further.
09:24 - Brains, as you point out again in your book,
09:27 are both chemical machines as well as electrical networks.
09:32 And once again, they have always existed within a body.
09:36 That is a fundamental and defining difference
09:38 between humans and any AI-propelled human-like system,
09:43 is it not?
09:44 - Well, there are some AI systems that are embodied,
09:49 that we have robotics as well.
09:50 I think robotics has not enjoyed quite the limelight
09:53 that other aspects of AI have enjoyed lately.
09:57 But yeah, you're right in the main.
10:00 Brains of human beings, of any other animal,
10:05 are fundamentally embodied organs.
10:07 I mean, they evolved, they develop
10:10 in order to keep the body alive.
10:14 And the body here is not just an object
10:16 that's there to perform functions, to move around.
10:19 Now, the body is a complex biological system itself.
10:22 And a large part of what the brain does
10:25 is sense and regulate the body from within.
10:28 It senses the blood pressure, heart rate,
10:30 all these kinds of things, blood oxygenation,
10:32 controls and regulates our internal physiology.
10:35 This is very, very different from AI systems,
10:39 which typically take in some input and generate some output.
10:43 And even if they're embodied,
10:45 it's typically the body is not something
10:47 that the AI system is actively maintaining.
10:50 It's something that the AI system might be controlling.
10:54 - Since AI is not a consequence
10:58 of any comparable biological or chemical processes,
11:03 or electrical exchanges even, of the kind that we have,
11:07 why would it have the potential
11:09 to develop human-like consciousness?
11:11 - Well, this is a very good question.
11:13 And I'm not sure that it does,
11:16 at least along the trajectory
11:17 that it's currently developing.
11:19 And one of the things that's been somewhat noticeable
11:21 is this assumption that as AI develops,
11:26 at some point the lights come on
11:29 and it's not only intelligent, but also conscious.
11:33 And I think this idea is poorly supported.
11:38 And I think I know where it comes from.
11:41 We humans, we tend to be quite exceptionalist
11:45 in the sense that we think we're special.
11:49 We think we're special in all of creation.
11:51 And we like to put things together
11:56 that we think define this specialness.
11:59 So we know we're conscious and we think that's special.
12:02 We also think we're intelligent.
12:05 And to some extent we are, though,
12:06 I think we can argue about how that's going in the world.
12:10 But there's this tendency to therefore associate
12:12 consciousness with intelligence
12:14 because they go together in us in a particular way.
12:17 And I think that's just a very poorly justified assumption.
12:22 Intelligence gives us some ways of being conscious
12:26 with different forms of cognition.
12:28 I can have different kinds of conscious experiences.
12:31 But the most basic forms of conscious experience
12:33 that are probably shared by many other animals
12:36 that don't reach human,
12:39 or at least what we think of
12:40 as advanced human levels of intelligence,
12:43 are experiences like thirst, hunger, pain, pleasure.
12:48 You don't have to have too much cognitive sophistication
12:51 for these to be useful conscious experiences to have.
12:56 So just making machines smarter doesn't mean
13:00 that they'll inevitably become conscious.
13:04 You can't entirely rule it out
13:06 because one of the reasons why working
13:10 in the neuroscience of consciousness is really exciting
13:12 is that there is as yet no consensus agreement
13:17 about the minimal sufficient conditions
13:20 for a system to be conscious.
13:21 It could be that it just happens by accident,
13:24 but the idea that it's guaranteed to happen
13:27 just in virtue of things becoming smarter,
13:29 I think that's definitely wrong.
13:30 - When you point out consciousness is about experience,
13:34 right, we feel self inside.
13:38 How do we know if an AI system is feeling it?
13:41 The presumption is based on what?
13:45 - Yeah, this is a really tough one to answer
13:48 because the kinds of tests,
13:50 the kind of criteria that we might apply
13:52 are gonna be very different for an AI system
13:55 than for, let's say, other ambiguous cases
13:58 where we're not sure whether something is conscious or not.
14:00 Take like a newborn human infant or a non-human animal
14:04 or somebody after major brain damage,
14:07 or even sort of a new form of neurotechnology
14:10 like a brain organoid,
14:11 which is a bunch of neurons in a dish.
14:14 What kinds of tests can we apply?
14:16 Now, for humans, the kind of gold standard is to ask them.
14:19 Now, we ask for some verbal report.
14:22 Are you conscious?
14:22 Can you hear me?
14:23 Back to the anesthesia thing.
14:25 Can you hear me?
14:25 Are you there?
14:27 And if they say, yeah, I can hear you,
14:28 then that's pretty good evidence
14:30 that the person is conscious
14:32 because we have a whole evolutionary history
14:33 of not being able to do that when unconscious.
14:37 But of course, this doesn't work for AI.
14:39 Specifically doesn't work for large language models
14:43 or these chat bots that are based on large language models.
14:46 Because they are designed to utter fluent responses
14:53 to our input.
14:55 So we need some other way of developing our credence
15:00 in whether AI is conscious or not.
15:04 And for me, and I think for a number of others
15:07 working and thinking in this area now,
15:09 the best way to do that is to take
15:12 our existing scientific theories of consciousness
15:15 and see what each says about what it would take
15:20 for an artificial system to be conscious.
15:22 And then we can begin to talk
15:24 a little bit more precisely about this.
15:26 Now, we can think how confident we are in a given theory,
15:29 how confident we are that it applies
15:32 to the AI system in question
15:34 or which of the conditions this AI system displays.
15:37 And then we can get at least,
15:40 we can do better than just saying nothing at all.
15:44 Now, we don't yet have
15:46 a fully reliable consciousness meter
15:49 that we can point at the next iteration of ChatGPT
15:53 or whatever it might be.
15:55 But we can also say more than nothing.
15:58 And what we need to do is just iterate that process
16:03 and be also very mindful of the fact
16:05 that because we are very, very susceptible
16:10 to projecting qualities like consciousness into things,
16:14 we might be very easily misled
16:17 if we go on our kind of just gut intuition.
16:20 Again, we've seen this with language models.
16:22 There was the case of the Google engineer last year,
16:24 Blake Lemoine, who thought that LaMDA,
16:28 this Google chatbot, was sentient.
16:29 And this, I think, just underlines
16:35 how if we rely on indicators
16:39 that would mean something in a human being,
16:43 we can be massively misled in other systems.
16:47 - Right.
16:48 Presuming that AI becomes as seemingly sentient as us,
16:54 would it still not be a wholly parallel kind of intelligence
16:57 that's merely soaked up every piece of information
17:00 that we humans created?
17:02 - Well, now we're moving into an interesting
17:04 kind of speculation, right?
17:06 The range of other types of conscious minds.
17:10 - Right.
17:12 - When we think about what it's like to be a human being,
17:14 and we're all different anyway,
17:15 I mean, there's no single way
17:18 of being conscious even for humans.
17:21 When we look at other animals,
17:22 I spent some time with octopuses
17:24 during my career over the last few years,
17:27 and their way of being in the world
17:28 is likely to be entirely different from a human being's.
17:31 And so if it is possible to generate an AI system
17:36 that has subjective experience,
17:39 it might have, and is likely to have, an entirely different,
17:42 a very, very different kind of conscious experience.
17:46 And this could be,
17:48 I think this is really important to underline,
17:50 because when we're thinking about the risks
17:52 and the benefits of conscious AI,
17:54 we don't wanna assume that conscious AI
17:59 will be like human consciousness.
18:00 It may be very, very different.
18:02 So for instance, human consciousness is very tied up
18:06 with emotion, with motivation, with agency,
18:11 and with experiences of volition, free will,
18:13 these kinds of things.
18:14 Now, it may be that it's possible to have consciousness
18:18 without any of these things.
18:20 Now, I personally don't think so, entirely.
18:23 I think consciousness for me is something
18:25 that is fundamentally tied to our nature as living creatures
18:28 and is probably bound up with emotion
18:32 at some unavoidable level,
18:34 but I might be wrong about this.
18:36 But some of these other aspects of consciousness
18:38 as it emerges in humans
18:41 may very well be completely optional
18:45 in some future conscious machine,
18:46 whatever that might look like.
18:49 And we face some design choices about that,
18:51 and we also face some ethical dilemmas about that too,
18:56 because a lot of people talk about the risks to humans
18:59 of developing conscious AI,
19:01 which usually, I think, arise from Terminator-style
19:06 misunderstandings of what's going on here,
19:09 that as soon as AI develops consciousness,
19:11 it will want to take over the world and all that.
19:13 I mean, that's just, I don't think
19:14 that's a particularly useful way of thinking.
19:17 But from the other side, of course,
19:19 if something is conscious,
19:22 then we have a moral and ethical obligation towards it too,
19:27 and we mustn't neglect that side of the equation.
19:29 - Aren't we too susceptible to anthropomorphizing machines?
19:35 We even anthropomorphize the so-called aliens.
19:39 We see them in our image.
19:41 - Yeah.
19:42 - Isn't that a problem?
19:43 - Yeah, it is.
19:44 I mean, as we were saying,
19:45 the Google engineer last year,
19:47 and it's very difficult to avoid doing this.
19:50 Now, it's the way our minds naturally work,
19:53 projecting qualities into other things.
19:56 I mean, we do it with each other too.
19:58 We project understanding into other people's minds
20:01 where it may not exist sometimes,
20:03 it may exist differently.
20:04 But we definitely do it to non-human systems as well.
20:09 I mean, we do it all the time to non-human animals,
20:12 those of us who have pets.
20:13 We're always projecting things into the minds of our pets
20:18 that may well not be there,
20:19 and we do it more generally.
20:21 So I think it's really,
20:25 this becomes almost a sociological question.
20:27 I mean, what are the cues that cause us
20:29 to make these anthropomorphic projections?
20:32 And language is a big one,
20:33 which is why language models have been so interesting,
20:36 but also disruptive here,
20:38 because when something communicates with us,
20:41 even if it's in a disembodied way,
20:43 just by exchanging text,
20:45 even if it makes horrendous errors
20:46 and confabulates all the time,
20:49 it's quite difficult to resist the temptation
20:52 to project qualities into that system.
20:56 And that itself is very disruptive.
20:59 Now, if we believe that something is conscious,
21:03 or if we can't help feeling that something is conscious,
21:05 even though we know it isn't,
21:08 that I think opens us to making assumptions
21:13 about how it might behave, what it might do,
21:15 that might be very wrong,
21:16 because it would be what we would do
21:18 because we are conscious.
21:20 And if this thing isn't, or is conscious in different ways,
21:22 there's no guarantee it's going to do that.
21:24 - To me, I mean, I'm speculating here.
21:28 To me, so far, it seems that the best AI may be able to do
21:32 is mimic human consciousness
21:35 in a way that is indistinguishable for us
21:39 when we look at it.
21:40 Do you think that might happen?
21:42 - I think that's very likely to happen
21:45 in limited domains, right?
21:47 So I think, right now, in the language domain,
21:50 it's already getting pretty close.
21:52 Language models are improving a lot.
21:55 They're still quite easy to catch out
21:57 if you try to catch them out,
21:59 but perhaps these things will get ironed out.
22:02 And we may well live in a situation
22:05 relatively soon where we will simply not be able to tell
22:10 whether we're interacting
22:11 with a non-conscious large language model
22:15 or with a conscious human on the other side.
22:18 Or, relatedly, we may not be able to resist
22:23 attributing consciousness to a large language model,
22:26 even though we know it's not a human,
22:28 even though we know or have good reason to believe
22:31 that it isn't really conscious.
22:33 In the same way, there are some visual illusions
22:35 that even when you know what's going on,
22:37 like two lines may look the same length
22:42 but might be different lengths,
22:44 even when you know what's going on,
22:45 you can't help see it in a particular way.
22:48 And I think we may well enter that kind of world very soon.
22:52 And that will be problematic.
22:55 It's gonna be problematic, as we said,
22:57 in terms of our predictions about what these things do,
22:59 but also ethically,
23:02 because we may well then start caring about these systems
23:06 when our time and moral and ethical reserves
23:09 might be better put to caring about other people
23:12 or other non-human animals.
23:15 And on the flip side,
23:16 if we learn to not expend any kind of ethical resources
23:21 on these things,
23:22 we are in danger of brutalizing our minds.
23:26 I mean, Immanuel Kant had this nice way of putting this.
23:29 It's a kind of, it's why we don't destroy plastic dolls
23:34 in front of children.
23:36 I mean, we know they're just dolls,
23:38 but it's something psychologically unhealthy
23:40 to treat things that might seem conscious
23:45 in ways that we wouldn't treat them
23:47 if they really, really were.
23:50 So there's a lot to be confronted here
23:52 because while actually conscious AI is either impossible
23:58 or very far away, or we just don't know,
24:01 this world of conscious seeming AI in limited domains,
24:06 I think is really quite close.
24:08 - Remarkable.
24:09 To go off slightly on a tangent,
24:12 in the Indian, rather, Hindu civilization context,
24:15 where even the most inanimate is often seen as something
24:20 which has some kind of life,
24:23 be it a tree, be it a piece of rock,
24:27 from that standpoint, I'm beginning to think
24:30 that of all the civilizations which would take to AI
24:35 as having consciousness easily,
24:36 perhaps it could be the Indian civilization
24:39 because they see it everywhere.
24:41 - Maybe, and maybe also because, I mean, Hinduism,
24:45 my family's half Indian,
24:47 and there's something delightfully pantheistic
24:49 about Hinduism as well.
24:51 So you know, you're saying you just incorporate it
24:53 as part of the pantheon of Hindu deities.
24:57 So you might be right.
24:59 There's another aspect though to,
25:01 like a Hindu perspective on the world,
25:04 or an Indian in general perspective on the world,
25:06 which I think is interesting here,
25:08 which is about the soul.
25:12 And in the Western tradition,
25:15 broadly associated with Descartes,
25:17 but of course, many more philosophers,
25:19 we tend to think of the defining element of the human mind
25:24 and the human conscious mind
25:26 as this immaterial, rational agency.
25:31 And this is really, you know,
25:32 it's this idea that you can think of,
25:34 that's the heart of,
25:36 that's the frame in which AI has been developed in the West.
25:38 You know, there's development of something that's rational
25:41 and it's disembodiable and could run on anything,
25:44 could be uploaded to the cloud and so on.
25:46 That's the image in which AI is being generated,
25:51 even though it may not be explicitly put that way.
25:54 And in Hinduism, the soul, you know, as you will know,
25:58 is not the kind of disembodied rational agent.
26:01 It's much more, at least in my understanding,
26:03 much more associated with the body and with breath,
26:06 you know, this notion of Atman.
26:08 There is of course still the idea that you can, you know,
26:10 you can sort of skip from one person to another,
26:12 through transmigration.
26:15 But this idea that soul and breath are closer together
26:21 than say like soul and rational intelligence,
26:24 I think that's really important.
26:25 And for me, that fits much more comfortably
26:29 with what I think consciousness really depends on,
26:34 which is the body,
26:36 the flesh and blood mechanisms that we're made of,
26:40 the wetware, you know,
26:42 the raw biological materiality of us.
26:45 Because there's just a big open question here,
26:48 that computers, however sophisticated they are,
26:51 however many GPUs we have running,
26:54 are we just simulating something
26:58 or are we actually instantiating that thing?
27:01 And you know, this gets back to your question.
27:05 And there are some things for which
27:08 the distinction doesn't really apply, right?
27:10 We have computers that play chess, they play chess.
27:12 Now that's what they do.
27:13 And you can argue they play the history of chess
27:15 or whatever, but no, they play chess, that's fine.
27:18 But there are other things for which a simulation
27:22 is just always going to be a simulation.
27:25 And I think here of things like
27:27 big weather forecasting simulations,
27:30 you can make them as accurate as you like,
27:32 as detailed as you like,
27:34 but it never gets actually wet or windy
27:37 inside any of these computers
27:39 running a detailed weather forecast.
27:43 So this is the question,
27:44 like is consciousness more like playing chess
27:46 or is it more like the weather?
27:49 - In your book, you make a striking observation.
27:55 Consciousness has more to do with being alive
27:58 than being intelligent.
28:01 Elaborate.
28:02 - Yeah, I think this is kind of what I've been getting at.
28:05 So on the one hand, we have this fairly
28:09 Western inspired perspective of human exceptionalism
28:12 that consciousness and intelligence go together.
28:15 And when we try to create systems in our own image,
28:19 we try to build artificial intelligences,
28:21 we therefore think that consciousness
28:23 is gonna come along for the ride
28:25 because we make this association.
28:27 I just think that's wrong.
28:29 Consciousness is not the same as intelligence.
28:32 And also the idea that consciousness
28:36 is something that could be implemented inside a computer
28:40 is a massive assumption.
28:42 Now, I don't know whether it's right or wrong.
28:44 I suspect that it's wrong,
28:45 but I'm in a minority here.
28:46 Most neuroscientists would probably say consciousness
28:51 is something that could be implemented in a different,
28:53 in a system made out of different kinds of things.
28:57 But the more that I've been thinking about it,
28:59 the more I try to understand every form of consciousness
29:02 as a kind of perceptual experience,
29:05 now, whether it's of the world or of the self,
29:07 and every perceptual experience,
29:11 I think can be understood as a form of prediction
29:14 the brain is making about something.
29:16 And all of these predictions, for me,
29:20 are ultimately grounded in regulating the living body,
29:25 in keeping us alive.
29:27 Now, Descartes, again,
29:28 Descartes had this phrase called the beast machine.
29:31 And he used that to basically,
29:35 with respect to non-human animals,
29:40 to kind of make the point that they weren't conscious,
29:43 they didn't have the kinds of conscious, rational minds
29:45 that we did, that humans did.
29:47 So he called them, they're just beast machines.
29:49 The fact that they might yelp if you kick them
29:51 or bleed when you cut them, doesn't really matter.
29:54 And I've come to think almost entirely the opposite,
30:00 that what might endow us with consciousness
30:04 is the fact that we are beast machines,
30:06 that everything we experience arises with, through,
30:10 and because of our living bodies.
30:14 And to the extent that this is on the right track,
30:18 conscious AI becomes even further away.
30:21 Because before you have conscious AI,
30:23 you would first need living AI.
30:26 And that's not the route that people are taking,
30:28 at least not most people.
30:30 - I want to broaden a little bit to philosophical areas.
30:34 How do you view Gautama Buddha's non-self doctrine,
30:39 which broadly argues that there is no permanent self,
30:43 underlying that idea is one of momentariness,
30:47 which is kshanikavada, as it's known,
30:49 kshanika meaning moment.
30:51 - Okay.
30:52 - It's an extraordinary conception, 2,500 years ago,
30:57 that it is moment to moment to moment.
31:00 - Yeah.
31:01 - What lies in between those two moments?
31:03 Perhaps nothingness, that's the idea.
31:08 How do you look at an explanation of that kind?
31:08 - So I think there's a lot to it.
31:12 And of course, there's many traditions,
31:14 Buddhism as well, many, many Buddhist traditions
31:16 think of the self as a process,
31:18 rather than an essence or an entity.
31:22 Even in the Western philosophy,
31:23 David Hume talked about the self
31:26 as a bundle of perceptions.
31:28 I think the key thing here is to understand
31:33 that the self is not the thing
31:35 that does the perceiving of the world,
31:38 and that is exercising free will and making decisions.
31:42 All of these things arise in consciousness,
31:45 experiences of agency, experiences of self,
31:48 experiences of the world, experiences of the body.
31:50 All of these arise within consciousness.
31:53 And I think are different forms of perception.
31:57 And indeed, this then means
32:00 that there is an impermanence to the self.
32:03 There is a continuous change
32:06 of the experience of being a self.
32:10 And it's something that is quite hard to recognize,
32:15 because it doesn't always seem like that to us.
32:20 And I think this is why practices like meditation
32:23 can be quite useful,
32:24 because they can draw our attention
32:26 to the nature of the self.
32:28 If we look for it, if we look for the self, where is it?
32:31 Where is it?
32:32 It doesn't seem to be anywhere.
32:34 And of course, there's a clue in that
32:36 when we try to do that and repeatedly fail
32:39 to identify the thing that is the self.
32:42 So I'm, yeah, I at least in theory,
32:45 am very much aligned with that idea.
32:47 In practice, of course, for me, most of the time,
32:50 it still feels like there is a self,
32:52 and it's more or less the same self.
32:54 And it is more or less the same.
32:56 And these changes, they happen.
32:59 But just like change blindness
33:00 in our experience of the world,
33:01 if something changes slowly
33:03 and we're not expecting it to change,
33:05 then we don't perceive the change.
33:07 And I think we have a kind of self change blindness
33:10 where the experience of being who we are
33:12 is always changing.
33:14 But most of the time, we never experience the change itself.
33:18 - Yeah.
33:19 You know, life as a series of moments,
33:22 always transmigrating is an astounding concept.
33:25 Now look at this particular quote from Buddha.
33:29 In place of an individual,
31:31 there exists a succession of instants of consciousness.
33:36 It's poetic, profound, and beyond the cerebrum.
33:40 - Yeah, and I think that's, I mean, I like that.
33:42 So it does raise an open question though,
33:44 whether there are instance or whether it's a dynamic,
33:48 whether consciousness arises in frames like in a movie,
33:52 or whether there's something intrinsically dynamic to it.
33:56 And that's a fascinating,
33:57 but it's a slightly different question.
33:59 - Right.
34:00 - But zooming out from that,
34:02 the idea that all there is moment to moment
34:05 are conscious experiences,
34:07 not a separate self and world,
34:12 not the self having a conscious experience of the world,
34:14 but the self is part of that conscious experience too.
34:17 I think from that perspective,
34:19 a lot of the philosophical confusion
34:23 about consciousness can dissolve.
34:25 You know, things like free will can be understood,
34:28 I think, much more simply as experiences
34:30 rather than as causes without cause,
34:33 as often people like to try to preserve
34:36 a sense of free will as something
34:38 that can change the course of physical events in the world.
34:42 If we think of it as a form of perception,
34:45 a form of experience,
34:47 then it becomes much easier to accommodate
34:49 within kind of a naturalistic world view.
34:52 - The reason I'm citing this seemingly esoteric idea
34:57 is, I mean, these are all products
35:02 of a very conscious mind,
35:03 namely that of Gautama Buddha.
35:05 I seriously wonder whether an AI system
35:08 has the capacity to produce something like this.
35:10 - Yeah, I mean, I think we're still only just beginning
35:15 to figure out how to use,
35:19 how to interpret things that AI can produce.
35:24 Now, one of my mentors, Daniel Dennett,
35:26 is well-known for saying that,
35:28 "Hey, we should always remember that when we create AI,
35:32 we're creating tools and not colleagues."
35:36 And, you know, sure, we can use language models
35:40 to generate things that can be prompts
35:43 for our own creativity,
35:44 and that can be very, very useful.
35:47 But what meaning we attribute to that
35:52 is going to be very different.
35:55 So, I mean, I guess it's a little bit like,
35:56 you know, the old story about monkeys
36:00 bashing away on typewriters,
36:01 and if you leave them long enough,
36:02 you know, they'll come up with the works of Shakespeare, right?
36:05 So, I think now what we've got is something similar,
36:07 but it's not just monkeys and typewriters.
36:09 We've got something much more, much better designed,
36:13 not just sort of some random key pressing stuff going on.
36:16 So, the space of things that will come out
36:19 is much less random.
36:21 It's much more interesting.
36:22 It's potentially much more useful.
36:25 But the meaning in these things still comes from us
36:28 rather than the systems themselves.
36:29 - Yeah, there is an interesting history
36:32 of science fiction writers
36:35 presaging things which are already in place now.
36:39 One case in point being HAL in "2001: A Space Odyssey."
36:44 - Yeah.
36:45 - Which was, I mean, it was a conscious system
36:48 in many ways, right?
36:49 If there is a trend in that,
36:52 do you think we may end up with what HAL was?
36:56 - Well, I think what science fiction does a great job of
37:00 is being very prescient about some of the dilemmas
37:03 and some of the mistakes that we might make.
37:06 So, in 2001, HAL, HAL 9000,
37:10 yeah, certainly we're encouraged to believe
37:13 that it's a conscious system,
37:14 and in there, it's also kind of disembodied,
37:17 although the whole ship is its body in some way.
37:21 The thing that I remember from that film
37:23 that stayed with me is the reliance
37:27 of individuality on memory.
37:29 And there's this scene where Dave Bowman
37:31 removes the memory banks,
37:32 and it starts to, his personality begins to disintegrate.
37:37 It's an incredibly moving scene, I found, I still find.
37:41 And I think that's prescient
37:45 in how we might interact with
37:47 large language model-based systems.
37:50 That if they, again, we're gonna project identity into them,
37:53 and if they start to exhibit memories,
37:55 then that's going to play on our own
37:58 anthropomorphic tendencies in some ways.
38:01 I think often AI, so another film I think
38:03 that's very prescient is the classic film "Blade Runner."
38:07 Now, in "Blade Runner," instead of this Turing test,
38:10 which is this test of whether you can tell
38:12 the difference between a machine and a human
38:15 based on disembodied language exchanges.
38:18 So arguably, language models pass
38:21 or are close to passing the Turing test.
38:24 But in "Blade Runner,"
38:25 you have this thing called the Voight-Kampff test,
38:26 which is all about emotional response.
38:28 And that's what really matters.
38:30 That's what marks the difference
38:33 between a human being and a replicant in those films.
38:37 And I think that's beautifully prescient
38:40 about the importance of the body,
38:42 and our nature as living creatures, for real consciousness.
38:47 And I think, so that may come to the fore.
38:50 So you're right, there has been,
38:52 I think it's both, it's a very productive interaction
38:56 in some ways, but I also think there's a bit of a tendency
38:59 for the dramatic elements of AI to get overemphasized
39:05 in our projections about real AI.
39:09 And this is the Terminator type thing.
39:12 We tend to fixate on those things,
39:15 which are, I guess,
39:20 not the cinematic philosophical thought experiments,
39:23 but the more dramatic big narratives.
39:28 And those aspects of the films can, I think,
39:31 sometimes lead us astray in making projections
39:34 about future AI scenarios.
39:37 - To conclude, there's the specific question
39:40 of people in California creating guidelines
39:44 in the event AI becomes conscious,
39:46 and they seem to think that it's not that far off.
39:50 What is your take on that?
39:51 Do you think it's premature,
39:54 or you think it's good to create those guidelines?
39:56 - I think it's certain,
39:59 yeah, I do think it's good to create guidelines.
40:01 I'm not sure what would be the specific guidelines,
40:05 but I certainly think a few things
40:07 are really important right now.
40:09 Firstly, there's this attitude still prevalent
40:13 in some parts of the tech sector,
40:15 which is this, let's build it first
40:17 and worry about the consequences later.
40:19 And we've seen how damaging this can be in tech already
40:23 in things like social media.
40:25 And so this idea that wouldn't it be great
40:28 to build a conscious AI, let's just do it.
40:30 Now, I think that is not to be encouraged.
40:34 I think we should make it very clear
40:35 that that is a bad thing to do.
40:39 The problem is, of course, we don't know what it would take.
40:42 So we don't know exactly how to dissuade people
40:44 from it while still being able to pursue AI research,
40:47 'cause of course AI has huge potential
40:51 to benefit humanity and the planet and society.
40:54 So it's difficult to know exactly
40:57 what the specific guidelines should be,
40:58 but we should change this sort of attitude of just,
41:01 yes, it'd be great to have a conscious machine.
41:02 No, it would not be great.
41:04 It would be very problematic to have that.
41:07 - Yeah.
41:08 - But then there may be things we can,
41:12 maybe guidelines we can establish
41:18 which make real conscious AI less likely,
41:22 but also, and I think equally
41:23 important, is to make sure that we build systems
41:28 which are going to minimize risks
41:32 even when they just simulate consciousness,
41:36 because there's a lot of problems that will arise
41:38 as we've discussed,
41:39 from systems that just give the appearance of being conscious.
41:44 And so we need to make sure that we develop systems
41:48 being mindful of those risks.
41:51 And this may mean, for instance, developing systems
41:55 that, instead of trying to persuade us that they're conscious,
42:00 make it clear that they aren't in some way,
42:03 or don't use emotional words
42:06 or agency-type phrases in some ways.
42:11 So I think there are many things we can do.
42:13 I think the other really important general principle here
42:17 is that research in AI needs to be increasingly coupled
42:21 with research into consciousness
42:24 and neuroscience more generally,
42:25 because we are still flying a little bit blind in this area.
42:30 There are a number of different theories
42:32 about the biological basis of consciousness.
42:36 They all make different claims
42:38 about what the necessary and sufficient conditions would be
42:41 for consciousness in non-biological systems.
42:44 Mine is pretty, it's a bit of an outlier
42:49 because it suggests, as we've said,
42:51 that consciousness is not something
42:53 that will arise just in a simulation.
42:56 It has to be the hardware, the wetware really matters.
43:00 Most other theories don't make that assumption.
43:03 So I think iterating AI research
43:09 together with consciousness science is absolutely key.
43:12 Not gunning to build conscious machines
43:15 just because they're cool is really key.
43:17 And figuring out the risks and ways to avert them
43:22 for systems that merely seem to be conscious
43:24 I think is also key.
43:25 - You know, sometimes I get the sense
43:28 that there's almost a race to create a new species.
43:31 And as you said, it could be deeply problematic.
43:37 On that note, Professor Seth,
43:39 it was absolutely riveting for me to talk to you.
43:42 I hope you thought it was worth your while.
