A leading academic has suggested that a new 'right to reality' is needed to help people navigate their way through a complex online world where the lines between truth and lies are increasingly blurred.

Professor Lilian Edwards says we are all operating in a "miasma of uncertainty" where it is extremely difficult to tell the difference between truth and falsehood.

In an episode of The Scotsman's Data Capital podcast, the internet law expert says: "I think people increasingly feel they're in a world where they don't quite know what's true."
Transcript
00:00Hello, I'm David Lee, and welcome to the latest episode in the Data Capital Podcast series, brought to you by the Scotsman and the University of Edinburgh's Data Driven Innovation Initiative.
00:16This episode is called The Miasma of Uncertainty and examines the challenges in understanding what is real and what is fake in our increasingly complex online world.
00:28This issue will be central to the annual data conference in Edinburgh on September the 26th, Disinformation, Deepfakes and Democracy.
00:37And when we think about disinformation, deep fakes, how can we tell whether the words we read or the images and videos we see are true or false?
00:46To discuss this today, I'm joined by Professor Lilian Edwards.
00:51She's a leading academic in the field of Internet law and has worked at a number of universities and is currently Emerita Professor at the University of Newcastle.
01:01Welcome, Professor Edwards, who is also director of Pangloss Consulting.
01:06Now, if I may, Lilian, first of all, you've used the phrase miasma of uncertainty.
01:13Can you just tell me exactly what you mean by that to begin?
01:17Well, it was a casual phrase. I mean, it's not a legal term of art that you might find in a statute.
01:24But what I was referring to is what we're seeing.
01:27You know, I think we're both quite concerned today to have seen pictures that appear to show Taylor Swift vigorously endorsing Donald Trump.
01:39It's quite funny because we think probably that most people won't believe it.
01:43But there may be people out there who aren't very in touch with popular culture who might.
01:48You know, it's not impossible. Russell Brand is a Trump supporter, apparently, and he's also part of popular culture, so popular culture is not immune from his appeal.
01:57But that's just one example of the way in which I think people do increasingly feel that they're surrounded by a world where they don't quite know if it's true.
02:08Right. And what has particularly aggravated this, do you think?
02:12I think this feeling began with the emergence of political deepfakes especially, which on the whole have not really convinced anyone who didn't want to be convinced, but have added a general layer of uncertainty.
02:29But with the rise of chatbots, with large language models, with ChatGPT and all the rest of them, and particularly, I think, with their integration into search engines, which seems to proceed inexorably, even though most people don't actually seem to like it.
02:47You can sometimes find a whole Reddit thread that consists solely of people saying, how do I turn this thing off? You know, because Gemini now, I think, has been rolled out even to Europe by default to give you summaries when you search, as opposed to the old-fashioned page of links.
03:07What we're seeing is we know that these systems hallucinate, and it's always seemed to me insane to incorporate them into search engines by default. Search engines are something that you want to give you accurate data and ideally links to that accurate data so that you can verify it yourself.
03:25But the last thing you want is summaries from a machine that makes up lies for a living, right? So, given that, we've really moved on from just seeing pictures that we can pretty easily say are not real, to stuff that we're going to be much less certain about.
03:45So, ChatGPT is well documented now to be bad at maths, for example. If you ask it questions that involve arithmetic or division or multiplication, it might well produce results that are wrong, and you won't know. It's almost certainly going to start making up facts about history and geography. In fact, we do, again, have examples of these.
04:07It's advised people badly about which mushrooms are poisonous, which I think has led to illness, if not death. There was a case the other day about a book containing AI generated pictures of mushrooms in a mushroom foraging guide, right, which is just a classic scenario.
04:27It's advised people to add glue as an ingredient to pizza.
04:32It's advised people to eat rocks to feel better.
04:36Again, most of these are pretty blatant, but they will get less blatant, as people begin to trust these systems more, or not to know that they are there as they become seamlessly integrated into our search engines and our customer service portals and all the rest of it.
04:54We do begin to think, as we do a bit now, I think, with spam messages: Is this real? Is this really real? You know, I've been waiting for a delivery and I've had a series of texts from one of the major delivery companies saying we've got your address mangled, so click here to fix it.
05:14And I've gone, no, no, I think that is phishing. And now I feel a bit like the whole world is going to be like that, where you're constantly making these judgment calls as to reality with very little to draw on to validate your hunches.
05:31And the sort of counterpoint, I suppose, of this miasma of uncertainty, another phrase that you used, is this right to reality. So, you know, you've talked about the idea of the right to reality as potentially a new human right in the online information age.
05:45So can you just tell us a bit more about where that idea came from?
05:50Yeah, my thinking on this is actually morphing a bit. I mean, it came from quite basic legal research that I've been doing on an ongoing basis about what rights you have to combat disinformation, fake news, things that are put out about you that are not correct.
06:09Right. Reputation management is a big business. Yeah. And, you know, two of the most basic tools you have in Europe, at least, are your rights under data protection.
06:23So your right to control really what is known about you to control your personal data. And one of the key rights there is your right to erasure, to have your personal data erased in the hands of another host or platform, sometimes called the right to be forgotten.
06:41So I think that could be very useful. But at the moment, there is very partial take-up by the platforms, by people like OpenAI, and even less so by platforms like Twitter, of giving you access to these rights, because you are really dependent on their goodwill, you know, unless and until they get sued in a big way.
07:05So that was one obvious legal avenue, but it has problems. It only really applies in Europe. The US doesn't have data protection law, though some states do. California does in a big way.
07:19There's also libel. That's another obvious one to look at. And indeed, I've spent, you know, too much time this morning advising people that Taylor Swift ought to sue Donald Trump in libel for endorsing these deepfakes of her, which he clearly knows about, which are clearly fake, you know, because US law is very difficult on libel, but this seems to me to be a perfect case.
07:42But these are very partial remedies. I mean, and I could go on for the rest of your life, they both have a lot of legal hang-ups, right? They don't really apply uniformly across the globe, let alone across Europe and the US. As I said, the US isn't keen on libel law, especially in relation to public figures.
08:04They both depend to a very large extent, though this is going to be true of any remedy, on the complicity or agreement of the platforms, right? But also, and this is where I began to think of a wider route, both these remedies only apply to what is known about you, your reputation, right?
08:27So Taylor Swift's complaint could be met by a data protection remedy, by a libel remedy possibly, and, I won't even get into this, I think by some kind of intellectual property remedy, like saying that her personality rights are being infringed, which is sort of her right to merchandise her image.
08:46And I actually suspect that's how she'd sue, because it's the most likely to succeed in the US. But these are all about you. And some of the things I mentioned before are not about you, you know: knowing about basic maths and history and geography, and looking up who was the American president in 1832 and stuff like that, right? So that takes you into a much wider idea of this right to reality.
09:12And I think for a lot of people listening to this, Lilian, it will very much resonate, this idea of not quite knowing: maybe something that we would have believed intrinsically six months or a year ago just starting to kind of niggle away at us now, and something we might have clicked on or not clicked on, being uncertain about it. And, you know, it's really very relevant to everybody's life.
09:36And just to bring it kind of up to date with what's happened this summer in the UK, you know, the riots and the social disorder in England, there's been a lot of discussion about that kind of, you know, false information being disseminated online. And as you say, some people will reject that, some people will take it on, etc.
09:57You know, where do we start? Where do we start in trying to say, how can we figure out what's true and what's false? Where can our brains start to kind of process that challenge? And let's try and relate that specifically to what we've seen over the summer, particularly on X and the way that Mr. Musk's behaved.
10:18Well, it's hard.
10:22We're not very good at spotting false information. I mean, we already knew this from, you know, centuries back in terms of trials and witness behavior. You know how easily suggestible witnesses are, and you can convince them they've seen things or seen people that they haven't, very easily, or the reverse.
10:41We're not good at it. When you do studies, and I was involved in one study with the Royal Society, where you try to get people to pick out the deepfake video from, say, four videos, the results were worse than chance, as I recall.
10:59Actually, I think they got it right just about at chance. Yeah, there were four videos, and they got it right about 25% of the time. You know, so really, we don't have the discrimination on this. Also, public knowledge, as it were, tends to lag behind the reality, because there is an arms race, certainly in the creation of deepfake images and videos.
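To put those numbers in context: with four videos and one deepfake, a random guesser succeeds one time in four, so 25% accuracy is exactly the chance baseline. Here is a minimal sketch of that sanity check in Python, using entirely hypothetical participant numbers rather than the Royal Society study's actual figures:

```python
# With four videos and one deepfake, blind guessing is right 1 in 4 times,
# so 25% accuracy is the chance baseline. The participant numbers below are
# hypothetical, for illustration only.
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer correct picks."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_participants = 200   # hypothetical sample size
n_correct = 44         # hypothetical: 22% picked out the deepfake
chance = 1 / 4         # one correct choice among four videos

print(f"observed accuracy: {n_correct / n_participants:.0%} (chance is {chance:.0%})")
print(f"probability of doing this badly by luck alone: {binom_cdf(n_correct, n_participants, chance):.3f}")
```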
11:20So for ages, it was all count the fingers, you know, the fingers will be all wrong. And you do see some of that still, but sophisticated deepfake technology can fix that now, can move past that.
11:33So we have an arms race where the means of detection are constantly trumped by the people making the fakes. Having said that, most of the fakes you see that come up in these kinds of discussions about politics and fake news and immigration, whatever, are what they call shallow fakes.
11:54But there is a line here again, or is there a line, there is a penumbra between people just saying inflammatory things, which again is something we've dealt with for the whole of human history, and people creating fake reality.
12:11Right. So I think we shouldn't go mad and say this is a whole new thing that we never ever thought we'd have to deal with. Right. That's one point.
12:20Should I go on?
12:23No, it's okay. I was just going to come back in there and say, you know, the conference in September is going to look at, you know, disinformation and deepfakes in relation to the democratic process specifically.
12:34And in terms of going back to this phrase, you know, the miasma of uncertainty, how do you, how would you expect someone like Donald Trump and his supporters to approach November's election in that context?
12:46Is it all about putting fake information out there or is it just about throwing loads of stuff out there and creating that sense of, I don't know what's right and what's wrong.
12:57I don't know whether this is true about Kamala Harris or not. You know, how would somebody like Trump exploit this?
13:04Well, I'm not a political observer, you know, I don't know exactly what Donald Trump is going to do.
13:12This idea of throwing out as much strange stuff, you know, as possible all over the internet so that it essentially confuses people and they don't know what's politically up or down is reputedly, you know, what the Russians have been doing for some time, what Putin has been doing.
13:31I mean, that's the general vibe on the street, you know, that they have these troll farms, these clickbait farms, etc, etc, just piling stuff out.
13:41And often, it does seem to be almost random as opposed to targeted.
13:49Donald Trump, who knows? And in fact, Elon Musk, who knows? But it is worth saying, I think, at this point, that there are a couple of tools or inflection points that we can look to here, which there isn't a lot of talk about right now.
14:05One is, it isn't just the value of the speech itself. If I say something inflammatory about immigration then, you know, well, I have a reasonable number of followers on Twitter, but if a random person with 10 followers says something, it will probably go nowhere.
14:21So part of the problem, a big part of the problem, is the role of the platforms in developing algorithms that disseminate upsetting, provocative content, right? Well, we're very familiar with this idea.
14:36So it's old news, it's very old news, again, that we, or at least the West, are trying to rein this in. So we have seen, particularly in the UK, the online harms bill that's now an act, the Online Safety Act, after all the iterations it went through.
14:58And in Europe, the Digital Services Act and the Digital Markets Act. So, you know, without going into the detail of that, we are trying to get the platforms onside.
15:09The problem, again, as I already slightly sketched out, is that the platforms are mostly in America. These laws so far are in Europe and the UK. How are we going to enforce them? And that's what's going on right now in this kind of standoff of words between, for example, Thierry Breton and Elon Musk.
15:30People are saying that Musk is trying to stress test the Digital Services Act to destruction. You know, we were talking about Trump's motivation; I think Musk's motivation, apart from just being a troll, may be to see how far he can push it before they start looking around to see, for example, what resources Twitter has in Europe.
15:54And you've really answered my next question there, which was, I think when the EU sort of challenged Musk, he responded with a fairly foul-mouthed meme that made it clear he was not really taking it seriously at all.
16:10So how do legislators and regulators respond in that context, where you're suggesting that Musk may be stress testing how far he can go? How do they respond when his approach is just to abuse them? You know, where do they go in terms of legislative or regulatory or technological solutions when all they're getting is foul-mouthed abuse back?
16:40Well, I mean, it's fair to say that Musk is an outlier, right? Usually in the past, the kind of analogy has been about mice and elephants, right? The mice, who were like scammers and spammers, would scurry around, and they were hard to find, and they probably wouldn't respond to any legal threats, and you didn't know where they were, so you couldn't impound their bank accounts, and you just had to kind of roll with it.
17:03But their influence wasn't that high. Whereas the big corporations, the Googles, the Metas, you know, Microsoft, they would be responsive to legal threats because they operate, I should say multinationally or transnationally, I'm not really talking about international law here, they operate transnationally.
17:23They have offices in Spain and the UK and Dublin and whatever, they do not want their employees and their customers to be subject to sanctions. They don't want famously to not be able to travel and find that they get put in jail, which, you know, once happened to a Google executive visiting Italy when Google had broken some law relating to YouTube.
17:47So there are plenty of levers normally, apart from just PR and brand. It doesn't look very grown-up, it doesn't look very corporate, it doesn't look very trustworthy, so even companies, primarily those in the US, try to negotiate, which is what they do, really.
18:03They lobby, when these laws are made, to try to get to a place that they're happy with. That is not entirely what we'd like in a democratic legal process, but that's what we have. Whereas when you get someone like, indeed, Trump or Musk, they are outliers in this very strange period in history we're going through, in which people seem to be denying the rule of law, which as a lawyer I find quite upsetting.
18:32But yeah, you were saying, what can these legal systems do effectively? Well, there was a discussion about this in relation to the Online Safety Act, which I thought was quite interesting.
18:45Even though Europe is probably a much more important adversary to both of them. Apparently, the Online Safety Act purports to have measures to act against companies in America, for example by cutting off their advertisers, which is an interesting tactic.
19:08It was used as far back as pirate radio, if you recall, right? They set up their pirate radio stations on the high seas, outside the jurisdiction of the UK police. But the UK then passed a law saying that if you advertised with Radio Caroline, then you would be guilty of a crime. I think it was actually criminal law, right?
19:29So it's a really old-fashioned approach. A similar kind of approach is to try and talk to payment providers. And this is how the US tried to bring down WikiLeaks, actually: you talk to Visa or MasterCard or whoever processes payments to the company and you go, I think they're breaking the law, stop enabling them, or possibly we'll come at you.
19:54I don't think anyone's mentioned that in relation to Twitter yet. But I do wonder if someone might be thinking about it. The advertisers point I thought was quite funny, because it has been a very effective tool in the past, right? You can't live without the money from advertising.
20:09But Twitter's already going down the tubes in terms of advertisers as it is. They're already scraping the barrel. Most of the adverts they have right now seem to be either adverts for themselves or adverts from really bad gambling sites, is the impression I have. So maybe that threat won't actually cut much mustard with them.
20:31Okay, that's really interesting. And coming back to the tools that maybe can help us to try and sort of pilot our way through what's true or false. What about some of those very pragmatic solutions, like watermarking, for example? Can watermarking help us establish, to a degree, what's true and what's false? What's AI generated? What's not? Can you tell us a bit about how that might work?
20:59Yeah, um, I think there is some optimism here, you know, but it's up and down. So once upon a time, you know, you put a watermark in a document so you could establish its provenance, as they say, and it was just a little GIF. And if you had two minutes of computer skills, you could take the GIF out, right, or replace it with a different one. So that wasn't very effective.
21:21Since then, people have basically been combining aspects of digital signature technology with watermarking, and they've come up with a standard that's called C2PA. I always forget what it stands for. But there's been a lot of buy-in for C2PA, not only from trusted brands that provide news; the BBC and the Guardian and the Washington Post have been quite heavily involved in this.
21:51But also from various platforms. So one of the really interesting developments lately was that OpenAI, I don't know whether they have promised to do it or whether they've actually started doing it, but they're going to put the C2PA standard watermark into images generated from their site.
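A rough sketch of the core idea being described here: C2PA-style provenance binds a metadata manifest to the exact bytes of a file with a digital signature, so tampering with either one breaks verification. This is not the real C2PA wire format, which is a standardized embedded manifest; the field names below are illustrative, and it assumes the third-party cryptography package is available:

```python
# A minimal sketch of the core idea behind C2PA-style provenance, NOT the
# real C2PA wire format: bind a metadata manifest to the exact bytes of an
# image with a digital signature, so editing either one breaks verification.
# Field names are illustrative; requires the third-party 'cryptography' package.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # in reality, the publisher's key
image_bytes = b"...raw image data..."        # placeholder for real pixel data

manifest = {
    "claim_generator": "example-tool/1.0",   # hypothetical assertion fields
    "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "ai_generated": True,                    # the kind of label Article 50 envisages
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Verification: recompute the content hash, then check the signature.
assert hashlib.sha256(image_bytes).hexdigest() == manifest["content_sha256"]
signing_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
print("provenance manifest verified")
```

Unlike the old removable GIF watermark, a bad actor can strip this manifest out, but cannot forge a new one that verifies against, say, the BBC's public key.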
22:16Right.
22:18Now that's very interesting, because one of the big problems with this idea, and it also ties into the AI Act, there's a lot going on, is you can require people perhaps to label or to watermark.
22:32But is that going to be a legal requirement? Right, because that's what the AI Act essentially says. Article 50 essentially says that if you make AI generated content that looks like a human made it, then you've got to give it some kind of label that says it's AI made.
22:52You may have noticed on Facebook, not sure about Google, that a little thing has appeared at the top now where you can click it and say, this is AI generated. You know, I haven't had any cause to use it, but it suddenly did appear, and I think that's a pre-emptive move before the AI Act comes into force.
23:17But this is another discussion I've had lately. It's very hard to make people do this, and it's very hard to enforce it. So it happening by default, say, on some large image or language generating models is a really good start, because most people take the default.
23:41Most people don't go looking around to turn the default off, right? But the bad actors almost certainly will look to turn it off. Or if they can't, and I don't think that's likely, but if, say, OpenAI or DALL-E or whoever were to say, right, you've got no choice in this, the watermark will be inserted, then they will just go and look for an open source product where they can make that choice themselves.
24:09It's not surprising that you forgot what C2PA meant because, wait for it, it's the Coalition for Content Provenance and Authenticity. So it's not the easiest collection of words to remember.
24:23But I mean, that's really interesting. But again, you've touched a little bit on this, Lilian, but, you know, there are estimates, some people are saying, that 90% of online content could be AI generated by 2026. I mean, they just sound like two numbers and years that have been plucked out and, you know, choose your own, you know, insert your own numbers here.
24:45But the point is there, if there is so much AI generated content being created, a watermark that proves it's AI generated is helpful to an extent. But still, you know, where do we then begin to, as you've said, there's a lot of rubbish stuff being generated by generative AI at the moment.
25:07How does a watermark help in a context where the vast majority of content is or could be AI generated?
25:16I don't know. It's such a cluster of things. I think this whole idea of watermarking or labelling is sort of beginning to be all things to all men. Again, I've been having discussions with people online.
25:29Artists, for example, artists' coalitions, are very, very keen on watermarking, because the C2PA standard, again, is capable of doing more than just saying I'm AI generated. It's also capable of giving a little dossier of details, perhaps, about when it was created and where and how and even by whom.
25:49And these people are very, very keen on this kind of watermarking or labelling because it might enable them to prove their copyright claims.
25:59So for the copyright people, there's a lot of point in it. Although, as I've also said, I think the people who are actually deliberately infringing copyright will be the people who don't include the watermark.
26:12But the average unsuspecting person who's just enjoying playing with Stable Diffusion or whatever, they're quite likely to leave in the watermark if it's included by default in the software, let's say.
26:25And we were talking a while back, it's gone a bit quiet, but I think it'll come back, about the possibility of these watermarks being included by default when you take a picture using your smartphone.
26:37Yeah, that has already happened with high-end, upmarket cameras, but it hasn't yet got into smartphone software. So that would affect the ecology of image generation quite a lot.
26:51So, the question then is what do you do with these watermarks? As I said, for the copyright people and artists trying to prove they were ripped off, it could be quite effective.
27:06For trying to prove what is real and what is true, it doesn't really help.
27:13The fact that something is AI generated does not mean it's fake.
27:17The fact that something is being generated without AI does not mean it's true.
27:23And one of the worries I've had throughout this process, although it doesn't seem to really happen, perhaps I've just been too worried, is that platforms would start to filter material on the basis of these watermarks.
27:38And they would regard being AI generated as being a proxy for truth.
27:46And that could have been quite bad, really, because again, what do you mean by AI generated? I mean, these figures, again, don't reveal what lies beneath them.
27:55I mean, very, very many people in the world are now using ChatGPT, Grammarly, DeepL, all kinds of generative AI to help with their writing, to translate it, to improve it when they're not first-language English speakers and they need to prepare a report or a dissertation in English.
28:15Et cetera, et cetera. So, I mean, it would be really bad if these disparate voices found their stuff blocked or filtered from major platforms because it appeared to have come through an AI filter.
28:27On the other hand, it's very good for academics who are trying to look for essays that have been written using chat GPT.
28:35So that would be a good use case.
28:38Yeah, I mean, you've covered a lot of ground here, Lilian. You've talked about some potential for watermarking.
28:44You've talked about practical areas like hitting social media platforms in terms of advertising and through the payment providers.
28:52You've touched a little bit on some of the legislative attempts that are being made to get a grip on what's happening online.
29:00The Online Safety Act here, you know, what the EU's doing. What legislation do you think, the EU's AI Act, for example, can help us find our way closer to a right to reality?
29:18Or do you think legislation will always lag behind what is happening online?
29:29I don't think the purpose of the AI Act is to provide a right to reality.
29:34As I said, the only provision in the AI Act that even kind of points in that direction is this rather, you know, on-its-own little article, Article 50, that really tries to provide transparency in what it actually calls circumstances of limited risk.
29:54It wasn't regarded as very important, I think, in the original drafting of the AI Act.
29:59And it's been thrust into prominence, really, by the arrival of generative AI, deepfake-based political chicanery, all the rest of it.
30:08But all it, again, still really says is that in a chatbot situation, you know, where a machine appears to be acting like a human, or in a situation where you're sharing content that you have generated using AI, then you are meant to indicate that it is AI.
30:28That's all. It's just a labelling requirement. And then you have, obviously, layered on to that, the problem that the AI Act only operates in Europe.
30:39It will apply to people outside Europe, because that's the way it's drafted: it will operate on Americans if they have a product that they're selling into Europe. But Trump talking on his podium in Florida is not selling into Europe.
30:54So what does he care, right?
30:57I don't know. I have a gut feeling that it's not going to be a very good solution for bad actors. It's quite a good solution for neutral actors, you know, who would generally like to be legally compliant.
31:14But I think it will be a bit of a non-event, is my actual feeling. There is a parallel, way back when, my god, about 20 years ago, when spam was a big deal, right? We hardly think about spam anymore.
31:30You know, we have these great spam filters, which again use machine learning to decide what's spam and tidy it away so we don't have to deal with it. But I think still about 90% of email is spam, which used to be a terrible problem.
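As an aside, here is a toy version of the machine-learning spam filtering mentioned above: a naive Bayes classifier over word counts. The training messages are invented for illustration, and it assumes scikit-learn is installed; real filters train on millions of examples and many more signals than raw text:

```python
# Toy spam filter: naive Bayes over word counts, the classic approach behind
# the machine-learning filters discussed above. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE prize, click here now",                       # spam
    "Cheap pills, limited time offer",                        # spam
    "We could not deliver your parcel, click to reschedule",  # spam/phishing
    "Minutes from yesterday's team meeting",                  # ham
    "Can we move lunch to Thursday?",                         # ham
    "Draft agenda for the data conference",                   # ham
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The delivery-company phishing text from earlier in the conversation:
print(model.predict(["Click here to fix your mangled delivery address"]))
```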
31:44And one of the proposals the EU made way back then was that if you sent spam, then you should label it as spam. You know, you were meant to put like marketing or something in the email header.
31:57Not that many people did that. Spammers don't have much of a vested interest in labelling their communications as spam. And I just think that we may see something similar.
32:09You know, the way forward on that is to embed watermarks or similar into the product as it is produced. Right.
32:19And that's why I think this equivocation, this alleged equivocation on OpenAI's part that's been reported, about putting these watermarks into text, is really interesting, because AI generated text is used to make bots convincing.
32:37Bots are what mess up the revenues of Twitter and Facebook and so on. Bots are useful to many people, in ways that are not very desirable.
32:49And just to kind of try and come to some conclusion here, Lilian, in this very complex world, how do you deal with your own personal right to reality?
33:00You know, what helps you get through the day in reading stuff and looking at images and kind of processing that and making your own decisions as to what you write about, how you respond, etc.?
33:16And, you know, what advice would you give from your own observations and your own actions to those of us who do feel a little bit paralyzed by all of this stuff, very confused by it and very uncertain about where it's all going?
33:33That's a really good question. I think one thing we could do is be a little bit more restrained about pontificating about things we don't know about.
33:46I mean, I try, at least in the public domain, to say if I don't know about something, you know; so I respect the limits of my own expertise.
33:54You know, I'm not a big expert on cybersecurity. I'm not a big expert on raising gold digs.
34:01And that leads me to another point I was thinking about, that you will like, which is: I think the best guidance we still have is, in fact, the old-fashioned traditional trusted media brands.
34:15Right, because they do have a reputation to keep up, which will be destroyed if they become homes for fake news or fake images or fake videos.
34:25And certainly I know that's why these publications (sorry, I know you're part of all this discussion), the Guardian and the Washington Post and the BBC and the Canadian Broadcasting Corporation and various others like that, were really, really, really worried that content would start to get out there with their brand attached which would destroy their reputation.
34:46And that's why they were so keen to have this opportunity to put in a watermark that would kind of indelibly, because it's like a digital signature, it's hard to pull out.
34:56It would indelibly say either that it was from the BBC or that it wasn't, you know, that someone had just faked a logo. Right. So, and this is the bit you will like, I think the best thing we can actually probably do is keep funding traditional journalism.
35:13Which, as we all know, has not been having a great time.
35:17It's very worrying, I think, how, you know, these constant figures come out that young people, by which I mean probably anyone under 40, are getting most of their news from social media.
35:30But usually what that means is that they're reading links to traditional media, right, maybe not on TikTok, I don't know.
35:39That still goes back to maintaining an economic business case for traditional media, and I think, you know, the case for this is strong in Europe, actually, where there are actually rules about a certain percentage of culture and media being supported in different languages,
35:58and different political groups and so forth. We don't really go in for that as much in the UK or the US, where the market reigns supreme. Perhaps we could do with looking at some of those systems.
36:13We've touched on legislation, regulation, we've touched on policy, we've touched on technology, but bringing it back right down to a couple of the things you've said, you know, trusted news brands is important, and also that potential to hit some of those rogue actors in the pocket.
36:31So two quite old-fashioned remedies, in a strange kind of way. And in the light of all this, final question, Lilian: are you kind of optimistic for the future of the way we all consume news, and our human brains, and how the hell we kind of cope?
36:53I just love the phrase miasma of uncertainty. I know it's always interesting when we all kind of occasionally make things up and then it just becomes a thing that's discussed and talked about. Are you optimistic?
37:05I don't want to end on a totally negative note. But unfortunately, there is one negative thing that I haven't really said, which is, with all this AI-generated content being spewed out, and AI-generated meaning a lot wider than just, like, a deepfake of Taylor Swift (I mean, you know, using any of the programs I've mentioned routinely on your laptop, etc.), it can go wrong.
37:29Again, we have a lot of evidence that these outputs are going to become the training sets for other large models, right? In other words, a bit like putting toxic waste into the waterway system, this stuff is going to spread into the ecosphere and kind of synergistically rebuild itself.
37:55And we're already seeing that. That has at least two really worrying consequences. One, and the physics of this is beyond me, but there's a lot of work coming out saying that when large models feed, as it were, for their training set on the outputs of other large models, you get a spiral down to nothing.
38:13You get something called model collapse, where they basically stop making sense. So if you care about the ecology of large models, then this is something to worry about. And the other more comprehensible problem, I think, is that this stuff again will then be called on by humans and by institutions.
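For the curious, here is a toy illustration of the model-collapse dynamic being described, in one dimension rather than in language: each generation is fitted only to samples drawn from the previous generation, and the fitted spread tends to drift downwards until the "model" has narrowed to almost nothing. All parameters are arbitrary, and real collapse in large language models is far more complex; this is only the statistical intuition:

```python
# Toy "model collapse": generation N is fitted only to samples drawn from
# generation N-1. With small samples, the fitted spread tends to decay over
# generations, a 1-D analogue of models degenerating when trained on other
# models' outputs. All parameters here are arbitrary.
import random
import statistics

random.seed(42)
mean, stdev = 0.0, 1.0                      # generation 0: the "real data"
for generation in range(1, 26):
    samples = [random.gauss(mean, stdev) for _ in range(10)]  # tiny training set
    mean = statistics.fmean(samples)        # refit the next "model"
    stdev = statistics.stdev(samples)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")
```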
38:33And we're already seeing that. We're already seeing it in stuff that's user generated to a very large extent, like Wikipedia. We're seeing falsehoods that are being generated by large language models working their way into what people are putting on Wikipedia, and to some extent working their way into bad journalism.
38:53So a friend of mine had that experience where a very bad site reported stuff about him, which looked very much like it was wrong. It wasn't bad, but it was wrong. It looked very much like it had been generated by ChatGPT at some point, maybe a few links down the line, and there it was getting into not reputable journalism, but at least stuff on websites that somebody might have Googled if they were looking for his name.
39:19So yeah, sorry, that's not the optimism you were hoping for. I think these are really quite existential threats.
39:26Yeah, I think you've gone through the looking glass once, if not twice there, Lilian, in terms of AI regulating AI and then regenerating itself. But listen, thank you so much for those fascinating insights today.
39:41And you can hear more from Professor Edwards and many other expert speakers at Disinformation, Deepfakes and Democracy on September the 26th. Please go to nationalworldevents.com/SDC-2024/. That's a very complicated address, but it'll be written down as well.
40:04To listen to all the episodes in this series of Data Capital, search for Data Capital on your favorite podcast platform. And Data Capital is presented by me, David Lee, and produced by Andrew Mulligan.
