Increasing prevalence of deepfake images prompts calls for better legislation

The increasing prevalence of pornographic deepfake images generated by artificial intelligence has prompted calls for better legislation. Experts say it's unsurprising that it's mostly women who have become the victims of deepfakes.
Transcript
00:00 Sadly, I think women have often faced this problem, even in the days before AI with people
00:07 using Photoshop, for example, or other technologies to transpose women's faces onto other bodies,
00:14 for example.
00:15 Sadly, it is a part of our culture.
00:18 We are in a culture that objectifies and often takes action to demean women.
00:23 And so using the latest technology to do so is not a surprise.
00:28 It's really just a question of scale and now what's possible with these new technologies.
00:33 Is there any way of fighting back against deepfakes, against being victimized without
00:40 your knowledge or consent?
00:43 I think that is a real challenge.
00:45 I think one of the critical things is that we not put this into the hands of women to
00:49 solve.
00:50 I think the critical piece here is a combination of technology companies working together with
00:55 government legislators in order to put boundaries in place to prevent this type of thing from
01:01 happening.
01:03 We do have the technology, but that doesn't mean that we have to use it.
01:06 There are strategies companies can put in place to ensure that what they build is used
01:12 for good and not for bad, and for legislators to ensure that they're keeping tabs on how
01:17 these technologies are evolving and how they're being used.
01:20 Otherwise, more people will fall victim to these deepfakes or to misinformation.
01:25 Is there any current legislation in Australia that can regulate parts of AI?
01:33 I think we certainly have to look at existing legislation that we have on the books.
01:37 For example, there may be issues around copyright.
01:40 There may be issues around libel.
01:42 There could be different aspects that someone could pursue if they were a victim of one
01:47 of these technologies and something was created that used their image inappropriately.
01:54 The challenge is that much of the legislation was written a very long time ago before these
01:58 technologies appeared on the marketplace.
02:01 Therefore, it's really going to be up to the courts, and it will take quite a bit of time if we
02:06 work through those avenues for change.
02:10 One of the critical pieces is really around critical thinking skills and for people viewing
02:14 these technologies not to promote them, not to send around things that you know
02:20 are incorrect, but also to really question the sources and where the information
02:25 has come from.
02:27 This year is a year that democracy will be online.
02:31 There's something like 64 countries that are going to the polls.
02:36 Deepfakes of course can be used to smear political opponents, to spread misinformation as well.
02:42 What do you think is the best way for governments to mitigate this sort of devastating outcome?
02:49 I think you're absolutely right.
02:51 Certainly one of the most critical pieces is around transparency.
02:55 That's even a spot where the media have a role to play in terms of making sure that
02:58 people understand that the images that might be put forward to them are not authentic,
03:04 but also making sure that we're not actually using these images and letting them proliferate.
03:09 Look at those images you just showed of Donald Trump
03:14 supposedly being arrested, which we know were not real.
03:18 These were fake images.
03:20 Many of these could be used as part of a disinformation campaign.
03:24 So government, citizens, media, we've got to work together to ensure that what is being
03:29 put forward to the populace is accurate and helpful as they make choices around elections.