AI-generated images are now everywhere, and the tools used to make them are getting better, faster, and more accessible. Initiatives like the C2PA have built systems to help us tell what's real from what's fake and avoid being misled, but progress hasn't been going well.

Transcript
00:00Do you remember that fake image of the pope in the puffy jacket?
00:03Or how about when Donald Trump recently shared those pictures of Swifties for Trump?
00:08Or how about that fake AI-generated picture that showed an explosion happening near the Pentagon
00:13last year? Some of these examples are obviously more concerning than others, but whether it's
00:18for funsies or mayhem, they all illustrate the same thing. Generative AI is getting really,
00:25really convincing. This tech is now adept, pervasive, readily accessible, and increasingly
00:31fast. And there are countless reasons to be concerned about how that might impact the trust
00:35that we place in photos, and how that trust, or lack thereof, could be used to manipulate us.
00:43We've already seen a glimpse of this. Generative AI is driving an increase in scams, and the
00:48internet is full of political deepfakes in the run-up to the US presidential election.
00:53Photographic evidence doesn't mean much in a world where anything, believably,
00:58could be faked. Watermarks are easy to remove, and detection-based methods, like the websites
01:05you drop an image into that supposedly tell you whether an image is real or AI-generated,
01:10are notoriously unreliable.
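To see why plain labels don't cut it, here's a minimal sketch, using Python's Pillow library (not any tool named in the video), of a metadata tag quietly vanishing the moment an image is re-encoded:

```python
# pip install Pillow
from PIL import Image

# Create a stand-in image and label it as AI-generated with a plain EXIF tag.
img = Image.new("RGB", (64, 64), color="gray")
exif = Image.Exif()
exif[0x010E] = "AI-generated"  # 0x010E = ImageDescription
img.save("labeled.jpg", exif=exif.tobytes())

# The label survives the first save...
print(Image.open("labeled.jpg").getexif().get(0x010E))  # -> AI-generated

# ...but one open-and-resave cycle (what countless apps and platforms do)
# silently drops it, because EXIF isn't carried over by default.
Image.open("labeled.jpg").save("resaved.jpg")
print(Image.open("resaved.jpg").getexif().get(0x010E))  # -> None
```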
01:16So what's actually being done to protect people from being misled? Good news! There are a bunch of initiatives working on how to resolve this mess.
01:21One of the best-known is C2PA authentication, set up by the Coalition for Content
01:27Provenance and Authenticity, the C2PA itself, and the Content Authenticity Initiative,
01:32which Adobe set up back in 2019. This system has the backing of huge tech companies like
01:38Microsoft, Google, OpenAI, Intel, Arm, and Truepic, and their solution is data. Specifically,
01:46metadata, which uses hard-to-remove cryptographic digital signatures to
01:50attach key information to an image about its journey before it reaches us as a viewer.
01:55It's kind of like a nutrition label, but for digital content. In theory, when this is attached
02:00to an image and we can see it, it should help us determine what's real, what's fake, and if it's
02:06fake, how that fakery happened. Here's a rough breakdown of how this works. Step one, the C2PA
02:12and CAI create the technical standard, and companies all across the industry,
02:17from photography to image editing to image hosting, agree to use and support
02:24that standard. Step two, camera hardware makers and editing app makers embed their
02:30products with these metadata credentials. That could be in the form of Content Credentials,
02:34like what Adobe uses, or under any other name, really. The important thing is that it supports
02:39the C2PA technical standard so that everything works in tandem. Step three, online platforms
02:47then scan uploaded images for these metadata credentials and surface that
02:52information to their viewers. Alternatively, if you have a picture you want to check
02:58for content credentials yourself, you should be able to do so via a separately
03:02hosted database. For example, if I were to take a picture on a Leica M11-P, which supports
03:08the C2PA standard, the camera should log all the important information, such as the camera settings, the
03:15date and time, and even the location where that image was taken, and embed it into the file itself.
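The real C2PA format wraps all of this in JUMBF containers and X.509 certificate chains, but the core mechanic is a signed manifest bound to the image's pixels. Here's a minimal Python sketch of the idea; the field names are illustrative, not the actual C2PA schema, and the Ed25519 key stands in for a signing key provisioned inside the camera:

```python
# pip install cryptography
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_capture_manifest(image_bytes: bytes, camera_key: Ed25519PrivateKey) -> dict:
    """Build and sign a capture-time provenance manifest (illustrative, not real C2PA)."""
    manifest = {
        "claim_generator": "Leica M11-P firmware (hypothetical version string)",
        "assertions": [{
            "action": "c2pa.created",  # captured in-camera, no edits yet
            "when": "2024-08-20T14:03:00Z",
            "settings": {"iso": 200, "shutter": "1/500", "aperture": "f/5.6"},
        }],
        # Binding to the pixels: the image hash lives inside the signed claim,
        # so swapping the pixels later breaks the signature check.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": camera_key.sign(payload).hex()}

camera_key = Ed25519PrivateKey.generate()  # stands in for the camera's embedded key
signed = sign_capture_manifest(b"...raw jpeg bytes...", camera_key)
print(signed["manifest"]["content_hash"][:16], signed["signature"][:16])
```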
03:20I can then take that picture into Photoshop to make any edits, and whatever
03:25has changed, including whether generative AI tools were used to make those changes, will also be logged in
03:31that metadata. Even if I bring in a picture that didn't already carry any C2PA-standard
03:36metadata credentials, Photoshop will still embed that metadata into the image, so that it'll show
03:43if I used Generative Fill or any of the other generative AI-powered tools Adobe has.
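Continuing that sketch, each editing step appends its own signed claim instead of rewriting history, so the manifest becomes a chain: a capture claim, then one claim per edit, each signed by the tool that made it. Again, every name here is illustrative:

```python
# pip install cryptography -- continues the illustrative manifest sketch above
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_edit_claim(signed: dict, action: str, editor_key: Ed25519PrivateKey) -> dict:
    """Append a signed edit claim to an existing manifest chain (illustrative)."""
    claim = {
        "claim_generator": "Photoshop (hypothetical version string)",
        "action": action,  # e.g. a generative-AI edit like "generative_fill"
        # Committing to the previous signature chains the history together, so
        # earlier entries can't be silently dropped or reordered undetected.
        "previous_signature": signed["signature"],
    }
    signed["manifest"]["assertions"].append(claim)
    signed["signature"] = editor_key.sign(
        json.dumps(claim, sort_keys=True).encode()
    ).hex()
    return signed

# Minimal stand-in for the capture-time manifest from the previous sketch:
camera_key = Ed25519PrivateKey.generate()
manifest = {
    "assertions": [{"action": "c2pa.created"}],
    "content_hash": hashlib.sha256(b"...raw jpeg bytes...").hexdigest(),
}
signed = {
    "manifest": manifest,
    "signature": camera_key.sign(json.dumps(manifest, sort_keys=True).encode()).hex(),
}

signed = append_edit_claim(signed, "generative_fill", Ed25519PrivateKey.generate())
print([a["action"] for a in signed["manifest"]["assertions"]])  # capture, then edit
```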
03:48Then image hosting platforms, social media especially, will be able to scan that picture
03:53when I upload it, pull that information out, and surface it to their viewers, because it isn't
03:57visibly shown on the image itself. In theory, if all of these steps are adhered to, we should be
04:04able to more easily tell which images are authentic, which ones have been manipulated, and which ones
04:10were AI-generated or manipulated using generative AI tools.
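On the platform side, the upload-time check boils down to three steps: recompute the content hash, verify the signature against a key you trust, and surface what the claims say. A hedged sketch reusing the illustrative format from above (real C2PA validation additionally walks certificate chains and trust lists):

```python
# pip install cryptography -- illustrative platform-side check, not real C2PA validation
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def check_upload(image_bytes: bytes, signed: dict, trusted_key: Ed25519PublicKey) -> str:
    manifest = signed["manifest"]
    # 1. Do the pixels still match what the manifest was signed over?
    if hashlib.sha256(image_bytes).hexdigest() != manifest["content_hash"]:
        return "content changed since signing: provenance broken"
    # 2. Does the signature verify against a key we trust?
    try:
        trusted_key.verify(
            bytes.fromhex(signed["signature"]),
            json.dumps(manifest, sort_keys=True).encode(),
        )
    except InvalidSignature:
        return "bad signature: manifest was tampered with"
    # 3. Surface what the claims actually say, e.g. flag generative-AI actions.
    return f"verified, actions: {[a['action'] for a in manifest['assertions']]}"

key = Ed25519PrivateKey.generate()
image = b"...raw jpeg bytes..."
manifest = {
    "assertions": [{"action": "c2pa.created"}],
    "content_hash": hashlib.sha256(image).hexdigest(),
}
signed = {
    "manifest": manifest,
    "signature": key.sign(json.dumps(manifest, sort_keys=True).encode()).hex(),
}

print(check_upload(image, signed, key.public_key()))              # verified
print(check_upload(b"swapped pixels", signed, key.public_key()))  # flagged
```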
04:16All of which sounds way, way easier on paper than how it's actually going. So, here's the bad news.
04:24Progress is extremely slow. The problem is interoperability, and it's taking years to get all
04:30the necessary players on board. And if we can't get everyone on board, the system might be doomed
04:36to fail entirely. C2PA support is currently only available on a handful of cameras, including Sony's
04:43A1, A7S III, and A7 IV, and Leica's aforementioned M11-P. And while other brands like Nikon and Canon
04:51have pledged to support it, most have yet to meaningfully do so. Smartphones,
04:57the most accessible cameras for most people, are also lagging behind, with no built-in C2PA support.
05:04It's a similar situation across editing apps. Adobe is implementing this across Photoshop and
05:09Lightroom, but while some other services like Capture One have said they're looking into
05:13traceability features like C2PA, most haven't followed, or have yet to express any interest in doing so.
05:19And one of the biggest roadblocks to all of this is figuring out the best way to present that
05:24information to viewers. Facebook and Instagram are two of the biggest platforms that do check for
05:29this information and do flag some of it to their viewers, but Meta's early attempt angered
05:35photographers rather than helping them, because it flagged everything with a "Made with AI" label,
05:41even if an image was edited using ordinary tools that don't involve generative AI, like the clone tool.
05:47Meanwhile, X, which is already completely saturated with all of these AI-generated images and deepfakes,
05:53hasn't implemented any kind of verification system, C2PA or otherwise. And that's despite
05:59having joined the C2PA back in 2021 before Elon Musk had purchased the platform. He had this to
06:05say at the 2023 AI Safety Summit.
06:08So some way of authenticating would be good. So yeah, that sounds like a good idea, we should probably do it.
06:18There you go.
06:19But nothing has actually materialized yet. There is this recurring argument that we shouldn't be
06:24concerned about the direction that generative AI is going to take us in, because this is nothing new.
06:30Photoshop has been able to manipulate images for 35 years, but do you know how f***ing hard it is to
06:35manually edit a photo like that in one of these apps? I looked up YouTube tutorials on how I would
06:41be able to do this, and even if I just wanted to add a lion to a picture, the videos demonstrating
06:46it are 10 to 11 minutes long. If I wanted to do that on a new Pixel or Samsung phone, though, I can
06:53just tap an area and tell it to add a lion. It's already going to take all of those complicated
06:58nuances like perspective and lighting into consideration. And even if, cool, you follow a
07:04tutorial and you do create a very realistically edited image in Photoshop, that's just one picture.
07:11These AI editing apps that are now free and on our phones can do that in seconds. And even if
07:17the first one doesn't look as good, you can just keep going until it looks right. And none of this
07:21even takes into consideration just how expensive this kind of software can be. Adobe stops just
07:27short of basically asking for your firstborn child, and you have to dodge all of their really
07:32complicated cancellation policies. But even if you use a free alternative like GIMP, you're still
07:38going to need access to a desktop computer or a laptop. Which not everyone has now that we live
07:43in a world where smartphones can do just about everything. Meanwhile, it's taken barely a couple
07:48of years for generative AI apps to go from spitting out distorted, Cronenberg-esque mashups
07:55with 17 fingers to creating something that's genuinely believable. Something
08:01that takes texture and lighting into account, with no skill required, in seconds. It's much easier
08:08to dismiss things like photojournalism in a world where anything might not be real. You can't always
08:15expect that people are going to do the right thing and now anyone with a smartphone, hypothetically,
08:21could churn out highly manipulated images at a speed and scale we've never had to experience
08:27before. And yeah, you're going to get some people that are going to argue, well, with Photoshop
08:31having existed, we shouldn't be trusting online images to begin with. And that might be the way
08:36forward. But do you really want to live in a world like that? Where you can't trust any picture that
08:41you see online? I don't. That sounds horrifying. And look, I don't want this to necessarily be a
08:48doomsday argument. This is just one possibility of where generative AI could be taking us.
08:54But even if we take a step back and we look at some of the far less serious implications,
08:59it's still incredibly f***ing annoying. Platforms like Pinterest used to be really, really good
09:03for reference material for artists, or for finding haircut and makeup examples to
09:09show your stylist, right? You can't really use them anymore, because the entire site is
09:13populated with AI-generated images, and none of them are flagged to indicate that that's
09:18the case. Even if, by some miracle, we woke up tomorrow in a tech landscape where all of this
09:23is working, the online platforms, camera makers and editing app providers are all on board and
09:28cooperating together, this system might still not actually solve the issue at hand. Denialism is a
09:35potent and potentially insurmountable obstacle in all of this, and it doesn't matter whether you
09:40supply people with evidence that something is real if they're simply going to
09:45ignore it. And just to rub some additional salt into the wound, despite the
09:51issues these systems are already facing, a cryptographic labeling solution is realistically
09:57our best hope of reliably identifying authentic, manipulated, and AI-generated content at scale,
10:05and even then, it was never supposed to be a bulletproof solution. The companies that created
10:10systems like C2PA authentication fully understand that bad actors exist, and just because
10:16something is difficult to tamper with doesn't mean it's impossible.
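One concrete illustration of that limit: a signed manifest can prove where an image came from, but nothing forces an image to carry one. Strip the metadata, and an honest verifier can only say "no credentials", not "fake". A toy sketch of that asymmetry:

```python
from typing import Optional

def describe(signed_manifest: Optional[dict]) -> str:
    """What a verifier can honestly report about an image (illustrative)."""
    if signed_manifest is None:
        # Absence of credentials is NOT proof of fakery: provenance is opt-in,
        # so stripping the manifest only downgrades the image to "unknown".
        return "no credentials found: origin unknown"
    return "credentials found: verify the signature, then report the logged history"

print(describe({"manifest": "...", "signature": "..."}))  # image that kept its credentials
print(describe(None))  # the same image after someone stripped the metadata
```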
10:22Meanwhile, doctoring images with good or bad intentions is now easier and more accessible than it has ever been. We're going
10:29to have to live with that and it could leave us in a precarious situation. Nations all around the world
10:35are struggling to introduce regulations that can police the more harmful aspects of this stuff
10:40without accidentally infringing on things like artistic expression or parody or, more importantly,
10:46free speech. And it's highly unlikely that AI companies are going to pump the brakes on
10:50development while we're figuring out how we can get to grips with it. As a result, we're dangerously
10:56close to living in a reality where we have to be wary about being deceived by every single image
11:02put in front of us. Speaking of which, can you guess which one of these objects
11:08is AI-generated? How about this one? Thanks for watching!
