Transcript
00:00:00 Good afternoon, I'm very happy to see faces in the audience that are curious about AI,
00:00:08 something that everyone's been talking about in the past year, and we're happy that we could
00:00:14 partner with MidPoint Institute, get some media attention thanks to cooperation with Deadline,
00:00:22 and to actually discuss this topic in the context of storytelling, in the context of some of the
00:00:29 legal aspects that lead to it, and we have three great presenters that know a lot, and I will pass
00:00:38 the mic on to Zach who will tell us a little bit more of what's ahead of us. Thank you.
00:00:44 Hey, hello everyone, thanks for joining us. I'm Zach, film reporter at Deadline. I hope you guys
00:00:53 have managed to catch some of the rest of the industry program and have been enjoying it.
00:00:58 Today we're talking AI and storytelling, and as Hugo said, this panel is a collaboration
00:01:04 between Karlovy Vary, the MidPoint Institute, and the Erich Pommer Institut. Today I'm joined by some
00:01:11 great panelists, and I'll let them introduce themselves and tell you a little bit about what
00:01:14 they do. Hi everyone, my name is Tatjana Samopyan, I'm a creative development consultant and a
00:01:23 MidPoint tutor as well for the series program. I work on development of films, series, and
00:01:29 documentaries, mostly during the script development phase, sometimes during editing as well,
00:01:34 and my interest in AI is very strong and long-lasting in different ways, mostly philosophically,
00:01:40 to be honest. I wear three hats, one is very playful and creative, the other one is totally
00:01:48 brutal and analytical, and the third one is chatty. So today I'm going to be chatty, and
00:01:55 whenever I'm really confused by something I'm noticing, I make a talk about it,
00:01:59 and I get in front of people and I confuse others. So we'll talk.
00:02:04 So hey everyone, my name is Julia Schaafdecker, I'm a lawyer based in Berlin, so
00:02:17 today we're going to have an insight on the legal aspects of AI due to my daily work, actually. I
00:02:25 advise national and international production companies in all matters of contracts relating
00:02:32 productions and copyright matters and so on, and the other huge part of my daily work is advising
00:02:43 platforms, and well, what happens on the internet right now is AI everywhere, so
00:02:50 we're gonna see and yeah, talk a little law. That's what we do here, and I hand over to Gerhard.
00:03:01 Thank you. Hello, my name is Gerhard Maier, I'm co-founder and program director of Seriencamp
00:03:10 Festival and Conference, an event that took place in Munich for the last eight years,
00:03:16 this year for the first time in Cologne, and we focus on serial storytelling,
00:03:21 case studies, keynotes around the topic, but also try to broaden the view on topics of
00:03:29 technological progress and changes, talking a little bit about new formats, about new forms
00:03:35 of storytelling, and try to broaden the topic, not only focusing on series, and I've been into the
00:03:43 topic of AI since, I think, South by Southwest 2018, something like this, when I was first introduced to
00:03:50 GPT-2 and had the opportunity to play around with it a little bit, and the interest has just
00:03:59 spiked in the last few months, as most of you who are here might know and might anticipate
00:04:07 by being here that it's something very important to be on top of. So each of our panelists is
00:04:15 gonna do a sort of mid-length presentation, 20 to 30 minutes, with some information and graphs and
00:04:22 things like that, so we're just gonna keep, we're gonna let them flow through and then we'll keep
00:04:27 questions until the end, and then I'll give you more than enough time to ask whatever you want.
00:04:30 I take the mic that's a little bit more voice-friendly.
00:04:36 The first thing is, I think that anybody who's been involved with AI in the last four or five years
00:04:45 has seen a lot of change, a lot of deep technological change that is also accelerating
00:04:50 the progress that's already happening, and I think last year we experienced something like
00:04:55 the iPhone moment of AI, where Midjourney and ChatGPT broke into the mainstream, a lot of people
00:05:02 engaged with those tools, and right now you seem to have a technological bubble that's similar to
00:05:07 what happened at the end of the '90s, in terms of a big bubble brewing up of people trying to
00:05:17 wrap their heads around how this will change the way they work, and it does change the way we work
00:05:23 a lot, and for me, because we noticed when we did this panel, this talk at Seriencamp a couple of
00:05:29 weeks ago and a week before in Helsinki, that there seems to be a widening gap between people
00:05:35 who are only slowly grasping what ChatGPT and Midjourney are and people who are actively
00:05:42 working with it, so as a little bit of a poll, who here has played around with one of the generative
00:05:49 AIs like ChatGPT or Midjourney? Okay, and who is actively using it already in their creative
00:05:58 processes and working around with it? Okay, very interesting. I tried to keep it very
00:06:08 short and concise, in terms of staking out claims which Tatjana and Julia will later
00:06:16 go a little bit deeper into on some of the topics. Generally speaking, for everybody who's
00:06:22 not that deeply into the AI topic right now, a quick distinction to make is between AGI,
00:06:31 artificial general intelligence, this is what people like Elon Musk and the heads of Google
00:06:37 DeepMind are talking about right now, this is trying to create, I use very simple terms,
00:06:45 because I have a very simple mind in that regard, but try to keep it simple, they try to create a
00:06:51 machine that thinks a lot like a human. So AGI, artificial general intelligence, is the big holy
00:06:57 grail of artificial intelligence research, but this is not what's happening right now, or,
00:07:03 depending on who you talk to, it's either a couple of months away or decades away; people don't know.
00:07:09 The thing that we have most likely been interacting with in the last few years, of
00:07:15 which ChatGPT and Midjourney are just a few instances of how it's used,
00:07:22 is narrow AI, or for some of them LLMs, large language models. Narrow AIs are mostly
00:07:32 AIs, algorithms that are trained to do a certain task, a very specialized task, for example,
00:07:41 play chess, or find the best route on Google Maps between point A and point B,
00:07:45 or to recommend you what you should watch on Netflix. These are all more or less following
00:07:53 the same idea of being AIs, of being algorithms trained to find certain patterns, to make
00:07:59 probability predictions on which of those patterns are most useful to the user,
00:08:05 and then reproduce those patterns. Sorry. I just want to, this is a very quick run-through,
00:08:15 and it's just one sentence per one of these pillars. This is what most of the LLMs,
00:08:22 the large language models, of which ChatGPT and Midjourney are just a few of the examples,
00:08:27 this is what they're based upon mostly. One of them is machine learning. Machine learning
00:08:33 means that you have a huge set of data on which you train your algorithm. That means,
00:08:37 for example, for Chachabitty it's billions and billions of pages from the internet of text,
00:08:43 and on this you train your model. After that, it interlocks with deep learning. Deep learning
00:08:52 is mostly teaching the algorithm to create context and meaning out of this massive amount of data,
00:09:00 and the patterns that it's been trained on. The last, and the third point is natural language
00:09:05 processing. That means that you can interact with those systems by typing in a sentence,
00:09:10 and interacting with it on a very intuitive level compared to some years ago, where you still had
00:09:17 to interact in code with those systems. What happened in 2017 is the watershed moment that
00:09:27 led to where we are right now. There was a new engine, a new mode of computing, of finding
00:09:36 patterns in that data, introduced. It's called transformer technology or the transformer model.
00:09:43 I don't want to go too deep into it, but what it did is it made it much easier to find patterns
00:09:50 in large sets of data. What did this do? It unified a lot of fields that were before on their own.
00:09:56 That means, for example, voice recognition, image recognition, text recognition, and also
00:10:03 generation. All of those used to be in different silos, following different approaches. In 2017, with
00:10:13 the transformer technology and a big paper that came out back then, it unified those fields and
00:10:18 accelerated them in a way that we are seeing right now, that every progress made in one of those
00:10:24 fields automatically helps the other fields also. This is what we are experiencing right now, is this
00:10:30 exponential curve. I always try to mention at least a little bit this exponential growth
00:10:39 that's happening in terms of AI, because it's very difficult for humans to wrap their minds around what
00:10:46 exponential actually means. When we are thinking exponential, most of the time we think that
00:10:51 something is just growing very quickly on a linear level. What it actually means is that it's growing
00:10:58 by a factor of two all the time. This is something that's very difficult for us to understand. This is very
00:11:04 important to keep in mind when thinking how fast this is moving right now.
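As a quick illustration of what "growing by a factor of two" means compared to linear growth, here is a toy Python sketch (purely illustrative, not tied to any specific AI metric from the talk):

```python
# Toy comparison of linear growth vs. doubling ("factor two") growth.
# After 10 steps, linear growth has added 10 units, while doubling has
# multiplied the starting value by 2**10 = 1024.
steps = 10
linear = 1
doubling = 1
for _ in range(steps):
    linear += 1       # grows by a fixed amount each step
    doubling *= 2     # grows by a factor of two each step

print(linear)    # 11
print(doubling)  # 1024
```

The gap between the two numbers widens at every step, which is why exponential progress feels slow at first and then overwhelming.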
00:11:08 What this led to, this big unifying of the field, is something whose name I love right now.
00:11:18 It's coined by Tristan Harris and Aza Raskin in their wonderfully insightful talk,
00:11:24 "The AI Dilemma". That's a difficult word. It's called Golem-class. Golem-class, I think most of
00:11:32 the writers here will appreciate the mythic quality of calling it Golem. It stands for Generative
00:11:40 Large Language Multi-Modal Models. That's even more difficult to pronounce.
00:11:46 Under this term, they gather everything that they call generation AI.
00:11:57 The first point of contact that we had with AI is curation AI. That means
00:12:03 AI that is either recommending you stuff, aggregating stuff, for example, news, articles.
00:12:10 For example, if you're on Reddit, the front page is sorted by an algorithm and
00:12:16 is in that way curation AI. Social media algorithms, email filters, spam filters,
00:12:24 route navigation on Google Maps. This is also curation AI in a certain sense.
00:12:28 Generation AI is what we're currently experiencing. It's Midjourney, ChatGPT,
00:12:34 these systems that actually produce new patterns. You have the machine, the algorithm,
00:12:41 that's recognizing patterns, finding probability curves inside that raw data,
00:12:47 and then trying to reproduce the data that you as a user want to access.
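That loop of learning patterns, estimating probabilities and reproducing them can be shown with a toy word-pair counter. This is not how ChatGPT or Midjourney work internally (they use transformer models trained on billions of documents), but the principle is the same; the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration of "find patterns, predict probabilities, reproduce them"
# using a simple bigram (word-pair) counter.
corpus = "the cat sat on the mat and the cat ran".split()

# 1. Learn patterns: count which word follows which in the training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# 2. Predict: the most probable continuation of "the" in this corpus.
next_word = following["the"].most_common(1)[0][0]
print(next_word)  # "cat" -- it follows "the" twice, "mat" only once
```

Scale the corpus up to the whole internet and replace the counter with a transformer, and you have the rough shape of the systems being discussed.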
00:12:53 To come back a little bit to this exponential growth curve, I think right now it's very easy
00:13:01 to see how quickly these systems are learning and how quickly they're evolving. This here,
00:13:08 I hope you can see it a little bit. I have it a little bit bigger here. This is the difference
00:13:14 of one year on Midjourney. It's the same prompt, a prompt about creating someone in India.
00:13:22 It's a small block of text, and this is the same prompt at a difference of 12 months.
00:13:28 This is something that we see in Midjourney and ChatGPT: that they're evolving not on a
00:13:36 linear progress level, but on an exponential growth level.
00:13:39 Like I said, I want to mention it because I think it's important if you look at some of the stuff
00:13:45 a lot of those programs can do right now to bear in mind what they will be able to do in 12 months
00:13:51 or two years. This here, for example, is I think a lot of you have seen those kind of images.
00:14:00 These are generated by an AI. It's called photoai.com. I'll come back to it later.
00:14:06 It's just an AI trained to create human portraits. It is not perfect, as you can see.
00:14:15 This is some of the stuff that it still puts out, because it doesn't understand. It doesn't
00:14:21 understand what hands are for, what the function is, and why the function leads to certain aesthetic
00:14:27 things. It does not understand that bar that goes through his feet and looks a little bit
00:14:35 like Hellraiser. You see, there is still room for improvement. Nevertheless, this is
00:14:42 nothing new. This here is from a page called "This Person Does Not Exist." This
00:14:48 page is a couple of years old. These are all persons that do not exist. They are generated
00:14:54 by an AI. It looks ultra-realistic. Right now, they are, for example, watermarked. It's not
00:15:03 possible to use them for nefarious things like trying to scam people or create fake identities.
00:15:10 Nevertheless, that's the level right now that's been around for a couple of years. It never broke
00:15:16 into the mainstream. I think this is what ChatGPT and Midjourney are for.
00:15:21 What does it mean for the creative process right now in storytelling? This here is not something
00:15:31 that I put up there. That was on this landing page of PhotoAI, of which I showed you earlier
00:15:37 some of the photos. This is the claim with which they opened up. They changed it. I made a
00:15:43 screenshot before because I thought that this might not be the best thing to advertise your
00:15:48 service with. Nevertheless, I think there is one sentence that is the most striking: its
00:15:57 turnaround time is 28 minutes to create one of those photos of imaginary people.
00:16:04 This is a process for which, before, a crew of photographer, set dresser, model, and art
00:16:12 director sat together for maybe a week to create a couple of those photos. Now, you can do it
00:16:19 yourself on your computer in 28 minutes. This is just photo, but I promise you this will come
00:16:25 to anything. It comes to video. It comes to anything you can imagine. It's text to anything.
00:16:32 It means the moment you can prompt something properly, you can turn it into a video, a game,
00:16:38 something generated by the AI. One second.
00:16:45 Where are we currently seeing these technologies used? For us at Seriencamp, we noticed one of
00:16:56 the first instances where it was used was in pitch decks. This is what I here put together as
00:17:04 image generation. A lot of creatives and writers used Midjourney to create images for their pitch
00:17:13 decks or create images as reference points for certain kinds of collaborations with other parts
00:17:21 of the creation process of a series. This is one of the pitch decks that we got. As you can see,
00:17:30 some of the advantages that they use very freely here is you can easily create your visual space
00:17:37 just by being able to prompt Midjourney the right way, find a certain set of words that match the
00:17:43 kind of atmosphere, the kind of character that you want to have. Here, they use it to create
00:17:50 first impressions of the characters that are going to be in that show.
00:17:52 Rest of the pitch deck here, you have examples of the locations they want to use,
00:17:59 examples of the visual style they want to engage in. This was in February. Back then,
00:18:08 we had 450 submissions. 10% of them were already done with Midjourney. Those were the ones that
00:18:18 had bigger chances of being picked up, because they made a bigger visual impression, with more
00:18:24 coherence in that visual impression, and because the creative vision behind them was more easily
00:18:33 conveyed. We also had some of the pitch papers that were already written with ChatGPT. You can
00:18:40 see it if you know what to look for and how to read it. You already know that some of the stuff
00:18:46 has been done by a machine or at least made more concise, made more approachable.
00:18:52 This leads to the second part and the creative sparring part where it interacts with text.
00:19:02 We already spoke to a lot of creatives, to a lot of screenwriters, people in creative producing
00:19:09 who are working with it. We ourselves have tried out most of the apps that are out there right now.
00:19:16 Those are changing weekly. For us, it's really difficult to keep track because anything that
00:19:23 has been in vogue maybe two weeks ago, three weeks ago, there are already half a dozen or a dozen
00:19:29 programs that do something similar. Some of them, Sudowrite, DeepStory, Scalenut AI,
00:19:35 are all aimed at people writing creatively and help them to write better stories, create better
00:19:42 scripts, better synopses. One of them, for example, Sudowrite, is called an AI writing partner.
00:19:51 It's very easy to use, very simple, and it helps you to get more inspiration,
00:19:57 find better words. It works like a better thesaurus in that way and helps you to create
00:20:03 better scripts in that way. There are a couple of those, but to be honest, most of them, we saw
00:20:10 people can also access them via ChatGPT. If you prompt ChatGPT properly, so if you find the right
00:20:17 block of text that tells ChatGPT what to do and how to interact with what you're writing, you can
00:20:22 have something similar there. This, for example, is something a colleague of mine drew up, where he
00:20:29 asked ChatGPT what he should look for in certain books and literature to be able to adapt
00:20:39 them better to series and film. Here, for example, it tells you you need an engaging plot, you need
00:20:48 a well-structured narrative, and then he put in certain books and asked how well it thinks
00:20:55 they would translate into a film. For example, he asked it here somewhere, The Grapes of Wrath, how well would
00:21:02 this translate into a film? It immediately tells him: it would work well as a
00:21:09 film, but you should take care of certain points that I pointed out earlier. It does a lot of,
00:21:14 don't want to say creative heavy lifting, but it helps you to structure your work.
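The prompting pattern just described, an instruction block that tells the model what role to play, followed by the actual question, can be sketched as data. Most chat APIs accept a list of role-tagged messages in roughly this shape; the helper name and the wording of the instruction block here are illustrative, not from the panel:

```python
# Sketch of the prompting pattern described above. The actual API call is
# omitted; `build_adaptation_prompt` is a hypothetical helper that only
# assembles the messages you would send to a chat model.
def build_adaptation_prompt(book_title: str) -> list:
    instructions = (
        "You are an experienced script consultant. Assess how well a novel "
        "would translate into a film, looking for an engaging plot and a "
        "well-structured narrative."
    )
    question = f"How well would '{book_title}' translate into a film?"
    return [
        {"role": "system", "content": instructions},  # tells the model how to behave
        {"role": "user", "content": question},        # the actual request
    ]

messages = build_adaptation_prompt("The Grapes of Wrath")
print(messages[1]["content"])
```

The same structure carries the later examples too: swap the instruction block for a copywriter persona or a character description from a treatment, and the question for whatever you want to ask that persona.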
00:21:20 I only go into a couple of examples here because there are dozens and dozens out there right now,
00:21:29 and I think the only limitation we see right now is your own fantasy or your own imagination,
00:21:35 how you want to use it. This, for example, is another colleague who created a sales text
00:21:40 from the perspective of a copywriter for a show he's currently working on.
00:21:46 The show is called The New Frontier, and he just gave it, you see up here, this is the
00:21:54 short prompt, "Imagine you're an expert copywriter specializing," and so on and so on. It tells
00:22:02 ChatGPT how it should behave, and it will then create something like here, this wonderful
00:22:09 sales text that usually people would sit on for, I know it from myself, maybe a day
00:22:15 or two, and then you're not happy and someone else reads it. Here you have in a couple of minutes
00:22:22 this canvas from which you can work better. A third instance that we've seen a lot
00:22:33 engaged in by a couple of screenwriters that we know is they prompt ChatGPT to behave as one of their
00:22:40 characters out of their scripts. So, for example, they take the character description out of the
00:22:48 treatment, they prompt ChatGPT with that, and then they can have a conversation in ChatGPT with that
00:22:55 character. That means they can ask it for opinions. They can ask it, "How would you behave in a
00:23:01 certain situation?" and use that as a creative fuel to, in their opinion, write more authentic
00:23:07 scripts, to write better scripts, and to be able to develop more empathy for the characters that
00:23:13 they're developing. I think the biggest changes that are gonna come are the ones in video generation.
00:23:24 This here is something that Robert Franke from ZDF Studios put together for our panel in Helsinki.
00:23:36 It shows how in a couple of hours someone can put together this animation of a woman talking by
00:23:45 using ChatGPT, by using Midjourney, by using a third app, but, as I said earlier, it's four
00:23:53 weeks old and it's already a little bit obsolete. Right now Runway is one of
00:24:03 the big new things, I think, that will definitely shake the way people think about generating
00:24:09 audiovisual content. These are just some examples of backgrounds of landscapes generated with AI.
00:24:17 I think anyone involved in filmmaking can see the advantages of creating settings like this,
00:24:27 combining them, for example, with LED screens or something like that, where before you had to
00:24:33 travel somewhere, create that environment, make... Come on, don't know the English word right now for it.
00:24:44 But you have to record the environment and then put it on the LED screens. Here Runway promises
00:24:50 that you can do something like this in a couple of hours on their system. Coming back to how fast
00:24:58 this is moving, these are some of the images that have been generated with Gen-1 of Runway
00:25:05 as video creation software. Right now you see already how quickly it's evolving, how quickly
00:25:13 it's learning. Right now Gen2 is up for beta test and some of the things that it will be able
00:25:22 to do and handle is, depending on who you ask, but people think that by end of the year, beginning
00:25:28 of next year, it will be able to create photorealistic content directly text to video,
00:25:36 directly text to something like this. This is all generated with Gen-2 of Runway.
00:25:44 And I think not only creating those audiovisual worlds will be revolutionized, but also the way
00:25:54 you interact with it in post-production and pre-production. Here are just some examples
00:26:00 from Adobe Firefly. Everything they try to emphasize here is streamlining the process
00:26:09 of post-production and pre-production in one program. These are all use cases that you can
00:26:17 already try out at the beta version of Adobe Firefly right now. For me, one of the most,
00:26:24 I don't want to say terrifying, but the most impressive definitely is the feature that comes
00:26:29 up here. This is like just loading your b-roll into the program and in a couple of seconds it
00:26:38 generates a fully cut trailer, a fully cut sequence out of this. What this means in terms of
00:26:44 saving time, working more efficiently, you can imagine what it means in terms of people who had
00:26:51 a job for a long time, sitting there for one or two days, cutting this, editing this, which can
00:26:57 now be done by an algorithm in a couple of minutes, is a little bit terrifying, at least for me.
00:27:02 To wrap it up a little bit, I think that what we'll see, the more widely people
00:27:13 and the industry adapt to those tools, is, for one, more efficiency.
00:27:20 What exactly it means to be more efficient, what it means to be able to write, for example,
00:27:25 a script that used to take you two months now in two weeks, what that actually means is another question.
00:27:31 But at least the promise that a lot of the people who make those apps are making right now is that
00:27:37 it makes your work more efficient, faster, and it's going to speed up creation, implementation,
00:27:43 distribution in a lot of ways. The other thing, and I think this is beyond question, is that it
00:27:49 gives more creative freedom. To be able to create any kind of world you want to imagine out of
00:27:55 a text prompt is definitely liberating, especially when it comes to the visual quality that was
00:28:03 reserved for big studios like Disney or any of the big VFX companies right now.
00:28:10 The third thing, and I think this is what we're already seeing in a lot of smaller points, is
00:28:18 low barriers of entry, that you don't have to train for years and years to hone a craft, to
00:28:25 understand the tools, but you can do it very simply if you learn how to text prompt or prompt engineer
00:28:31 the AI, the generative AI, in a correct way. The next point, the fourth point, personalization.
00:28:38 This is one of the biggest points of debate, at least with a lot of people that I talk to,
00:28:45 who say that the more inexpensive production gets, the more personalized content gets, and even
00:28:52 niche topics and marginalized communities will be able to be targeted with ultra-personalized content.
00:29:00 The biggest challenges coming out of this, and these are general AI challenges. I don't want to
00:29:08 go too deep into them. If you want to read up on them, it's highly fascinating. This is what everybody
00:29:14 with AI is grappling with right now: bias, centralized power. Then there are the things that are more
00:29:21 important for our industry right now, but that can be transferred to a lot of other
00:29:28 creative industries. For one, job crunch. That means that a lot of jobs, a lot of tasks that used to be
00:29:37 laid out to a couple of specialists or specialized departments, can be made more
00:29:47 efficient and are made redundant. This I already saw with people who work for big sales companies
00:29:53 who used to employ three people to create pitch papers for international markets. This is now
00:29:59 done by one person and there's no graphic designer involved, there's no photo guy involved anymore.
00:30:06 It's all done by one person in a couple of days. And you've seen earlier
00:30:14 that this also applies to editing, it applies to art direction, it can apply to a lot of fields where
00:30:20 one person can do the job that before maybe two, three or four persons did.
00:30:25 What this will lead to, and this is something that a lot of people who talk about AI in more
00:30:32 general terms mention, is a pressure to adapt. This creates pressure to also go in the direction of
00:30:40 working more efficiently to reduce costs and this means that anyone who's not willing,
00:30:46 especially big companies who are not willing to adapt to that pressure, that they will suffer in
00:30:52 terms of profitability. The last point I only want to touch on very shortly is this efficiency suck.
00:31:02 What does it mean that anytime you see that a big technological innovation is freeing up value
00:31:08 that was captured before in the structure of an industry, for example in the case of the streaming
00:31:13 industry or the streaming revolution and what it did with the movie and the TV industry,
00:31:18 there's a lot of freed up value. The value very rarely goes up the value chain,
00:31:26 very often goes up the value chain, meaning to the big companies who own the tools and very rarely
00:31:33 down the value chain to the people, to the creators who are working more efficiently. I think this is
00:31:39 maybe a very European and a very slightly socialist view on it, but this is something that you have
00:31:47 seen in the last two decades, anytime that technological change actually tried
00:31:53 to free up those values in the structures. To wrap it up, because I'm already talking too long,
00:32:02 there are three big possible scenarios right now being discussed.
00:32:07 That is, for one, an explosion of AI-assisted content through the low barriers of entry,
00:32:17 resulting in AI-assisted content in audio, video and text. There's going to be a huge explosion
00:32:25 of content out there. We already experienced this, for example, with written text. Anybody who's
00:32:31 in the space of, for example, auto-generated emails, auto-generated blog posts and AI-assisted
00:32:39 podcasts, right now there is a deluge of content out there that's very interchangeable, but also
00:32:47 very accessible. This is one of the scenarios discussed right now. The second one is this
00:32:56 monopolization of creativity. I think the Financial Times had a scenario drawn out
00:33:03 for this. This means that humans are drowned out by this deluge of AI-assisted content,
00:33:09 and AI will significantly change the incentive structure of creators. That means it's no longer
00:33:20 worthwhile for you to create, or there's better use for your time than to create, which I find a little bit
00:33:26 of a sad scenario, to be honest. The third scenario is that human-made stories command a premium.
00:33:36 This means that humans are still the best in terms of finding social, cultural context,
00:33:41 and finding nuance and conveying emotions. This means that human-made movies, stories,
00:33:50 scripts will command a higher premium and be worth more. Which of those scenarios we will likely go
00:34:00 into, maybe we can just discuss later. I think I overran a little bit, so I'm ending here with
00:34:07 this last slide. Thank you very much. [Applause]
00:34:17 Thank you.
00:34:20 Stretch out. You'll be sitting here.
00:34:24 [Laughter]
00:34:33 Serious.
00:34:35 Get up.
00:34:35 Let me take a look at it.
00:34:38 [Pause]
00:35:06 Well, let's switch tone and focus a little bit. We are here to talk about AI, but I'm not going
00:35:12 to be focusing on AI. You'll understand that everything I say is relevant in the context of AI.
00:35:19 I want to talk about the fire in the eyes, and the relevance of authentic versus generic
00:35:27 for creative processes. I work with story creation, with script writing.
00:35:31 But before we go to the fire, which is the most fascinating thing,
00:35:35 I want you to consider the type of story that's most common out there. On a normal day, you'll
00:35:43 call it an average story. When you're really in a crappy mood, you'll call it mediocre.
00:35:48 Its official scientific name, meh. That's when you're asked about, "What do you think about?"
00:35:56 You go, "It's neither here nor there. It's goodish. It has all the moving parts there,
00:36:02 but it's kind of not leaving any lasting impression. The majority of everything,
00:36:07 everywhere is mediocre." That's fine. Most of us are doing average work most of the time.
00:36:13 Sometimes we are brilliant. Often we are not, and that's completely okay.
00:36:19 But I'm focusing on average today, on understanding where human average comes from.
00:36:26 Because as Gerhard has been showing you very clearly, we have a new player in town.
00:36:32 It's not going anywhere. I sit on a number of committees in Europe evaluating scripts for
00:36:42 financing. That's the bulk of what I do. I read scripts from all over Europe. I can tell you that
00:36:47 the first time I tried AI, in November last year when it became public, I played with it and I
00:36:52 asked it to generate a story from something I just made up off the cuff without any real effort.
00:36:58 So I played with it for a few hours. And what it gave me was at the same level as the majority of
00:37:06 the projects I'm reading for script evaluation. Please understand this. It's that good. It was
00:37:14 that good then. And when I saw it, my mind just exploded and I saw this needs to be understood.
00:37:20 So getting paid for average, which is happening, is not going to be happening anymore very soon.
00:37:27 This is why I'm going to be blunt and brutal. And I want us, the reason why I'm focusing on average,
00:37:33 not in a disparaging way, I want to use AI to elevate certain human capacities that have
00:37:39 gotten a little bit atrophied, weakened. So this is the angle I'm coming at. There's a reason why
00:37:44 it's important to understand average. Okay. The first time I got really interested in where
00:37:52 average comes from was when it was tied to experienced screenwriters with a track record. So I'm not even
00:37:58 focusing on people who cannot do the craft. You have to know the craft; leave that aside.
00:38:04 No, somebody experienced is writing something average after a while. Why is that happening?
00:38:08 So in one of my first jobs, I was a story editor at a broadcaster. I asked the most experienced
00:38:15 producer about a very prominent writer who was blowing all the fuses working with us and just not
00:38:22 getting better. Every rewrite was at the same level after a year of development. So I asked her,
00:38:27 what is your explanation? Why do you think he is not writing better? Because we have proof that
00:38:33 he used to write better. And she was brutal. She said, you know what? He has only two brain
00:38:39 cells in his head and they are fighting with each other. Meaning the man is not very smart.
00:38:44 That's one thing, she said. The other thing is he's gotten jaded.
00:38:50 Then, a few days later, I had the opportunity to talk to him, because I was the only person he
00:38:56 didn't hate at the whole broadcaster, being new; he wanted to talk to the new girl and discuss his project.
00:39:01 And I entered a very interesting, long and deep conversation with the man, and discovered within
00:39:06 a few minutes not only that he's intelligent, very intelligent, but that he's also highly educated.
00:39:12 And he did a lot of research about the topic he was writing. This was no fool.
00:39:16 He knew a lot, or rather, he thought that he knew a lot.
00:39:24 So the first thing she said, that he was not intelligent: wrong.
00:39:30 But he was jaded. And this is the key. He was jaded. Somewhere along the way in his life,
00:39:37 he lost the ability to properly, fully, and really see the chicken.
00:39:45 Now, to understand what the chicken stands for, I first have to introduce you to two people who
00:39:53 brought it really vividly to me. My niece and nephew, seven years old, the Turbo Twins.
00:40:00 All right? They are turbo on all levels. Live in Switzerland, fluent in two languages already. So,
I come visit them. And at that time, when they were four years old, they'd developed a full-blown
00:40:12 obsession with chickens. So, you take them to a chicken coop, and they suddenly become very calm
00:40:19 and focused. And all of that turbo activity stops, and they really specifically relate to every
00:40:24 single chicken. There's a story connected to every one of them. They know the family relationships,
00:40:28 who is the top chicken, who's the down chicken. I mean, everything.
00:40:31 Now, fast forward six months. I come and visit again, fully expecting this to continue.
00:40:38 I take them to the chicken coop in the park, and what happens? You know when children get jaded,
00:40:44 how quickly it goes? I'm like, "Here are your friends, the chickens." And they go like, "Oh."
00:40:49 I'm like, "But look at them. They're now bigger. What do you think? Which is which?"
00:40:52 And my niece gives me the first eye roll, I think, of her life. She goes, "It's a chicken.
It's a chicken, not the chicken anymore." This is the key. When we acquire language and the ability
00:41:07 to abstract, we exchange the relationship with reality, that's very real, visceral, embodied,
00:41:14 in the moment, fully everything it can be, for an idea. So we all go around with ideas about
00:41:21 chickens, and ideas about everything, really, in our lives. The idea stands in for the reality itself.
00:41:26 Fine. Perfectly normal when we acquire language. Perfectly normal to be like that,
00:41:32 because that's how the human mind works. However, sorry is the artist who's lost the ability to
00:41:39 fully see the chicken. That's the essence of artistic ability: to see through the concepts
00:41:47 and ideas. And this is where the fire in the eyes comes in. They say that children and mystics have
00:41:53 the same ability. They say mystics are the people with the fire in their eyes, because this fire
00:42:00 burns through the illusions, the constructs, the ideas, the words, the philosophies, the beliefs,
00:42:06 the cultural programming. A mystic sees reality as it is, not as they themselves are. So in the
00:42:14 terms of filmmaking, their eyes are cameras with clear lenses, not projectors. You understand the
00:42:21 difference? The best artists are the closest we have to mystics. The best artists see through
00:42:29 the bullshit. They see through the constructions of their own mind. It's an ability that needs to
00:42:35 be really maintained in the age of technology. It's getting more and more difficult. Why?
00:42:40 This is the time we live in. We are constantly bombarded by stories. And this is just if we
00:42:47 take into account our own industry. From the moment you wake up to the moment you go to bed,
00:42:53 you're bombarded by a story, one or the other. From the little tweet to the narrative to the
00:42:58 biggest, most amazing Nobel Prize winning novel, all of it is a narrative. There is no space for
00:43:04 breathing. We are choking and drowning in stories. The impression I get when I meet a writer who is
00:43:10 oversaturated is that they have so many layers of narrative in their mind that they have no
00:43:15 fresh perceptions. Nothing new and fresh is coming through that mind because it's overly
00:43:21 fed by stories. This is the image that comes to me. Gulping, gulping, gulping narratives for many,
00:43:29 many years. Now, this image is not quite accurate, because it shows us a person gulping junk food.
00:43:35 I don't think the point is, are you consuming only the best art house cinema or are you,
00:43:40 you know, watching crap on YouTube? No. The point is that the quantity itself is the problem,
00:43:47 not the quality. In terms of food or drink, if you drink the best wine out there, you know,
00:43:54 you have the fine taste, but you drink a liter or two every day, are you a connoisseur or an
00:43:59 alcoholic? We are story-holics. We truly are story-holics, and that gets in the way: paradoxically
00:44:07 enough, consuming too many stories will make you not a great storyteller.
00:44:13 A person who is stuck in stories is stuck in a very profound way. And a lot
00:44:23 of my work goes into piercing that, jolting that, disturbing that. And if a person is talented
00:44:34 and has any real-life experiences that I can mine with them, we will get through.
00:44:39 If all they do is spend their time watching stories, reading books, I can't help them.
00:44:48 When we find most of our inspiration in stories, I don't know if you can see this image,
00:44:55 but it's going from the very dark red and all the nuances of, you know, it's getting paler and
00:45:00 paler. We end up creating these echoes of other stories and we are not even aware of it. This is
00:45:08 the issue. Because the creative process itself, even when you're creating the most mediocre story,
00:45:12 will often feel very alive. There's something about writing that just juices up a person.
00:45:18 It's only when you step back from the story or somebody comes from the side and says,
00:45:22 "This is generic, this is generic, this is generic," and you start unpacking that,
that a creative person will become aware of it. So it's really tricky. It's not a question
00:45:31 of intelligence. We are all swimming in the same soup. Writers, the audience, the consultants.
00:45:38 The reason I'm honed in on it is because I became aware of it very early.
00:45:42 But it's a practice to see it. Now, a simple solution that people propose for solving the
00:45:50 problem of authenticity is, "Let's get very realistic," as if that will solve it. It won't.
00:45:57 Authentic is not necessarily realistic. Authentic has more to do with being true to life.
Now, in Scandinavia, where I live, realistic is a big thing: social realism.
00:46:09 Some of the most generic stories seem very realistic. A lot of crime drama.
00:46:16 They're all built on the same pattern, but with realistic production, everything.
00:46:22 But there are episodes of Star Trek, The Next Generation, from the '90s that are more true to
00:46:28 life than any of these seemingly realistic stories. Star Trek still kicks ass when it
00:46:33 comes to understanding and rendering the human condition. I want to give you two fun examples
00:46:38 of this. Okay, let's go back to the twins. I take them to the cinema. This is what we
00:46:44 choose to watch. An animated film called Extinct, about a donut-like species that's gone extinct at
00:46:53 some point in time. Through some convoluted plot, they end up in our time, the brother and sister
00:46:58 here, only to discover that their species doesn't exist anymore, so they have to go back and save
00:47:04 everybody there. That's the plot. A little adventure. What was fascinating, they're like,
00:47:10 "There is nothing real here." Was it authentic? Hell yeah. Which part was authentic? The
00:47:17 relationship between the brother and the sister. I was sitting in between the twins in the cinema.
00:47:21 What they were doing during the screening was looking at each other all the time and going
like this, because the relationship captured was true to life to a T. Afterwards, when I asked
00:47:32 them, they said, "They are us." I said, "In what way?" Well, they were seven years old when we saw
00:47:40 this. They said, "Well, the girl, she wants to jump from the roof and the brother wants to take the
00:47:46 stairs." This is the dynamic between my niece and nephew. She's crazy and he's going, "Don't do it."
00:47:53 Many, many different ways. Authentic is about being true to life, not being realistic.
00:47:57 Let's move it a little bit to more high-end storytelling.
00:48:01 A few days later, I was in Belgrade visiting my parents and I wanted to experience the shaky
00:48:12 chairs again, because the last time I went to cinema to experience this was 10 years ago in
00:48:16 Canada. I thought the technology must have advanced by now, right? They spray water on you,
00:48:23 there are smells coming up, it gets hot and all of that. Pretty crude, but I was hoping for an
00:48:29 immersive experience. They wanted to give authentic immersive experience. The film that was screening
00:48:34 was Avatar. Did I get an immersive experience from jerking chairs? No. I got almost discus hernia
00:48:43 in my neck. It pulls you out of the film constantly because it's so crude. It's actually
00:48:49 the opposite of immersion. I don't believe this technology has any future. When they figure out
how to make VR last longer without making people want to vomit after 30 minutes, this is out.
00:49:00 VR has only a 30-minute limit now, but this technology with the chairs is out.
00:49:07 Avatar, what is this story about? Family escaping war, becoming refugees, war, war, war, la, la, la.
00:49:14 How much money has been put into Avatar to make it seem realistic and authentic? I think that's
00:49:23 the top of the line when it comes to investment, right? Did it succeed? I would say a mixed bag.
00:49:28 Which aspects were authentic? Well, again, when they show us relationships between family members,
00:49:36 and I think if they failed there, that would have been a catastrophe because every writer,
every filmmaker, most of us, have been part of a family. We have something to draw on.
00:49:46 It would be really weird if they failed there. However, when it comes to rendering war,
00:49:52 flight from the war, they failed miserably. There is a huge gap between the quality of expression
and the quality of perception. When you put in a lot of money and the best talent, you will often get
00:50:04 great quality of expression that will mesmerize you. As part of the audience, we are duped all
00:50:11 the time by stories. Well-told stories, one way or the other, are hypnotizing, but always, always,
00:50:17 always pay attention to what's the quality of perception, what's the quality of what they're
actually saying about life. And here, when it comes to those aspects, and some
00:50:27 70% of the film is war and what they have to say about it, what was Avatar
00:50:34 delivering? Authenticity? No, the opposite of authenticity, which is, in my view, bullshit.
00:50:41 Now, I'm not being unnecessarily vulgar here. I'm being positively academic.
This is an essay from moral philosophy by the philosopher Harry Frankfurt, called On Bullshit. I suggest you read it. It's
00:50:57 an amazing little booklet that explores the nature of bullshit in modern times and tries
00:51:04 to define it. Now, a lot can be said about bullshit, but what I'm interested in is this,
00:51:09 the distinction between bullshit and lying. Lying is knowing what the truth is and choosing to go
00:51:16 the other way for whatever reason. Bullshitting is when we are not even concerned with the truth.
00:51:22 We are happy to wing it, to improvise, to deceive, and not to dupe people, but to impress.
00:51:30 We are bullshitting when we are trying to make an impression, and boy, was Avatar trying to make
an impression. Right? Again, for me, bullshitting is
00:51:43 fascinating when it doesn't come from intent, when it happens automatically. We bullshit all
00:51:50 the time because we are not even aware of it. Why are we not aware of it? Because we live in a world
00:51:54 that's bombarding us with information, and having information accessible all the time, having
00:51:59 stories all around us, creates this feeling that things are familiar, more familiar than they are,
00:52:05 that we understand things and know things better than we do. Our life experience doesn't match
00:52:11 the level of information that we are exposed to, and this is where bullshit comes up
00:52:15 unconsciously. It's not really people trying to deceive each other. It's a side effect of the
00:52:21 information deluge that we live in. Was Avatar, when it comes to rendering a war in any way
00:52:29 original? Hell no. Like any other action movie, bomb, things blowing up, very little story there.
00:52:36 There were many scenes there that were completely redundant. You already showed this. You already
00:52:42 showed this. Why is this scene even here? Because they storyboarded the film before they wrote it,
00:52:47 so the visuals were more important than the story content, and it was really telling.
Now, I think this word, original, is really interesting when we talk about authenticity.
00:52:57 We want things to be original, but let's consider what original comes from.
00:53:01 What is the cousin of original? Origin, the source, right? Something that is close to the source is
00:53:09 original, but what is the source of any story? How close are our stories to life, and how close
00:53:16 are they to other stories that have already processed lived experience? So, every story
00:53:21 is a reduction of reality. Every story is pale compared to reality. So, if we never go back to
00:53:28 the real source, the messy source of inspiration, we will always be in danger of creating echoes of
00:53:34 other stories. So, a way to kind of get away from this, and a way to know how to deal with AI later,
is getting really good at stepping into reality, stepping into reality itself. Do not rely
00:53:51 on stories all the time; from time to time, really live fully, and we will see that whatever kind of
00:53:58 lived experience we have, it will have enough ambivalence, nuance, paradox, and confusion in
00:54:04 it to provide rich source material for creating something original. So, I want to leave
00:54:09 the stories aside and focus on the lived experience and what it shows in contrast to what
00:54:18 unoriginal, inauthentic stories show. So, I'm going to give you examples from real life.
00:54:23 Do you recognize the situations? Can you see them well?
00:54:30 These are all pictures taken from the news, people fleeing Kabul, Afghanistan, very recently.
And when this hit the news, people with my background reacted pretty strongly,
00:54:51 and we started swapping experiences because it is exactly the same way I
00:54:55 fled as well, from Sarajevo, from Bosnia, 1992, 12 years old.
00:55:00 Now, what's interesting to me is how rarely do I read stories about war and becoming refugees that
00:55:08 seem to me to be authentic and interesting and bring something fresh and new to the table.
Most of them are pretty predictable and generic, and it always fascinates me,
00:55:17 because it seems to me that people were writing more about the stories they've seen
00:55:21 instead of bothering to actually talk to people who had interesting experiences
00:55:25 that were confusing enough to start with. So, what do we know if we just watch the news?
00:55:31 Well, we know that this is a very stressful situation, that fear is dominant, that there
00:55:35 is a lot of uncertainty, that people are behaving irrationally, people are behaving like cattle,
00:55:41 there is a lot of confusion, all of these things you can, you know, kind of gather from just these
00:55:45 visuals. What is it that we cannot possibly predict if we never enter really confusing
00:55:52 collective situations ever, if we have no experiences of that? Our imaginations will
00:55:56 not go to these interesting places. So, let me tell you just one situation that shows it.
Imagine being separated from your parents: the men are not allowed in, they take away your mother,
00:56:08 my sister ends up somewhere else, I'm alone, I have no idea where anybody else is,
00:56:12 and I see my sister, not that far away actually, squeezed completely between two fat men
00:56:19 in this kind of situation, everybody pressed upon each other, and she faints because
00:56:24 there's no air and the sniper fire is happening at the same time. So, can you imagine the behaviour
00:56:28 of people, the groups move like this, like there is no presence of mind in anybody, total panic,
00:56:36 and I'm screaming, "You are choking and killing my sister!" Nobody hears.
00:56:40 And it lasts, and it lasts, and it lasts, and suddenly, a completely unexpected thing happens.
In the middle of all that, an umbrella appears a little way away, a huge red umbrella.
00:56:53 A woman just opens it and closes it like this, and then she leaves it open and starts saying,
00:57:01 "Peace, calm down, we are better than this. Peace, we are better than this." And she starts
00:57:09 weaving energy. I would say this is the closest explanation I can give it. From that point where
00:57:17 she was standing, everything started changing. I don't think I've ever met a shaman before that
00:57:24 moment, but she was a shaman because her voice changed everybody's energy in that absolutely
insane situation, going from a trance of hysteria to people coming back to themselves.
00:57:36 Immediately, those men saw my sister, cradled her, gave her water, everybody started behaving
like human beings again. These kinds of situations happen only in life. They very rarely happen in a
00:57:48 very convincing way in stories. So, when I work with a writer and I tell them, "You know, you're
00:57:53 patterning these stories in a completely predictable way. It lacks that X factor that
00:58:00 life has." And they will go, "Yes, I hear you. Yes, yes, I know what to do." They come back with
00:58:04 a script and there's a musical scene suddenly. And you go like, "This is not the kind of surprise
00:58:10 I'm after." There is a complete internal logic to why this woman in that particular moment, why...
00:58:17 You know, there is a logic to it when you start to investigate, but the moment itself is completely
00:58:21 surprising. What life shows that many generic, inauthentic stories fail to show is this wonderful
00:58:30 dance of light and dark. There is not a dull moment. If we are present to what's going on
and not focused on our ideas of what war is, what becoming a refugee is, just paying attention to
00:58:41 the real lived experience, we will see it. Now, I tend to give a lot of life and death examples.
00:58:49 This is a short presentation, so I'll stop here. But a question that immediately comes is,
00:58:54 like, if you have lived a life where you have not been exposed to these kinds of extreme events,
00:59:00 does it mean that as a writer you'd have nothing to write about? Not at all. Let me introduce you
00:59:07 to the Swedish laundry room. Like, how do you know that Sweden is one of the richest countries in the
00:59:12 world? The standard of living we have? Jesus. Every building has a very high-tech laundry room.
00:59:18 I cannot recommend them enough. They're the best invention on the planet. The amount of laundry
00:59:23 you can do in three hours? Like, people don't know about this. However, it's interesting for
00:59:29 a different reason. Swedes are the most conflict-avoidant nation on the planet.
00:59:35 This is not a generic statement. It's a scientific fact. I've lived there for 18 years and can safely
00:59:41 say it. It's true. And they will say it about themselves as well. This is the place where the
00:59:47 most conflict-avoidant nation goes to war. This is the only place the Swedes go berserk on each
00:59:53 other. The kind of stuff they say and do to each other as neighbors. Passive aggressiveness,
00:59:59 the inventiveness of it. Jesus. Amazing. So we don't have to go to the war zone to get
01:00:07 this kind of exposure to the human condition. It's everywhere. We just need to pay attention.
01:00:14 So it's really about training your eyes to see this dance much more than becoming mesmerized by
01:00:21 ideas and opinions about what is supposed to be happening in a particular type of situation.
01:00:26 So stay close to the lived experience. Training ourselves to really fully see what's going on in
01:00:34 the now. Becoming fascinated by everything that lived experience brings us. What is the difference
01:00:41 between lived experience and something that doesn't come through it? Well, lived experience
01:00:46 gives you that which you cannot learn about through reading, watching, listening. Stories
01:00:53 cannot help us. There is no empathy in the world that can compensate for this. We need to live
01:00:58 to know. Embodied intelligence is not just cognitive. It's not just mental. It takes into
01:01:04 account all the perceptions that come through the body and its position in the moment in a specific
situation. Now this is what gets interesting with AI. When I talk to ChatGPT and ask,
"Okay, there is this whole dimension that we humans have, which is embodied intelligence, and
01:01:20 how do you relate to it?" And it gives me a long explanation of how robotics is working on it and
01:01:26 says, "Soon we will have it too." So we'll see. I don't know where I am with time because I don't
01:01:35 have a watch, but I'm trying to not talk too fast. All right. Marina Abramovic is a countrywoman of
mine and the grandmother of performance art. She's one of the most amazing performance artists in
01:01:47 the world. And some 13 years ago, I think it was, she did an amazing thing at MoMA Museum in New
York. There was a retrospective of her work. In every room, people were re-enacting her pieces,
01:02:03 except in one room, where she herself did nothing except sit on a chair. There was a table in front
01:02:09 of her and another chair. And the visitors would just sit there and look her in the eyes for a few
01:02:15 minutes. 750,000 people came to that exhibition. During three months, eight hours a day, every day,
she did nothing else than sit there and look people in the eyes. Some people,
01:02:32 when they were interviewed, said it was one of the most transformative experiences of
01:02:36 their life to look into the eyes of complete presence. It's such a rare experience. Now,
01:02:42 Marina is an Olympic athlete when it comes to presence because she's been training herself her
01:02:47 whole life. She has the shamanic capacity to just be completely present. And of course, it's a lot
01:02:54 to ask from an average person, but we all can get better at it because if we don't strive for it at
01:02:59 all, we end up with murky windows in front of our eyes, with really dirty, muddy glasses in front
01:03:07 of our eyes. And this mud, this dirt, is our ideas about life, our beliefs, our preconceptions,
01:03:14 our assumptions, much more than an ability to really see what's going on. Now, concretely,
01:03:20 my work is this. When I work with experienced people who have good craft, I end up cleaning
01:03:28 glasses most of the time. I don't have to do the logistics of the script anymore. I just help them
01:03:33 see. Personally, I'm getting very good at it because I've trained myself to do it, but does
01:03:40 it mean that I'm good at it when it comes to my own life? No. We all need each other. We really
01:03:47 all need each other to help each other see. I define this as a situation of the half blind
01:03:56 helping the half blind, but at least we are not all blind in the same spots, so we can still help
01:04:02 each other see better. And I would say the biggest piece of advice I can give any creative person
01:04:07 moving forward in the very technology-saturated world is surround yourself with window-cleaning
01:04:13 ninjas, friends, partners, people who can do this for you. It's not always pleasant. It's not
01:04:20 comfortable, but boy, does it help us see and not get mesmerized by our own illusions.
Working with ChatGPT, let's say textual AI, will pull us into language constantly.
01:04:36 What does that mean? Gerhard just showed you that AI treats everything as language, in the sense that
01:04:42 it treats everything as patterns. The sound, the images, the MRIs, the Wi-Fi signals,
01:04:47 the radio signals, everything is patternized, and AI is dealing with the raw material of this
reality in the form of tokens that are not even meaningful to us as humans, pieces of patterns
01:05:01 that it rearranges and finds. This has deep implications because as human beings, we have
01:05:09 the capacity to intuit wholeness with our bodies, with our intuition. We have a sense of something
01:05:14 beyond the fragments that we experience. The AI doesn't necessarily have that yet.
01:05:18 So, if we get pulled into a lot of linguistic interaction from the standpoint of fragments
01:05:26 interacting with fragments, the result of it can be something like this.
This is a quote from a writer who wrote one of the first AI novels, Pharmako-AI; the co-writer was GPT-3.
01:05:42 It was a fascinating experience for the writer. It took two years to write
01:05:48 in collaboration with the AI. This is one of the deepest writing processes that I've heard of so
01:05:52 far. This person really paid attention to what was going on and said, "The best thing that AI gave
01:05:59 was a mirror to my own mind. I became aware of how language constructs my identity. I became aware of
01:06:06 the mental concepts that I'm living." If we go deeply into a conversation with AI, we can get
01:06:12 a deep type of result. But this is not what the industry will ask from creative people. The
01:06:16 industry will ask for speed and cheap output. This is the opposite of what this writer did.
So, Gerhard said it took two months. Soon it will take two weeks, or even two days. There was a
01:06:27 course in screenwriting going from idea to pilot in two days. What's being advocated there is not
01:06:35 an explorative process at all. So, if we are not aware of this problem, we will get really
01:06:41 totally pulled away from the lived, from the body, from the visceral, into the mind. And we know what
happens when we never leave our mind. I think this is a picture I took of a Sylvia Plath poster
01:06:54 in Amsterdam. "Is there no way out of the mind?" For her there was no way; she put her head in the oven,
01:06:59 poor woman, in the end. So, this kind of break from the constant chatter is necessary for good
01:07:04 creativity. If we want to create something new and fresh, we have to step back from constant
activity, constant stories. So, this kind of stepping in is super necessary, and stepping back is
01:07:17 equally necessary, because if we don't step back, we will not see the repetitive patterns that we
are living and writing. It's as clear as that. You see the most typical example of this in
01:07:31 life when you witness somebody being in a very dysfunctional relationship for a long time.
01:07:36 You know, you have a friend, a family member, and everybody around them sees it,
but they can't see it themselves. And you want to pull your hair out in despair. They just can't see it,
01:07:45 until they see it. And then they wake up and they go like, "What was I thinking for five years?"
This is why we have to step back and detach and disengage from our experience a lot of the
01:07:54 time. So, this is a training in step in, step back, step in, and it has nothing to do with
technology. It has everything to do with our own capacity. The last element, I would say, beyond
01:08:05 step in and step back, is to find ways to spend time in the unstoried space. What is unstoried space?
Any type of silence that we can engage with. Through, you know, the typical stuff that people
01:08:20 would say today: meditation is hugely important. Silence, going into nature, any type of
01:08:26 dance, ecstatic dance, that will pull you out of your own mind into your body. Even psychedelics,
01:08:32 if you do them in safe circumstances, could help. I mean, there's so much: it can be Tai Chi,
01:08:37 gardening, being with babies, whatever rocks your boat, just so that we
01:08:43 have a taste of that embodied experience that's not narrative. So, I call it the unstoried space.
When we get there, whatever is going on, we can perceive it, we can step back from it,
01:08:55 and this kind of sentence makes sense. There is chaos under the heavens and the situation is
01:09:01 excellent. We are fine with the chaos, but we can hold the opposite. We are not overwhelmed by it
01:09:06 emotionally. So, this kind of training to be able to hold a lot of intensity, a lot of contradiction,
01:09:13 is a human capacity. We are not holding it here. We are holding it here. That's why silence is so
01:09:20 important. So, to wrap it up, important to know how to step in. We need to live. We truly need
01:09:26 to be fully exposed to reality to be of any use as storytellers in the time of AI. We need to step
01:09:33 back to be able to see the patterns, and the unstoried space is hugely important.
01:09:39 Other way to say it is, you know, cultivate the view of the chicken, not a chicken. Whatever is
01:09:47 in front of you, see it, relate with it in reality. Clean the glasses as often as possible. Help each
01:09:55 other do that. And I would say a lot of the time, this is my experience anyway, I'm totally heady,
01:10:00 stuck in my head, and to pull myself out of it immediately, I do something like this.
01:10:06 And I taste salt on my hand, the skin, and I'm immediately back to this moment, this body,
01:10:15 out of my head. So, to boil it all down to a simple thing, a simple message I have to say,
01:10:21 if we want to remain relevant as creative storytellers in the time of AI, more than
01:10:26 anything else, we need to live to tell the tale. So, thank you.
01:10:32 [Applause]
01:10:40 [Pause]
01:11:45 This one is the good one, right? Yeah? Okay. Well, thank you, everyone. Thanks for having me.
01:11:56 As I said at the beginning, I'm a lawyer, I'm a copyright lawyer, and when I look at AI,
01:12:03 I'm amazed by it and at the same time I'm asking myself,
01:12:09 how about copyright? So, that's the title of the talk I'm going to give you. It's "Creativity
01:12:16 brought to you by AI and how about copyright?" So, as I'm based in Germany, I'm going to take
01:12:24 a German law perspective on the things I'm telling you now. You can call it a bad or a good thing,
01:12:31 but copyright is usually national law. So, as you work across the European Union,
01:12:38 you're going to encounter 27 different sets of laws. So, what I'm telling you here might be the same in
01:12:44 your country, where you're from, or it might not be. So, bear in mind that as I'm working through the
01:12:52 AI process, I'm evaluating it only under German copyright law. So, well, we heard a lot about
01:13:02 creative AI today, and I want to pick it up a little, well, from a lawyer perspective.
01:13:08 Usually what happens when I approach a new technology or I want to evaluate something,
01:13:17 I ask myself, what's that? Well, it's typical that lawyers want to define something. We
01:13:24 really enjoy contracts, we really enjoy clear language, we enjoy having a definition
01:13:30 so that everyone is on the same page. So, let's start there.
01:13:34 AI is the science and engineering of making intelligent machines, especially intelligent
01:13:41 computer programs. It is related to the similar task of using computers to understand human
01:13:47 intelligence, but AI does not have to confine itself to methods that are biologically observable.
01:13:54 So, well, that was a long definition, and probably we still don't understand what it's about.
01:13:58 Well, basically, it's a machine, a machine that copies human behavior. And as Gerhard very
01:14:07 vividly explained and showed, there are different types of AI applications right now, and they're exploding,
01:14:14 they're all over the internet. And what I want to focus on right now and kind of make it easier
01:14:20 to digest, because law can be kind of boring at times, don't tell anyone that I said that
01:14:26 because it's my daily life, and I would say it's not boring at all. But to give you a little hint
01:14:33 of what AI is, I'm going to use a picture from Midjourney. Well, there you have it: a portrait
01:14:42 of a smiling, funny, and slightly drunk fox in the style of a party animal. So, when I think
01:14:49 about generative AI, and when I think about the process of creation and that something came out
01:14:57 of a machine, I think about, well, the slightly drunk fox. So, maybe that helps you to
01:15:05 bear in mind and visualize what I'm talking about right now and what I will continue
01:15:13 talking about. I'm kind of picking up where Gerhard left us, to evaluate from a lawyer's perspective
01:15:24 or from a legal perspective why AI is relevant to copyright. You need to differentiate between
01:15:34 three steps, three steps of generative AI. The first step is there's an algorithm and
01:15:40 the algorithm needs to be trained. I'm calling that training input one. And then you go along
01:15:47 and ask yourself, well, are copyright-related actions taking place? Yes or no? Well, little
01:15:54 hint, yes, they are taking place. Otherwise, I wouldn't be here. The second step of generative
01:16:02 AI is the prompt, which I'll call input two, as Gerhard was already explaining. On Midjourney,
01:16:10 you're typing in a text and then the machine makes something out of it and creates an output.
01:16:17 So, the second column would be the input to the prompts. And the third would be our slightly
01:16:25 drunk fox, the party animal with the drinks in his hands, and that would be the output.
01:16:32 So, in all those three parts, you will ask yourself, or what I'm doing is how those actions
01:16:40 relate to copyright. And here, one thing I want to pick up again, which Gerhard already
01:16:49 explained, and which is quite important when you think about copyright, is the different techniques
01:16:56 by which the algorithms can be trained. So, AI has actually been around us for quite
01:17:04 some time, for a very, very long time. For example, the chess computer, everyone knows it;
01:17:10 I won't be able to use it because I don't play chess, but maybe you happen to use it.
01:17:16 So, that's based on AI. What was around us for a long time was machine learning.
01:17:26 Machine learning, just to break it down on a simple level, is when a human
01:17:33 tells the machine how to train itself. So, imagine like a machine learning process,
01:17:42 the human creates the parameters the machine is using and following, and then you have,
01:17:49 like, for example, suggestions regarding TV shows on your favorite streaming platform,
01:17:55 and that has been always brought to you by machine learning. What is happening now,
01:18:00 the impact of the human becomes less and less, or it's fading away, when we look at deep learning
01:18:06 processes, where the machine, as Gerhard explained, is creating those large
01:18:14 language models which are used for generative AI. So, for example, the one I'm focusing on
01:18:22 today is Midjourney. So, and now, well, the training data. What's the machine trained with?
01:18:33 That's pre-existing work, to train itself. And that's actually the difference
01:18:39 between us and the machine: we walk through the streets, or I look at the art in this
01:18:46 beautiful room, and I get inspired. I see it, I consume the art by just looking at it, I get
01:18:53 inspired, and hopefully open my eyes so that I can, as Tatiana told us, tell a story with it. So,
01:19:02 what the machine does to be able to learn and to understand how things look is: it needs to create
01:19:09 a copy. It needs to make a reproduction of something. It needs to fixate something.
01:19:15 And this is important because it's a copyright-related action that takes place. And this
01:19:21 copyright-related action, in general, is something only the author or the rights holder
01:19:30 is allowed to perform. And what we have to keep in mind and bear in mind is that when those training
01:19:37 actions happen, copyright-related actions take place, in terms of the reproduction rights,
01:19:43 which in Germany would be section 16 of the German Copyright Act. So, now, who's allowed to do that?
01:19:53 We see all those Midjourneys, ChatGPTs, and all those AI applications which are
01:20:00 brought to us on the internet, and we are happily using them, and nobody asks, well,
01:20:09 I certainly think about it: what kind of copyrighted material are they using? And was it
01:20:17 ever allowed for the algorithm to use those pre-existing copyrighted works? Well, usually,
01:20:25 who is allowed to use such work is the right holder. So, it is easy if you yourself want to
01:20:31 train an algorithm with your own pre-existing work, and you are the rights holder of
01:20:36 an art piece or something like that, and you want to feed it into the algorithm.
01:20:43 Well, that's not usually what's happening. It's usually third parties that want to use
01:20:49 pre-existing material. And what I usually do, or what is really right now part of my daily work,
01:20:58 is reviewing agreements, licensing agreements, because there are all those companies,
01:21:02 like production companies or the digital companies that want to use pre-existing works for their own
01:21:09 AI projects, and they have this huge stack of data laying around in their company offices,
01:21:16 and they are asking themselves, well, am I allowed now to use it? Am I allowed to feed this kind of
01:21:22 copyright-protected work into an algorithm? Yes or no? Well, here's one answer your lawyer usually
01:21:29 gives you, which you don't like, but it's one of the most common answers a lawyer will give you.
01:21:34 It depends. So, it depends on what is written down in the agreement or in the licensing agreement.
01:21:41 Usually, the best thing that could happen to you is that AI is stated in this contract.
01:21:48 Well, if you have a really old agreement, a really old licensing agreement,
01:21:53 AI was not known back at the time when those contracts were closed and the deal was signed.
01:22:00 So, you might go and look further to see if the agreement includes a clause
01:22:08 for unknown types of use, which might allow you to use the pre-existing work for AI purposes.
01:22:17 Well, if none of this is in the agreement, what usually happens right now, and what's more of
01:22:26 the common way to train such algorithms, is with pre-existing work which you can find on the
01:22:32 internet. So, there are the huge AI applications that crawl through the internet, that train
01:22:42 themselves using pre-existing copyrighted work without asking, and then I ask myself,
01:22:50 maybe you have too: are those machines allowed to do so? So, there is one term or one section
01:22:57 in the German Copyright Act, and I'm pretty sure the other 26 member
01:23:06 states have such a provision as well, because it's based on European Union law, which basically,
01:23:16 from the wording, allows text and data mining to happen. So, I'm not going to bore you with the
01:23:24 whole clauses and the exact wording of it, but bear in mind, as I said at the beginning,
01:23:34 there are, in the training process, there are copyright-related actions taking place.
01:23:43 Section 16 of the German Copyright Act comes into play while such machines are being trained, and now
01:23:49 the Copyright Act provides a legal framework under which it is permitted to reproduce, that's the
01:23:58 reproduction I was talking about, lawfully accessible works in order to carry out text
01:24:04 and data mining, which is the process you need to train an algorithm; copies are to be deleted
01:24:10 when they are no longer needed to carry out the text and data mining. And this subsection 2 applies to
01:24:19 all digital or digitalized works. So, it's really broad, it's a broad term, it's a broad application
01:24:27 of the section, and so, at first look, it might be allowed to crawl through the internet,
01:24:37 well, that would be German law, and to use such pre-existing work. But there's a but, there's
01:24:42 always a but; in law there's always a but. We still don't know how to apply this section in
01:24:49 everyday life, because there have been no legal actions on it yet, and you can already see where I
01:24:54 highlighted the words we're going to fight about in the courts in the next years. Actually,
01:25:02 one of the big topics is: when is something made lawfully accessible? Is the work that's
01:25:10 somewhere on the internet, where everybody can see it, lawfully made accessible, so that everyone,
01:25:16 or an AI mechanism, can use it, yes or no? We don't really know the boundaries here yet.
01:25:23 And the third part is down below: an AI mechanism is only allowed to use such pre-existing work
01:25:32 if there has not been a reservation of rights, and such a reservation of rights must be included
01:25:39 in the digitalized work in a machine-readable format. So, what's the takeaway from that?
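[An illustration of the machine-readable reservation of rights mentioned above, added here as an aside rather than something from the talk: one commonly discussed way a rights holder expresses such a reservation in practice is a site's robots.txt file, which AI training crawlers such as OpenAI's GPTBot say they honor. Python's standard library can evaluate such a file; the domain and file contents below are hypothetical.]

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of a rights holder who reserves their works
# against an AI training crawler (GPTBot is OpenAI's web crawler)
# while leaving the site open to everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# The AI crawler is refused; an ordinary client is not.
print(parser.can_fetch("GPTBot", "https://example.com/artwork.jpg"))      # False
print(parser.can_fetch("SomeBrowser", "https://example.com/artwork.jpg"))  # True
```

[Whether a reservation expressed this way actually satisfies the "machine-readable format" requirement is exactly the kind of open question the speaker expects the courts to settle.]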
01:25:48 If you want to use pre-existing copyright work in a company, you might have a licensing agreement,
01:25:53 or if you don't have a licensing agreement, and that would be case two, for the huge
01:25:59 training processes that are taking place for the Midjourneys and the ChatGPTs, we're going
01:26:06 to have to look at the national law, which might allow for it to be used. So, now that the machine is trained,
01:26:15 we're going to look at the prompts. Gerhard already really rapidly explained the prompts.
01:26:22 A prompt is a short text phrase, for example, that is put into Midjourney. Basically, it's how
01:26:28 you tell the machine what it should do or should not do. To spice it up a little, it would be like
01:26:36 "A Cute Little Fox" by Elon Wilkins, storybook illustration, winter wonderland. So, somebody
01:26:42 just typed that into Midjourney, and what came out was that. So, what is interesting when I look
01:26:51 at that, you remember the slightly drunk fox from the beginning. You can tell that the machine
01:26:58 has been trained very, very well and can create two different kinds of foxes by only evaluating
01:27:05 the prompt that has been put in. So, if somebody had drawn that, if somebody had sat
01:27:12 here on the couch and drawn the children's illustration fox, obviously the children's illustration
01:27:21 fox would not be the drunk fox, but if somebody had drawn it, everyone in this room would be pretty
01:27:28 sure that what was created would be copyright-protected art. There
01:27:34 wouldn't be a hint of struggle in evaluating it and saying, "Well, no, that's not copyright protected."
01:27:40 Well, with art created by a machine, we certainly do have a problem. And the problem is
01:27:49 that our copyright protection, and that's the wording in the German Copyright Act, says that
01:27:58 only the author's own personal intellectual creations are works within the meaning of this act.
01:28:06 So, is this fox, created only by that prompt that was typed in, only by somebody typing
01:28:16 in "a portrait of a smiling, funny, and slightly drunk fox in the style of a party animal," is that copyright-
01:28:22 protected art? Yeah. That's where we are right now. As I was saying
01:28:35 before this talk, when I was speaking on AI panels
01:28:40 back in March, I was pretty sure, and I was saying, "Well, obviously it's not copyright protected."
01:28:46 It's solely based on a machine-trained process, and you can't really tell what the mechanism did
01:28:53 behind closed doors. You can't really see the own personal creation that somebody poured into
01:29:00 this fox. And now that it's July, I'm more skeptical about it, because when you think about
01:29:15 the prompts, they become more and more specific, and there are already starting to be courses
01:29:25 and training sessions on how to be precise and how to use prompts, to put them into the algorithm,
01:29:32 into the AI application, to create the outcome you really want to create. Then you think:
01:29:41 might AI be just a tool of the user? Is there a difference between typing something into my
01:29:48 laptop and creating something, or creating something with Photoshop or another program like that? Is
01:29:56 there really a difference between that and using an AI application? Well, what's hard right now
01:30:05 is to answer this, because as far as I know, there has not been any court ruling in
01:30:12 the European Union, or in Germany at least, which is clear on whether materials that come out of AI
01:30:20 applications are copyright protected. I think it's a good topic for a discussion; I
01:30:28 certainly do have an opinion, but those are just the facts of where we are at the moment.
01:30:34 So, if we take this a little farther: the fox, is it copyright
01:30:43 protected, yes or no? That's the one question. But what's really interesting from a legal point of
01:30:49 view is what happens, and I always ask this question, if somebody in the audience
01:30:58 recognized their own picture or their own drawing in it, because that would be shocking,
01:31:05 because then it could be a violation of copyright. Whether the fox, the output that comes
01:31:12 out of the AI application, is copyright protected, yes or no,
01:31:21 is a separate question from whether such AI output can or cannot violate pre-existing work
01:31:30 and the copyright of others. And yes, it can. There have been, well, I'm pretty sure in the future
01:31:36 there are going to be a lot of legal cases where we're going to fight about this fox and somebody
01:31:43 noticing, "Well, that's actually a picture I drew." And now remember, as I said at the beginning,
01:31:50 the legal sections of the German Copyright Act that might allow for
01:31:57 the training purpose, the reproduction which is needed to train a machine. Well, they only allow
01:32:03 that. But they certainly do not allow for pre-existing copyright-protected work to be used
01:32:12 in such output. So there's going to be a lot of work in the next few years probably ahead of us.
01:32:21 Which brings me to the final question, and I think it's really important that we think
01:32:30 about it: is there a transparency obligation? Is someone who uses ChatGPT or Midjourney
01:32:38 or any other AI application obligated to be transparent about the fact that he or she
01:32:46 used an AI application? And I think it's quite important, because what will happen is, imagine
01:32:54 you see your fox, the fox is your pre-existing material, you know this, and then I come around now
01:33:02 and use it, maybe in a children's book or something else. And then you want
01:33:11 to, well, you are actually the person who rightfully created that fox, and you want to pursue your
01:33:18 rights. Well, what you usually do is pursue your rights and ask the person who's using
01:33:27 such material to stop. So someone who's using something that was created by somebody else
01:33:36 wants to know, or needs to know, if AI was used, in order to protect
01:33:44 themselves from being sued while using such materials. I think that's going to be
01:33:53 something I'm thinking about a lot, and it's not predictable. Well, as was
01:33:59 explained earlier, they're starting to use watermarks on AI-created material, and that's something really
01:34:07 practical we're going to think about in the future. So, to wrap it up now: well, as
01:34:14 was said before, AI is here now and for many years to come; I don't think that AI will go anywhere, it will stay.
01:34:24 And the main questions we're going to see and have to discuss in terms of legal is, well,
01:34:31 who's allowed to use such pre-existing work to train the AI? Are prompts even copyright protected?
01:34:40 Yes or no? The thing you type in. Then there's a call to action: if, as a
01:34:49 company, you have a great stock of data and you want to use this pre-existing material
01:35:01 to train an AI algorithm, you need to train your team in-house, inside the company, and
01:35:08 you need to look at your licensing agreements to see if you're allowed to do so, yes or no. The third thing
01:35:15 we want to have a big and huge discussion about is whether output like the slightly drunk fox is
01:35:23 an own personal creation, whether it's copyrighted, yes or no, and the transparency obligations that
01:35:33 come with using AI. And the next point, which is, I think, more of a political
01:35:42 discussion and debate we're going to see in the future, is: is protection needed? Is it necessary that a machine-
01:35:51 created fox like the one you just saw is copyright protected?
01:36:00 And the last point is related rights. That's typically what happens in Germany:
01:36:09 if this really romantic image of a creator, who creates something, who pours his heart
01:36:18 and soul into painting a picture or creating an art piece, does not apply
01:36:28 to the machine, then we ask ourselves whether output which was created by a machine
01:36:35 should be protected otherwise, and that's called related rights. And this discussion has happened
01:36:41 quite a lot and quite often in Germany in the past, concerning other types of rights.
01:36:47 And this is something that will come up in the future. So it's going to be an interesting debate
01:36:54 in the courts and in the lawmaking field, actually. And yeah, there are many questions I don't have
01:37:04 answers to right now, but it's always good to end a talk with questions,
01:37:13 to open the panel and to speak about it. So thank you very much for listening.
01:37:17 Thanks. Do you both want to come up? I'm going to open to questions in a sec,
01:37:28 but I've just got a few that maybe I can start the Q&A with.
01:37:32 I think my first question, maybe they're both a bit cynical, but my first one is about
01:37:52 the legal process, right? And the idea of AI sort of scraping content that's been made before
01:38:01 to create something new, and the sort of legal implications of that. But when you were sort
01:38:05 of working through it, I was just sort of thinking about how that sounds like a lot,
01:38:09 how human brains work, right? You kind of just take everything in and then, like, I guess the
01:38:15 best example is someone like Quentin Tarantino, who's known for like lifting whole scenes or
01:38:19 ideas from different films. And the way he sort of gets away with that is that his films are just so
01:38:25 good and he just puts so much of himself in them that he can kind of get away with it. And I guess
01:38:31 the legal complication here seems to be that right now AI is not good enough to get away with it,
01:38:38 it's not good enough to make something different. So when it does become more sophisticated and
01:38:45 make something different, what is the legal implication here? Because that kind of seems like
01:38:50 just creating things. - Yeah, well, that's a question of, like, the training process.
01:38:56 And the training process will always work, like, there is a pre-existing work and then it's
01:39:02 reproduced, fixated, because otherwise a machine can't learn. So and that will always happen. So
01:39:10 on this part, that's the difference between us humans, where we get inspired, we usually,
01:39:17 we usually don't copy, we take it and create something out of it. But the machine can't do
01:39:23 that. It needs to reproduce the pre-existing material and then create something out of it.
01:39:30 So, you need to really strictly differentiate between the processes.
01:39:38 And then the output, of course, yes, is AI really getting away with it? Well, big question.
01:39:46 Probably it can, if it's something new and you can't really see a copy of something
01:39:52 pre-existing in it. But this is going to be really individual in each case. So, no definite
01:39:58 answers. Yay. - I love that. Yeah. - And I have another question about stories.
01:40:06 And I, there's so much to chew on there. And it was really interesting. It made me think of this
01:40:12 thing that this indie American filmmaker, Jim Jarmusch once said, he said something to the
01:40:17 effect of like, chasing originality is kind of a fool's game in filmmaking, because every story
01:40:22 has been told. So what you have to do is kind of just be authentic and put yourself into it,
01:40:28 which is kind of what you're saying, you bring your own life to it. But then I think the second
01:40:32 part of what he said was that, who cares? What sells is what sells, you know, films are commodities.
01:40:38 And I'm sure, you know, Rupert Murdoch didn't care about the artistic integrity of Avatar when he
01:40:44 greenlit the first one, he knew he could sell it. So I guess, is your plea, like relevant? Like,
01:40:50 is it kind of a lost battle? - It depends. It depends on how we as
01:40:57 audience react to this. If we are happy with the output, the deluge of AI-created content
01:41:04 that's average, and we as consumers have been numbed into it, and we don't even tell the
01:41:08 difference anymore, I don't think it will matter. For some people, it will continue to matter. And
01:41:13 what Gerhard put on the slide, like human created content will get premium. I think that will always
01:41:20 be the case, but it will be minority, we will not need as many creatives for that. We will just
01:41:26 need fewer creatives to achieve that. So I think the industry will shrink. Creative industry will
01:41:33 not employ as many people, but sure, we will have junk, we will have high premium, it will continue.
01:41:39 But I think people will have to have side jobs, artists. And I'm comparing this to
01:41:45 the transition that happened in socialist countries. For example, Yugoslavia, during
01:41:52 socialism, artists were supported by the state, everything was subsidized. So you could have a
01:41:57 job for decades to write, you're employed by a publishing company, you happily write your books,
01:42:03 you have a salary, and then all that stopped, capitalism came, and you have to sell enough books
01:42:08 now. And how many writers sell many books? How many great writers sell books? Suddenly,
01:42:15 people who used to be professional artists were now hobby artists. So most of my
01:42:23 friends who are artists work in banks. And I'm thinking AI can bring this certain type of change
01:42:29 for different reasons. So professional paid artists might become only reserved for the few.
01:42:37 That's the sad development, I believe. And there are always sponsors and, what are they called,
01:42:43 patrons, like in the Renaissance. Some artists will get money from rich people, always.
01:42:49 This sounds horrifying, to be honest. It sounds very feudal. It's like going back to this upper
01:42:59 class that employs their personal artists, that creates something for their amusement.
01:43:04 Yeah, but that's just me.
01:43:08 Can I ask a question? So at my company, they created their own GPT. It's the same thing as
01:43:20 ChatGPT, but they call it ChatPMC. Because what they have... Can you hear me? Okay, sorry.
01:43:26 Did you get the first part? Yes, because the company I work for owns a lot of
01:43:35 media publications and there's a lot of writers. What we have been told is that if my editor goes
01:43:45 onto ChatGPT and writes a story, or even creates a headline and then fills in the story on his own,
01:43:55 everything that's on there is owned by ChatGPT. So that's kind of interesting, because
01:44:04 they're saying they're not going to use it right now. But I mean, it only goes up to 2021.
01:44:11 All of the content they have is only through 2021. And so everything that everybody puts into it now,
01:44:19 they won't actually own. So is this a trend where other companies are starting to create their own
01:44:28 ChatGPTs internally? Do you want to? I think it's a legal question, but I don't know.
01:44:36 I think it's a more legal question for you, I think.
01:44:39 Both. It's both practical and legal, I think. What I've been noticing is ChatGPTs used
01:44:47 internally in human resources, HR; there are chatbots that companies are starting to use, and
01:44:55 what they're trying to do is feed the algorithm with company policies and with all those
01:45:06 compliance documents and everything, and train the algorithm with it. So then, when it's a big
01:45:12 company and you're a part of it and you want to ask a simple question, for example,
01:45:16 how many days of vacation do I have left? Or how many days of leave do I get
01:45:25 when my uncle has died? That would usually be a question you would ask somebody on the HR team.
01:45:33 They use a chatbot and type in those questions and get pretty good answers.
01:45:41 So that's actually one thing I've been noticing about how companies use it internally.
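[A sketch of the internal policy chatbot just described, purely my own illustration rather than anything from the panel; the policy texts and the keyword-overlap retrieval are assumptions standing in for a trained model. At its simplest, such a bot is retrieval over company documents:]

```python
# Hypothetical internal policy snippets an HR chatbot would be fed with.
policies = {
    "vacation": "Full-time employees receive 30 days of paid vacation per year.",
    "bereavement": "Employees receive 3 days of paid leave after the death of a close relative.",
    "remote": "Employees may work remotely up to 2 days per week.",
}

def answer(question: str) -> str:
    """Score each policy passage by word overlap with the question and
    return the best match (a toy stand-in for an LLM-backed chatbot)."""
    q_words = set(question.lower().split())
    return max(policies.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

print(answer("How many days of vacation do I have left?"))
# -> Full-time employees receive 30 days of paid vacation per year.
```

[A production system would replace the overlap score with embeddings and generate a conversational answer from the retrieved passage, but the retrieve-then-answer shape is the same.]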
01:45:47 How? I think from the practical point of view, you see already that there are a lot of companies
01:45:55 out there and offering services to train secure or non-open large language models.
01:46:03 Two of the biggest challenges right now is the amount of data that you need to really train
01:46:09 something deeply to have that. And the other one is, it's an energy thing. It's really expensive
01:46:17 and you need a lot of centralized servers and computers to be able to train this. But still,
01:46:24 I think they're popping up a lot right now, a lot of copycat LLMs that are similar to DALL-E 2,
01:46:31 similar to ChatGPT, similar to Midjourney. But you see, I would say, at least from my perspective,
01:46:39 with the failure of Bard, the Google ChatGPT clone, that it's not that easy. And if you talk with
01:46:47 people from OpenAI, and this was back last summer when we had conversations with them,
01:46:55 that they sometimes don't even know why it works. They don't know why it does what it does. And
01:47:02 this I find, for me, endlessly fascinating from a metaphysical point of view, that we go back to
01:47:08 not knowing how the technology we use actually works. But this I find also difficult in trying
01:47:14 to reproduce something like that, something that has that kind of capacity, which right now people just use:
01:47:19 we put a lot of stuff in there, and then we have these knobs that we turn, and when you turn
01:47:25 the knob a little bit to the right, it doesn't work anymore. And if we turn it too far to the left,
01:47:30 it doesn't work either. And if we hit the middle, then it does what it does, but we don't know why.
01:47:34 This is, I think, one of the biggest challenges of trying to employ something that is as powerful as
01:47:39 the class leaders right now. A question: I cover Hollywood, and right now the town is
01:47:49 more or less shut down by a Writers Guild strike. Now, a lot of people are saying that, you know,
01:47:55 most, there's a lot of issues, but the one they're most concerned about is AI.
01:48:01 And they believe that if, let's say, for instance, the old days, if you wanted to break down the plot
01:48:07 of a TV show and outline episodes, maybe you had 10 people who sat in a room and collaborated.
01:48:15 What the big writers are afraid of is that they may have a mini room, which is fewer writers at
01:48:23 low cost, and then they'll take that showrunner and basically they'll run whatever they do through
01:48:30 AI, through learned intelligence, and it'll require way fewer writers, which the writers
01:48:38 believe will destroy the ecosystem that happens when young writers learn by being part of this
01:48:46 whole collaborative experience. How realistic do you think their fears are? Very. In that
01:49:02 particular respect, yes, I think it's coming. The way the companies think, they want to lower
01:49:07 the costs and they want to do things quickly, and there are executives who don't understand the creative
01:49:12 process; there's always been a difference between the suits and the creatives, right? And the happy
01:49:16 combination is the creative suits, people who have done the creative work before they became
01:49:16 business executives. If the companies are run only by business people who really seek the bottom line
01:49:22 all the time, they will never understand what it takes to get good at writing. So that gap will
01:49:29 remain. I do believe that's one of the clearest dangers that comes out of this in the short term.
01:49:34 Like people in supermarkets who used to ring up your products, and now you just scan them yourself, and
01:49:45 these people don't have jobs anymore. You think it's going to be like that?
01:49:49 I think still amazing writers will get work, but when you had an industry with 500 people somewhere,
01:49:55 you will need maybe 20 max, something like that. So much, much fewer people.
01:49:59 It's always interesting, depending on who you talk to. For example, I talked to a writer
01:50:07 here two days ago who said she's not afraid at all, because now she can do what she wants to do
01:50:13 without a production company, without a streamer, without any of the structure that you used to need. I
01:50:18 think for her, it's completely leveling the playing field. If you're creative, if you tell
01:50:23 great stories, you can put stuff out there that will be able to compete with the blockbuster stuff
01:50:30 from the big studios. But I think what's happening is what I meant earlier about
01:50:36 this structure being destroyed and those values being set free. We already see this in Germany
01:50:42 right now. I know a lot of companies who already use it. They're not going up on stage and talking
01:50:48 about it, but we know the people who train the staff at the big companies, and they're already
01:50:54 trying to see how it can make their work more cost-efficient. And in Germany, we don't
01:51:01 even have those big writers rooms; a writers room in Germany is already a mini room, and people
01:51:07 are trying to cut down on those, cut down the time that people
01:51:12 have to develop the stories, and cut down costs in that regard. So I think this pressure and what's
01:51:17 happening there is very real and it's gonna, in my opinion, it's already happening and people
01:51:22 are not prepared for it. Do you think that these writers, if you were them, if you were leading
01:51:30 them, would you consider this something where it's worth staying out on strike for a good long time?
01:51:38 Is this or is this just inevitable? I think it's very difficult. The longer they
01:51:45 stay out on strike without engaging in a dialogue on exactly how this will be
01:51:52 used and what it will mean for the craft and for the industry, and maybe also taking it very
01:51:58 quickly to a political level, a national political level, at least in the US, I think
01:52:03 it's becoming more and more difficult. Because what we're seeing already right now is that,
01:52:09 I think, the pain the audience felt with the last writers strike is not as pronounced, because
01:52:15 there's so much content out there and people are not feeling the pain of not being able to see the
01:52:20 new show. If they're waiting now two years or three years for next season, who cares because
01:52:25 there's thousands and thousands and thousands of hours of TV out there. But I think one of the
01:52:33 things is to be aware that the structures will change very quickly and maybe find a way to take
01:52:41 this on a political level and not so much on the level of talking with your employers or with the
01:52:46 studios or with the suits as you were saying. What is one thing that AI cannot do, just in general?
01:52:55 Be human. Being human. Feel. Breathe.
01:53:02 What field, like we're talking about writers, we're talking, what is something it cannot do?
01:53:10 What is that?
01:53:15 [inaudible]
01:53:27 From my point of view, it's the stuff which is not creative, when you need to repair something at home.
01:53:34 But in case of the creative, in case of the law, accounting, everything can be replaced very soon.
01:53:44 I think on its own...
01:53:45 Even the gardening.
01:53:46 On its own, it can't do much, but it's getting better and better. And the point is that we will
01:53:51 need less and less human input to achieve great results. So fewer humans will be needed to
01:53:57 create creative works, because a little human input can go a long way. So where we now have a whole creative
01:54:02 industry, we will have a much smaller creative industry. Maybe that's the better angle
01:54:07 to look at it. AI cannot do everything, but it does have a lot of potential. So we need
01:54:12 to think about how we can make it better.
01:54:18 And I think this is something that you already see that in terms of training AI and there is not enough
01:54:24 text material out there that ChatGPT hasn't already digested. So what is happening right now,
01:54:32 they're programming large language models that produce content to train other AIs.
01:54:40 But I think that we come to a point sooner or later where this will happen with nearly no
01:54:46 human interaction. And there are some scenarios out there. Mario Klingemann, an artist from Munich,
01:54:52 who spoke at one of our conferences five years ago, he already drew this picture of saying,
01:54:56 you have this bot that scans your social media timeline. It sees that you like to drive a BMW,
01:55:02 that you like blonde men, and that you like France. And so the TV show that is created for you
01:55:09 will be with those elements very prominently. And I think this is right now, for me, apart from all
01:55:15 the societal stuff and the political stuff and the implications behind it, is that stories are
01:55:21 the glue that binds society together, that forms a common narrative that we all
01:55:26 rely on, and that this will get atomized in the same way that social media has already atomized
01:55:32 our social interactions. In a very dystopian setting, no two people out on the street
01:55:39 will have seen the same story or be able to exchange common experiences,
01:55:44 common emotions with each other. I think this is, for me, one of the most dystopian and terrifying
01:55:51 things behind this: that you tell stories that are just for one person, and you're even more in
01:55:57 the bubble of your own creation in terms of narration. - Yeah. They need to see the chicken.
01:56:08 One more. You know, Tatjana, you were a little critical of James Cameron's Avatar,
01:56:14 but is it possible that he had a Nostradamus-like inspiration when he did Terminator,
01:56:23 in terms of the machines taking over? - Oh, Terminator is so relevant. I mean,
01:56:30 Terminator is amazing. Going back to it, I re-watched it. I mean, there are research papers
01:56:37 by people who work with existential risks, like global organizations, and they call it
01:56:44 "How to Avoid Skynet." So it's been very, very futuristic in its way. I know Avatar is wonderful
01:56:52 when you don't have ambitions for it. It's meant to be entertaining, first and foremost. To me,
01:56:56 it was like, what it could have been if they wrote it properly and not just went for the
01:57:01 impressions. How amazing it could have been. So, yeah.