Dive into the transformative power of AI with Jason Feifer, editor in chief of Entrepreneur magazine and author of 'Build for Tomorrow'. Unpack the future of business and develop pioneering tactics for adaptability. In this spirited conversation, Ben and Jason share an ounce of history and a pound of preparation.
Transcript
00:00 So, what's going to happen? What's going to happen is that AI is going to break a thing
00:04 that was already broken. You'd ask me about media, it's already broken. Students writing
00:10 term papers using ChatGPT, term papers were never a good way to establish whether or not
00:16 a student had absorbed information.
00:18 Welcome to Beyond Unstoppable, the podcast that explores the intersection of biology,
00:24 psychology, and technology. Here is your host, Ben Angel.
00:29 In today's episode, we dive into the heart of the AI revolution with Jason Feifer,
00:34 editor of Entrepreneur Magazine and author of Build for Tomorrow. Join us as we draw
00:39 valuable insights from the past and present to prepare you for the future of business.
00:44 Together, we explore the transformative job landscape and the impact of AI on our lives
00:49 and industries.
00:51 As an entrepreneur myself, I understand the importance of future-proofing your business
00:55 and your career. AI is a critical component of that conversation. Sit back, relax, and
01:01 enjoy the show. And if you like what you hear, please give us a rating and review. Your support
01:06 means the world to us and helps us reach more listeners who are ready to become unstoppable.
01:13 This episode is brought to you by Ben Angel's new book, The Wolf Is at the Door: How to
01:17 Survive and Thrive in an AI-Driven World, presented by Entrepreneur. Get an exclusive sneak peek
01:23 and pre-order at thewolfbookhub.com.
01:25 Jason, first of all, congratulations on the success of your new book, Build for Tomorrow.
01:31 I have to say, it's an incredibly relevant topic of discussion, especially around adaptability
01:38 in the world of artificial intelligence that we now find ourselves in. How are you keeping
01:44 up with all of the recent developments that are occurring?
01:47 I really appreciate that. And congratulations on your book. And I am just having a lot of
01:55 conversations, which I'll be honest, I think is more valuable than trying to keep up with
02:00 the blow by blow of every shift in AI. Because the thing is that we're in this early day
02:06 experience where a lot of what's being developed right now is experimental, will never actually
02:13 become broadly useful. People are trying to figure out what to do with this thing that
02:20 is so incredible, but frankly, doesn't have obvious use cases. I think that that might
02:27 surprise people to hear. But right now, I think that people are applying AI to everything.
02:32 And what we'll discover is that a lot of that stuff is actually just kind of useless. And
02:37 that's okay, because it takes time to figure out what something is for. And that's the
02:42 thing that we're doing right now.
02:43 Yeah, I have to say, I feel like we're the guinea pigs of this experiment.
02:47 But we're always the guinea pigs. This is how people treat AI. I have a belief
02:53 that pretty much everything that you've ever experienced in your life or that you are experiencing
02:59 in the world is just a repeat of things that happened before. And you can tell yourself
03:04 all day long, no, this time is different, this time is new. But I really just don't
03:08 think that it is. And I think the same is true with AI. People talk a lot about, "Oh
03:12 my God, the experiment that is being conducted on live human beings." So that's also the
03:17 experiment that happened when Samuel Morse introduced the first commercially viable telegraph.
03:22 And suddenly we were able to move information faster than the speed of a horse. That was
03:27 a radical experiment, because the world had never moved information that fast. And you
03:32 had no idea what was going to happen. But the only way to discover what was going to
03:35 happen was to put it out in the world and see how people are using it. And that's what
03:37 we're still doing today.
03:38 Do you think there's a slight difference though, because a lot of people are referencing the
03:42 Industrial Revolution. But in that case, that was confined largely to the UK for a 70-year
03:49 period because they prevented the export of equipment and skills to other countries so
03:55 they could get a stranglehold on it. Do you think this is moving at a faster rate than some
04:01 of those historical events that have occurred?
04:04 Well, sure. It's moving faster than the Industrial Revolution, but everything is relative. It
04:11 doesn't make any sense to compare what is happening now to something that happened in
04:16 a completely different time with completely different expectations and resources. We are
04:20 in an interconnected world in a way that we weren't before. So quite obviously, things
04:25 that are introduced now are going to move faster than they did 400 years ago. But the
04:32 question I think that has to really be explored is, if you're going to be doing these kind
04:38 of historical comparisons, is what is the change relatively? So the speed at which something
04:47 is changing now will feel exactly like the speed at which something changed back then
04:53 to the people who were alive at that time.
04:55 For people who had never heard recorded music, because for all of human history, the only
05:01 way to listen to music was to have a human being perform with an instrument in front
05:05 of you. To them, the first time that they heard a phonograph, that was as radical a
05:12 change as seeing ChatGPT for the first time, if not more radical. And that's what I think
05:18 we need to really be focused on.
05:20 Yeah. And where do you think the publishing industry is potentially going? I mean, we've
09:24 seen it. CNET did their test of AI and then they obviously quickly got a lot of criticism
05:30 for it.
05:31 Yeah, rightly so.
05:32 But part of the story that a lot of people didn't kind of follow up on, which occurred
05:36 a few weeks later, was that they ended up laying off a number of their writers and then
05:41 elevated someone to a position overseeing AI development. So it's not necessarily that maybe in their
05:47 eyes they saw it as a complete and utter failure, but where do you see the publishing business
05:54 going?
05:55 Okay. So that's a great question and I'm going to make it go really big, but let me start
06:01 small. My first job in national media was at Men's Health. I was a junior editor at
06:09 Men's Health. This was like 2008. I just moved to New York. And if you know anything about
06:14 2008, I showed up right around a recession. And shortly thereafter, I saw a lot of my
06:22 colleagues get laid off, not just at Men's Health, but also at Best Life. People may
06:27 not even remember, but Best Life was a magazine focused on older, more mature men.
06:35 It was a spin-off of Men's Health. It was a print magazine
06:39 and it was online. And they closed it. They folded it and they laid everybody off. And
06:44 you might say, "How awful. How awful, right? How awful to have this thriving publication
06:51 and now it is just gone." And for people whose lives were disrupted because people
06:56 worked there and they had to find other jobs, that was a disruption without question.
07:00 But I've come to think a lot about that particular moment because here's the thing. Why did Best
07:06 Life exist? I'll tell you why. Best Life existed because Men's Health, which was a brand that
07:11 had been around for longer, was struggling to land luxury advertisers because luxury
07:17 advertisers didn't want to advertise in Men's Health because they felt it was too down market.
07:21 And so Men's Health thought, "Well, why don't we create a more upscale version of Men's
07:25 Health? We'll call it Best Life and we'll be able to get the luxury advertisers." And
07:28 that worked for a while until dot, dot, dot, it didn't because the economy changed. And
07:34 as a result, Best Life wasn't getting those luxury advertisers and therefore Best Life
07:38 ceased to exist. Is that a bad thing? I would argue no. Things are created because of current
07:47 opportunities in marketplaces and current needs. At the time that Best Life was created,
07:52 there was a marketplace opportunity to create it. Then that marketplace opportunity disappeared.
07:57 Therefore, we don't have Best Life. Just because we have something doesn't mean that we should
08:01 have it forever. Just because something was created doesn't mean it exists forever. So
08:07 why does CNET exist? CNET exists because there was a time in which that information was hard
08:14 to find in other places. And so somebody created CNET and saw a marketplace opportunity. And
08:19 then they staffed it with people that frankly aren't all writing amazing breaking stories.
08:25 And that's no shade on CNET. That's literally every single publication because most publications
08:30 are just driving towards traffic, which means that most publications just have to rewrite
08:34 what is getting traffic on other websites so that they can get some of that traffic too. That's
08:39 not a good system. That's a broken system that's just trying to take advantage of whatever
08:43 small shattered portion of the economics of media still work.
08:49 So CNET is struggling right now and they laid off people. And that sucks for the people
08:53 who were laid off. That sucks. But just because a publication is shifting or possibly even
08:59 going to go away doesn't mean that we have lost something because we are going to gain
09:05 something else. Those people are going to go on to do great work elsewhere. New market
09:10 opportunities are going to be created. It doesn't make sense to look at something in
09:13 a vacuum and say just because something is shrinking, we are losing. That's usually not
09:17 the case.
09:18 So I've had a lot of people talking about, you know, we need to pivot, we need to adapt.
09:23 And a lot of people, it's like the scene out of Friends where they're trying to carry the
09:27 couch up the stairs and Ross is yelling pivot to everyone. But they seem to be very vague
09:33 in terms of what is the potential pivot that we as a society and an economy are going
09:39 to need to make. I mean, there's the generalization that there will be more jobs in tech. Speaking
09:45 to some people in tech, some of them are already aware of layoffs due to AI and questioning
09:51 what is their career looking like even as a coder within the next few years. Do you
09:56 have any kind of indication or forethought on where you think these new jobs may be created?
10:03 Well, I don't because I can't tell the future and none of us can. But I can tell you that
10:10 over and over again, I mean, you mentioned the Industrial Revolution. So here's an interesting
10:13 fact from the Industrial Revolution. So we all know the Luddites, right? Usually that's
10:18 just a term people use for people who don't like technology. The Luddites were real
10:22 people. It was a movement of people who broke into factories and smashed
10:27 automation technology. And a lot of people at the time who were employed as what you
10:34 might call pre-industrial knitters, their jobs were to sit in factories
10:40 and knit utility socks, let's just say, all day long. And then machines came along and
10:48 they could do that knitting much faster and more efficiently than those people. And as
10:54 a result, a lot of those people lost their jobs. Do you know what happened next?
10:59 What?
11:00 I talked to a knitting historian. There is a historian for everything. And what happened
11:06 was that, I should be totally clear, I am not discounting the terrible individual disruption
11:13 that happens when somebody loses their job. That is something that we as a society should
11:17 be thinking a lot about. But what happened? What happened was that, according to this
11:21 knitting historian, there was an incredible explosion of innovation in knitting. Why?
11:29 Because now you had all these people who were not employed doing repetitive tasks all day.
11:35 And so they started to rely upon their human ingenuity and they started to innovate and
11:39 they started to create the regional knitting styles that define global textiles. And that
11:47 comes out of a moment in which the Industrial Revolution prompted greater innovation by
11:52 humans.
11:53 I'll give you another example. We're very concerned, generally speaking, when people
12:00 are talking about new technologies. We're very concerned about the fragility of the
12:04 human experience. We seem to think that we are always at the verge of breaking, that
12:10 the way in which we function as a society is always at the verge of breaking, the way
12:14 that we function as human beings are always at the verge of breaking. This is why, for
12:17 example, we're so focused on how smartphones are tearing at the fabric of society and people
12:23 aren't able to communicate with each other anymore. It's all nonsense because we've
12:25 been talking about this forever.
12:27 And so there are endless examples of this. People thought that the spinning wheel of
12:31 a bicycle would make us go insane. They thought that novels would make women infertile, that
12:35 radio would create addictions. I mean, it's absolutely ridiculous. So you have John Philip
12:41 Sousa at the dawn of recorded music. John Philip Sousa is one of the world's most famous
12:47 composers at the time. You know his music today. That's John Philip Sousa. John Philip
12:54 Sousa was very opposed to recorded music technology. And that was for a whole host of reasons,
13:01 but obviously the biggest one was that he was concerned for his own economic well-being.
13:05 He was in the business of live music and now he saw these machines and he thought that
13:09 machines would fully replace people. And he had all these wacky, wacky arguments for how,
13:16 for example, nobody will ever meet each other anymore. Why? Because people meet at parties,
13:21 but they dance when the band is playing and then the band, because they're human beings,
13:25 have to take a break. And it's at the break when people stop dancing and actually start
13:28 talking and that's when friendships are made and that's when love connections are made.
13:33 And now because the music will come from a machine that doesn't need to take a break,
13:36 nobody will ever stop dancing and therefore nobody will ever meet. This is the kind of
13:38 stuff that people were arguing back then, which I would argue is not that different
13:41 from the kinds of stuff that you're hearing argued with AI today. So anyway, I know I'm
13:46 being long-winded, but I have a point. And the point is this, John Philip Sousa, one
13:50 of the things that he worried about was that when recorded music enters the home, when
13:53 people have phonographs, when they have machines that can play music so that they don't need
13:57 a human being to perform with an instrument, he thought that people would stop learning
14:02 how to play instruments. He thought that there would be no reason why people would pick up
14:08 an instrument and learn it when they had a machine that could perform the music for them.
14:14 Ben, do you know what happened when recorded music entered the home vis-a-vis people deciding
14:21 to learn how to play instruments? Did it go up or down?
14:24 It would have gone up.
14:25 It went wildly up, wildly up. And in fact, the music industry became much richer and
14:33 set the stage for what we have now with all sorts of jobs that John Philip Sousa could
14:37 not have possibly envisioned, from DJs to audio technicians to whatever. You asked me
14:44 the question that prompted this long rant: "What are the jobs of the future going
14:47 to be?" I don't know. John Philip Sousa didn't know. But I will tell you that what always
14:52 happens is that when new technology is created, it creates new opportunity. It reduces need
14:59 in some places and it creates new needs in other places. And then those needs are filled
15:04 by new jobs and new human ingenuity. That is what we will see.
15:08 It's interesting trying to piece this puzzle together because obviously there's a million
15:12 different predictions happening right now. I mean, the second ChatGPT was released, I've
15:17 been doing automation for the past 15 years, giving away my age. But typically when I write
15:24 a book, I will hire my own editor and researcher, as I have done for the past few years. This
15:29 book, I'll be honest, they're not needed. Even in the instance of a veterinarian, we
15:37 had spent over a thousand dollars on vet bills trying to work out what the issue was, uploaded
15:41 the blood work, found out what the answer was in less than five minutes. Attorney, immigration
15:47 fees. The attorney, I suspect, will be phased out before the end of the year.
15:52 Oh, that's crazy. The attorney is going to get phased out by the end of the year?
15:57 I'm not speaking generally. I'm talking about my own case: looking over contracts,
16:05 using AI to assess contracts and pull up relevant things that need to be discussed before anything
16:10 is signed. But what I'm finding in researching this book, I'm trying to use it for every
16:16 single use case possible. And this year, I mean, it's phenomenal. We've probably saved
16:22 over a hundred thousand dollars because of artificial intelligence. Now, obviously that
16:26 comes at a cost of someone else. But the question is, the use cases that we're starting to see,
16:33 I mean, there was one CEO out of Hong Kong last year, they trained an AI for a gaming
16:40 company and the AI, though obviously humans had to execute the decision-making, outperformed
16:46 the Hong Kong stock exchange. Do you think people are adequately prepared to adapt? Yes,
16:55 other jobs are going to be created, but are new jobs going to outpace the loss of existing
17:02 ones within the next few years?
17:04 I mean, what you're asking or what you're setting up is what economists call the lump
17:08 of labor fallacy. The lump of labor fallacy is, well, it's a fallacy. And the lump of
17:15 labor in this is the idea that there is a fixed amount of work to be done and a fixed
17:19 number of people to do that work. And if you disrupt, if you create an imbalance on either
17:27 side of that, you create a kind of permanent disruption. So this is the reason why, for
17:31 example, people are opposed to immigration, because they believe that people
17:37 from outside the country will come in and they will therefore take the jobs of people
17:43 who already live in the country. This is a belief that there is a fixed amount of work
17:47 to be done. And if you add more workers, then there are therefore going to be people who
17:51 are out of work.
17:52 But of course, that's not true. That's not what happens. And the reason for that is very
17:56 obvious and simple. And it's because the immigrants aren't just workers, they're consumers too.
18:00 And therefore, when they come and work, they also consume and therefore they create jobs.
18:05 There is no fixed lump of labor. And the same is true with technology. When technology automates
18:09 some version of what people used to do, it does not create a net loss of jobs. It creates
18:14 a shift in jobs. That's what always happens. If that isn't what happened, then we would
18:20 all be out of work starting in the Industrial Revolution. You and I wouldn't have jobs.
18:24 Why on earth would we? So I don't see a reason why that would change now. Yes, this technology
18:30 is interesting and different. Yes, it will create shifts that are going to be, by their
18:35 very nature, different than ones before. But are we prepared for it? I mean, we're prepared
18:41 for it because we have human brains. Human brains are adaptable. They're literally built
18:45 to adapt. I mean, you get these stupid studies where people are given smartphones
18:52 for the first time and their brains are scanned and you see that there are changes in their
18:55 neural pathways, and then everyone goes, "Oh my God, that's awful." Except that you do
18:59 the same exact studies when people learn how to drive a car and the same thing happens.
19:03 Every time you learn something new, your brain changes. That's what it is built for. So are
19:08 we prepared? I mean, we don't have a game plan, but we've never had a game plan. This
19:12 means that it will be messy and there will be winners and losers. We will, as a society,
19:18 have to do our best to identify where the weakest parts of our economy and our social
19:24 structures are and figure out how to bolster them. But do I have full confidence that we
19:29 will create better things as a result and create a net growth in comfort and lifestyle
19:37 and ability and access for everybody? Yeah, I do believe that.
19:41 Before we continue, Beyond Unstoppable is brought to you by Ben Angel's new book, The
19:47 Wolf Is at the Door: How to Survive and Thrive in an AI-Driven World. Get your exclusive
19:52 sneak peek and pre-order at thewolfbookhub.com. Now, back to the show.
19:58 Do you think the pandemic was a good testing ground for our adaptability as humans? Because
20:05 a lot of us, we did incredibly well. Others didn't do so well. And we're only talking
20:09 about a couple of years ago, when everything was upended. Do you think
20:15 that was a good test for people for internal reflection to go, how did I adapt during that
20:20 scenario? I do, actually. I've often thought of the
20:25 pandemic as a real success for human social structures. All right. I mean, you can think
20:33 of the pandemic as having done all sorts of terrible damage to human social structures.
20:39 And that's true. There were riots in the streets and there were massive political fights. And
20:47 we as a social structure were pushed very hard, but it all held. All of it. All of it
20:55 held. And not only did it hold, but it came roaring back. The economy came roaring back.
21:03 What you found was that people had a deep, deep desire to connect, to build with each
21:12 other, to thrive, to support each other. And that's the reason why we have survived every
21:24 other thing that has ever been thrown at us. Because as humans, what we do is we build.
21:31 And this is what we did in the Industrial Revolution. Do you know that the bubonic plague
21:37 of the 1300s created the employment contract? You know that? It's fascinating. I was talking
21:44 to Andrew Rabin, a medieval scholar at the University of Louisville. I wanted to know
21:49 what good came out of the bubonic plague because back in March of 2020, we didn't know what
21:54 was going to happen with the pandemic. And I didn't know if anything good was going to
21:58 come from this. And I thought, well, if something good came out of the worst version I can think
22:01 of, then maybe something good will come from the pandemic. I called Andrew. I said, "What
22:05 good came from the bubonic plague of the 1300s?" He said, "All sorts of fantastic things did."
22:10 But one of the most interesting ones was that the bubonic plague killed upwards of 60% of
22:16 Europe. The medieval European economy was a lord-and-serf system. It was
22:21 slavery. And so the lords want the serfs to get back to work, start making money off the
22:27 land. But you know what has changed? What has changed is that there aren't enough serfs
22:30 for all the lords anymore. They're all dead, which means that the lords have themselves
22:39 a labor shortage, as we would call it now, where multiple lords are now having
22:39 to go to the serfs and say, "Come work for me. No, no, no. Come work for me." The serfs
22:43 realize that something has changed, which is that they can now ask for compensation
22:48 for their labor because they are in demand. Or they can say, "You know what? Screw this.
22:53 And I'm going to move to the city and I'm going to join the first merchant class. I'm
22:57 going to become an entrepreneur." That's what happened.
23:01 Was it clean and simple? No. It was incredibly messy. It was awful. But the end result is
23:08 the reason that you and I are talking right now, which is to say that in one way or another,
23:14 you and I are having this conversation because our income is tied to it. The work that you
23:18 do, the work that I do, has brought us together in this moment because we do work and we are
23:24 compensated for it. And that, at least in Western society, comes from the bubonic plague.
23:30 Good things happen out of very bad things.
23:33 Well, the Industrial Revolution also brought in workers' rights.
23:38 Yes.
23:39 So, in relationship to that, do you think that there should be some level of AI regulation?
23:45 I mean, sure, in theory, but tell me what it is. Also, tell me why I should trust a
23:55 bunch of 300-year-old men who can barely explain the basics of the internet to regulate AI.
24:03 Tell me why I should trust those people, the people who occupy the halls of Congress. Why
24:07 are they smarter than Sam Altman? Why are they smarter? I don't understand. Is it because
24:13 we have an inherent belief that they are looking out for us? Do you really think that your
24:20 congressman is looking out for you? I don't. Who's going to build the regulations?
24:25 But do you feel the same way with the technology industry? Because they're effectively scraping
24:31 content and we've got more lawsuits coming out this week that they're scraping copyrighted
24:36 content.
24:37 Yeah, well, these things are going to have to be worked out and they will. I mean, everything
24:42 is stabilizing and destabilizing and I don't think that anybody should have free rein
24:47 over things. But I think that if you have an American political body that says, "You
24:54 know what? We should hit pause on this right now. All you are going to do is let China
25:00 define what AI does." Tell me how that's good. How is that good?
25:05 The interesting thing is that that pause came from experts in AI, not from the government.
25:12 Well, that pause doesn't exist. So that pause is a thing that's been thrown around by a
25:16 lot of different people.
25:17 What's fascinating about this subject to me is the people, the creators of the AI are
25:21 now coming out and warning of the economic implications as well as other issues. Sam
25:27 Altman is one. Geoffrey Hinton is obviously a notable one. I mean, he's one of the godfathers
25:33 of the neural network, which AI is based on. And he recently resigned from Google to speak
25:40 out about the issues. In all of the research that I'm doing, I keep coming to the same
25:46 conclusion which is there isn't necessarily an advantageous position in terms of being
25:53 an AI doomer or being an AI optimist. Do you think in terms of preparing people for change,
26:00 do you think we should be able to get out of this binary thinking of just seeing it
26:05 one way and being able to see the shades of grey in this?
26:10 So I don't approach this with binary thinking. I mean, I know that I have played the role
26:15 in this conversation of being the AI optimist, but really my perspective is this. We spend,
26:22 as a society, and as individuals too, we spend far too much time and energy debating whether
26:29 something should happen when it has already happened. And that is not productive. And
26:34 instead what we need to do is we need to channel that time and energy into figuring out how
26:38 to make the most of it. AI is here. It's not not here. And you cannot find an elected official.
26:48 I don't think you can find an elected official who can explain the technology, but you certainly
26:52 can't find one who's going to come up with some incredibly brilliant solution to halt
26:56 AI around the world and preserve exactly whatever it is that everyone wants to preserve. That
27:01 doesn't exist. So instead, why don't we start thinking about how do we make the most of
27:07 this? Is there regulation? Is there smart regulation that could exist? Sure. Here's
27:12 one that I think makes a lot of sense. If something is produced by artificial intelligence,
27:17 it should acknowledge that. It sounds smart to me. But are we going to stop this technology?
27:23 No. China's not going to stop it. I had a really great conversation with David Autor,
27:28 who is an economist at MIT, who made this wonderful point, which is we get the future
27:34 that we work towards. And AI is what he called plastic technology, not because it's
27:41 made of plastic, but because it's very flexible. It's malleable. It's moldable.
27:44 It is whatever we make of it. So in China, they're going to use AI to create what is
27:51 no doubt going to be the world's most sophisticated surveillance system. That's what they're going
27:57 to do with it. And that's not what a free market in America is going to do with it.
28:03 I would hope that we are committed to thinking really smartly, not about how to pretend that
28:11 this stuff doesn't exist, but rather how to figure out how we can use this technology
28:18 to create the future that we actually want. And that means engaging with it. That means
28:23 being open-minded about it. That means no more talk about how fragile we are as humans.
28:30 Instead, let's run real experiments. Let's be genuinely interested. I mean, I love what
28:36 you're doing as you're diving into all this, but of course, let's keep in mind that what
28:40 you're doing is trapped inside of a window of time. It's very, very early days, 100%.
28:46 And I think that running to the halls of Congress and saying, "This must be regulated," as if
28:54 regulation is by itself, just the very concept of it, some kind of solution, is foolish and
29:00 sets us back. Which isn't to say that all regulation is going to be bad, but I don't
29:05 think that that can be the solution. That can't be the way that we just approach this,
29:08 which is like, "Oh, we just need to regulate it," as if a bunch of lawmakers just getting together
29:12 to regulate it and try to stop it, try to hold it back, means we're somehow doing something
29:17 positive. I don't think that we are.
29:20 It's interesting because obviously the AI experts that are calling for regulation, they're
29:24 not calling for it necessarily to be stopped. And there's certainly no pause.
29:30 Well, but let's pick this apart. I mean, we have to really be smart and pick apart what you're
29:33 lumping together as AI experts. Because this is something that happens with every
29:39 technology, which is like somebody was involved in the creation of something. And then this
29:44 one person says, "You know what? Actually, I have a change of heart here." And then that
29:48 person gets held up. Really, their voice gets elevated above an entire industry of people
29:55 who are genuinely working hard on creating good things. Because it's very helpful to
30:01 have a media narrative and Congress is always looking for a kind of contrarian spokesperson
30:07 who can say, "This thing is terrible." It was the psychiatrist who became the voice
30:12 of "comic books are destroying children." Remember that? They brought this guy in front of a
30:17 congressional hearing and he talked about how children are being driven to depression
30:22 and to murderous states because of comic books. And what it led to, of course, was the comic
30:28 industry then saying, "Oh my God, Congress is going to come regulate us. So we got to
30:32 regulate ourselves." They came up with the Comics Code Authority, which basically squashed
30:36 creative activity in the comics industry for decades. It was a net human loss.
30:41 And I think that what you're seeing right now is the combination of smart, intelligent
30:45 people who are genuinely trying to figure out what is best and hysterics who benefit
30:56 from hysteria. What's his name? The guy who was sort of the featured player
31:02 in The Social Dilemma. Whatever his name is, I'll find it. But anyway, that's
31:06 the guy from The Social Dilemma. I'm looking it up. That's the guy. Tristan Harris,
31:11 Tristan Harris from the Center for Humane Technology. That's the guy who makes a lot
31:15 of money off of people who are afraid of technology. Don't forget that every movement that touches
31:21 politics has a kind of, are you familiar with the economic concept of Baptists and bootleggers?
31:26 No. All right. So Baptists and bootleggers, it's a metaphor from prohibition. The idea
31:32 is that the people who were kind of publicly pushing for prohibition and were aligned with
31:38 politicians who were sympathetic to them, they were the Baptists, which is to say that
31:42 they were people who had a genuine belief that alcohol was a scourge on society, that
31:49 we would be better as a people if there was no alcohol. And they had a sort of noble mission,
31:56 even if they were completely misguided in it. And it was easy for politicians to therefore
31:59 align themselves with this very noble mission. Who could be opposed to these deeply religious
32:04 people who just want what's best? But the thing is that behind that movement were the
32:11 bootleggers. The bootleggers were the people who were going to profit very handily off
32:17 of the distribution of illegal alcohol. Because that's of course exactly what happened. As
32:23 soon as alcohol was banned, a black market formed and the bootleggers made great money.
32:28 They were thrilled with prohibition. Bootleggers loved prohibition. We cannot forget that the
32:34 idea of the Baptists and the bootleggers is that all legislation creates strange bedfellows.
32:39 And you have people who have genuine interest in societal good. And then you also have people
32:46 who are just out for self-interest. And both of those parties are interested in a piece
32:51 of legislation. And both of those parties will be pushing. And it does no good to either
32:59 lump them together or to act like one of those parties doesn't exist. And I think that is
33:06 a lot of what you're seeing now in arguments against AI. This is not all people who are
33:12 just like, "Well, we're the experts in AI and we think this is bad." This is also a
33:15 lot of people who stand to profit quite handily if either AI is regulated such that only a
33:20 small number of large companies are able to really engage with it, whereas smaller players
33:26 are regulated out of existence, or people who just make a lot of money off of scaring
33:30 you. And you got to be really, really careful when you just start talking about like lumping
33:34 in all the experts. All the experts are a small number of people in the broader AI community
33:39 who all have a lot of different agendas, some good, some bad.
33:42 A hundred percent. It's interesting, looking at all of the reporting coming out, that
33:47 they're talking about superintelligence or AGI, for example, but it's not necessarily
33:53 getting at the heart of the societal changes that are going to occur before that. So in
34:00 essence, it's almost a little bit of a distraction, but of course it's drumming up new business.
34:05 Yeah. As I said a little bit ago, one of the great challenges with the
34:10 adoption of new technology is that we as humans fundamentally do not believe in
34:14 our own adaptability. This is the reason why the spinning wheels of the bicycle would make
34:18 us go insane and novels would make women infertile. We just kind of don't believe in our own adaptability.
34:23 But really, the problem with that is that we cannot imagine a world in
34:31 which some things are not fixed. If the number of jobs that were available in America were
34:38 fixed and the variable was the number of people who come into the country, then you would
34:45 have an imbalance because more people would come into the country, but the number of jobs
34:48 are fixed. But that's not true. The number of jobs is variable, not fixed. And
34:54 the number of people who come into the country is not fixed. And that's the reason why the
34:59 economy can grow and change. The same is true for the way in which we as a society adapt
35:08 with technology. It is because every part of our lives and every part of our society
35:16 is unfixed. None of it's fixed. None of it. I was just on CBS News talking about AI's
35:23 impact on travel. The host asked me, "Isn't this going to put a lot of people out of business?"
35:30 And I said, "Well, look, people have been saying that travel agents are going to be
35:34 out of existence since the internet began, but here they still are." And they're quite
35:38 useful for a lot of people. And the nature of their work has changed, but the industry
35:43 still exists. And you could imagine a world in which travel becomes more efficient to
35:49 book, therefore cheaper, therefore more accessible, therefore more people travel, therefore there
35:58 have to be more jobs to serve those people who travel. That is the same logic that got
36:03 us to the leisure economy that we have now. Because as soon as farming was automated in
36:08 a country that used to be fully agrarian, it's not as if everyone was suddenly out of
36:14 work. What instead happened was that work became more efficient, which means that people
36:20 had more time, which means that they wanted to spend their money in different ways, which
36:24 means that a leisure economy developed to serve those people, which then created new
36:29 jobs. And you could not have imagined in early 1800s America, when the first tractors or
36:37 whatever are coming online, you couldn't have imagined that this tractor is going to
36:41 lead to professional baseball. Couldn't have imagined that. But that's what happened. And
36:48 that's because things aren't fixed and because we are adaptable.
36:52 It's fascinating that you bring up the tractor. I've heard Gary Vee mention it. I mean, I
36:56 grew up on a cattle farm, saw the tractors. But there is a key distinction: with
37:02 tractors and farming equipment, there were greater barriers to entry than there
37:08 are today. With so many of those barriers gone, we could see a huge acceleration
37:15 of change. So on that, I know I've got to be aware of your time. What would you suggest
37:21 to people in terms of upskilling now and changing before the change occurs, which is already
37:28 happening, let's be real. But what suggestion would you make to people in relation to
37:34 upskilling? Where would you focus the effort?
37:37 So I think that everybody should spend a little time thinking about what parts of them are
37:47 replaceable and what parts are not. And how can you think about the parts of you that
37:56 are irreplaceable, which is going to be your human ability, which is going to be the transferable
38:06 value that you have that goes beyond the tasks that you perform every day. I am not a writer.
38:13 I am a storyteller. And it is possible that a bunch of the writing that I do will at some
38:21 point be automated. Honestly, I don't think so, but it's possible, at least some part
38:27 of it. But if I think of myself purely as a writer, then I'm in trouble because that
38:32 kind of stuff is going to get replaced. But if I think of myself as a storyteller and
38:36 a storyteller with a distinct perspective, I find ways to be useful. How? Well, because
38:45 what I've done here with you today is storytelling. When I get hired by a company to stand on
38:51 a stage and talk to them about how they can be more adaptable, that's storytelling. Writing
38:57 a book is storytelling, doing podcasts is storytelling, consulting is storytelling.
39:01 People buy my time and they tell me their problems. You know what I do? I tell them
39:05 a story. I tell them a story of somebody who I talked to who went through a similar problem,
39:10 and then I tell them the story of how they can solve it. I have thought through, not
39:15 perfectly, but I have thought through what is the thing beneath the tasks that I perform
39:20 and the role that I occupy. Those things are changeable. But there's something inside of
39:24 me that does not change in times of change, and I think we all need to be very mindful
39:27 of that. And I realize that I'm speaking from a place of privilege here because of my role
39:34 and the industry that I'm in, but I think that if you go up and down the economy, you
39:42 can think and you can identify ways in which everybody has something that they can lean
39:51 more into.
39:53 Whoever is listening to this, you have incredible human ingenuity. You are a creature built
40:02 to create and to change. And you have the same skill set as every other human on the
40:09 planet. The same one. I'll tell you, we're all good at one thing. Pattern recognition.
40:13 It's the one thing we're good at. And the difference among us is that we're all good
40:20 at recognizing different patterns. I can't do what a metalworker does. They recognize
40:25 patterns in a different way; they have an understanding of space and spatial construction. My pattern
40:31 is how people think. I hear people talk. I understand the way that they're thinking about
40:38 things. I can repeat that to other people. We all have an ability to be a pattern recognizer,
40:46 and we will always live in a world that values humans. Period.
40:51 Jason, I want to thank you so much for being here today.
40:55 I love this subject, as you can tell.
40:57 Yeah, it's interesting, like the unfolding of the process. I mean, for me, I've been
41:03 speaking to a lot of 20-year-olds who are studying for work right now that they fully
41:08 suspect won't exist in the next few years. And there are already layoffs happening within
41:12 their industry. But I liken a lot of it to the process of grief in terms of denial, anger, acceptance,
41:22 moving through those stages. And it's interesting watching the younger generation go through
41:26 these changes.
41:27 Yeah. It's funny, we spent 50 minutes talking about AI, and I didn't give you my core bit
41:34 on AI. So I'll just tell it to you. But my core bit is this. I went to speak to a law
41:41 firm at an attorney retreat a couple months ago in San Francisco. And after my talk, we
41:47 got to Q&A, and all the attorneys are asking me questions about ChatGPT. And afterwards,
41:51 I got off stage, I was talking to their CEO, and I said, "So interesting that all of your
41:57 attorneys are so focused on ChatGPT." And he said, "They're not going to say this out
42:01 loud, but I'll tell you what they're concerned about. What they're concerned about is that
42:06 AI is going to make their work more efficient, and they get paid on billable hours. And so
42:14 if their work becomes efficient, they can't bill as many hours, and that's what they're
42:18 worried about."
42:19 Yeah.
42:20 And I said, "Well, that's great. That's a positive thing." And the reason that's a positive
42:25 thing is because you know who hates billable hours? Everybody. Everybody hates billable
42:30 hours. Everybody. Clients hate billable hours, lawyers hate billable hours. Why do we still
42:35 have billable hours? Because nobody was incentivized to make a change. Because that was the system,
42:40 and there was no reason for any law firm to change it. But guess what? Now there is. So
42:45 what's going to happen? What's going to happen is that AI is going to break a thing that
42:49 was already broken. That's the thing. It was already broken. You'd ask me about media,
42:57 it's already broken. It's not breaking something that's working. It's breaking something that's
43:02 broken. Students writing term papers using ChatGPT. Term papers were never a good way
43:09 to establish whether or not a student had absorbed information. It was always a bad
43:13 way to do that. But again, there was no incentive to find another system. Now there is, because
43:19 we are going to break things that are broken, which allows us to build from what matters.
43:27 And that I think is going to be the true gift of AI.
43:30 That's perfect. Especially billable hours. I know my attorney is concerned, and he should
43:36 be. And I'm glad.
43:37 Should be. But you know what? You'll find other ways. There will be new ways to build
43:43 relationships. And also, guess what? What if you and every other client for your attorney
43:52 only have to pay half of what you used to pay? Right? You only have to pay half now.
43:59 Work speed has doubled. So you paid your attorney $10,000 a month, now you're paying them $5,000
44:04 a month. All right. Well, that's good for you. You saved money. Is that bad for the
44:09 attorney? It's only bad for the attorney if the attorney thinks of things as fixed. But
44:14 I don't think of things as fixed. So what could the attorney do? Well, here's a start.
44:20 The attorney used to only serve a high-end clientele who could afford very expensive
44:26 legal services. There is a whole world of people out there who cannot afford legal services.
44:33 Maybe now they can. Maybe you can shift your law firm to serve a far wider range of people
44:40 with a far wider range of legal services. That allows you to make the same amount, if
44:46 not more money than you did before, and it creates greater access to legal services for
44:52 people who didn't have it. That is a net victory for everybody. And this is what we're afraid
44:59 of. It doesn't make sense.
45:12 (wind whooshing)