Venture into the epicenter of innovation with Jonathan Campos as we dissect the titans of tech, steer through the self-driving revolution, and grapple with AI's impact on jobs.

Jonathan Campos delves into current crucial AI trends and strategies for companies to leverage these for growth, extending beyond the realm of large language models.

Unveiling the pressing technological challenges for self-driving cars, Campos gives us a glimpse into the future of transportation.

Addressing job-loss concerns as AI evolves, Campos emphasizes how companies can navigate ethical challenges to ensure societal benefits.

Transcript
00:00 They were all deep fakes and they convinced this employee to hand over $25 million.
00:04 Welcome to Beyond Unstoppable, the podcast that explores the intersection of biology,
00:11 psychology, and technology. Here is your host, Ben Angel.
00:15 In a world brimming with technological marvels and ethical quandaries,
00:19 where does humanity draw the line in the AI sand? Today, we grapple with pressing issues:
00:25 the race with tech giants, the dawn of self-driving cars, and the looming shadow of job
00:29 loss due to AI innovations that are rapidly accelerating. To guide us is Jonathan Campos,
00:36 Chief Technology Officer and VP of Engineering at Alto, whose genius is steering a new course
00:42 in the rideshare industry. Together, we'll dissect Alto's David versus Goliath battle,
00:47 forecast self-driving car challenges, and navigate the moral maze of AI.
00:52 Don't miss Jonathan's take on paving the ethical road ahead in tech.
00:56 And if you like what you hear, please give us a rating and review. Your support means the world
01:01 to us and helps us reach more listeners who are ready to become unstoppable.
01:06 This episode is brought to you by Ben Angel's new book, The Wolf Is At The Door,
01:10 How to Survive and Thrive in an AI-Driven World, presented by Entrepreneur.
01:15 Get an exclusive sneak peek and order at thewolfofai.co.
01:19 Jonathan, tell us about Alto. And I know I'm pronouncing that with an Australian accent,
01:24 so please feel free to correct me. Tell us about Alto. So it's a ridesharing company,
01:30 and you're taking on the giants like Uber and Lyft. What's that like?
01:34 From day one, seemingly impossible, but very possible. So Alto is very similar to like an
01:42 Uber and Lyft. That's what people quickly associate us with. But where we're different is
01:48 we own all the cars and all the drivers are employees. This means that we actually spend a
01:53 lot of our time focusing on the experience that our customers have. Every time you get in a car,
01:58 it's going to smell the same. The drivers are going to greet you the same. You're going to be
02:03 taken care of. And that experience is very curated to make sure that it's the same every single time.
02:10 The safety and consistency that you receive in an Alto is unmatched. We were able to do this,
02:17 again, because drivers are employees. We can actually train them to do this. When Alto first
02:21 started, I had an extremely small development team. I mean, it literally started with me as
02:26 the first developer and building from there. We didn't have customers that said, "Oh, you're tiny.
02:33 We're going to forgive the fact that you can only do 1% of what Uber and Lyft can do." From day one,
02:40 they said, "Uber and Lyft can do this. We expect this." And it has to be exactly the same.
02:47 What's great is I have a ton of respect for the giants that have come before us. And what's great
02:54 is we're able to look at what they've done. And some experiments that over the years they may have
02:59 done and just kind of turned off because it wasn't working, we can kind of watch back their history
03:06 and take those learnings. And we continue to build stuff that makes Alto unique,
03:11 but we don't have to have quite as many stumbling blocks as they had.
03:16 What is the driving force behind what you do? So for entrepreneurs listening to this,
03:21 you've taken on some major companies. What does that mindset even begin to look like in those
03:29 early days? From the early days up until now, we've continued to say that the big thing for
03:35 us is we want to make a profitable ride-hailing company that can also take care of drivers.
03:40 That is a big sentence right there with a lot of moving pieces. We're looking at Uber and Lyft and
03:46 other companies that have been around for years and are struggling to make profits,
03:50 are struggling to be profitable. We know that there are tools and technology, but also culture
03:56 and operations, built around a company to make this happen. Being able to say that you're
04:01 a profitable company or profitable in certain areas, that's a big thing to be able to claim,
04:07 especially in the ride-hailing space in a very capital-intensive business such as Alto.
04:12 What's the biggest challenge that you've faced? The biggest challenge by far is doing that supply
04:18 and demand balancing and then also doing that while continuing to grow the business. We integrate
04:29 really advanced algorithms, we integrate machine learning, we integrate artificial intelligence into
04:35 all the pieces that we do so that we, as a small operating company, can run thousands of cars
04:44 out there in the world at any given time while staying in that middle ground of supply and demand
04:53 and all of that to be profitable. It's a lot of moving pieces.
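
To picture what that supply-demand balancing means in code, here is a minimal sketch of a zone-rebalancing heuristic. Every name and threshold is an illustrative assumption; the machine learning Jonathan describes is far more sophisticated than this.

```python
# Illustrative sketch only -- a toy supply/demand balancer, not Alto's system.
# All field names and numbers are assumptions for illustration.

def rebalance(zones):
    """Suggest moving idle drivers from oversupplied zones to undersupplied ones."""
    moves = []
    # Oversupplied: more idle drivers than forecast demand; undersupplied: the reverse.
    surplus = [z for z in zones if z["idle_drivers"] > z["forecast_demand"]]
    deficit = [z for z in zones if z["idle_drivers"] < z["forecast_demand"]]
    for need in sorted(deficit, key=lambda z: z["idle_drivers"] - z["forecast_demand"]):
        for have in surplus:
            spare = have["idle_drivers"] - have["forecast_demand"]
            gap = need["forecast_demand"] - need["idle_drivers"]
            if spare > 0 and gap > 0:
                n = min(spare, gap)
                moves.append((have["name"], need["name"], n))
                have["idle_drivers"] -= n
                need["idle_drivers"] += n
    return moves

zones = [
    {"name": "Downtown", "idle_drivers": 8, "forecast_demand": 3},
    {"name": "Airport", "idle_drivers": 1, "forecast_demand": 6},
]
print(rebalance(zones))  # [('Downtown', 'Airport', 5)]
```
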
04:57 What's your take on driverless cars? What do you see for the future? Researching my book in the
05:02 last year, obviously there are cases of cars not doing what they're meant to. How do you communicate
05:11 if this is something that you eventually move into in the future, first of all,
05:16 but how do you communicate the risks and the safety so that people are going to be able to go,
05:23 "All right, I trust getting into a driverless car"? Long joked internally that we are a human-driven
05:32 autonomous fleet. The reason we say that is because where our competitors use gamification
05:39 and try to incentivize drivers to go here and go there, and they spend a lot of time and effort to
05:43 do that, we actually plan everything that a driver does from beginning to end. As soon as you get on
05:50 your shift, we have a plan for everything, where you're going to be going, where you're going to
05:53 stage, when you're going to take a break, every bit of the piece. We really plan everything.
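
As a rough illustration of what "planning everything" might look like as data, here is a minimal sketch of a fully pre-planned shift. The field names are hypothetical, not Alto's schema; the point, as Jonathan explains next, is that a complete plan is agnostic about who, or what, drives it.

```python
# Illustrative sketch -- one way to represent a fully pre-planned driver shift.
# Field names are hypothetical, not Alto's schema.
from dataclasses import dataclass, field

@dataclass
class ShiftEvent:
    time: str        # "HH:MM", local time
    kind: str        # "pickup", "dropoff", "stage", or "break"
    location: str

@dataclass
class ShiftPlan:
    driver_id: str
    events: list[ShiftEvent] = field(default_factory=list)

plan = ShiftPlan(
    driver_id="driver-42",
    events=[
        ShiftEvent("09:00", "stage", "Downtown lot"),
        ShiftEvent("09:20", "pickup", "Main St & 4th"),
        ShiftEvent("09:45", "dropoff", "Love Field"),
        ShiftEvent("12:00", "break", "Depot"),
    ],
)
# Because every event is planned up front, swapping in an autonomous
# vehicle is just handing the same plan to a different kind of driver.
```
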
05:59 We did this from the beginning because we wanted to make it where when an autonomous car comes in,
06:04 we'd be ready. We could just put an autonomous car into our fleet and have the entire plan and
06:11 the entire execution of their entire shift ready to go. My take on autonomous driving
06:18 is I'm very positive about it. It's still many years in the future.
06:25 Sadly, this is one of those technologies that just has a very, very long tail on it.
06:32 We can get through most of the use cases fairly easily, but it's all of those very unique cases
06:39 where, yes, it may be overall safer, but in those unique cases where a human would have made a
06:47 logical decision that a programmed car just can't. It's making the best decisions it can,
06:56 but without that little bit of human spark and logic, it's hard for it to make that final
07:01 decision. As an absolutely horrible example, just think about what happened with the Cruise driving
07:07 incident, dragging a human. It wasn't initially Cruise's fault. It was a cyclist that got into a
07:14 really bad spot, but this is where any human would have stopped and jumped out of the car,
07:19 double-checked what happened. In this case, the car was executing its programming and making
07:27 decisions as best as it could, but it's just one of those cases where you just look and go, "Okay,
07:31 as a human, I can't accept this piece. I can't accept this decision." I thought I read something,
07:36 was it a month or two ago, about them wanting to integrate ChatGPT or LLMs into driverless cars.
07:45 What does that look like? Is the goal that it will improve decision-making?
07:52 That is the hope. Maybe an easier analogy to look at is, think of the new generative AI
07:59 images that are created. When you first started using generative AI, you had to type
08:06 amazingly complex prompts to get out roughly what you wanted. As time has gone on, those
08:12 prompts have become easier and easier, not because all that data isn't needed, but because
08:18 they're using multiple layers of generative AI to take your request for a fuzzy bunny and add on
08:28 all sorts of other layers to make it what you want, making guesses. They're adding extra text in
08:35 between the text that you've actually entered and giving that to the generative AI to finally make
08:40 the rendered image. In the same way for autonomous cars and autonomous vehicles,
08:46 they're hoping to use text and other tools to get some of that benefit of the generative AI piece
08:58 to make this decision: "Okay, now I feed the car what I have: an accident just happened.
09:05 What is the next step for the car to do?" The generative AI and the artificial intelligence
09:10 piece will hopefully say, "Stop," not "drive 20 feet and safely exit," even though that's what
09:18 the programming would do. This is the same benefit that has made machine learning so much
09:23 better for years. As coders, we can only keep adding so many if-then statements to make a
09:28 decision. With AI, the intention is that it makes that leap and says, "Okay, even though I didn't have an
09:34 if-then statement to tell me to do this, I know I should do this because it's the next best logical step."
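
To make the layering idea concrete, here is a minimal sketch of multi-layer prompt expansion: extra text wrapped around the user's short request before the final model renders it. The enrich_prompt function is a hypothetical stand-in for what, in a real pipeline, would be additional model calls.

```python
# Illustrative sketch of layered prompt expansion: each layer adds detail
# around the user's short request before the final image model sees it.
# enrich_prompt() stands in for a real LLM call; it is a hypothetical name.

def enrich_prompt(prompt: str, layer: str) -> str:
    """One 'layer': wrap the user's text with extra guidance."""
    additions = {
        "style": "photorealistic, soft studio lighting, ",
        "detail": "highly detailed fur, shallow depth of field, ",
        "safety": "family-friendly, no text or logos, ",
    }
    return additions[layer] + prompt

user_prompt = "a fuzzy bunny"
for layer in ("style", "detail", "safety"):
    user_prompt = enrich_prompt(user_prompt, layer)

print(user_prompt)
# -> "family-friendly, no text or logos, highly detailed fur, shallow depth
#     of field, photorealistic, soft studio lighting, a fuzzy bunny"
# This enriched string, not the original three words, is what the image
# model would finally render.
```
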
09:40 step." With the AI models, you speak about how they're adding these extra layers to make, "Hey,
09:46 let's make a bunny." I'm not sure if you saw the example of make the bunny and make the bunny
09:52 angrier, even angrier, and it kept on popping up. But what we've observed since I think December
09:59 and OpenAI have acknowledged this, that there's been a bit of a degradation. It's becoming lazier.
10:06 Is that potentially caused by a couple of factors saying they're adding multiple layers?
10:12 But the other one is that is it now ingesting AI content? Some of the basic research that's
10:21 been done says it effectively makes the AI go mad. I think you've called it out that by adding
10:28 layer on layer on layer, you now have more and more removal of, again, that human spark and that
10:35 human creativity. The good and bad part is that the AI might start to learn, "Oh, this is what
10:42 people find most interesting. If I use a photorealistic image, I get more thumbs up
10:47 than if I use something in an impressionist view of an angry pink bunny." And so it starts to
10:56 remove that creativity piece that a human might do because it's trying to do what's generatively
11:01 and generally more acceptable. And so again, it just kind of takes away from that uniqueness
11:07 that makes us us. You've probably seen some jokes recently that the internet kind of hit its peak last
11:14 year, before ChatGPT came out, when it was kind of the last time it was totally human-made or human-created.
11:22 And that was kind of peak and everything since then is slowly, slowly becoming more
11:28 augmented with AI in a negative way. I think I saw it was an interview with one of the OpenAI team,
11:35 I believe, the other day. Recently, they're saying more and more to be nice to AI, to add a smiley face, to
11:43 say, "Hey, before you do this task, take a little break and then complete it." What's your gut
11:49 instinct with that? Because that feels incredibly weird to me. And I noticed this mid last year:
11:56 when it wasn't producing the outcomes I wanted, I just started being super overly friendly. And all
12:01 of a sudden, the output was insane. Now, I'm not saying that's where we're headed, that it's AGI. But what's your
12:09 gut instinct about having to be nice to this thing? I mean, I will say, maybe we
12:18 unknowingly personify these tools. Yes, I sometimes tell my Google Assistant thank you for
12:26 something that it did, just because as a human, I try to be polite. And I like to believe I'm mostly
12:32 a polite person. But that said, I do think it's just wrong, the fact that I would ask for a pink
12:40 bunny. And it gives me a worse answer unless I sugarcoat my request. That's just, it's silly to
12:50 me. These are generative AI hacks that are out there. And at some point, they just need to go
12:58 away. Do you think that could go wrong in the future? In what way? In that we have to be nice
13:06 to something that has so much power, or the potential for power. In case, you know, my
13:13 robot overlords at some point look back at this, I'll say we should be nice at all times to them.
13:18 But on the other hand, I don't go and say sweet nothings to a hammer before I go hammer in a
13:26 nail. Like, I want to be able to say, okay, I'm picking this up, I'm gonna do this thing. Right
13:30 now, I open up a coding application, I don't, you know, be nice and tell it, hey, take a second,
13:37 have a cup of coffee, and then we're gonna start working today. I open up a program and I get to
13:42 work. For businesses and companies right now, what do you see in AI, in terms of
13:51 where they should be focusing to grow their companies and deal with these opportunities
13:57 that are hitting every day? I think right now where we've been limited is the creativity
14:06 of coming up with solutions. And I think there's a lot of possible places that AI can
14:13 be safely inserted. I say safely on purpose, because there's a lot of places that can be
14:18 inserted right now that may not be safe. But there's a lot of places that, you know,
14:22 would be safe, and places that could help improve human tasks by making it where we are just
14:32 approving what the AI has done, not taking it wholesale. I think that companies would
14:39 be well served to think creatively, and to not try to think, okay, how am I going to totally revamp
14:47 my entire business on AI? But instead, how am I going to take these 10 little pieces and remove
14:54 the human from them and make them easier? At Alto, we've spent time trying to improve our customer
15:02 service. And we've done that by adding in more and more AI. So that way we can still serve the
15:09 same number of customers, but with fewer humans and also less stress on those humans. What about
15:14 the security risks with artificial intelligence? We saw I think it was late last year, the Chevy,
15:22 someone convinced the Chevy chatbot to sell a car for a dollar. And obviously, it wasn't legally
15:29 binding or anything like that. But we're also seeing other hackers being able to access the
15:36 training material that that chatbot was based on through pretty simple prompts. I mean,
15:44 they're not necessarily coders doing this, they're just being creative. How are companies
15:49 going to really deal with that challenge, especially with the AI chatbots, which
15:54 I assume by next year, every single company is going to have an AI chatbot?
15:58 Well, and this is where I think having a human to go in and kind of validate that
16:04 it's not going crazy and not hallucinating in some negative way is just vital. I think in customer
16:11 service as a fairly easy conversation piece here, if you've got these chatbots immediately responding
16:18 to people through email, well, having that human involved to double-check the emails and just kind
16:24 of approve them will help speed up their work without having to write the total email from
16:31 scratch. However, by having that human involved, they can now do more work. We can still kind of
16:38 do that quality assurance double check to make sure that the chatbot didn't respond and say,
16:42 "Congratulations, you get a year of service free, and I'm going to enable this right now. Click."
16:48 Like, you've still got that one last quality assurance check. And this is where I think for
16:52 a long time, humans are going to be involved in that quality assurance check until at some point
16:58 we get far enough along and then the trust will be earned and we can stop doing that.
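
Here is a minimal sketch of that human-in-the-loop quality assurance check: the bot drafts, a person approves, and nothing goes out automatically. The function names and the risky-phrase list are illustrative assumptions, not any vendor's product.

```python
# Illustrative sketch of the human-in-the-loop gate Jonathan describes:
# the bot drafts, a person approves, and nothing is sent automatically.
# draft_reply() stands in for a chatbot call; all names are hypothetical.

RISKY_PHRASES = ("free", "refund", "credit", "cancel")

def draft_reply(ticket: str) -> str:
    return f"Thanks for reaching out about: {ticket}. Here is what we can do..."

def review_and_send(ticket: str, send, escalate):
    draft = draft_reply(ticket)
    # Anything that sounds like the bot is granting something gets flagged
    # for closer human review, so no "free year of service" slips out.
    flagged = any(p in draft.lower() for p in RISKY_PHRASES)
    approved = input(f"{'[FLAGGED] ' if flagged else ''}Send this?\n{draft}\n[y/n] ")
    if approved.strip().lower() == "y":
        send(draft)
    else:
        escalate(ticket, draft)

review_and_send(
    "my ride was late",
    send=lambda text: print("SENT:", text),
    escalate=lambda t, d: print("ESCALATED for a human rewrite"),
)
```
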
17:04 Do you think humans are going to let their guards down potentially too soon knowing human behavior?
17:10 We want that quick win. Hey, just do it for me. It's all good.
17:14 Oh, easily. Yeah, for sure. Like, this is where it's easy to take a pill and stop worrying about
17:21 something as opposed to like really doing the work. We're going to start seeing that
17:24 any entrepreneur will insert AI and say, "Okay, this works nine times out of 10. This works 99
17:33 times out of 100. This is going to work a huge amount of the time." And you're going to just
17:39 start taking your hands off pretty quickly. But that one time out of 10, that one time out of 100,
17:46 as your customer base starts growing, it goes from being this little really micro problem to,
17:52 "Okay, now 1,000 customers a day are seeing this. 1,000 customers an hour are seeing this."
17:58 And that could really take down your business, depending on how badly the AI is performing.
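
The arithmetic behind that point is worth spelling out; a quick sketch, with made-up volumes:

```python
# Back-of-the-envelope math for Jonathan's point: a "rare" failure rate
# stops being rare once volume grows. The numbers here are invented.

failure_rate = 1 / 100          # "works 99 times out of 100"
for rides_per_day in (100, 10_000, 1_000_000):
    bad = rides_per_day * failure_rate
    print(f"{rides_per_day:>9,} rides/day -> {bad:>8,.0f} bad experiences/day")

#       100 rides/day ->        1 bad experiences/day
#    10,000 rides/day ->      100 bad experiences/day
# 1,000,000 rides/day ->   10,000 bad experiences/day
```
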
18:05 So who should be held responsible if an AI on your behalf signs a contract or takes out a massive
18:14 loan or even convinces a family member to give you money? Is it big tech or is that the user?
18:23 Ultimately, I would say it's probably the user that's implementing that AI. In this case,
18:29 the entrepreneur that bases their whole business off of some chatbot that they haven't fully vetted.
18:37 I have a hard time saying it's big tech because, again, I go to Lowe's down the street, I buy a
18:42 hammer and I badly hammer together two boards. I don't go back to Lowe's and say, "That's on you
18:50 for the bad hammering job." I say, "Well, I was wielding the hammer. I did wrong."
18:55 Would it be on maybe a smaller company that's deploying those chatbots?
19:01 Exactly. That's exactly what I'm saying. Now, obviously, that changes if you use the word
19:09 big tech. That changes if big tech has packaged and boxed up and said, "This is our product and
19:15 we stand behind it." Now, they are taking the responsibility because they are handing over a
19:22 "completed product." It's like whoever packages it up and hands it over, that's the last person
19:28 that had their hands on it and the person that should be responsible.
19:31 It's the next phase that OpenAI has alluded to in the last couple of days: AI
19:38 agents. The agents will autonomously do the task from start to finish.
19:43 What's your take on AI agents? Have you been testing anything?
19:48 Before we continue Beyond Unstoppable is brought to you by Ben Angel's new book,
19:53 The Wolf is at the Door. Get your exclusive sneak peek and order your copy at thewolfofai.co.
19:59 Now back to the show.
20:01 We have done testing on it. It's kind of a skunkworks project right now within Alto.
20:07 Currently, we like that control and we like the ability to kind of double check. And so
20:11 we let the agent go so far. We let the agent make recommendations, but stop at actually taking
20:18 actions. Are we going to end up with a society that's just clicking "Approve" or "Decline"?
20:24 That's kind of one of my biggest fears. It's like, okay, we're just going to approve or decline all
20:30 these decisions without really thinking. I mean, yes and no. I mean, this is
20:36 kind of where I said earlier, like eventually that trust will be earned and we'll have
20:39 fewer and fewer places to approve or deny. We'll find areas where it's kind of safer, where
20:44 the AI, even in hallucinations, can't do any real damage. And again, the risk level is lower. And so
20:51 therefore the need for control is lower. This is where you or any entrepreneur needs to kind
20:58 of determine where is that risk and how detrimental can this be?
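
A minimal sketch of that "let the agent go so far" pattern: recommendations below an assumed risk threshold run automatically, everything else waits for a person. The names and the threshold are hypothetical, not Alto's implementation.

```python
# Illustrative sketch of "let the agent go so far": it may recommend,
# but any action above a risk threshold needs explicit human sign-off.
# The Action/handler names and threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk: float  # 0.0 (harmless) .. 1.0 (very risky)

AUTO_APPROVE_BELOW = 0.2  # assumed threshold; tune to your risk tolerance

def execute(action: Action) -> None:
    print("EXECUTED:", action.description)

def handle(recommendation: Action, human_approves) -> None:
    if recommendation.risk < AUTO_APPROVE_BELOW:
        execute(recommendation)   # low risk: a hallucination can't hurt much
    elif human_approves(recommendation):
        execute(recommendation)   # higher risk: a person pulled the trigger
    else:
        print("LOGGED ONLY:", recommendation.description)

handle(Action("re-sort tomorrow's staging queue", risk=0.05), human_approves=lambda a: False)
handle(Action("refund every ride from last night", risk=0.9), human_approves=lambda a: False)
# -> EXECUTED: re-sort tomorrow's staging queue
# -> LOGGED ONLY: refund every ride from last night
```
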
21:01 Do you think people, especially entrepreneurs, are adequately aware of the risks?
21:07 It's almost like they just see rainbows and unicorns and money right now and they're just
21:13 throwing the risks out the window. And the problem is as an entrepreneur,
21:17 you kind of have to see rainbows and throw the risk out. I mean, you're trained for that. Like
21:23 you can't, if you're starting a company and the first thing you think about is all the risks,
21:28 you're never going to start a company because the risks are always too great.
21:32 So you have to have that positive outlook to keep moving forward. That said, you need to have that
21:40 partner that you're working closely with. You need to have someone that kind of pumps the brakes
21:45 from time to time and says, the risk line is here. I can tell you at Alto, like Will Coleman and
21:50 myself, like when he was first interviewing me, he kind of had this grand vision of like,
21:54 we want to do all these things with Alto. And I said, okay, that's great. What are you going to
21:58 do when the technology fails? Because there's 50 things in here and all these moving parts.
22:03 If technology fails at any one of these parts, like what's the backup? And you could see he kind
22:07 of paused and was like, oh, I didn't think about technology failing. It's like, yeah, I mean,
22:11 it mostly works. It might work a million times perfectly, but that one time,
22:16 what are we going to do for our customers? Should companies be doing a SWOT analysis comparing
22:22 what AI can do to what their current services can provide? Last year alone, I saved over 100 grand
22:29 using AI instead of hiring other services as a test for the book, because I wanted to see how
22:36 far it could go. I mean, we even partially replaced our veterinarian, as well as partially replaced an
22:42 attorney, an immigration attorney. There are things that, at the time, I wouldn't have
22:47 thought in a million years I could partially replace, like my veterinarian. I haven't thought about
22:53 replacing my veterinarian. I have too many dogs and to make that happen, I'd be too scared. But
22:59 I mean, this goes, I think it's a great idea for companies to be doing the SWOT analysis. And like
23:04 I said, if the risk is low for AI, even if AI were to mess up, it's a real low impact. Like,
23:13 sure, you can take that risk, but you just got to make sure that where you're implementing AI
23:19 is safe. At least right now, we have good systems built up around creating the fare
23:29 for a customer within Alto, like how much is someone going to pay for a ride? We have a lot
23:34 of machine learning, it's got certain constraints and rules around it. So I know it's going to work
23:39 in roughly these ways. I wouldn't totally chuck that out and put AI in there that may randomly
23:45 set everyone's fares to zero for a whole day, because that's a very risky maneuver. And so
23:51 this is where, again, it comes down to where your risk tolerance is and where you can put it in.
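
Here is a minimal sketch of that kind of guardrail: whatever a learned fare model outputs gets clamped into hard business constraints, so a misbehaving model can never set fares to zero. predict_fare and the bounds are made-up stand-ins, not Alto's pricing.

```python
# Illustrative sketch of the "constraints and rules" around a learned fare
# model: whatever the model outputs, hard guardrails keep fares sane.
# predict_fare() stands in for the ML model; all numbers are invented.

MIN_FARE = 8.00          # hard floor, so nothing "sets fares to zero"
MAX_MULTIPLIER = 3.0     # cap relative to a simple baseline

def baseline_fare(miles: float, minutes: float) -> float:
    return 3.00 + 1.75 * miles + 0.40 * minutes

def predict_fare(miles: float, minutes: float) -> float:
    # Stand-in for a learned model; imagine it occasionally misbehaves.
    return 0.0

def quoted_fare(miles: float, minutes: float) -> float:
    base = baseline_fare(miles, minutes)
    raw = predict_fare(miles, minutes)
    # Clamp the model's output into a band the business can live with.
    return round(min(max(raw, MIN_FARE), base * MAX_MULTIPLIER), 2)

print(quoted_fare(6.0, 18.0))  # 8.0 -- the bad model output hit the floor
```
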
23:55 So do you think we're going to have to head towards a digital identity, much like everyone has a
24:02 passport? We saw, I think it was just last week, I think CNN reported on this, a Chinese company,
24:08 one of their employees was on a Zoom call with eight other staff members from I think another
24:15 team. They were all deep fakes, and they convinced this employee to hand over $25 million,
24:20 which to me, it's staggering. Like right now, I can hopefully tell it's Jonathan that I'm speaking
24:27 to. But if we look at the graphics and the quality of video in the past year, I mean,
24:34 we saw Will Smith eating spaghetti, and it was all mushed in his face. But now it's getting more and
24:40 more lifelike. Are we going to have to acquire a digital identity to keep ourselves safe,
24:47 especially financially? This is where you start hitting the edges of how much I can dream, because
24:54 I try to play within the rules and see how I can set things up. I can tell you over the years
24:59 of working at Alto, and seeing the level of ingenuity that fraudsters will take, it's
25:08 impressive. And working back what someone has done and making it where they can't do it anymore,
25:12 like that is both a fun and a hard job. What it takes to actually be secure in the future,
25:20 really, there's gonna be some great minds that are gonna have to create some interesting
25:24 solutions to that. And this is one of those places that I think, obviously, this is a technology
25:29 problem, having more tech will help solve it. But is a deep fake boardroom meeting very different
25:37 than someone sending you a text and saying, "Hey, I need you to wire me over some money"? Like,
25:42 at what point do you say, "Okay, $25 million? Let me... yeah, let me double check
25:48 one more thing"? Someone will send me a text message saying, "I need five gift cards right now or I'm going to be
25:54 taken," you know, some crazy front. At some point, you gotta just do that logic check
26:02 and be like, this doesn't make sense. It's interesting that the scammers essentially
26:07 took advantage of visual confirmation of, hey, these all look like my team members.
26:15 This must be legitimate. What practices or policies do you think companies are going to need to put in
26:22 place to prevent this happening? Because this isn't the first time it's happened. I watched this,
26:28 and I married it up with others. As fast as AI is going and as fast as generative
26:36 images are being created, there are also people on the other side creating ways to watermark and
26:41 find ways to use technology to unscramble and figure out, is this image real or fake?
26:49 And I think those two are going to constantly be in contention with one another. And hopefully,
26:57 we just kind of keep up with the check piece as fast as we're
27:03 moving forward with the generative piece. So that way, now there'll be some new Zoom or Google Meet
27:10 tool or whatever that says like, this is a real human, not generative AI that you're talking to.
27:17 And it kind of does that check for you. It's just sad that we have to get there. But that's
27:20 where we have to go. Do you think the watermarks are going to be entirely effective?
27:27 We've seen recently, it's been pretty easy to get around them.
27:32 I hope so, because, I mean, if not, there's that certain Orwellian moment where you can't trust
27:41 anything that you see unless you saw it yourself. And even then, you know, was it real?
27:47 At what point does the world totally break down because there is no idea of what's real?
27:52 That's a scary place to be.
27:55 Yeah.
27:57 In terms of navigating the coming years around artificial intelligence, and it's
28:03 constantly encroaching on literally everything that we do, what most excites you about it?
28:11 What most excites me is the iterative nature of artificial intelligence, and how we can
28:20 possibly use it within companies. I think we've long known that companies that iterate faster
28:27 are more successful. And on top of that, we marry that up with the idea that with AI,
28:34 you can easily and quickly iterate and come up with very unique solutions that you weren't able to
28:42 previously. If we can, again, iterate faster as a company, using AI to simulate the
28:48 world, simulate options, respond to those options, and just do that faster and faster,
28:54 we can have much more successful companies, at a much more alarming and hopefully positive rate.
29:01 What's your take on universal basic income? It's something that's come up quite a lot recently,
29:08 Elon Musk, Sam Altman, because, we're seeing, I know Google recently, I think they're redeploying
29:16 30,000 employees whose roles in their ads division are being replaced by an AI tool. Do you think this is
29:24 creeping up on employees faster than what they can comprehend? And do you suspect we may need
29:30 something like UBI in the future? I got to be careful because I grew up watching Star Trek with
29:36 my dad where the future includes no money, and we're all just working to make a better society
29:43 together. I've got to be careful not to let too much science fiction in. But I mean, I do think that
29:49 it is a concern. For years, as a software engineer, I could basically write my own ticket almost
29:55 anywhere. And I'm watching the writing on the wall. And nowadays I'm wondering, is that a good
30:01 role to recommend to people now getting into their career? As engineers, we solve the problems in
30:10 front of us. People that are building these tools are also engineers. So they're going to solve more
30:14 engineering problems, and therefore put engineers out of work faster, and that's going to accelerate so
30:19 much faster. So it is a concern of mine of at what point have we put so many people out of work? And
30:28 then what is the purpose of living? Like, what are we trying to do as a society?
30:34 Are the engineers almost speeding up their own displacement? Are they aware of that? Because that
30:41 to me, I tend to think one to three years ahead. All right, what is this
30:47 building up to? Are they having any pause to go, I'm effectively going to replace myself? Yeah.
30:53 I mean, I gotta imagine that there's very intelligent people
30:58 working on these problems. And they're saying the same thing, like they can't be
31:03 uniquely blind to it. They're probably still so interested in, "Look at the problems I'm solving,
31:11 and look what I'm able to do." But you very likely could quickly work your way out of a job. Let's
31:19 take one example that I don't think is so far-fetched. An engineer sits and writes an amazing
31:26 Android application, and then takes a zip file of everything that they built, uploads it and says,
31:35 "Now make me an iOS application that looks exactly the same and works exactly the same,
31:41 with no bugs." That typically would be a whole team of people. And now the program says, "Okay,
31:46 I can translate this into that. Done. Here's your other application." If you just think about that,
31:51 think about how many people were just displaced in that one example.
31:54 And it will no longer take a village to build a company. I think Sam Altman said the
32:02 other day that one person could build a billion-dollar company using AI.
32:06 Yeah. And that's very true. But you have to imagine that this is the march of humanity. I mean,
32:16 you don't see a lot of people saying, "Cool, I need to start a business. Now I need to go hire
32:21 a room full of typists." It's just, there is displacement, and new
32:31 creations will come in their stead. What's your take on companies training AI on their employees'
32:40 work? Because I would only imagine, in the case of Google and the 30,000 workers,
32:47 that probably without being aware of it, they actively trained the AI that's now replaced them.
32:55 Do you have any kind of inclination that within the next couple of years, we're going to see more
33:01 employee rights go, "All right, here's the line in terms of using that employee data to train an
33:10 AI that will then replace them"? I mean, I a hundred percent
33:16 think that that's going to be coming up more. And we saw that recently with, you know,
33:22 actors and voice acting where, you know, they're trying to say like, this is my voice. You can't
33:26 use this without my permission. And I a hundred percent agree and stand behind that. Cause I do
33:35 believe that, you know, it does make someone unique. However, as an engineer, if I'm
33:42 writing code for a company, I mean, the company owns that code. That's the whole point. Like
33:47 you're making that trade-off for a paycheck. As long as the company is using the code that they
33:52 own, that is kind of their property to do with as they please.
33:56 So people have to be more aware of that walking into it.
34:00 I think so. I mean, like I said, it's kind of their property. I will say,
34:06 it is scary. Like you're feeding the beast that will eventually replace you, but you can't
34:12 just stop. You can't just take your hands off the keyboard and say, I'm not working anymore.
34:18 But you also can't say to a company that owns this code, "You can't use it."
34:24 Like at what point... there's only so many rules before you can't make a
34:32 solution that works. There's only so many constraints you can put on the system.
34:36 Well, it's certainly not set up to favor employees by any means, is it?
34:41 Oh yeah.
34:42 One thing I've seen, there was a company overseas, I think it was called Alt Brain Inc.,
34:48 where supposedly they've started training AI chatbots on the employees. And then
34:56 they're actually paying the employees based on the work that the chatbot is doing. Could you see a
35:02 marketplace for that in the future?
35:06 That is interesting. But at what point has the work, though it may be based on someone's
35:14 development, morphed enough that it's not really their work? I mean, sure, I would love to
35:24 get paid forever off of code that I wrote 10 years ago because it was put into some bot and is still
35:32 producing code. But at some point that can't be successful. Like as a company, I can't just keep
35:41 paying people forever. Like that's kind of the antithesis of automating as much stuff as we are.
35:48 And its usefulness is probably going to degrade over time as well.
35:52 Yeah.
35:54 What's the number one question that I haven't asked you that you think I should ask you?
36:02 I think the biggest problem is always the ethical one. It's not whether we could do something,
36:10 but whether we should do something. And I think there's going to be a lot of companies
36:15 that are coming up these days where
36:22 there'll be that niche company that says, "I have inserted AI into this." And there's going to be a
36:28 lot of people that are going to grit their teeth and be like, that is one place AI does not belong.
36:32 Scheduling people for heart surgery. At what point does AI start saying,
36:38 I'm not going to schedule this person because this other one over here
36:41 is more valuable or has a higher likelihood of success? Those are ethical things that a computer
36:50 just shouldn't be making those decisions. I really appreciate you diving into those
36:56 aspects because I feel like researching this book, I've almost been shouting at a wall
37:02 with some of the ethical concerns, especially around employment.
37:09 And I've interviewed techno-optimists, which has been fascinating,
37:15 but I love to get the different points of view. Jonathan, thank you so much for your time. I could
37:21 keep talking to you all day about this stuff. I agree. This has been a lot of fun. It's been
37:28 a great afternoon. Learn more about Jonathan Campos and Alto at ridealto.com. And if you haven't
37:34 already, subscribe to Beyond Unstoppable and visit thewolfofai.co to order your copy of The Wolf
37:40 is at the Door. And stay tuned for next week's episode.
