Brainstorm AI 2023: Vinod Khosla’s Views on Investing and More

Vinod Khosla, Founder and Partner, Khosla Ventures
Moderator: Jeremy Kahn, FORTUNE

Transcript
00:00 - Over the last two days, we have heard from organizations
00:03 leading the charge in AI innovation and implementation.
00:07 We've demoed new technologies, explored some of the most
00:10 pressing concerns around responsible AI,
00:13 and what it all means for the future of work.
00:16 Our next speaker has predicted that AI will replace
00:19 80% of jobs in the next 25 years.
00:23 He was the first startup investor in OpenAI,
00:26 and he's invested in organizations like DoorDash,
00:29 Headspace, and Block.
00:31 Currently, he's in the final investing stages
00:33 of a $3 billion funding round.
00:36 We are thrilled to be joined today by Vinod Khosla,
00:39 founder of Khosla Ventures, to discuss what's next
00:43 for the future of AI investing.
00:45 He's going to be joined by Fortune senior writer,
00:48 Jeremy Kahn.
00:48 Please welcome him to the stage.
00:50 (audience applauding)
00:58 - Great, Vinod, thank you so much for being here.
01:00 - Oh, happy to be here.
01:02 - Great, fantastic.
01:03 So you were the first for-profit investor in OpenAI,
01:08 invested even before they had the partnership
01:11 with Microsoft.
01:12 Obviously, they've just had this huge blow-up with the board.
01:15 What's your take on what happened, and what lessons
01:18 do you think there maybe are to draw from it?
01:20 - Well, not a lot of lessons to be drawn.
01:22 You should get the right board members.
01:24 (audience laughing)
01:25 There were a bunch of misinformed board members
01:30 applying the wrong religion,
01:32 instead of making rational decisions.
01:35 The company's much better off today
01:37 than it was a month ago, so that's great.
01:40 - Right, and one of the splits that became apparent,
01:44 you talk about sort of the religious view of the board,
01:48 is this split between the so-called doomers,
01:52 and the people who are saying that they're effective
01:55 accelerationists.
01:56 I mean, some people are thinking that split itself
01:59 is not helpful.
02:00 What's your view on that split that seems to be developing
02:03 in the community?
02:04 - Look, every new technology has its pros and cons.
02:09 There's huge upside with AI.
02:11 I've been talking about it for probably a dozen years now.
02:15 And then there's some disadvantages.
02:18 But the doomers are focusing on the wrong risks.
02:24 Nuclear had a risk, it could be used for good or bad.
02:27 Biotechnology had a risk, could be used for good or bad.
02:31 The same is true of any technology
02:33 that's going to be powerful.
02:35 By far, orders of magnitude higher risk to worry about
02:40 is China, not sentient AI killing us all.
02:44 I mean, it's so silly, and frankly,
02:46 it's people in the press who amplified that narrative,
02:51 because it's so sci-fi.
02:53 You know, it resonates with the movies we've been seeing.
02:57 So it gets this artificial amplification from the press,
03:02 but it's sort of not worthy of a conversation,
03:05 to be honest.
03:06 - All right, well, I'm sorry I asked.
03:08 (audience laughing)
03:10 I hope we at Fortune have not been part
03:12 of that amplification process.
03:13 But I want to pivot a little bit.
03:15 You mentioned China.
03:16 I know you're very keen to talk about this.
03:18 You really think that there's a split
03:20 between the US and China, and that there should be,
03:23 that we should not be collaborating on this technology.
03:26 What's your view of the kind of technological arms race
03:29 between the US and China?
03:30 And do you think the Biden administration
03:32 has played this right in their strategy?
03:35 - Let me be clear.
03:36 The risk of sentient AI killing us exists,
03:41 but it's about the same as the risk of an asteroid
03:43 hitting our planet and destroying us all.
03:46 You have to characterize risks properly.
03:50 The risk of China is humongous.
03:53 First, next year, I'd be surprised
03:55 if there aren't 100 million or more bots
03:59 with persuasive AI, one-on-one engaging
04:02 with every voter trying to influence our election
04:06 for their purposes.
04:07 That's very likely to happen.
04:09 I'd say probably a 95% probability
04:12 it'll be influential in the election.
04:16 And we should be worrying about it.
04:19 The longer term, over the next 25 years,
04:22 whoever wins the AI race will win the economic race.
04:27 So economic wins and power will derive
04:31 from technology wins, and I think it's up for grabs
04:35 who wins in the next 20 years.
04:37 And whoever wins the economic race
04:39 will have social influence, and I think
04:44 whether the world believes in Western values
04:46 in Southeast Asia, in Africa, in Latin America, everywhere,
04:50 or believes in Xi's philosophy, which
04:53 is some combination of Tiananmen Square-type tactics,
04:56 dictatorship, and, to give him credit, a philosophy
05:00 of "I don't care about the bottom 5% we abuse,
05:03 but society as a whole is better off."
05:08 That's sort of his political philosophy.
05:10 I don't like that one, but which one wins
05:13 is what's at stake over the next couple of decades.
05:16 That's the really important race.
05:18 - And do you think the Biden administration
05:19 has done the right thing by cutting off
05:21 exports of advanced chips and AI software,
05:24 in some cases, to China?
05:26 - Absolutely.
05:28 - Yeah, interesting.
05:29 When Mahal introduced you, I think she got this slightly wrong.
05:34 She said you had predicted that AI
05:35 would eliminate 80% of jobs, but I think
05:37 you actually said it's 80% of 80% of jobs.
05:40 - That's exactly right.
05:40 - Maybe you can explain a bit more
05:42 about what you meant there, and which jobs
05:43 do you think will be eliminated?
05:45 - So probably a dozen years ago,
05:48 I wrote a blog on the AI doctor:
05:52 80% AI, something like "20% human included."
05:55 There will be a human role, and we can't
06:01 fully define it in all these areas.
06:03 And AI will need humans to learn from.
06:06 So there's always a learning model.
06:08 But it will be able to do most jobs,
06:11 or most parts of most jobs.
06:13 So 80% of 80% of jobs is like 64%
06:17 of all the economically valuable labor that's done.
06:20 I think that's a very good model.
06:22 But let me give you an example of what happens.
06:25 If we replace 80% of what a doctor does,
06:30 he can have five times more interaction with the patient.
06:35 And the number of interactions in the US
06:37 a doctor has with their patients, say primary care,
06:40 is 1/5 or 1/6 of what happens in Australia, for example.
06:45 Should your doctor check in with you every month,
06:50 and have an AI check in with you every day
06:52 on how you're doing?
06:54 Especially given how large the burden of chronic disease is,
06:56 diabetes, heart disease, hypertension,
07:00 you should be checking in with a patient every day.
07:03 My son runs a company called Curai
07:07 that is building an AI primary care doctor.
07:09 The idea is for chronic disease,
07:12 you'll do 100 touch points in the first 90 days.
07:16 Like very simple idea.
07:18 Can't be afforded with humans doing it.
07:20 Still a role for the doctor.
07:22 He's really overseeing the AI, and maybe changing things.
07:27 But much more interaction is needed
07:30 to avoid things like diabetes,
07:32 and we've all read the book "Nudge."
07:34 That nudging is so important to healthcare.
07:37 In fact, it may be the most important drug
07:39 we can invent.
07:41 - And would the human do the nudging,
07:42 or you have the AI do the nudging?
07:44 - AI will do the nudging, but under human guidance.
07:46 - Got it.
07:47 And you were telling me just backstage,
07:49 you're sort of taking this thesis
07:50 to other professions as well.
07:52 You've invested in a company
07:53 that's trying to do this for IT professionals.
07:55 Is that right?
07:56 - Well, we just invested in a company called Cognitivs.
07:59 You know, business users shouldn't have to make a request
08:02 for a new report or a new query of their data warehouse.
08:07 They should just say in English what they need,
08:09 and the goal of this thing is to directly write the code
08:13 to access, say, your Snowflake system,
08:17 or your database system, and get your answers.
08:21 Why do you need to then go to IT,
08:23 and somebody then deal with that?
08:26 There's a lot of overhead.
08:28 So one of the predictions,
08:29 I made probably 11 predictions in the last couple of weeks.
08:35 One of those is there'd be a billion people
08:37 in the world programming,
08:39 of the seven billion people on the planet.
08:41 What do I mean by programming?
08:43 Writing in English what you need.
08:46 If you have an incomplete specification,
08:48 the AI will ask you clarifying questions,
08:51 like what do you want to do if the number of customers is zero?
08:56 Do you divide by zero?
08:57 You know, humans can't think of all that.
09:00 So will there be a billion programmers?
09:03 No question, that'll happen in the next 10 years.
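To make the "billion programmers" idea concrete, here is a minimal sketch of the interaction Khosla describes: a system drafts a database query from a plain-English request, but when the specification is incomplete it asks a clarifying question (his divide-by-zero case) instead of guessing. This is not Cognitivs's actual product; the function names, table, and logic are all illustrative, with simple keyword matching standing in for a real language model.

```python
def draft_query(request: str) -> dict:
    """Draft SQL from an English request, or return a clarifying question."""
    if "per customer" in request.lower():
        # Incomplete spec: what should happen when a region has zero customers?
        return {
            "status": "needs_clarification",
            "question": "For regions with zero customers, should I return "
                        "NULL, return 0, or skip the region entirely?",
        }
    return {"status": "ok",
            "sql": "SELECT region, SUM(revenue) FROM sales GROUP BY region;"}


def refine_query(request: str, answer: str) -> dict:
    """Emit final SQL once the ambiguity is resolved.

    For brevity this hard-codes the NULL choice: NULLIF turns a zero
    customer count into NULL, so the division never fails.
    """
    return {"status": "ok",
            "sql": "SELECT region, "
                   "SUM(revenue) / NULLIF(COUNT(customer_id), 0) "
                   "AS revenue_per_customer FROM sales GROUP BY region;"}


print(draft_query("Show me average revenue per customer by region"))
print(refine_query("Show me average revenue per customer by region",
                   "Return NULL"))
```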
09:05 - I think you said that you think going forward,
09:07 expertise will be less important,
09:09 and actually knowing to ask the right questions
09:11 will be more important.
09:12 Maybe you can talk a little bit about that.
09:13 - Well, you know, so AI is very good at expertise.
09:17 If I have breast cancer, for example,
09:22 hopefully I don't get that,
09:23 but I hope my oncologist has read
09:27 the last 5,000 papers on breast cancer,
09:30 which is probably just 2023's publications on breast cancer.
09:33 Like the amount of research is so large
09:37 that no human can keep up with it.
09:39 And most of the time, they can't,
09:43 and they don't remember it if they've read it.
09:46 And you want that science in your care.
09:50 And that's what I'm talking about.
09:52 Expertise will be much better and broader.
09:55 Maybe there's the human elements of care
09:57 that the human provides.
10:00 Maybe there's other parts.
10:01 - Great.
10:03 I wanna take some questions from the audience.
10:04 If you have a question, please raise your hand,
10:06 and we'll get a mic to you.
10:08 And do I see any questions?
10:11 Right here, a woman right here.
10:13 Please state your name.
10:17 - Hi, Suzanne from Invisible Technologies.
10:19 Thank you for that.
10:20 Do you think that people know what they want people to do
10:25 and what they want machines to do?
10:27 I think people make assumptions,
10:29 but I'm actually not seeing it.
10:31 - I think the thing to remember about the future
10:34 is that it's very hard to forecast.
10:40 So you all know the Yogi Berra line:
10:43 it's hard to make predictions, especially about the future.
10:46 What happens is, as technology develops,
10:53 how it develops changes how it's used.
10:58 So use cases will evolve
11:02 and people will adjust to the new use cases.
11:05 And I think this process is very messy.
11:10 So with any long-term forecast,
11:12 the only thing you can be sure of is that it's wrong.
11:14 So when people ask me for forecasts,
11:20 I say, especially in technology,
11:23 anybody who makes a forecast is not smart enough
11:26 to know they can't make a forecast.
11:29 And it's been universally true.
11:32 For those of you interested in fun books,
11:35 there's a book by a researcher called Phil Tetlock,
11:39 "Expert Political Judgment."
11:41 He basically said experts have the accuracy
11:45 of dart-throwing monkeys.
11:47 And this is based on following 20,000 experts
11:52 over 20 years on like 84,000 predictions.
11:57 And he said, what's the accuracy?
11:59 And it's dart-throwing monkeys.
12:01 That was the exact phrase after 20 years of research,
12:05 a very scientific conclusion.
12:06 But we ought to realize
12:09 you can't predict everything that happens.
12:11 You can't predict when AI gets to ChatGPT capability.
12:15 We invested in OpenAI five years ago.
12:17 I couldn't have told you when that capability would emerge.
12:21 I could tell you in the long run,
12:23 AI is gonna be really, really pivotal
12:26 and placing bets on it is gonna be very important.
12:29 - Great, I wanna get you another question just here.
12:31 - Thanks, Vinod.
12:33 I'm Anita Vadavatha with IonAventures.
12:36 Synthetic biology and the pathway to AGI,
12:40 how do you see their relevance to humans
12:43 who are living today, almost to the point that
12:45 when a child is born in this decade,
12:48 it's an entirely different wireframing?
12:49 How do you see that pathway over the next couple of decades?
12:53 - You know, every child should be fully sequenced.
12:56 Single point genetic defects,
12:59 you can design a protein sequence
13:03 to go address a specific defect, very specifically.
13:07 AI can design proteins that fold in a particular way
13:11 that address a particular single point mutation.
13:13 The same thing will happen for multi-gene,
13:18 sort of polygenic kinds of diseases.
13:22 So I don't think we are very far
13:25 from when you will have one drug for one patient,
13:29 for example, enabled by AI.
13:32 - Wow, another question from this table here.
13:35 - You know, it's sort of silly.
13:37 Let me take a very trivial example.
13:40 Seven billion people on the planet,
13:43 they get one dose of aspirin,
13:45 even though 30% of the humans on the planet
13:47 can't even metabolize aspirin.
13:50 Is that silly or what?
13:53 You know, critical drugs like blood thinners,
13:56 like warfarin or Plavix, depend on genetics.
14:00 You should get a different dose based on your genetics.
14:05 Not the same dose for everybody, but that's what we do.
14:07 We do crude medicine.
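The pharmacogenomics point can be made concrete with a toy sketch. Warfarin response genuinely varies with CYP2C9 and VKORC1 genotype, but everything below is illustrative: the dose factors are made-up placeholders, not clinical guidance, and the function is hypothetical rather than any real dosing system.

```python
# Toy illustration only: the gene names are real, the numbers are placeholders.
ILLUSTRATIVE_DOSE_FACTOR = {
    # (CYP2C9 genotype, VKORC1 genotype) -> fraction of the standard dose
    ("*1/*1", "G/G"): 1.0,  # typical metabolizer: standard dose
    ("*1/*3", "G/A"): 0.7,  # reduced metabolism: lower dose (placeholder)
    ("*3/*3", "A/A"): 0.4,  # poor metabolizer: much lower dose (placeholder)
}


def adjusted_dose(standard_dose_mg: float, cyp2c9: str, vkorc1: str) -> float:
    """Scale a one-size-fits-all dose by a genotype-specific factor."""
    factor = ILLUSTRATIVE_DOSE_FACTOR.get((cyp2c9, vkorc1), 1.0)
    return standard_dose_mg * factor


# Same drug, different genomes, different doses: 5.0 mg becomes 3.5 mg here.
print(adjusted_dose(5.0, "*1/*3", "G/A"))
```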
14:08 - Let's get to this question before we--
14:11 - Vinod, John Jessup, I'm the CEO of 1440
14:14 based in Park City, Utah.
14:16 - My favorite place.
14:17 - I know.
14:18 - The snow has been great so far this year.
14:20 - We met at Sundance last year.
14:22 So my question for you: you said the English language.
14:26 But if you think about the world population,
14:29 a lot of people don't speak English.
14:30 So where do you think language and translation come in,
14:33 'cause you know, somebody could ask in Spanish or French?
14:36 - You have a view that there will be sovereign LLMs,
14:39 is that right?
14:39 - So there will be sovereign LLMs.
14:42 We just announced last week, I think,
14:45 an LLM just for India,
14:48 because all the different languages there are so different.
14:53 They don't even use the same script.
14:58 And so a fundamental thing you hear in AI is tokenization.
15:03 Tokenization in Telugu should be different.
15:08 Tokenization in Kanji should be different.
15:11 So you will see this localization of these models.
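The tokenization point is easy to see for yourself. Here is a quick sketch using tiktoken's cl100k_base vocabulary (chosen purely for illustration; no sovereign model is implied): a byte-pair vocabulary trained mostly on English splits Telugu or Kanji text into far more tokens per character than English, which is why a model built for those scripts would tokenize differently.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "Hello, how are you?",
    "Telugu": "నమస్కారం, మీరు ఎలా ఉన్నారు?",
    "Kanji": "日本語の文章を処理する",
}

# Non-Latin scripts fall back to multi-byte pieces, so the ratio of
# tokens to characters is far worse than for English.
for language, text in samples.items():
    tokens = enc.encode(text)
    print(f"{language}: {len(text)} chars -> {len(tokens)} tokens")
```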
15:16 But beyond that, it is such an important technology.
15:21 I think from national security, national defense,
15:25 national control point of view,
15:28 countries, important countries will develop their own AI.
15:32 - And will this be done by government
15:33 or will it be done by private industry?
15:35 - My bet is it'll be governments encouraging private AI.
15:38 This is not something governments can do.
15:40 It's moving too fast to plan.
15:42 You sort of have to be on your reactive feet
15:46 most of the time.
15:47 - Right.
15:48 - It is like skiing.
15:49 You just have to keep your balance
15:51 and you can't sort of like...
15:52 - Fantastic.
15:54 We're out of time,
15:55 but thank you very much for being with us, Vinod.
15:57 I really appreciate it and thank you all for listening.
15:58 - Thank you, everybody.
15:59 (audience applauding)
