As AI reshapes industries worldwide, Amazon’s Neil Lindsay and Harvard professor Isaac Kohane explore the transformative impact AI can have on health care. They address concerns around AI bias and hallucinations, how AI can reduce costs, and how it can improve both the patient and provider experience.

Category: 🤖 Tech
Transcript
[00:00] There are two megatrends that are going to define medicine in the next five years and beyond. One is that AI's capabilities keep improving, literally month to month. The large language models I used a year ago are nowhere near as good as the ones I'm using today. At the same time, primary care is falling apart. I cannot find primary care doctors to refer my friends or faculty to. And that creates an opportunity for a patient-centered approach, where a lot of the missing functions that primary care doctors used to provide can be provided and augmented, maybe with nurse practitioners, maybe with physician assistants, or maybe just with the social network of patients, in a consumer-focused way.
[00:56] AI has been around a long time. It has become the buzz, I think, because the end user is now seeing a lot more of it directly. But there is so much opportunity for AI, and machine learning in general, in healthcare, and I think about it in three particular areas. The first is the ways in which we can take costs out of the system. There are still a lot of phone calls, faxes, and humans involved in work that doesn't add much value, work that machines can and should do, because then we can invest those savings in the experience that actually helps patients get healthier: making more providers available and improving that experience.

[01:35] The second area is the provider experience. Providers often have to go through pages and pages, not just dozens but sometimes hundreds of pages of history, to try to find insights so they can focus with a patient on the topics that matter most. AI can help extract a lot of insights from that history. AI can also help a provider take notes without having to spend their time facing a computer, so they can face the patient and spend their energy with the patient.

[02:05] And then, of course, there is an opportunity for customers and patients to find out more about their choices, so they can be more informed about their own health and perhaps be navigated using technology without necessarily taking up a provider's time to do that navigation for them.
[02:29] When GPT-4 came out, that was basically a two-year sprint from GPT-3. But it keeps on accelerating. A year later, many of the large language models started having visual and other multimodal capabilities. And so I took a picture that had never been seen by any doctor; it was just out of the New England Journal of Medicine. It was a photograph of the back of an older man who had developed a lot of itchiness over the past day and felt really bad, and on his back there were what looked like scratch marks, lots of lines. I took everything from the puzzler, all the text and the picture, but I removed the most important clue, and I gave it to GPT-4 and asked, what is this? Even with the clue, most doctors didn't get it. And it said it could be something called bleomycin toxicity, a reaction to a chemotherapy treatment, or shiitake mushroom toxicity. I'd never heard of it. And what I had removed from the patient's history was that he had been eating mushrooms the day before. So we have this phenomenal, fast-moving capability that, with all the hallucinations and all the problems, is behaving better on average than most doctors in terms of depth of knowledge.
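To make that kind of multimodal prompt concrete, here is a minimal sketch of sending case text plus a photograph to a vision-capable model and asking for a differential. It assumes the OpenAI Python SDK and an API key in the environment; the model name, file path, and prompt text are illustrative, not the exact setup described above.

```python
# Minimal sketch of a multimodal clinical prompt: case text plus an image,
# sent to a vision-capable chat model. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY in the environment; the model
# name, file path, and wording are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the clinical photograph as a base64 data URL.
with open("patient_back.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

case_text = (
    "An older man developed severe itching over the past day and feels unwell. "
    "The photograph shows his back. What is the differential diagnosis?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": case_text},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```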
[03:58] How do we address the problem I just asserted? Well, it turns out that right out of the box, when you train an AI on all the information you have on the internet, it will say a lot of weird things, some of which are politically incorrect. So OpenAI and Google put a lot of effort into something called alignment. They give examples of questions and how they want them answered. And if you give it tens of thousands of such examples, it starts behaving in a certain way; it is now aligned to a certain behavior. So you can actually align AIs to maximize certain outcomes. If you're making the AI, you could align it to the patient, or you could align it to the hospital. No one's talking about this, but how to align is actually a well-known procedure: it's called fine-tuning and reinforcement learning from human feedback. These are all technical terms, which basically say, I'm taking this whole big machine and I'm telling it how I want it to behave.
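To make the "examples of questions and how you want them answered" step concrete, here is a minimal sketch of that supervised fine-tuning stage; the reinforcement-learning-from-human-feedback stage that usually follows is omitted. It assumes the Hugging Face transformers and datasets libraries, and the tiny model and two example pairs are placeholders for a real base model and tens of thousands of curated examples.

```python
# Minimal sketch of "alignment by example": supervised fine-tuning of a causal
# language model on prompt/response pairs that demonstrate the desired behavior.
# The RLHF stage is omitted. Model name and dataset are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for a much larger base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Examples of questions and how we want them answered" -- in practice tens of
# thousands of these, written to serve whichever party the builder aligns to.
examples = [
    {"prompt": "A patient asks whether to stop a prescribed medication.",
     "response": "Encourage them to discuss any change with their clinician first."},
    {"prompt": "Summarize this long chart for a primary care visit.",
     "response": "List active problems, current medications, and recent results."},
]

def to_text(ex):
    # Concatenate prompt and desired answer into one training sequence.
    return {"text": f"Question: {ex['prompt']}\nAnswer: {ex['response']}{tokenizer.eos_token}"}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=512)

dataset = (Dataset.from_list(examples)
           .map(to_text)
           .map(tokenize, remove_columns=["prompt", "response", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aligned-model", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to="none"),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```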
[05:01] What are the concerns about AI? There are concerns around bias and around hallucinations, but in my opinion, the biggest concern is: who is the AI serving? Is it maximizing the interests of the hospital, of the insurer, of public health, or of the patient? Answering that question is, I think, perhaps the most important concern. And right now, we're not talking a lot about it.
