Stephen Lynch Asks Califf What FDA Is Doing To Guard Against Dangers Of Widespread AI Adoption

During a House Oversight Committee hearing on Thursday, Rep. Stephen Lynch (D-MA) questioned FDA Commissioner Robert Califf about electrical stimulation devices and the dangers of AI.



Transcript
00:00 He yields back. The chair recognizes Mr. Lynch from Massachusetts.
00:03 Thank you, Mr. Chairman. Dr. Califf, welcome. Thank you for your good work.
00:10 Dr. Califf, in March of 2024, the FDA issued a proposed rule regarding electrical stimulation
00:18 devices that are intended to reduce or stop self-injurious or aggressive behavior in some
00:23 patients. The proposed rule, if finalized, would remove ESDs,
00:29 these electrical stimulation devices, from the market, and the devices will no longer be
00:34 considered legally marketed. I've tried to read as much as I can on these. As an attorney,
00:41 I try to refrain from making medical decisions on my own, especially for my constituents.
00:48 I do know that the Geneva Convention regards these devices as torture,
00:53 but I also have a group of families in my district who
01:00 have children and loved ones who are undergoing these treatments, and they claim
01:08 that those treatments help. Now, as a result of this rule, these treatments will go away,
01:16 and my constituents have asked me to ask you and the FDA to meet with them to talk about the
01:23 consequences of the FDA's rule. And so, as a member of Congress, on behalf of my constituents,
01:30 I am asking you and all your staff to provide an opportunity for those families to meet with you
01:37 and to discuss their concerns.
01:39 Dr. Califf, thank you for bringing this up. I know it's part of your duty to do so. This
01:46 is a very tough issue, and I have worked in psychiatric wards during my career,
01:52 and I think most people can't appreciate the anguish of families who have loved ones who
01:57 are in a situation that might call for this or other serious mental health problems. But
02:01 anyone who has been through it, I think, has a special feeling about it.
02:07 As I think you know, there is a proposed rule that we have now put out there that is a docket,
02:11 and we do encourage everyone to submit their comments and views to that docket. I will
02:17 definitely take this back to our staff. I know that our staff has met with these families before,
02:23 but this has been going on for a while, so we will go back and reconsider.
02:26 It has, and it's heartbreaking.
02:30 Let me ask you, so shifting to something completely different,
02:35 last year the FDA made nearly 200 additions to its public list of AI and machine learning
02:41 enabled medical devices currently marketed in the United States, and there has been some
02:45 wonderful success. You know, Dana-Farber Cancer Center is near and dear to my heart.
02:55 Mass General Brigham, their cancer center as well. Wonderful, wonderful progress in diagnosing
03:02 breast cancer from mammograms. Clearly there are enormous potential benefits here.
03:09 But there is also some concern around privacy and also the lack of explainability
03:18 of some of these algorithms that are being used on a diagnosis or the predictive end.
03:26 What are we doing to mitigate the negative aspects of the use of AI?
03:36 I know it's coming at us hot and heavy in so many areas, but I would like to hear what the FDA is
03:43 doing about guarding against the dangers that might be present by this widespread adoption of AI.
03:54 Thanks for the question. I have to contain myself here because you may know that I worked at
03:58 Alphabet or Google during the five years between my two FDA stints and very heavily into this,
04:05 and I think it's going to be a huge benefit, but also with a huge risk on the other side if it's
04:11 not regulated. Also, we have many mutual friends. I'll be at Mass General next week as visiting
04:17 professor and learning from the people in the Harvard system who know a lot about this stuff.
04:22 This is one of the topics. The thing I would emphasize is that I don't think it's
04:28 explainability that's really the issue, and I think an easy way to think about this,
04:32 think about yourself before you had a map in your car that you could talk to when you used to drive
04:38 the car and you'd get in an argument about which way to go and then you'd have to pull out the map and
04:42 look at it. Well, now you just talk to your car, and what's going on with the car is AI continuously
04:50 in real time taking into account everything that's happening on the roads, the template of
04:54 what's there, and your personal preferences that it learns as you go along, and I think if AI works,
05:02 we'll take it for granted because there are many things we do in medicine. If you ask me,
05:06 how does aspirin work? Well, we know a lot about aspirin, but exactly how it works for each disease
05:12 we're not so sure, but we know it does work for particular things, so what we're really focused
05:18 on is creating a community in our health systems and the industry that, like I've already said,
05:24 we're referees. We depend, the first line is self-regulation by the industries,
05:28 and what's really important here, I think, where AI is going, generative AI, it learns as it goes.
05:36 The more information it has, either the better it gets or the worse it gets. You don't know which
05:40 one, and if you just put it in place and don't tend to it and monitor it, it can go wrong in
05:46 really bad ways. I saw that at Alphabet. It was something we were really worried about,
05:52 and so we've got to reformat our health system so that as time goes on, you're constantly looking
05:59 at what the algorithm is doing. Are its predictions accurate? That's really the key
06:05 thing that we have to do, and right now, we are not configured to do that, so we're working very
06:09 much with a community of health systems and the industry to come up with a scheme of what's
06:16 called assurance labs, and this would be you sell your AI thing to somebody. It goes out there.
06:22 There's going to be a monitoring that says it's either working or it's not in practice,
06:27 and it also looks for this bias that we're all concerned about, that if you put the wrong
06:32 information in, you end up with a prediction which is preferential to one type of person
06:37 compared to another. That's got to be looked at. So I'll stop there, and I could go on a while on
06:42 this.
06:42 Dr. Califf, thank you for your answer. Mr. Chairman, thank you for your indulgence.
06:47 I yield back.
