Not using AI could be seen as negligent, while today, relying on it too heavily may be considered careless. So it's a balancing act. Welcome to
Off the Chart, a Business of Medicine podcast featuring lively and informative conversations with healthcare experts, opinion leaders and practicing physicians about the challenges facing doctors and medical practices. My name is Austin Luttrell. I'm the assistant editor at Medical Economics, and I'd like to thank you for joining us today. Artificial intelligence is now part of everyday care, from imaging support to ambient documentation. With it come practical questions for physicians: How does AI affect malpractice risk? Who's responsible when things go wrong? And what should practices be documenting today? In this episode, I'm joined by three experts to help answer those questions: Sarah Gerke of the University of Illinois College of Law, David Simon of Northeastern University School of Law, and Deepika Srivastava, chief operating officer at The Doctors Company. We'll cover where the standard of care may be heading, how liability is likely to be shared, and the concrete steps that you can take now. Sarah, David and Deepika, thank you so much for joining us today, and now let's get into the episode. To start out, let's take a look at how close we are to AI becoming part of the standard of care. Sarah Gerke is going to explain why adoption patterns and juror perceptions both matter.
Absolutely. So of course, AI could eventually redefine the standard of care. A study from Tobia et al., published in the Journal of Nuclear Medicine, showed that potential jurors may see following AI recommendations as reasonable even when they deviate from the standard of care. And so these results suggest something interesting. Of course, it's just a hypothetical study, but at least it shows that, with respect to potential jurors and lay understanding, the use of AI might actually be closer to the standard of care than we might think. And so following the advice of AI already seems to reduce the risk of liability for injury that results from deviations from the standard of care.
So that juror lens matters, but courts are still looking at what physicians are actually doing. David Simon here lays out how pervasiveness drives the legal definition.
It depends on how pervasive the technology is throughout the field. Is it adopted as the standard of care, with most physicians relying on it? Or is it just something that certain physicians rely on? That's going to be a large component. The way that malpractice generally works is that you look at the custom in the industry with respect to physicians, not with respect to negligence generally. You look at, you know, what is the practice of physicians in this subspecialty, and for hospitals, you look at what's the practice nationally. And so if there aren't a sufficient number of people or entities using AI such that it constitutes the standard of care, then not using it, of course, won't be a breach of the standard of care. But if it's pervasive enough, or if it's adopted enough in a certain subspecialty, then it can change the standard of care. The standard of care might be to use some kind of technology that deploys artificial intelligence. But the short answer to your question is yes, AI probably will. I'd say it's very likely to reshape the standard of care, but it's not going to be one and done. It's going to happen gradually, it's going to happen in certain subspecialties, and it's going to be done in fits and starts. And so you'll have cases emerging where it will be decided, you know, what's the standard of care in this practice area? How should we think about AI?
So ultimately, the takeaway is that adoption plus validation will decide when not using AI becomes a legal risk, but until then, judgment still rules. Next, let's take a look inside the exam room at a situation that I think a lot of doctors are understandably a bit worried about: when AI informs a decision that harms a patient, who's on the hook? Gerke and Simon both point to a still-forming map, and Srivastava pulls back the curtain on how insurers are thinking about it.
Yeah, so right now, liability likely mainly falls on physicians and also hospitals, and not necessarily on manufacturers. And what's interesting is that healthcare providers are actually pretty aware of that. So I'm leading the legal and ethical part of a Horizon Europe-funded project called CLASSICA, and my team actually led the first empirical legal investigation of surgeons' perspectives on liability for the use of AI technologies in the US and in Europe. We conducted a total of six online focus groups with 18 surgeons, 11 from the EU and seven from the US, across a range of specialties. And what's interesting is that several key themes emerged from the focus groups, and the details can be read in Annals of Surgery Open. But I want to highlight what's really interesting here. First of all, surgeons noted that AI is not yet part of the standard of care, but most really expect it will be in the future. And what surgeons also said, which is interesting, is that they were very skeptical about holding manufacturers liable unless there was a clear defect. But some surgeons really called for shared accountability here, if the AI output is followed properly. And so there's really a pressing question of whether the liability landscape needs to adapt, and how responsibility will be fairly allocated and shared among stakeholders, especially if you're thinking about AI tools in the future that are much more sophisticated.
That said, manufacturers aren't entirely out of the conversation, especially when we're talking about regulated devices. So I asked Dr Simon what lessons we can learn from past malpractice cases involving medical devices as we think about how courts may handle AI.
A lot of times, when there are device cases, the individual who is injured will sue the device manufacturer because there's some defect they claim in the device. And I think that's very likely to happen with AI technologies, particularly those that go through something called the 510(k) process. I don't know if your listeners or viewers are familiar with that process, but basically it's a process that doesn't usually require any clinical data. It's mostly bench testing and maybe some biocompatibility testing. So I think in that area there's going to be a lot of activity: products liability claims, tort claims against manufacturers. And then also, some of these AI devices are coming on the market through something called the de novo process, which is kind of in between the 510(k) and something else called the PMA, which typically requires clinical trials. And the de novo devices typically do require some kind of clinical data, but it's not always the same as a PMA, so there's going to be a lot of challenges there with respect to devices. And I think that many of the lawsuits go after the manufacturers because they have bigger pockets, or maybe the error was the result of the manufacturer, or, you know, there's something wrong with the device that the plaintiff thinks injured them. The cases where doctors are involved will involve devices where, if the device functions properly, the physician will have done something that differs from the standard of care. So maybe it's a surgical device, like a surgical robot, and a physician decides to use it in a way that the manufacturer hasn't studied or brought onto the market. That could potentially raise liability for the physician. And you can imagine a lot of similar kinds of things happening with less sophisticated technology, like commercially available, retail-facing genetic testing that doesn't necessarily have clinical significance but provides information to consumers, or software that analyzes a patient's chart and then provides a potential diagnosis, something like that. So I think physicians will be at risk when they either don't use the device according to its specifications or when they use a device that's not standard of care.
And of course, policy choices will also influence where those claims land. Dr Simon also touched on that a bit later on in our conversation.
So I do think that there's a significant amount of room for federal and state regulators to be thinking about how to allocate responsibility and in what cases it should be allocated. And in particular, the FDA can think about this from the standpoint of what kind of products we want to put on the market, and how, because how the product gets on the market really affects how much liability the manufacturer has. So for example, if the FDA said, well, we're going to actually take a very strict look at de novo devices or PMA devices that have AI, I think that would probably be a good thing, because at least for PMA devices, the liability for those manufacturers is much lower, and so we should get a more robust analysis on the front end of what they're doing.
From the insurer's perspective at The Doctors Company, the largest physician-owned malpractice insurer in the US, Srivastava says the system still leans on physicians and hospitals.
Where does the ultimate responsibility lie when AI contributes to patient harm? With the physician who relied on it, the health system that deployed it, or the technology company that designed it? Currently, malpractice law provides no clear allocation of liability amongst the three. Now, physicians currently bear the primary legal risk, since they sign the record and make the clinical decision.
So the bottom line is, doctors are still the ones getting sued, and that's not going to change overnight. Until the law catches up, physicians and health systems are going to be the ones who continue to shoulder most of the liability, and manufacturers are going to join the cases only when the AI itself is clearly in question. Say, Keith, this is all well and good, but what if someone is looking for more clinical information?
Oh, then they want to check out our sister site, PatientCareOnline.com, the leading clinical resource for primary care physicians. Again, that's PatientCareOnline.com.
All right, we don't mean to be all doom and gloom. This isn't just about scaring physicians away from innovation. It's about staying ahead of the curve. So before you start pulling the plug on every AI tool in the office, let's get practical. What can you do this week to reduce risk and keep your documentation future-proof?
Physicians should approach AI with the same diligence as any clinical tool, making documentation and patient communication a priority. Informed consent is still the number one thing we advise our physician community on: secure informed consent when AI is used, or any other technology; you have to get informed consent. So these are the practical steps. Be prepared to answer patient questions, and allow patients the choice to decline. If using an AI scribe or documentation tools, carefully review the generated notes for accuracy and completeness. The physician, as we discussed, remains ultimately responsible. Treat informed consent as a process, and do not delegate that. Recent cases highlight the risk of shifting this responsibility to trainees or even technology. There's a lot of new tech that says, we'll just make them sign this. Make it part of your process, and then anticipate the possibility of future AI-related litigation by documenting clearly, with the expectation that these records may be scrutinized in court.
I also asked how open physicians should be with patients about AI use in their care, whether that kind of transparency reduces exposure or just complicates things.
So transparency is generally productive. It builds trust and it reduces risk when handled thoughtfully. Physicians should approach the conversation positively. Patients feel comfortable and consent to technology use when they understand it, when the physician has taken the time to communicate, because open dialog will foster trust. It reduces liability; we know that. And clear, respectful communication is the foundation of a physician-patient relationship, with or without AI. Now again, ambient listening tools, such as DAX from Nuance, save time and reduce burnout, but their use should be disclosed and consented to. So I would say providers should establish clear protocols: when ambient listening will or won't be used, how consent is documented, and what alternatives are available if a patient opts out, because there could be a few who would say, well, I don't want you to use that. Well, then what? So again, be clear, be respectful, and make communication the foundation of the relationship.
And Srivastava also touched on the importance of keeping governance tight within your practice or health system, and the risk of not doing so.
They face liability if they fail to proactively and properly vet, oversee and implement AI tools. Their responsibility includes safe systems. Are you training the tool correctly? Are you training the staff correctly? Are you instituting safeguards such as consent, documentation and oversight policies? Because inadequate training on these tools and governance gaps can heighten malpractice risk. So hospitals and health systems are not off the hook, because they're deploying the technology that the physicians are using.
As a practice, you also need to vet your tools and your contracts. To Simon's point, how a product gets to market and how you use it matters.
Always, before adopting a technology, ask questions of the manufacturer. You know, it's harder for small, single-practice physicians to do this, but maybe they can. I'm sure there are organizations involving primary care physicians or small physician practices, and those organizations should try to get data about how this was validated. How did it get to market? What kind of protections are there for the physician? You know, I'm using your system, you tell me it's great, but what if something goes wrong with your system? Are you going to pay for my defense if a patient is injured, God forbid? And if they're part of a health system, the health system should be making sure that the systems are validated for their particular uses in a particular context, and not assume that, oh, it's validated to diagnose, you know, retinopathy or something, so therefore I can use it to diagnose some other eye condition. We all use ChatGPT or Claude or whatever to do a bunch of different things. You know, that's kind of a no-no here. You don't want to do that. You don't want to just start experimenting with different uses. I'm not saying that experimenting is always bad, but you should always be cognizant that if it's been tested at all, it's been tested for a particular use in a particular setting, and so don't assume that it can be transplanted to another setting without doing some digging.
And as always, there are things that the system could be doing better, especially when it comes to transparency with the tools themselves. I asked Sarah Gerke about stronger labeling of AI devices and what that could mean for patient safety and liability.
Yeah, so I've written a lot about labeling, and I think labeling and warnings in particular are really crucial for promoting transparency in the use of AI in healthcare. In one of my newest papers, which was published in the Emory Law Journal, I actually show, first of all, that there is a lack of labeling standards tailored to AI and machine learning-based medical devices, and that really prevents users from receiving important information for their safe use, such as, for example, information on the data sets that were used. And so I really suggest, inspired by food labeling, that we do need a comprehensive labeling framework for AI and machine learning-based medical devices, which should at least consist of AI Facts labels and also a front-of-package AI label.
But even with better labels, liability still flows through the physician.
Right now, already, the physician bears most of the risk of liability when using AI. We have something which is called the learned intermediary doctrine. A manufacturer of a medical device can typically discharge their duty to warn by informing the physician, and then it's the physician, as a learned intermediary, who needs to interpret that information and communicate it to the patient.
Hey there, Keith Reynolds here, and welcome to the P2 Management Minute. In just 60 seconds, we deliver proven, real-world tactics you can plug into your practice today, whether that means speeding up check-in, lifting staff morale or nudging patient satisfaction north. No theory, no fluff, just the kind of guidance that fits between appointments and moves the needle before lunch. But the best ideas don't all come from our newsroom; they come from you. Got a clever workflow hack, an employee engagement win, or a lesson learned the hard way? I want to feature it. Shoot me an email at kreynolds@mjhlifesciences.com with your topic, a quick outline or even a smartphone clip. We'll handle the rest and get your insights in front of your peers nationwide. Let's make every minute count together. Thanks for watching, and I'll see you in the next P2 Management Minute.
All right, so we've covered the risks. Now let's go through a quick recap of what physicians can do right now to stay protected while using AI responsibly. First off is know your tools. If a vendor says that their AI improves accuracy or saves time, ask how they know that and whether that's validated for your patient population. Second, explain your decisions, whether you follow an AI recommendation or override it. A quick note in the chart goes a long way if anyone wants to review the record later. Third, treat consent like a conversation. Let patients know when AI is part of their care and be ready to answer questions. Not a signature, but a real dialog. Fourth, review the notes. Ambient documentation tools are improving fast, but they're far from perfect. Make sure the chart reflects what you actually said and observed. And finally, keep your governance tight. Train your staff on when and how to use AI, double-check your workflows, and make sure that your contracts are clear about who's responsible for what. These aren't huge changes on their own, but they're good updates to good clinical habits, and they put you in a much better position as AI becomes a more routine part of care. And it will, because AI is here to stay, and the law will catch up in pieces. But until then, treat AI like any clinical tool: validate it, use it within its intended scope and document your judgment.

Thank you again to Sarah Gerke, Dr David Simon and Deepika Srivastava for joining us for this look into the current state of AI and medical malpractice. Their insights were also instrumental in the writing of our October cover story, "The new malpractice frontier: Who's liable when AI gets it wrong?", which you can find linked in the show notes down below. My name is Austin Luttrell, and on behalf of the whole Medical Economics and Physicians Practice teams, I'd like to thank you for listening to the show and ask that you please subscribe so you don't miss the next episode. Be sure to check back on Monday and Thursday mornings for the latest conversations with healthcare experts sharing strategies, stories and solutions for your practice. You can find us by searching Off the Chart wherever you get your podcasts. Also, if you'd like the best stories that Medical Economics and Physicians Practice publish delivered straight to your email six days a week, subscribe to our newsletters at MedicalEconomics.com and PhysiciansPractice.com. Off the Chart, a Business of Medicine podcast, is executive produced by Chris Mazzolini and Keith Reynolds and produced by Austin Luttrell. Medical Economics, Physicians Practice and Patient Care Online are all members of the MJH Life Sciences family. Thank you.