Speaker 1 00:00:00 And my encouragement today is, you know, AI could be part of the answer to the economic headwinds that health systems are facing today. But you've got to do it smartly.
Speaker 2 00:00:22 Welcome to Off the Chart: A Business of Medicine podcast, featuring lively and informative conversations with healthcare experts, opinion leaders, and practicing physicians about the challenges facing doctors and medical practices. My name is Austin Littrell. I'm the assistant editor over at Medical Economics, and I'd like to thank you for joining us today. This week, we're talking artificial intelligence and health care, as Medical Economics managing editor Todd Shryock sits down with Dr. Kedar Mate, founder and chief medical officer of Qualified Health and the former CEO of the Institute for Healthcare Improvement, to talk about how physicians and health systems can use AI safely and responsibly. They'll be talking about the opportunities and risks of generative AI, why oversight is so important, lessons the healthcare industry can learn from the aviation and automotive industries, and how physicians can evaluate whether AI tools are reliable, HIPAA compliant, and adding value to their work.
Speaker 2 00:01:12 So with that, thank you again to Dr. Mate for joining us. And now let's dive into the episode.
Speaker 3 00:01:23 I'm here with Dr. Kedar Mate, chief medical officer of Qualified Health and former CEO of the Institute for Healthcare Improvement, to talk about AI safety and governance. Doctor, thanks for joining me.
Speaker 1 00:01:35 Well, it's a great pleasure to be with you. Thank you for having me, Todd.
Speaker 3 00:01:39 So what are the dangers for doctors and healthcare organizations using AI without proper oversight or governance?
Speaker 1 00:01:49 Well, first of all, just to back up: I think there's a tremendous amount of promise and opportunity in the artificial intelligence technologies we have today. I'm an internist, a hospitalist, and for years the technologies we experienced in hospitals and health systems were essentially supply driven. We had EHRs and other health technologies that were essentially foisted upon the provider environment.
Speaker 1 00:02:21 And providers came kicking and screaming to the party, at some level. This is different. Part of the reason I got excited about joining Qualified Health, and about the technology itself, is that generative AI technologies feel palpably different to me than every prior generation of information technology we've used in healthcare. And when I talk to health systems and clinical providers across the country, which is sort of my everyday life now, I find that it's not supply driven but demand led. Clinicians are hungry to use these tools to automate aspects of their day jobs that they've never had the chance to automate before: to see across data they've aggregated over time in ways they've never been able to, and to handle the really frustrating administrative and documentation tasks they've never been able to offload. Now they've got this incredibly powerful technology that can help them accomplish it.
Speaker 1 00:03:26 So yes, there is risk in these tools, to be clear. But I want to start with the fact that there's tremendous opportunity in them, and clinicians really are seeing that opportunity to throw off the cumbersome shackles of prior generations of information technology. With that said, there are significant challenges. There are risks of hallucination. There are risks of models underperforming over time as the underlying foundation models change. OpenAI, Gemini, Claude, Llama: these underlying foundation models grow and evolve, and every week we have a new one that is doing something new and different. If the underlying models are changing that regularly, then any application that uses those large language models to do a task, whether it's to assist with scribing and documentation, to analyze a set of your patients, or to provide guidance on what medications or potential diagnoses to consider, is exposed.
Speaker 1 00:04:48 If the underlying model changes, the performance of that application might change. So we need to know whether those models are changing, and we need to know whether the performance of the applications is degrading over time. That's really what we built Qualified Health to do: to help us understand whether model performance and application performance are degrading over time, and whether the AI tools we're using in our environments are doing what they're supposed to do and providing the services they're supposed to provide. So that's my short answer, or rather my long answer, excuse me, to the question of what risk is associated with these technologies and how we might mitigate it.
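[Editor's note: to make the monitoring idea concrete, here is a minimal sketch, in Python, of the kind of regression check described above: re-running an LLM-backed application against a fixed "golden" evaluation set and flagging degradation. This is not Qualified Health's actual tooling; the data structures, scoring function, and tolerance below are hypothetical.]

```python
"""Minimal drift-monitoring sketch for an LLM-backed application.

Illustrative only: the golden set, scorer, and tolerance are hypothetical.
"""

from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldenCase:
    prompt: str      # fixed input, e.g. a de-identified clinical note
    reference: str   # clinician-approved expected output

def evaluate(app: Callable[[str], str],
             golden_set: list[GoldenCase],
             score: Callable[[str, str], float]) -> float:
    """Run the application over the fixed evaluation set; return the mean score."""
    results = [score(app(case.prompt), case.reference) for case in golden_set]
    return sum(results) / len(results)

def degraded(current: float, baseline: float, tolerance: float = 0.05) -> bool:
    """True if performance has slipped more than `tolerance` below the baseline
    recorded when the application was first validated."""
    return (baseline - current) > tolerance

# Usage idea: re-run evaluate() whenever the vendor announces (or you detect)
# an underlying model change; if degraded() is True, alert the governance
# committee before clinicians see the new behavior.
```

The design point is that the evaluation set stays fixed while the model underneath is allowed to change, so any score movement is attributable to the model rather than to the test.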
Speaker 3 00:05:37 What's being done now to make sure these tools are safe and providing accurate information? And what should companies be doing, either to make sure their own products are safe or, if they're using those tools, to make sure they're being used safely?
Speaker 1 00:05:57 Yeah. So, zooming out to the big picture, the macro level, what's happening nationally in the federal landscape or even at the state level: there's lots of guidance and there are frameworks around ethical use of AI. I participated in one of those groups; the National Academy of Medicine created an artificial intelligence code of conduct, which we released earlier this year after multiple cycles of interviewing, research, and gathering information on what ethical use of AI would look like in medicine today. That type of framework is very useful, but it doesn't actually solve the problem of ensuring, on a daily basis, that usage is happening the way it's supposed to happen. So at the federal level and the national scale there are lots of frameworks, from the Coalition for Health AI, the partnership for health AI, the National Academies, etc., that help to guide ethical and responsible use of AI, but we need ways of operationalizing those tools in our health systems and our environments.
Speaker 1 00:07:03 That's where I think companies like ours come in: to take those guardrails being defined by these multi-party organizations and distill them into technologies that can actually monitor the performance of algorithms against those principles on an ongoing basis, to make sure providers and patients aren't using these technologies inappropriately, that we're not breaking through those guardrails and using them in ways we don't intend, and that model performance doesn't degrade over time. At a very individual level, what can you or I, as providers or as patients, do to make sure we're using these technologies appropriately? These days there's lots of publicly available information on how to use AI tools successfully and usefully. It's important to know that most of the publicly available tools are not HIPAA compliant. They don't protect protected health information.
Speaker 1 00:08:14 So what I've seen patients do, and sometimes providers do, which is a real mistake, is copy a bunch of text from a note, dump it into an LLM like ChatGPT, and ask it to give a differential diagnosis on the patient, or suggest a treatment protocol, or otherwise. That is leaking protected health information into a publicly available tool. And AI doesn't forget, right? These tools don't forget. If you place a health file on the internet, it's there on the internet. If you put it into ChatGPT, it persists in some way, in that memory, in some capacity. So we have to be careful about not using the technology inappropriately, especially with protected health information. That's why we have created a qualified, HIPAA-compliant chat tool that allows health systems to use a chat-type function, but chat safely and securely, in a HIPAA-compliant environment, with their clinical records. Providers can take medical information from their Epic or Cerner or whatever EHR they're using, upload that data, and interact with that information through the traditional chat interface to help guide clinical decision making or support.
Speaker 1 00:09:49 That's the kind of thing I think we need to see more of. And we need to guard against the usage I just described, where we risk leaking private health information into the GPTs and into the internet in general.
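[Editor's note: a minimal Python sketch of the guardrail pattern described here: screening a prompt for protected health information before it leaves a HIPAA-compliant boundary. Real PHI detection must cover all eighteen HIPAA identifiers, including free-text names and dates; the regexes below are simplified assumptions, not a compliance tool.]

```python
"""Minimal PHI-screening sketch for prompts bound for an external model.

Illustrative only; the patterns cover just three identifier types.
"""

import re

# Simplified patterns for a few HIPAA identifiers (SSN, phone, MRN-style IDs).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with placeholders; report what was found."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, findings

def guarded_prompt(text: str) -> str:
    """Clean a prompt before it leaves the HIPAA-compliant boundary."""
    cleaned, findings = redact(text)
    if findings:
        # A real deployment would also log the event for governance review.
        print(f"warning: redacted {findings} before sending to the model")
    return cleaned

print(guarded_prompt("Pt MRN# 12345678, SSN 123-45-6789, chest pain x2 days"))
```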
Speaker 4 00:10:12 Say, Keith, this is all well and good, but what if someone is looking for more clinical information?
Speaker 5 00:10:17 Oh, then they want to check out our sister site, PatientCareOnline.com, the leading clinical resource for primary care physicians. Again, that's PatientCareOnline.com.
Speaker 3 00:10:31 Yeah, I think that's something a lot of people don't understand: when you put something into these models, it becomes part of their database, and who knows where it will resurface. So I'm wondering, at the individual physician level, whether it's a physician running his or her own practice and talking to a vendor, or a physician working for a hospital who's been given these tools by their bosses, what questions should they be asking? Is there any modern version of the old Good Housekeeping seal of approval for AI? What questions should they be asking to make sure these tools are compliant?
Speaker 1 00:11:16 Yeah, that's a good question.
Speaker 1 00:11:17 There's no official Good Housekeeping seal of approval kind of thing, right? Actually, just very recently the White House has been talking about creating validation laboratories, and CHAI, the Coalition for Health AI, has been talking about the notion of AI validation laboratories for a little while. What you want to know is model performance over time. You want to know whether the tools have been tested in some capacity in real life, not on fake data sets. You want to know what the vendor community is using to regulate itself, to check its own work, essentially, to make sure the AI is not hallucinating, not creating results that are erroneous. And you want to know that there are ways the health system is guarding against inappropriate usage of the technologies. But truthfully, that's relying on the vendor environment for its own assessment of security, which of course creates a conflict of interest on their end.
Speaker 1 00:12:33 So if all you're doing is trusting the vendor to self-report challenges or issues with their technology, I think that's a problem. And if you're only relying on government to create a kind of FDA seal of approval (the FDA, to its credit, is trying to create some guidance around appropriate use of AI technologies, especially clinically sensitive ones), you're missing something in between. That's where organizations like ours, and we're not the only one, are trying to step in, to say it's not just about a federal badge that says good or not, and it's not just vendors regulating themselves. You need something in between. I think of it a little like cybersecurity, Todd. We all know what that means, right? We have technologies that police the information borders of our health systems today, and that technology basically monitors for intrusions from external malicious actors.
Speaker 1 00:13:33 We have to have a similar kind of technology that takes whatever guardrails you decide on, whether it's the National Academy's code of conduct or CHAI's guidelines or whatever, and translates them into operational parameters that technology can monitor on an ongoing basis. Not that people are acting maliciously, necessarily. You have non-malicious internal actors inside the health system who regularly leak PHI into these LLMs. It's not because they're trying to do harm; they're trying to do a good thing, to help get to the bottom of a clinical situation, but they might not be doing it with the most appropriate technologies at hand. So you have to be able to guard against that, and importantly, I think that's what the technologies we've created are helping to accomplish.
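[Editor's note: a minimal Python sketch of this "cybersecurity for AI usage" idea: written guardrails translated into machine-checkable rules that a monitor applies to every usage event and logs for governance review. The rule names, event fields, and policies below are hypothetical, not an actual code-of-conduct mapping.]

```python
"""Minimal sketch of turning written guardrails into monitorable rules.

Illustrative only: rule names, event fields, and policies are hypothetical.
"""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Rule:
    name: str                         # traceable back to a written guardrail
    violated: Callable[[dict], bool]  # predicate over a single usage event

@dataclass
class Monitor:
    rules: list[Rule]
    log: list[dict] = field(default_factory=list)

    def observe(self, event: dict) -> list[str]:
        """Check one usage event against every rule; log and return violations."""
        hits = [r.name for r in self.rules if r.violated(event)]
        self.log.append({"time": datetime.now(timezone.utc).isoformat(),
                         "user": event.get("user"), "violations": hits})
        return hits

# Two hypothetical operationalizations of written guardrails:
rules = [
    Rule("no-phi-to-public-models",
         lambda e: bool(e.get("contains_phi")) and not e.get("hipaa_compliant_endpoint")),
    Rule("clinical-use-requires-approved-app",
         lambda e: bool(e.get("clinical_use")) and not e.get("app_approved")),
]

monitor = Monitor(rules)
print(monitor.observe({"user": "dr_a", "contains_phi": True,
                       "hipaa_compliant_endpoint": False}))  # flags the first rule
```

The useful property is traceability: each rule name points back at the written guardrail it operationalizes, so a governance committee can audit the event log against the policy document.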
Speaker 3 00:14:27 If you were evaluating an AI tool for safety, how do you measure safety? What does that look like? What is the metric?
Speaker 3 00:14:40 Is a single hallucination by an AI enough to say the tool is unsafe? How do you measure this very nebulous concept?
Speaker 1 00:14:48 Well, I'm wishing we had our governance experts on the line here today from Qualified Health. But there's quite a sophisticated rubric for measuring it. The way I break it down, there are a bunch of statistics about the technical performance of the AI. Does the AI itself work? Is it internally consistent? Does it do what it's supposed to do in a replicable manner? That's the "does the technology work" piece. Then, much more important to me as a practicing physician: does it deliver a value-added service? Does it actually do a job that adds value for the health system or for the end user, whoever that is? There are ways of assessing and monitoring that. And then there's a third thing, which is understanding defect rates.
Speaker 1 00:15:41 Right. Understanding whether something is not working, whether it's breaking in some way, over time. So for me there's a structural element, a technical element, and a real-world performance element. Those are the three components of how we understand and measure the performance of these models and tools over time.
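[Editor's note: a minimal Python sketch of that three-part rubric: technical consistency, value delivered, and defect rates, with the answer to "is one hallucination unsafe?" expressed as rate thresholds set per use case. The metric names and limits are hypothetical stand-ins, not a validated safety rubric.]

```python
"""Minimal sketch of a three-part safety measurement for an AI tool.

Illustrative only: metrics and thresholds are hypothetical.
"""

from dataclasses import dataclass

@dataclass
class ToolReport:
    consistency_rate: float   # 1. technical: same input, same answer, over repeated runs
    task_success_rate: float  # 2. value: fraction of jobs completed usefully for the end user
    defect_rate: float        # 3. real world: hallucinations/errors per 1,000 production outputs

def within_limits(report: ToolReport,
                  min_consistency: float = 0.95,
                  min_success: float = 0.80,
                  max_defects_per_1000: float = 1.0) -> bool:
    """A single hallucination isn't automatically 'unsafe'; the question is whether
    rates stay inside limits a governance committee has set for this use case."""
    return (report.consistency_rate >= min_consistency
            and report.task_success_rate >= min_success
            and report.defect_rate <= max_defects_per_1000)

print(within_limits(ToolReport(0.97, 0.85, 0.4)))  # True: within the example limits
```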
Speaker 3 00:16:05 Are there lessons to be learned from other industries? If we're talking about safety: the airlines, highly regulated, where safety is very important; the automotive industry, also with a lot of safety regulations. Are there lessons we can take from those industries and apply to AI?
Speaker 1 00:16:25 Oh, 100%. Yeah. I think there are lessons, in a broad sense, from automotive, aviation, and nuclear power. Historically, healthcare has studied those industries; "high-reliability industries" is what they're called.
Speaker 1 00:16:45 Right. You have to be highly reliable in aviation, in nuclear power, in submarine warfare, those kinds of scenarios, because the margin for error is very, very small. So you've got to have what has become known as Six Sigma reliability to ensure that those industries are what we call highly reliable. There's a whole field of study in healthcare that examines high-reliability organizations and has tried to integrate those ideas into patient safety and clinical safety parameters. And I would say there's an additional thing to think about now, which is the AI associated with those industries: how do we ensure that the AI used in our industry, healthcare, borrows from the knowledge of how AI was made safe in aviation and automotive in particular? Take the airplanes we all fly on every day.
Speaker 1 00:17:51 I always find it interesting to know that we regularly put our lives in the hands of AI. For those of you who fly on airplanes: for a very significant portion of the time you're on board a commercial aircraft, that aircraft is being piloted by an artificial intelligence technology. The thing flying the plane, for the most part, is AI, which is kind of interesting; I think people don't necessarily know that. But aviation had to learn how to make that safe. How did aviation make the AI technologies it uses every day as safe as it has? Same with self-driving cars. Waymo became what it is because it has become very, very reliable at taking the input from all its many cameras and using it to algorithmically drive a vehicle without a driver.
Speaker 1 00:18:55 You're literally putting your life in the hands of an algorithm when you get into a Waymo car. In fact, one of my co-founders, Justin Norden, the CEO of Qualified Health, built his first AI company, called Trustworthy AI, for health care. But health care at the time, in 2016, wasn't ready for it, and he ended up selling that company to Waymo, because Waymo's interest in the trustworthiness of AI was very, very high; it needed to create exactly the reliability you're describing. So the short answer to your question, Todd, is that we have a lot to learn from other industries, not only about how to improve the reliability of healthcare in general, but also about how they've implemented AI successfully, in a highly reliable way, in their respective industries.
Speaker 7 00:19:53 Hey there, folks. My name's Keith Reynolds. I'm the editor of Physicians Practice, and this is the P2 Management Minute. Here are two low-cost ways to simplify your practice fast.
Speaker 7 00:20:01 Number one: automate what you can and ditch what you don't need. Start with tools you already own. Turn on automated appointment reminders, eligibility checks, digital intakes, and billing follow-ups inside your EHR or practice management system. Then prune: list every app, note overlaps, and retire anything redundant or not integrated. Re-audit workflows every 6 to 12 months so manual tasks don't sneak back in. Number two: prioritize tools that actually save time. When evaluating software, ignore flashy extras and ask: does this cut clicks and minutes? Favor intuitive scheduling, fast-loading charts, and simple billing flows. If a platform takes longer than your old process, reassess or replace it. Value equals ease of use, quick setup, and minimal training. Period. Automate smart, simplify hard. That's your budget-friendly reset. For more bite-sized practice tips and tricks, make sure you visit PhysiciansPractice.com. Thanks for watching, and I'll see you tomorrow on the P2 Management Minute.
Speaker 3 00:20:54 Kind of related to that: just like you wouldn't want a pilot using AI that they bought off the shelf or downloaded off the internet because it looked really cool,
Speaker 3 00:21:05 how can healthcare leaders make sure that people within their larger organization aren't using tools that are unsafe or untested? How do you, as the leader, stay in control of AI usage within your own organization?
Speaker 1 00:21:23 Yeah, that's a great question. We think this is really important. Health systems need appropriate governance around AI, and by that I mean not just setting up a committee, but arming that committee with very clear guidance on how to prioritize, how to ingest the large volume of very creative, very interesting ideas people have about where AI can be useful, and how to triage those ideas appropriately. A lot of those ideas actually depend on data sources that are unavailable, or aren't really suited to generative AI, which can't perform on them with reliability or consistency. So we helped a health system in Atlanta create a sort of AI triage system to take in hundreds of ideas from their tens of thousands of employees and triage them appropriately: likelihood of creating business impact, likelihood of creating clinical impact, likelihood that it's even possible with the generative AI tools we have today.
Speaker 1 00:22:31 Because the exciting part of this is that people's imaginations have just taken off. Sometimes I feel like healthcare is going through another awakening. It's this exciting time where people see this tool and feel quite excited about applying it to all kinds of problems they've seen over time, without really knowing the nuances of what it's good at and what it can actually solve. So creating that triage mechanism has been very important for this health system. Using it, they can also convert the enthusiasm of the people submitting these ideas into a more educated set of builders, creators, and thinkers, so they increase the likelihood of success of the things being submitted. And then they choose to build, either themselves or with vendor partners like us, the technologies with the greatest business value, the greatest clinical value, and the greatest likelihood of being achievable with generative AI tools.
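[Editor's note: a minimal Python sketch of the triage scoring described above, ranking submitted ideas on business impact, clinical impact, and feasibility with today's generative AI. The weights and example ideas are hypothetical; the Atlanta health system's actual criteria were not detailed in the conversation.]

```python
"""Minimal sketch of idea triage for a health system's AI governance committee.

Illustrative only: the weights and example ideas are hypothetical.
"""

from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    business_impact: int   # 1-5, estimated by the governance committee
    clinical_impact: int   # 1-5
    feasibility: int       # 1-5: can today's generative AI actually do this,
                           # with data sources the system can legitimately access?

def triage_score(idea: Idea, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted score used to rank hundreds of employee submissions."""
    wb, wc, wf = weights
    return wb * idea.business_impact + wc * idea.clinical_impact + wf * idea.feasibility

ideas = [
    Idea("Auto-draft discharge summaries", business_impact=4, clinical_impact=4, feasibility=5),
    Idea("Predict no-shows from weather", business_impact=2, clinical_impact=1, feasibility=2),
]
for idea in sorted(ideas, key=triage_score, reverse=True):
    print(f"{triage_score(idea):.1f}  {idea.title}")
```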
Speaker 1 00:23:38 And the other thing I'd say about this is that we have seen a mushrooming of vendors and AI companies. Somebody shared with me recently that there are 5,000 generative AI companies in healthcare, and most of them are individual point solutions; they solve one problem. It's just not conceivable that health systems are going to play whack-a-mole, going after one problem at a time, across the tens of thousands of suboptimal workflows that exist in healthcare. So in our view, you need a platform supplier of AI technologies that's going to help you across many, many workflows, help you make sense of the volume of interest coming at you, and then, in a triaged, appropriate fashion, build and deploy the things that are going to create the greatest business impact and value.
Speaker 3 00:24:38 Yeah, I think, like a lot of industries, you start off with a lot of smaller startups.
Speaker 3 00:24:43 Then, through attrition and mergers, the best ideas rise to the top, and you end up with the best companies with the best solutions. So, is there anything else you would like to add on this topic that we haven't talked about?
Speaker 1 00:24:58 Yeah. Overall, I'd say health care has a lot of challenges on the horizon. Being a healthcare CEO, I spend a lot of time talking to executive leadership teams of health systems today, and there's a lot coming: cuts to Medicaid, challenges in Medicare, lots of issues squeezing commercial payers. There are significant challenges; the supply chain is challenging, etc. And there aren't a lot of levers left to pull. They've already shrunk their number of employees.
Speaker 1 00:25:37 They've already created as much business efficiency as they can possibly create. They're pushing their payers as hard as they can on commercial contracts. So the levers are limited at this point, and AI, in the middle of all this, is this big technological revolution. My encouragement today is that AI could be part of the answer to the economic headwinds health systems are facing, but you've got to do it smartly, and you've got to do it with a partner that understands how this technology really works, fundamentally. With that type of partner, one that can help you make sense of the technology, triage appropriately, and build and deploy successfully, the sky's the limit on what's possible with this technology. So maybe that's my final word for today.
Speaker 3 00:26:28 Very good. Very interesting topic. Doctor, thank you for joining me.
Speaker 5 00:26:32 Happy to do it.
Speaker 2 00:26:41 Once again, that was a conversation between Medical Economics managing editor Todd Shryock and Dr. Kedar Mate, founder and chief medical officer of Qualified Health. My name is Austin Littrell, and on behalf of the whole Medical Economics and Physicians Practice teams, I'd like to thank you for listening to the show and ask that you please subscribe so you don't miss the next episode. You can find us by searching Off the Chart wherever you get your podcasts. Also, if you'd like the best stories that Medical Economics and Physicians Practice publish delivered straight to your email six days a week, subscribe to our newsletters at Medical Economics and Physicians Practice. Off the Chart: A Business of Medicine podcast is executive produced by Chris Mazzolini and Keith Reynolds, and produced by Austin Littrell. Medical Economics, Physicians Practice, and Patient Care Online are all members of the MJH Life Sciences family. Thank you.