0:00
When you choose Copic, you're not just getting industry-leading medical liability insurance, you're getting a true partner, one that's here for you every step of the way. Need quick answers? Our physicians are just a phone call away on our 24/7 hotline, and you'll get specialty-specific support and proven resources built on decades of experience. Copic.com. We're Copic, here for the humans of health care, here for you.
0:24
Somewhere between seeing the last patient of the day and getting home, something changed in medicine. The work didn't end at the clinic anymore. Doctors were finishing notes at the kitchen table, on the couch, in bed. It became so common that it got its own name: pajama time. The electronic health record made physicians better documenters and, by a lot of accounts, more miserable. But for years, nobody had a real answer. Then came a tool that just listens. It's the first technology we've seen that says to physicians, we can give you back some of not only your time, but of the humanity of the doctor-patient visit. We can create a mechanism by which you can make eye contact with your patients and both appear, and also be, paying real attention to them, as opposed to being a pretty expensive, grumpy data entry clerk. That's Dr. Robert Wachter, chair of the Department of Medicine at the University of California, San Francisco, and author of the new book A Giant Leap: How AI Is Transforming Healthcare and What It Means for Our Future. He's been watching this technology reshape clinical practice up close for the past two years, and he's our main guide through this episode. My name is Austin Luttrell. I'm the associate editor of Medical Economics. This month, we published a cover story on the AI scribe era: what the research actually shows, what the tools get wrong, and what to look for before you sign a vendor contract. This episode, we're going to go a little bit deeper. Along with Dr. Wachter, we'll also hear from a few other experts as the conversation calls for them. Here's what they had to say.
1:55
To understand why scribes took off the way they did, you have to go back a bit to what the EHR did to the doctor's note. What used to be a few scribbled lines became something else entirely. Suddenly there were required fields, checkboxes, quality measure checks, all of it there because, for the first time, the people overseeing physicians actually had a mechanism to enforce it. Because all of a sudden, there was a mechanism by which my boss, Medicare, United Healthcare, my malpractice lawyer, the quality measures, all sorts of entities could be looking over my metaphorical shoulder and make me do stuff that they couldn't make me do before. And of course, given that opportunity, they took advantage of it. So all of a sudden, the note became bloated, it became full of checkboxes, and it became sort of disheartening, not only because it drove the physician away from what he or she thought they were supposed to be doing, which is listening to this other human talk about their issues, but also
2:51
a little bit of, in some ways, both a time sink and a cognitive sink. Enter AI scribes. What makes them different from the voice-to-text tools that came before them is that they use generative AI to actually understand what's being said in the room, not just transcribe it verbatim. The real magic, and where generative AI came in, was that it could take this conversation and, to some extent, figure out what should be in and what should be out of the note. In: your description of your chest pain or your headache. Out: how wonderful it was playing with your grandchildren last weekend, or how well Johnny did in the soccer game. But also pull things together in a format that is very important for the documentation of a note. So if I start out saying I've had chest pain, and then I tell you a little bit about my focaccia recipe, and get down to talking about my shortness of breath five minutes later, those two things go together in the note. So the AI had to be smart enough to do that. So why documentation? Of all the ways that artificial intelligence could enter medicine, diagnosis, drug interactions, treatment recommendations, why did the note win? Dr. Wachter has thought a lot about this, and his answer involves driverless cars. You don't start on the hardest problem, particularly one where, if you get it wrong, you can kill somebody. That is not a good strategy to win over the hearts and minds of the incumbents, in this case the doctors, or to gain trust, because at some point something will go really wrong and you'll hurt somebody or kill somebody. Autonomous vehicles didn't start with driverless cars. They started with cruise control, then automatic braking, then lane detection. AI in healthcare needed its own version of cruise control, and documentation was it. The analogy: you know, I live in San Francisco, so we tend to think a lot about driverless cars, like, how did that happen?
Because it's a miracle. And it didn't start with a driverless car. It started with cruise control. And then it added, okay, can you apply the brakes automatically if I'm going too fast and catching up too quickly to the car in front of me? And can you signal me when my head seems to be bobbing and I might be falling asleep, or when I'm drifting into a lane where there's a car? And then eventually, with a lot of
5:00
testing and trial and error, you get to autonomous vehicles. So you do not start with the hardest problem. You start with easier, more tractable problems. You gain experience and you gain trust. The adoption numbers back him up. At UCSF, about 70% of physicians now use a scribe in daily practice. All of our 3,000 or 4,000 doctors now have access to an AI scribe. About 70% of them use it, and about 90% of them say, I love it. And we've almost gotten to the point where, if we turned it off, we might lose a fair number of doctors, or it would be hard to recruit a doctor if we didn't offer them an AI scribe. It's almost become sort of an expectation of practice now. Beyond what doctors think about them, what does the research actually say? The first randomized trials are in. A study published in November in NEJM AI tracked 238 physicians across 14 specialties at UCLA. Note-writing time dropped roughly 10% for users of one of the tools tested, and both tools showed modest improvements in burnout and cognitive workload scores. A separate study out of Mass General Brigham and Emory found burnout dropped from roughly 52.6% to 30.7%
6:03
over 84 days. One of the study's authors was quoted as saying, literally, no other intervention or field impacts burnout to this extent. It's a strong claim, but Wachter says the time-savings piece has been a little overstated. We probably overhyped it, or maybe overestimated what its impact would be. We thought it would save five minutes a visit, or 10 minutes a visit. It turns out it doesn't save that much time. It saves a little bit, but part of the reason is the repurposing of the time, meaning that it probably does save me a decent amount of time typing and documenting, and some of that time is repurposed into actual, genuine human contact, which feels better for the doctor and feels better for the patient. And I think, although it's hard to prove, it is ultimately better for healthcare. On the financial side, a January 2026 study in JAMA Network Open estimated physicians using scribes generated about $3,000 more in annual revenue. That's roughly enough to cover the cost of the tool. But Wachter and his co-authors pushed back on that framing in a recent piece in JAMA Internal Medicine. Their argument is that the real ROI isn't financial. The AI scribe probably kind of pays for itself just in pure throughput and time savings, but the real benefit is the joy in practice, recruitment, retention. And certainly when you factor in the cost of recruiting a new physician, which for primary care docs is often estimated in the million-dollar range, if this makes my docs significantly happier and helps me recruit and retain doctors, there's probably an economic ROI to it. Dr. Shannon Sims, chief product officer at Vision and a practicing internal medicine physician, made the same point and put it more plainly. I always say that physicians like myself, I mean, physicians go to med school to do good, not do paperwork, and I'm really excited about the potential of these AI tools to return the joy to medicine. And that's a noble goal in and of itself.
And I think that's great. But that also translates into good economic things too. A happier doc is going to be more productive, they're going to provide better care, and they might extend their career. So I really encourage these physicians, and the organizations that employ them, to look at that and to think not just about the 12-month P&L, but also about the longer-term joys and rewards of health care.
8:20
8:28
Hey there, Keith Reynolds here, and welcome to the P2 Management Minute. In just 60 seconds, we deliver proven, real-world tactics you can plug into your practice today, whether that means speeding up check-in, lifting staff morale, or nudging patient satisfaction north. No theory, no fluff, just the kind of guidance that fits between appointments and moves the needle before lunch. But the best ideas don't all come from our newsroom. They come from you. Got a clever workflow hack, an employee engagement win, or a lesson learned the hard way? I want to feature it. Shoot me an email at kreynolds@mjhlifesciences.com with your topic, a quick outline, or even a smartphone clip, and we'll handle the rest and get your insights in front of your peers nationwide. Let's make every minute count together. Thanks for watching, and I'll see you in the next P2 Management Minute.
9:19
Now, none of that means that these tools are ready to run on autopilot. A January 2025 study in the Journal of Medical Internet Research found errors in 70% of AI-generated notes. The most common type was omissions, that is, relevant clinical details left out entirely, and those are the hardest to catch, because spotting what's missing means you have to remember it in the first place. And this brings up an interesting concern. Obviously, AI tools make mistakes, and it's important to keep a human in the loop and have a doctor review those notes. But how do you make sure they're doing that, and that it doesn't just become a formality? You'd like the physician to be reviewing the note.
9:52
And as I spend a fair amount of time in the book talking about, this human-in-the-loop idea sounds better on paper than it is in reality,
10:00
meaning that if, you know, the last 49 notes the scribe created for me I reviewed in detail and they were perfect, am I really going to be fully attentive as I review note number 50? If I'm human, the answer is no. At some point I'll begin over-trusting the tool. He also raises something that doesn't get talked about much. Before scribes, writing the note was itself part of how a physician processed a visit. When I, in the old days, would be typing my note,
10:28
that was a cognitively active process. And sometimes while I was typing my note, I would say, damn, I forgot to ask whether she has a pet at home, or whether she's recently, you know, traveled and drunk fresh water from a lake, or whatever it was.
10:45
There's something in the process of the human thinking about something and writing it that is probably more cognitively active than the human reading over a draft of something. But I'd say that's probably, to me, the biggest trade-off, and at this point it's a little hard to quantify. I haven't seen a really good study that gets at this instinct I have that there's a little bit lost when I'm not typing my own note. For a research perspective on where AI actually stands clinically, we sat down with Dr. Marc Succi of Mass General Brigham. His team recently published a study testing 21 AI models on their ability to work through a full patient case, not just answer isolated questions, but reason the way a physician does in practice. So when I asked him where AI fits right now, his answer was pretty specific. I'm comfortable using AI for low-risk, high-feasibility tasks. So what are those? Things like ambient documentation, visit notes, summarization, patient-friendly explanations, billing, things that, if you get them wrong, okay, it's not great, you lose some money, but a patient doesn't suffer. And so that's where I think AI is good right now, and it's helping out. But I think when you increase that risk and start to do things like clinical decision support, responding to patient messages, ordering lab tests, renewing patient medications, like that company that just got approved to do that for psychiatric medications, I think that's where you've got to stop and look at the level of performance very critically. On the legal side, we talked to Dan Silverboard, a healthcare attorney at Holland & Knight who advises practices on AI vendor contracts and compliance risk. His bottom line is short: the physician, not the technology, is on the hook here. There's no get-out-of-jail-free card if the AI recommendation is for a higher code than what was actually performed. Before you sign anything,
he says to push for three things. There's a plethora of AI companies out there. I think the first question providers need to be asking is, who are they going to partner with? And again, doing their diligence to make sure that the vendor they do partner with has a proven track record of compliance, including compliance with HIPAA.
12:59
The second thing I would say is that recent polling shows it's still 50/50 as to patients who are comfortable with AI in healthcare. So I think understanding, from a physician standpoint, their patient base and what they're comfortable with can help dictate how they want to implement AI, whether it's back office or patient-facing technology, such as an ambient listening device. And then third, how are they going to disclose AI to patients? There are certain states, again, that require that disclosure happen,
13:40
but in the states that don't require it, is it something they want to do anyway, regardless of the law, just to safeguard the patient-physician relationship? Or is it something they want to put into a Notice of Privacy Practices, that we utilize artificial intelligence? So those are some of the questions that I think physicians
14:03
should be at least talking about before rolling out AI into their clinical operations. Worth keeping in mind: 85% of healthcare AI investment is going to startups, and most don't have long track records, so the due diligence is on you. There's not a ton of vendors out there that have these very proven, historic track records of providing AI tools that pass the litmus test of HIPAA compliance, of having years and years of comfort, of 100% confirmation or validation testing, or that are 100% accurate. The second key risk is that, with all of this new technology, healthcare providers will simply sign off on whatever the recommendation of the AI program is, or, if we're talking about an ambient listening program, that they'll just check the box and approve whatever the record is without first verifying whether what the ambient
15:00
technology has recorded accurately reflects the encounter.
15:45
A few practical things to do right now. First, know your tool. If a vendor claims it saves time or improves accuracy, ask where that data comes from and whether it applies to you and your patients specifically. Next, talk to your patients. Let them know when you're using an ambient scribe in your practice; we're already beginning to see some lawsuits about AI tools recording patients without informed consent. Next, review your notes. Obviously, errors still happen, and omissions are the hardest to spot. And make sure your governance is in order: train your staff, audit the workflows, and get clear on what your contracts actually say about liability. None of these things are huge changes on their own, but together they matter, because if all of it goes the way Wachter thinks it will, the note is just the beginning. I think in some ways the biggest contribution of AI scribes probably will be winning over hearts and minds and creating receptivity for the next, probably more complex but maybe even more important, AI tools. As I wrote in the book, you know, an AI scribe or an AI chart review tool, these are singles. I mean, the home run here is going to be really robust clinical decision support that helps not only make diagnoses, but helps suggest the right tests, suggest the right treatments. That's where the money is. That's where the clinical benefit really is. And really, that's the whole point. This technology earned trust before it asked for it. It started with something useful rather than something risky, and now it's building towards something much bigger. And the reason I'm optimistic: I think the tools have really remarkable capabilities. I mean, three years ago, the idea that you could replace a human scribe with an AI scribe would have been inconceivable.
And then the other point I've made about why I'm optimistic in healthcare is that the system is so broken that we need tools like this just to get through our day, for us to do our jobs. And you know, as I've said many times, the day ChatGPT came out, it wasn't like I was on Google saying, I need a better search engine. I couldn't even imagine a better search engine. But I certainly don't know anybody who has said the healthcare system is perfect and we don't need help. The AI scribe era is here. Review your notes, know your vendor, talk to your patients, and don't treat any of this as a finished product, because it isn't.
17:47
Thanks to Dr. Robert Wachter, Dr. Shannon Sims, Dr. Marc Succi, Dan Silverboard, and all of the other experts who helped shape our cover story, Take Note: The AI Scribe Era Is Here, which you can find linked in the show notes or over at medicaleconomics.com. My name is Austin Luttrell, and on behalf of the whole Medical Economics and Physicians Practice teams, I'd like to thank you for listening to the show and ask that you please subscribe so you don't miss the next episode. As always, be sure to check back on Monday and Thursday mornings for the latest conversations with experts sharing strategies, stories, and solutions for your practice. You can find us by searching Off the Chart wherever you get your podcasts. And if you'd like the best stories that Medical Economics and Physicians Practice publish delivered straight to your email six days a week, subscribe to our newsletters at medicaleconomics.com and physicianspractice.com. Off the Chart: A Business of Medicine Podcast is executive produced by Chris Mazzolini and Keith Reynolds, and produced by Austin Luttrell. Medical Economics and Physicians Practice are both members of the MJH Life Sciences family. Thank you.