0:00
When we think about the future, we need to not think about it as AI versus clinicians. Using this technology in an intentional, meaningful way will ultimately create more empowered patients, and on the provider side, less burnout and more quality interactions that providers will gain value from themselves. Welcome
0:31
to Off the Chart: A Business of Medicine podcast, featuring lively and informative conversations with healthcare experts, opinion leaders and practicing physicians about the challenges facing doctors and medical practices. My name is Austin Luttrell. I'm the associate editor of Medical Economics, and I'd like to thank you for joining us today. Before we get started, just a quick note: Physicians Practice will be hosting a Practice Academy event on March 19. The new practice management track is a virtual learning experience designed for physicians and practice administrators who want to build stronger, more efficient and more resilient practices. It focuses on real-world, practical strategies that you can apply right away, from optimizing operations to adapting to an evolving healthcare landscape. You can register today by clicking the link in the show notes or by going to registration.physicianspractice.com. That said, for today's episode I caught up with Dr. Amber Maracini, the vice president and head of healthcare and life sciences at Medallia. We talk about the growing role that artificial intelligence is playing in patient care, especially as tools like ChatGPT Health and other health-focused AI platforms move closer to patients. She explains how these tools can shape patient expectations before a visit, where physicians should be cautious, and why trust, communication and thoughtful design matter more than the technology itself. Amber, thank you so much again for joining me, and now let's get into the episode. Amber Maracini, thank you so much for joining me today. Thank you for having me. Before we begin, could you just talk a little bit about yourself and Medallia? Yeah.
1:54
So I lead our healthcare and life sciences practice at Medallia, which allows me to work with healthcare and life science organizations, so payers, providers, and also biopharma and med tech organizations that are trying to use technology to reduce burden without eroding trust. Obviously the conversation around AI is very relevant to what we do at Medallia, but my focus for these organizations is to look at how experience, trust and operational design intersect to ultimately drive better experiences as part of the care journey. And for this topic, just to add a little extra context: in our view, and in what we're seeing, AI doesn't erode trust on its own. It's poor integration that does. When we think about how the tools are being integrated into the overall design, that's where we see the greatest negative impact. So the opportunity to design the tools so that they strengthen the patient-clinician relationship, rather than bypass it, is where we have the greatest opportunity in healthcare. Great.
3:12
So just to get started here: ChatGPT Health recently launched. What's actually different about it, or about tools like this, compared with previous generations, when you were just typing into ChatGPT and asking questions about health-related topics? Yeah.
3:28
So I think the biggest thing is that it's not just a symptom checker. When we think about the function of ChatGPT Health, it actually becomes a narrator of health data. Think about somebody who's able to talk you through the results that you're seeing, which is really beneficial to the overall experience and how you're connecting to the information. ChatGPT Health isn't just delivering point-blank information; it's being used to interpret results, but also to integrate across multiple data sources, which is huge. And I think the most critical component, when we think about trust and relationship building in healthcare, is that it's able to do that in natural language instead of just in numbers. When you're seeing lab results like 0.0045 with an amber warning sign right next to it, that's not all that helpful. As a matter of fact, it can be more stress-inducing. Natural language has an authoritative way of communicating that previous tools didn't, because people are more connected to the information they're getting. One other note that I have here, which connects back to purpose and what we're trying to do as healthcare professionals: for the first time, patients are able to encounter meaning before they actually encounter their clinician. That is a really big deal. It is powerful, but as we'll talk through today, it's also risky, because there's an emotional reality that we have to keep in mind with the information.
5:13
So between the launch of ChatGPT Health, Amazon launching its own health AI, and Anthropic having a similar program, these platforms are in front of patients more now than ever before. Could you walk me through an example of how one of these AI tools could prepare a patient before a visit? Yeah.
5:31
So any human, especially any adult, has been through a scenario where we're receiving health information, some type of diagnostic test result, or maybe you're a caregiver or family member of someone who receives it. Oftentimes lab results get processed at the most inconvenient times, like late at night. So when you get an alert on your phone, which typically happens in the evening, you might see this result. Anxiety peaks. You don't know what's going to happen or how to interpret it, you immediately start Googling worst-case scenarios, and then your fear starts building around the situation before you even have any context. That's what happens without AI; that's what happens before the use of a tool like this. With more thoughtfully designed AI, and the way ChatGPT Health and some of these other technologies are being leveraged, we actually have results that can be translated into plain language, or, guess what, a language that you prefer. I think that's a big piece of it too: being able to translate into your preferred language, in the way you're already interacting with ChatGPT today. This allows for more common, non-alarming explanations to be provided, and as a result, when patients walk into their visit to discuss the results with the clinician, the conversation shifts from just decoding acronyms to actually having thoughtful and meaningful questions around the patient's life, values and goals. This is huge: being able to give yourself that distance between the information and the reaction. ChatGPT Health, or these other technologies, can also be used to prompt and suggest, like, consider asking this question at your visit, or be mindful about these three things and how they relate elsewhere. So it's a really great way of helping to create space for the human judgment and empathy that should be supporting these conversations.
7:50
Definitely. I think that's something we see a lot with Dr. Google, right? You're Googling these results and seeing these catastrophic possibilities, and patients are bringing that in. So that's a really good point. Physicians might worry about that influx of patients bringing questions about what they're seeing in AI into the office. If a patient walks in with a plan or conclusion that AI gave them, how should physicians approach that conversation so the encounter is productive instead of combative or time-consuming?
8:22
I loved this question, and the reality is this is not a new practice. Patients have always brought their own information into these appointments, because they are trying to be resourceful and trying to show up with good intentions to educate themselves, whether that's printouts, apps or Google searches. AI just happens to be the newest version, one with a component of translation that tries to take in as much context as possible. So what I would recommend to clinicians is that the most productive path is not to dismiss the effort. Instead, it's to look at the information and say, let's look at this together; how can I help digest this? There's no way for ChatGPT to have the entire context; it only has the context of the information you're feeding it. But the clinician is going to have a broader range of information to help paint the rest of the picture. When you as a clinician show that you're not just an interpreter of data, it actually adds to your credibility and trustworthiness to be able to say, I'm also an interpreter of the larger context; let me help paint the larger picture for you. So a huge piece for clinicians is to understand and focus on what part resonated with, and was most concerning to, the patient, so they can bring in the rest of the context and make sure the patient is oriented around the information that matters most.
10:14
It's no news to anyone that AI can make mistakes and give back some responses that aren't entirely accurate. For physicians who worry about patients getting incorrect or unsafe guidance from these tools: are they truly safe for patients to use today, and what specifically should clinicians or health systems be doing to put safeguards in place before they recommend these tools, or even tolerate them, really?
10:39
Yeah. With AI, it's not that it can make mistakes; it will make mistakes. That's not just a hypothetical, it is a reality. So I think the best thing we can do to empower patients is to educate them on those disclaimers, making sure that we're talking about these tools that people are ultimately exploring whether or not you're promoting them. Let's instead enable education and have conversations about it proactively. As a clinician, what I would be talking with my patients about is: how are you receiving information? What is the most effective way for it to resonate with you? And if AI is one of the topics they are focused on, I would share the things they should consider when using it, and how to best prep for our conversations together when we digest the information. So instead of clinicians trying to shy away from or avoid those conversations, they should be leaning in proactively and teaching their patients how to best use the technology.
11:54
What are some red flags you see when AI starts to displace the clinician in patients' minds, and how should health systems or practices design guardrails to prevent that?
12:05
Yeah, so I think a red flag is when the patient is saying it's easier to ask AI than my doctor, it's easier to interact with AI than my doctor, or AI is more empathetic than my doctor. To me, that's really a red flag about the relationship and the communication skills of the provider. There's a human element around those care interactions, particularly anything happening at the bedside around relationship-centered communication, that has to be kept at the forefront. As we see patients leaning into technology and AI capabilities more, it heightens the expectation for physicians and clinicians across the spectrum to educate themselves on how to lean in more as a human connection point in their patients' healthcare. When a patient has difficulty communicating, being understood or feeling valued, it's going to be so much easier for them to pivot and lean more firmly into an AI conversation.
13:36
Hey there, Keith Reynolds here, and welcome to the P2 Management Minute. In just 60 seconds, we deliver proven, real-world tactics you can plug into your practice today, whether that means speeding up check-in, lifting staff morale or nudging patient satisfaction north. No theory, no fluff, just the kind of guidance that fits between appointments and moves the needle before lunch. But the best ideas don't all come from our newsroom; they come from you. Got a clever workflow hack, an employee engagement win, or a lesson learned the hard way? I want to feature it. Shoot me an email at kreynolds@mjhlifesciences.com with your topic, a quick outline or even a smartphone clip. We'll handle the rest and get your insights in front of your peers nationwide. Let's make every minute count together. Thanks for watching, and I'll see you in the next P2 Management Minute.
14:27
For a primary care practice that's already overwhelmed, where does a tool like ChatGPT Health realistically sit in the patient journey? Should physicians encourage patients to use these tools, and should that be before a visit, or as follow-up after a visit? What are your thoughts on that?
14:47
Yeah, I think there are some key moments where it would be beneficial to leverage the information that's out there, particularly around education: what to expect from an appointment and how to prep for one, so that the patient's time in the office is most meaningful. What happens a lot of the time is that when a patient hears information for the first time in the office, they're stuck processing it before they have the ability to present whatever questions they might have about it. So if they can do their prep, ChatGPT Health, or again these other applications we've talked about, could be a great way to prepare for the visit, so that when you're there, you're having a conversation with your provider rather than your provider just giving you information. That's a huge piece. The same goes for any type of post-visit reinforcement: key takeaways, and how to act on the information your provider is recommending based on your care plan and your overall history, because the provider can supply that full context. And then also navigation and next steps: what to do from there and how to best connect with resources throughout the health system.
16:09
Looking ahead a few years, imagining that AI is embedded into most patient interactions: what would you want a skeptical physician right now, one who's worried about patients having access to these tools, to notice in their own practice that tells them this is actually helping their relationships and their patient outcomes?
16:28
Yeah, I think the key takeaway is that if AI is working in an intentional way, clinicians won't feel replaced. Instead, they're going to feel more present, because in the actual appointment, when you're face to face with your clinician, whether that's telehealth or in person, you're going to start seeing better questions, not longer visits. You'll have more efficient visits, with quality of time spent rather than quantity of time needed to address concerns, and less time explaining the basics. You'll start to see clinicians operating at the top of their license rather than answering the more basic questions, because ideally, if this is being used correctly, those will be sorted out prior to the visit or during those in-between interactions. It's really an exciting time: patients become calmer, and therefore more prepared to have a meaningful interaction with their provider.
17:37
Is there anything else that you think we might have missed or glossed over?
17:40
The last note I have to share is that when we think about the future, we need to not think about it as AI versus clinicians. How do we use AI to create more space for meaningful interactions of human connection and trust? That is ultimately going to be the greatest outcome: having those more meaningful moments with providers. Using this technology in an intentional, meaningful way will ultimately create more empowered patients, and on the provider side, less burnout and more quality interactions that providers will gain value from themselves, because they will be able to provide that additional context, that more holistic picture, and ultimately the greatest human connection back to the patients they're trying to serve and drive health outcomes for.
18:40
Amber Maracini, thank you so much again for taking the time today.
18:43
Yeah, my pleasure once again.
18:55
That was Dr. Amber Maracini, the vice president and head of healthcare and life sciences at Medallia. On behalf of the whole Medical Economics and Physicians Practice teams, I'd like to thank you for listening to the show and ask that you please subscribe so you don't miss the next episode. And don't forget: Physicians Practice will be hosting a Practice Academy event on March 19, featuring a new practice management track with practical, actionable education for physicians and practice administrators. You can register today by clicking the link in the show notes or by going to registration.physicianspractice.com. As always, be sure to check back on Monday and Thursday mornings for the latest conversations with experts sharing strategies, stories and solutions for your practice. You can find us by searching Off the Chart wherever you get your podcasts. Also, if you'd like the best stories that Medical Economics and Physicians Practice publish delivered straight to your email six days a week, subscribe to our newsletters at medicaleconomics.com and physicianspractice.com. Off the Chart: A Business of Medicine podcast is executive produced by Chris Mazzolini and Keith Reynolds and produced by Austin Luttrell. Medical Economics and Physicians Practice are both members of the MJH Life Sciences family. Thank you.