Speaker 1 00:00:00 I will say, you know, rarely in past sequences of technology disruption have I seen this speed and velocity of change. This is probably one of the fastest I've ever seen.
Speaker 2 00:00:26 Welcome to Off the Chart: A Business of Medicine Podcast, featuring lively and informative conversations with healthcare experts, opinion leaders and practicing physicians about the challenges facing doctors and medical practices. I'm your host, Austin Littrell. In this week's episode, Medical Economics managing editor Todd Shryock is joined by Isaac Park, the CEO of Keebler Health. They're talking about AI tools that are designed to act on behalf of users, known as agentic AI, and how they can be used in healthcare.
Speaker 3 00:01:00 I'm here with Isaac Park, the CEO of Keebler Health, and today we are discussing agentic AI. Isaac, thanks for joining me.
Speaker 1 00:01:08 Thanks for having me.
Speaker 3 00:01:09 So, Isaac, I think a lot of doctors out there are probably familiar at this point with things like ChatGPT and your typical large language model AI. What is agentic AI? How is it different?
Speaker 1 00:01:25 Yeah.
Speaker 1 00:01:25 Great question. And and honestly, even the idea or the concept of agent AI is still kind of in the gray area is there's still sort of more, molded unmolded sort of definitions for what it is, but loosely it would be a robotic process that leverages large language models or leverages transformer like technology and may include actually other non transformer or non large language model technologies and kind of wrap them all together and try and sequence those things together to do more complex processes and make more decisions over a larger chain of thought process. Does that make sense?
Speaker 3 00:02:05 What can it do to help physicians? How how will it help the health care system?
Speaker 1 00:02:13 Yeah, great question. So maybe backing it up a little bit more to why people even use the term agentic AI. It may be more helpful to think about it from the perspective of what these systems are supposed to do versus traditional large language models, and that's the root of the term. The idea is that it's supposed to have enough decision-making or judgment ability that it's intended to replace, like an agent, a human workflow or a human process.
Speaker 1 00:02:43 Right? So you give it enough information, or access to information, give it enough parameters to say, here are the general guidelines for the decisions it should make, give it enough guidelines for what you want your output to look like, and then just say go, and let it figure those things out. In many cases, it's even mimicking how a human might act, behave, or interact with another human. So the most popular use cases for some of these agentic AI products these days might be a chatbot or a customer service representative, where it leverages large language model or transformer-like technology. You give it enough goals to say, hey, answer this person's questions based off my repository of knowledge on how to use this car, right? And then as an individual is asking questions about how to use that car, the agent can act like a human, interpret those questions, interpret the car manual, and then respond in a human-like way.
Speaker 1 00:03:46 Right. So you can take that analogy and say, okay, this is a good adjacent expression for what could happen in healthcare. If you need to take a set of complex interactive tasks that involve a sequence of human judgment, can you then start applying that to how a human might interact with that set of conditions?
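The chatbot pattern described here, a goal plus a knowledge repository plus human-like answers, can be sketched loosely in code. Everything below is illustrative: the `Agent` class and its keyword lookup are stand-ins for the LLM calls a real agent would make.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    goal: str
    knowledge: dict[str, str]  # e.g. sections of a car manual

    def interpret(self, question: str) -> str:
        # Stand-in for LLM intent parsing: pick the manual section
        # whose topic keyword appears in the question.
        for topic in self.knowledge:
            if topic in question.lower():
                return topic
        return "unknown"

    def respond(self, question: str) -> str:
        topic = self.interpret(question)
        if topic == "unknown":
            return "I couldn't find that in the manual."
        # Stand-in for LLM answer generation grounded in the repository.
        return f"Per the manual ({topic}): {self.knowledge[topic]}"

agent = Agent(
    goal="Answer questions using only the car manual.",
    knowledge={"tires": "Inflate to 35 psi when cold.",
               "oil": "Use 0W-20; change every 7,500 miles."},
)
print(agent.respond("What pressure should my tires be?"))
```

The point of the sketch is the shape, not the lookup: the agent is bounded by a goal and a repository, and it refuses rather than invents when the repository has no answer.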
Speaker 3 00:04:07 So what's a good use case for this? Because I think we all remember the early warnings about AI, you know, telling people to put glue on pizza and rocks on pizza. Do we really want AI making decisions in health care? So I guess talk about what would be a good use for AI.
Speaker 1 00:04:31 Yeah, great question. I'd say, generally, what we're seeing in the market these days is that agentic AI, if it's being used at all, is typically being used in areas that are low clinical impact but high administrative, repeatable impact. So that would be, in some cases, scheduling, or acting as a customer service representative in many of the ways we just described in that analogy.
Speaker 1 00:05:02 but also in, in more mundane tasks like EHR, sort of. Ingestion or digestion or something like that. Right. Or revenue cycle management where you're saying, hey, read this thing and tell me yes or no. Is it going to do this or that? Right. Is this document there or not there? Right. So you're things that have less, much less of a clinical impact and much more of a sort of process, creative or, excuse me, a repeatable process within a bound of set of creative directions. it's probably worth noting that, while it is very popular, it's a very popular topic. And actually these days, within even within the last few weeks or months, weeks, the the idea the term agent is, is, is also even becoming less sort of popular as you're sort of seeing these newer foundational models come out. They're called large reasoning models. Right. So it's even larger context windows than then large language models and they are establishing chain of thought. So the large reasoning model actually just does a bunch of sequences for you.
Speaker 1 00:06:09 Really fascinating stuff. But the question is kind of how effective are they? Where can they be used? Where can they be applied? Right. the general rule argument, I think what I'm witnessing here, even in healthcare, is that, you know, it is not when the consequence of a particular result is extremely high, like in healthcare, you generally don't want to rely on just an agent alone process, right? And in a highly regulated environment where things are literally life or death, like in health care, right? You probably want to away from that side, right? Unless you have a lot of what we call human in the loop interventions or checkpoints.
Speaker 3 00:06:52 So we probably don't want agentic AI taking over the role of a doctor or making a diagnosis or anything like that. But is there a way to incorporate it so it's helping the physician make a diagnosis, where the physician is still in charge of the decision making and the analysis, but it's more of a tool to help him or her? Talk to me about how that might look.
Speaker 1 00:07:16 Yeah, I think that's the right approach and philosophy, and in many ways, I think even the language you just used, the word tool, is probably the best framing for what this is. You had that same sort of question around, okay, what is the internet going to do? What is the smartphone going to do? What is mobile technology, or social technology, going to do? I think the same paradigm is going to happen with generative AI, with large language model or transformer-based artificial intelligence: how can it actually help a human? How is it going to empower them? And like any tool, in the hands of a good human it's going to go really far; in the hands of a less qualified human, it probably won't matter as much, honestly. I mean, there's a lot of research coming out recently saying that even in software engineering, if you pair a software engineer with a large language model based tool to help them write code, really great engineers go really fast.
Speaker 1 00:08:23 and really early results were telling us that really junior engineers would increase their productivity quite a bit. What we're learning now is that, yes, while they went faster, it was kind of like gasoline on a a bad fire. Right. It actually made the bad fire worse. Right. So, in many ways, I think that same paradigm is going to apply to clinicians. it's going to apply to really any almost any, I would say industrial workflows. If you have a human that's empowered with the right judgment to understand what a large language model gives them or what a transfer technology gives them, they'll be able to utilize that tool and make whatever direct Trajectory. They were headed way faster, so if they were great, it's going to make them amazing. If they were terrible, it's going to make them even worse, if that if that makes sense.
Speaker 4 00:09:12 Say, Keith, this is all well and good, but what if someone is looking for more clinical information? Oh.
Speaker 5 00:09:18 Then they want to check out our sister site, Patient Care Online, the leading clinical resource for primary care physicians.
Speaker 5 00:09:24 Again, that's PatientCareOnline.com.
Speaker 3 00:09:31 So trust is such a big issue with I you know a couple bad incidents. And you know physicians are going to shy away from this as you know seeing as dangerous or ineffective. Like how do you make sure a genetic eye keeps that trust of the physician and the health care sector in general?
Speaker 1 00:09:55 Yeah. Great question. And you always say, you know, trust is slowly earned earn, but quickly lost. Right. and so particularly, I think, with the clinical community, I don't know very many other industries or professions where even just a five minute break is so, so cherished. right. Because it's just the amount of time and energy they put into their work is so condensed. it makes total sense that if you have a snap, instant bad experience, it's going to ruin a lot of sort of that, that relational trust, even more so right there on it in a clinical setting. So I completely agree. I think, you know, as purveyors of artificial intelligence technology, you you really only have a few shots at making that sort of trust.
Speaker 1 00:10:43 Establish and establish. Well, right. And set a good reputation. I think therefore, you know, cautiously and slowly in implementation, right, in the way that you build that relationship with a clinician, I think arguably also and I mean, we have our own sort of core value of transparency, right? making sure that when we make any suggestions or any artificial intelligence tool out there makes a suggestion, you back it up with a ton of validity around what evidence on where it came from, why, how clinical reasoning, confidence scores, all that kind of stuff to really make sure that you are providing enough information for that clinician to make an amazing decision rather than there. Right. A lot of it is just information. Can you expose enough information quickly so that that clinician can make the right judgment call?
Speaker 3 00:11:32 Let's talk about a few specifics here on how it might be able to help. When we survey physicians, two of the big areas that are challenges for them at their practices are staffing and documentation.
Speaker 3 00:11:46 Now, staffing, it sounds like maybe the scheduling aspect could be someplace where the I could maybe help automate that is am I is that true? Like how might it help with either of those two things specifically?
Speaker 1 00:12:03 Yeah, certainly. I would say that staffing in that in that time slot in component, that's a that's actually a pretty common challenge, right? Particularly if you have low, low constrained resources and you're optimizing for maximizing the amount of utilization of staffers. Right. and that is a pretty complex like traveling salesman kind of problem. Like, okay, if you only have so many slots and this person can't work on this day, and so what's going to happen. Right. So and these patients can only come in this days right. So yeah there's a lot of sort of moving parts there. I would actually say that, Traditional rules based sort of algorithm engines are doing a pretty, I would say, sufficient job, right, at employing or empowering people who are making those decisions. You would have to step function that up quite a bit in order for an artificial intelligence tool for you to trust that it's just going to do it right.
Speaker 1 00:12:55 And so I would argue that they're I'm really curious to see how that shakes out On the documentation side. I think that's already happening, right. When you're when you sort of look at the ambient graph. I mean, that was one of the first, I would say, wild enterprise success stories of generative AI in the clinical use case setting. Right. Is the ambient scribes listening in and saying, okay, here's the downstream document or documentation summarization, right. there's going to be a lot of challenges, right? in terms of where they go with that. Certainly there's a lot of promise. And that's why some of these large corporations like, like a bridge are raising massive valuations, because the promise for what you can do once you're injected or you're at that sort of you have the adoption and your tool is literally sitting there. There's a lot of promise for what you can do afterwards, right? but absolutely, I think that place is already making a major difference. I don't I there are so many clinicians that I run into where they're like, oh, that changed my life.
Speaker 3 00:13:49 Yeah, I've heard the same thing. I recently talked to a few that adopted it, and they were talking about like 80% time savings because it's basically creating the node for them, and they're just kind of signing off on it or making a few tweaks here and there. But I'm wondering what the next step would it would be, would it be the agenda? I may be going through all the notes, and maybe it's more from the health system side, going through all the notes and picking out, you know, the billing and, and kind of maybe doing several steps all at once. Like, how could it really boost efficiency there?
Speaker 1 00:14:26 I don't know if it will. And that's, that's I think kind of what the, the real question I think I have is I would argue that, yeah. Like what we do is, is sort of what you just describe a downstream. We take a bunch of unstructured, clinical notes that come from disparate locations, and we look for evidence for chronic disease.
Speaker 1 00:14:44 Right. And we highlight it and we do all the sort of transparency, the transparency things. I just I just described earlier, I think the the challenge here is healthcare is very, very complicated system. I don't know anybody who will tell you otherwise. And when I say complicated, I mean there's a large number of downstream sequential steps and the variance on what those steps look like are very, very wide, even down to whether you're in a fee for service or a value based care model. Right. Or whether you're back office or clinical and even clinical, whether you're sort of PCP or you're like specialty. There's just so much or ambulatory like location matters so much. There's just so much. and generally, I would argue, that when you have, I like to look at sort of large language model paradigms on two axes, you have a creative versus not creative, sort of structured set right, of decision making. And then the other axis being much more around, sequential versus not sequential, right.
Speaker 1 00:15:49 You have dependencies versus not dependencies where one is almost like a tree structure. Right? You have like versus a large boost graph that with many. And so that's kind of the major broad spectrum. And I'd actually argue that even on just this creative, not creative. Structured set, transformer based technology is very, very, very good at the creative side. They're actually pretty terrible at the structured side. And, even just a few. As recently as, like a year or two years ago, if you ask the large language model ChatGPT to do math, it couldn't do it for you. it's just really bad. I mean, and even now they've kind of fixed some of that. Right? But you kind of have to always still check your math when you ask it to do math, because you don't know if it's going to do it right. not really great. You know, if you're trying to depend on something like lab values that have math in it, right, to make a downstream set of decisions.
Speaker 1 00:16:45 same with genetic and sort of this protocol of multiple sequences. Agents are great when you have a set of sequences, but when you have a lot of complex graph decisions that go back and forth, that's where it gets really awkward. now, in a sort of clinical setting that you just described where like you have your, your complex note and or you have your note generated a note great creative task based on structured ruleset. And then you have to have it execute a bunch of things all in on in a non-sequential order that can vary highly from really creative tasks to very structured ruleset graph setting like math that that's that's really, really, really hard. I mean, even then you might say, okay, we can try it, but if the consequence of you getting it wrong is again like life or death, we should I would not write. I would say, you know, just kind of why I'm like, okay, it's probably not ready for agency, maybe ready for human powered. Right. applications that, you know, help you do governance around an entire process and you speed that human up.
Speaker 1 00:17:51 But I would not say, like, we should throw through that entire complex chain of events into the hands of an agent.
Speaker 5 00:18:01 Oh, you say you're a practice leader or administrator. We've got just the thing. Our sister site, Physician's Practice. Your one stop shop for all the expert tips and tricks that will get your practice really humming again. That's physician's practice.
Speaker 3 00:18:18 So, looking out 3 to 5 years, you know. Where do you see this going? What? What changes can physicians kind of expect to see?
Speaker 1 00:18:27 Yeah, I think you're going to start to see lots of little point solutions, right. Or even little. I mean, they may be all like underneath the umbrella of one software product, right. But you may you're going to probably start to see lots of little point workflows where a tiny little artificial intelligence thing might really amplify or speed up surfacing evidence for this one thing. Right? It may do it really well. Right. You think you're going to start to see sort of the, the the mini cuts of the paper.
Speaker 1 00:18:56 Like turn it into a beautiful kind of a snowflake. Right. And you start to see sort of that snowball of, of, I would say efficacy or, or efficiency improvement happen over time. I think if you had asked me six months ago whether we could get to sort of this, like fully autonomous agent thing, I would tell you that the rate at which these large language models, and large reasoning models is even bigger foundational models are growing that maybe we could get there. I, I don't know if you know, but Apple just put out a in a sort of research paper yesterday, yesterday, today that that kind of talked about the upper bounds of where these large reasoning models are going and saying, oh, it's kind of fake, right. Or like or, or sort of these large reasoning models when you give them a really high complex task, the, the sort of improvements, goes to near zero. and so even now there's like sort of, okay, there's turbulence or there's doubt on can we even get there? And so one of the fascinating and I think more interesting sort of, I think parts of being in this space is just kind of like the the projections swing wildly from week to week on where things are going to go.
Speaker 1 00:20:07 so if you're going to ask me six months ago. Yeah. Amazing. Today, I don't have no idea.
Speaker 3 00:20:13 So, at what point are we going to get to the, I think it was big thinker in the Hitchhiker's Guide when the all these agents say, yeah, the answer is 42 to every questions. Everyone you know.
Speaker 1 00:20:25 Yeah, yeah, yeah. The I would say it was the, the, I don't know. Right. the in the, in the industry, everyone is calling it our AGI, artificial general intelligence, artificial general intelligence and and similar to what I just described and like fluctuating an opinion on this large reasoning model idea. I think more more people are there in that on that AGI side saying similarly, they're saying three months ago was amazing. Yeah, we're gonna get there in like six months. And right. And I would say, you know, the large foundational companies like OpenAI and Cloud or Anthropic are betting that that's true. And now we're saying, okay, I don't know if that's going to be true.
Speaker 1 00:21:10 I will say I've, you know, rarely in past sequences of of technology disruption have I seen, the speed and velocity of change. Right, that I've seen in this particular we call them landscape shifts or technology shifts. this is one of the fastest probably I've ever seen. I don't think it'll be. I don't think it'll stay that way this that long. You know, I think the rate at change, the rate of change is just going to continue to improve. for any net new technology shift or net new technology that shows up. but will we get to AGI? I really don't know.
Speaker 3 00:21:52 Great. Good. Well, Isaac, this has been a fascinating conversation. I really appreciate you taking the time to speak with me today.
Speaker 1 00:21:59 Yeah, thanks for having me.
Speaker 2 00:22:10 Again, that was a conversation between Medical Economics managing editor Todd Shryock and Isaac Park, the CEO of Keebler Health. My name is Austin Littrell, and on behalf of the whole Medical Economics and Physicians Practice teams, I'd like to thank you for listening to the show and ask that you please subscribe on Apple Podcasts, Spotify, or wherever you get your podcasts so you don't miss the next episode.
Speaker 2 00:22:29 Also, if you'd like the best stories that Medical Economics and Physicians Practice publish delivered straight to your email six days a week, subscribe to our newsletters at MedicalEconomics.com and PhysiciansPractice.com. Off the Chart: A Business of Medicine Podcast is executive produced by Chris Mazzolini and Keith Reynolds, and produced by Austin Littrell. Medical Economics, Physicians Practice and Patient Care Online are all members of the MJH Life Sciences family. Thank you.