- Welcome to The Minor Consult, where I speak with leaders shaping our world in diverse ways. I'm delighted to be joined today by Dr. Dhruv Khullar. Dhruv is a practicing physician, an associate professor of health policy and economics at Weill Cornell Medical College, and a contributing writer at the New Yorker magazine. Much of Dhruv's recent writing explores the future of medicine, from how AI is reshaping diagnosis and the role of doctors to what these changes mean for patients and trust in healthcare. Across all of his work, Dhruv brings a rare perspective: the insight of a clinician combined with the curiosity and rigor of a journalist. Dhruv, thank you for being here.
- I'm so glad to be here. Thanks for having me.
- Let's maybe start with AI, which you've written extensively about. Where do you see the greatest promise for AI in biomedicine and healthcare delivery, and what should we be most cautious about?
- It's a great question. So when I think of the promise or the potential of AI, I often try to think about it in a few different buckets, because sometimes the conversation can become so muddied, because AI seems to be able to do so many things effectively. So I think about it in a few buckets. The first is probably administrative tasks and regulatory burden, and I think already we're seeing AI scribes entering the clinical workforce. And my read of the evidence so far is that right now they don't seem to be saving clinicians a lot of time, but doctors really like them. And so we see a lot of improved wellbeing and professional satisfaction. And then there's of course things around billing and documentation, and lessening regulatory burden, and I think that will probably be the first area where AI has a huge impact in healthcare. The second is patient navigation. And so, helping patients just navigate the complexities of the medical system, helping them understand their lab results, imaging results, maybe prepare questions for their next doctor's visit, help them understand what happened in the past doctor's visit. And I think eventually we'll have things like AI agents that help, you know, schedule appointments or automatically coordinate various aspects of care. Probably the third is drug discovery, and I think this is a really exciting area. We're seeing AI increasingly capable of identifying novel targets, understanding how proteins fold, scouring vast chemical spaces, designing new molecules. So there's this whole area of drug development, but I think also drug repurposing. You know, I profiled an organization last year called Every Cure, and what they're doing is really just trying to find new uses for old drugs. It's kind of low-hanging fruit that we haven't bothered to pick: we have all these existing drugs, and we should be thinking about how we can use them for other purposes. And the last big area is diagnosis.
And I think this is a huge challenge. It's a challenge for, you know, human clinicians as well, of course. It's a really monumental task to figure out all the different ways the human body can go wrong and then put it into a set of diagnoses that can ultimately help someone. But the technology is getting better by the day. And so LLMs are helping clinicians already make decisions. Doctors, and nurses, and others are turning to these models to get second opinions on their patients. And on the patient side, I think we're seeing some self-diagnosis. So in the past, people turned to Dr. Google, which is much less powerful than what we're able to do now with some of these LLMs. So those are kind of the buckets of big potential I see for AI.
- That's really helpful. And I like the way you crisply define and compartmentalize the various impacts of AI today and in the future into those four categories. Maybe we can focus a bit more on the fourth category, and that is the actual involvement of AI in making diagnoses and, increasingly, in clinical decision-making and, if you will, the practice of medicine. Where today do you see the most hopeful aspects of that? You wrote about this in an article, I think maybe last September in the New Yorker, but where are you seeing the most positive impact now and in the future, and where are the areas that really concern you?
- So last year, as you mentioned, I traveled to Harvard to witness the unveiling of this LLM called CaBot, which was named after Dr. Cabot, who came up with the CPCs that you might see in the New England Journal of Medicine. And I was really astonished to see how good it was at reasoning through a complex clinical case. And so what the researchers at Harvard did basically was they customized a version of a reasoning model from OpenAI, and they allowed it to kind of break complex problems into steps and, you know, refer to the literature before it comes up with its answers. And basically, it was able to not just solve some of these complex cases that are in the New England Journal of Medicine, but also to cite the relevant literature, and to show its reasoning, which is, you know, so important for what we do in medicine. And the model talked in the style and the cadence of a human physician. Its presentation was sharp and at times funny, and it got to the right answer. And so for a long time, I, you know, have been thinking about how it would be that AI could replicate the complex cognitive work that we're doing as physicians. But after watching the presentation, I kind of thought, you know, how could it not? This is definitely going to be a bigger and bigger part of healthcare going forward. But I think one of the things that is really important to point out is, when these models are going through these cases and having superhuman performance on some of these things, part of that is because the cases have been curated in a particular way, which is not always the case in medicine. Patients, of course, don't always read the textbook before they come in with a particular medical condition. And what I'm really concerned about is, you know, the idea of turning over our agency to these AI models. We should, you know, be aware that they are still fallible, and clinicians and patients always need to be the final arbiters of care, not computers.
And it's partly because AI models can still make mistakes and hallucinate, but partly because, you know, humans bring a certain set of preferences and values to the clinical setting, and we don't want to lose sight of those things. And I think one risk is, as AI plays a larger and larger role in healthcare, as we have separate systems that are often talking to each other, even small errors could be propagated across a system, and that could lead to some pretty serious downstream consequences. And so, you know, I'd love to talk about cognitive de-skilling as well at some point in the conversation. You know, the other big area that I'm concerned about is how do you train, you know, new doctors, new nurses, other clinicians in an environment where AI can offload a lot of cognitive tasks? And I think every profession is kind of grappling with this question because it is such a challenging one. You know, intuitively, it feels obvious that if you're outsourcing your thinking to a machine, that skill is going to atrophy over time. But there's also some evidence now that, for instance, gastroenterologists who use AI to pick up polyps on a colonoscopy become less adept at doing that themselves. And then the question becomes, you know, in a future in which AI pervades medicine and it's extremely good and effective, how big of a deal is it that we lose some of these skills? You know, we probably used to be better at palpating the liver or listening to the heart. Now we have echoes and CTs, but I do think that there's something different about the cognitive work, the critical thinking skills, the reasoning that goes into producing a diagnosis. So that's an area that I'm pretty concerned about.
- I think overall you have a very hopeful vision of the future and what AI will do. And maybe in talking about that and the de-skilling, you can also talk about what you wrote in your most recent New Yorker article in December about how maybe we'll actually get back to what physicians have traditionally done and what the profession for millennia has been focused on. And that is healing. So maybe you can talk about, or if you could share with us your thoughts on how the education and continuing education of physicians will change in an era where knowledge is so readily accessible, both the upsides and the downsides of that, and how you see healing really manifesting itself more prominently in the profession and the future.
- Those are great questions. You know, I am hopeful that AI will make a big difference, and a big positive difference, in healthcare, but I don't want that to come off as saying that, you know, somehow it will replace much of what clinicians do. You know, being a good doctor is about a lot more than analyzing data and answering questions correctly. You know, it's about things like exercising judgment and contextualizing information, eliciting people's values, taking responsibility for their care. You know, I heard about another piece recently on this TV show, The Pit, that I'm sure a lot of people have watched, which follows emergency medicine doctors over the course of just a single shift. And if you watch that show, it's clear how much of what a doctor does in just one single ER shift AI is nowhere close to being able to do. And some of that is because it's physical. Some of that is because it's emotional. Some of that is the basic fact that not all patients can type or talk, or sometimes they're not even conscious. And so, you know, none of that is to say that AI won't change care, and won't change it for the better, or change the way that physicians need to go about their jobs, but I don't think it's going to replace doctors in a kind of wholesale way. And so when I think about, you know, the future of how to think about training new clinicians, in some ways the more things change, the more they stay the same. So I think a lot of the fundamentals of what a doctor needs to do and know do remain the same. I think there's this kind of narrative that because we have AI and easy access to information, doctors don't need to know anything anymore, or they just have to be empathic human beings. And I don't think that's true. You know, to be a good doctor, even in a future where you have readily available synthesized information, you have to be able to vet the AI's output.
You have to be able to make the final decisions, you have to have an intuitive sense of what's right and what's wrong. And that, I think, will continue to require years of study and real-world experience. And so my advice to new doctors would be, you know, familiarize yourself with these new technologies, know how to make the most of them, but don't think of them as a replacement for your own clinical reasoning and your critical thinking skills.
- I think that's a great way to conceptualize it. And maybe building upon that point and some other things you've talked about earlier in our conversation today, you've also written and thought a lot about how the traditional role of physicians, and if you will, the cultural authority of physicians, is rapidly evolving, both because of societal and cultural factors and because of the increasing amount of information that's available to patients from alternative sources. Can you expand upon that a little bit in our conversation: how did that cultural authority evolve over time to reach perhaps where it was, say, 25 years ago, how has it changed since, and what do you see in the future?
- Yeah, so, you know, the first thing I wanna say is that I wrote that piece not to preach about what a doctor should or shouldn't be, but really because I was trying to figure something out for myself. And so for the past few years, I've had this kind of vague sense that something about the profession was changing and something about the profession needs to change, but I wasn't really able to articulate what was happening. And so that piece was an attempt to put some of that thinking into a more concrete form. And the basic idea that coalesced in my mind was this idea that doctors are losing a particular type of cultural authority. And by that I mean a combination of legitimacy and dependency, the idea that patients have to turn to doctors, or they ought to turn to doctors, to solve their medical problems. You know, now people have many more options for how they can go about accessing care and learning about their care. You know, some of this crystallized in my mind when I saw a patient. This is a middle-aged guy, he started feeling fatigued, and a friend recommended a supplement. And so he started looking into it. He saw on social media that people seemed to have good experiences with the supplement. He ordered it from a direct-to-consumer company. A few months later, he has a complication, he has swelling in his leg, he turns to ChatGPT, and it tells him that maybe he has a blood clot. And when I meet him, you know, in the emergency room, he actually does have a blood clot, and we do an ultrasound to confirm it. But what's so interesting about the whole experience was that until I saw him, his whole medical journey took place outside of the traditional medical system. And so he found this treatment by word of mouth, social media lent some credibility to it, he got it through a direct-to-consumer company, and AI kind of diagnosed the blood clot.
And so for all these reasons, you know, we need to think about what role does a doctor play when things like knowledge, and access, and authority are no longer kind of a monopoly of the medical profession. So what I was trying to understand and answer in that piece was, you know, what should doctors do, and how should they act in this new world?
- And I think one of your conclusions in the piece is that doctors should focus more on healing. Can you talk to us about what healing means to you? You describe an example in the December New Yorker piece, but can you expound upon that a bit in our conversation?
- Absolutely. So, you know, what I ultimately came to is that, even though we're in 2026, we have long had this kind of 20th-century understanding of doctors, where the medical profession is kind of a superpower, or a hegemon, and it kind of stands alone. And what I was trying to communicate here is that that carries a sense that doctors are kind of the high priests of healthcare, but we should actually think of ourselves as what we've always been for millennia, which is healers. And there are certain things that we can bring together in terms of technical competence, emotional intelligence, the ability to sit and be with people, help them through the most difficult periods in their lives. These are the types of things that people want from their physicians. I think, you know, fundamentally, of course, they want people to be competent and they want us to get to the right answer, but they want someone to manage uncertainty. You know, every medical problem has some level of uncertainty. They want us to, you know, take the best science, but also integrate what their preferences and values are, and bring those things together to come up with a treatment course. And ultimately they want someone to take responsibility for their care. They want someone to say, you know, this is what I think is right, this is, you know, how I'm gonna help you through this difficult experience. And all those are kind of fundamental aspects of healing. And so, you know, in the piece, I kind of close it with what I thought was such a beautiful epitome of this type of healing process. And so a woman, she was an older woman, she'd been struggling with alcohol use disorder for many decades. She'd been thinking about cutting back at some points, but, you know, her doctor had a hard conversation with her and kind of made it clear that, you know, this alcohol use was a problem for her health. And the patient came around to that.
She ended up cutting back, and quitting, and experiencing some symptoms of alcohol withdrawal. She felt tremulous, her heart was racing. She messaged her doctor, the doctor got back to her right away, and offered even to meet her at the emergency room if she wanted to come in and didn't feel comfortable at home. And ultimately, the patient, you know, didn't end up wanting to come to the emergency room, but the doctor checked in on her over the course of the night, and the patient told me, you know, that doctor is the reason I am sober. You know, she saw me through the hardest periods, and I'm on the other side only because of this doctor. And that's the type of thing I think we can still provide as a profession.
- You're of course involved in educating the next generation of physicians as a faculty member at Weill Cornell and through your various activities. What are you thinking about in terms of how medical education needs to change? One of the things that you've written about and mentioned in our conversation today is how, acknowledging that there are hallucinations, access to knowledge is becoming much more readily available, transparent, and accessible because of AI than it was with just the internet. And yet there are also these emerging needs from society for physicians, and also from the educational system. So how do you think about medical education today, how it should evolve over the course of the next five to 10 years, and how you're incorporating those principles into your work with the physicians and physicians in training that you're interacting with?
- So, I think one of the things that I've been trying to emphasize is just the power of communication and being an effective communicator. And that has to do with listening to patients and the public, and also finding ways to talk about things that are more compelling. And I think, you know, for a long time we've thought about communication as kind of a secondary skill. You know, you have to have the right knowledge and the expertise, and then communication is kind of the secondary thing. But I think in the modern world, we need to be in a place where doctors are able to get people's attention, to bring them along, to help them see, you know, some of what we see. All these types of things are more important than they've ever been before. And, you know, I fall into this a lot myself, where someone asks you a question and you say, well, the evidence says X or the evidence says Y. But a lot of people don't actually make their decisions by reviewing evidence or, you know, weighing the pros and cons of the data. And statistics aren't always persuasive. And so we need to understand that there are social reasons people make decisions, cultural reasons, political reasons, historical reasons. We need to get inside people's heads, and then tell them a compelling story about why, you know, we think that this is the right course of action for the future of their health.
- With the rapid evolution of AI and other transformative innovations in biomedicine, what do you think healthcare is gonna look like 10 years from now?
- Well, you know, predictions are hard to make, as they say, especially about the future. So I'm hesitant to make too many predictions, but I think that a lot of the forms of transactional care will be more automated. And so by that I mean, you know, a lot of your personal health data through wearables and prescriptions and electronic health records will be integrated into some type of AI. If you have a UTI, if you have an ankle sprain and need an X-ray, other types of less complex tasks will be taken care of by AI with maybe some distant human oversight. And already you see companies like Doctronic and Council that basically allow people to chat for free with a medical AI and then consult a human doctor only if they feel like they need it. And I'm sure you saw a few days ago, Utah announced it was partnering with Doctronic to allow AI to automatically fill certain prescriptions for patients as part of a pilot project there. So I think we'll see a lot more of that. I do think that a lot of the more complex forms of care that require trade-offs and values and hard decisions, those things will still primarily be delivered by humans who are assisted in some ways by AIs. And so if you have cancer, or autoimmune disease, or advanced heart failure, there's still gonna be a team of clinicians that is managing that. And my hope is that AI means that we have more medications that we can help people with, we are better able to monitor their symptoms, and they're able to report them in a more granular way. But I still think that AI will play kind of a secondary role in that space, and that humans will continue to be the leaders of the medical system there.
- As a clinician yourself, what excites you the most about practicing medicine in this future? And what concerns do you have, and what's your advice for today's medical trainees?
- Well, I think, you know, we're gonna have a more powerful set of technologies, by which I mean both AI but also medications and drugs. And I think GLP-1s, for instance, are kind of a primary example of this. We're going to have medications and treatment modalities that are ever more powerful, and that is going to allow many more people to live many, many more years in good health. And so I do think that from a biomedical standpoint, we're gonna have more power than we ever had before. And that's going to continue to increase at a pretty rapid clip. You know, I think we need to temper some of the optimism there by recognizing that technological advance is not enough to bring about a healthier world, or a more harmonious world, certainly. And so, you know, what is concerning me these days is that we do seem to be very divided as a country, as a society. You know, we have more information than ever before, but somehow it's harder to know what's right and what's wrong, and to have basic conversations with people with whom we disagree. And so some of the social, cultural, and political challenges that we are facing, not just as a country but really as a world, those are things that really concern me, and I think could have a detrimental impact on people's health and wellbeing if they're not addressed.
- Dhruv, this has been a wonderful conversation, and I wanna end with two questions that I ask all my guests. First, what do you think are the most important qualities for a leader today?
- Well, I guess there are so many, but one that comes to mind is storytelling. You know, the world, as we've talked about, is changing faster than it ever has before. There are so many things that are pulling at our attention. We talked about how there's more information and more access to information, but somehow it's harder to make sense of it all, just because we are deluged with all this data and information. And so I think the best leaders are people who have a way of making this all coherent, of telling a certain narrative that captures what is happening, why people are feeling the way that they are, and where we need to go from here. And so if there's one quality that I would try to emphasize, it's this idea of developing the storytelling muscle.
- And then finally, what gives you hope for the future?
- I guess, paradoxically, the past gives me hope for the future. It's kind of a strange thing to say, but I think about it as, you know, during turbulent times, when there's uncertainty and polarization, it's helpful to look to the past, if only to see that we've had social upheaval before. We've had huge technological revolutions before, and it wasn't always easy. But when people work hard, and they're dedicated, and they work together, we have been able to make a better world. Many more people lead better lives, with many more years in good health, than they did in the past. And so when I'm feeling overwhelmed, I take inspiration from the past to make me feel like we have the agency to create a better future.
- This has been a wonderful conversation, Dhruv. Thank you so much for joining me today. Thank you for listening to The Minor Consult with me, Stanford School of Medicine Dean Lloyd Minor. I hope you enjoyed today's discussion with Dr. Dhruv Khullar, associate professor of health policy and economics at Weill Cornell Medical College and a contributing writer at the New Yorker magazine. Please send your questions by email to The Minor Consult at theminorconsult.com, and check out our website, theminorconsult.com, for updates, episodes, and more. To get the latest episodes of The Minor Consult, subscribe on Apple Podcasts, Spotify, or wherever you listen. Thank you so much for joining me today. I look forward to our next episode. Until then, stay safe, stay well, and be kind.