- Welcome to "The Minor Consult," where I speak with leaders shaping our world in diverse ways. Today I'm delighted to be joined by Eric Horvitz, Chief Scientific Officer at Microsoft. A pioneer in artificial intelligence with early roots at Stanford University, Eric has spent decades driving innovation while helping define the principles of human-centered AI. He served on the National Security Commission on Artificial Intelligence, co-founded the Partnership on AI, and launched Stanford's One Hundred Year Study on Artificial Intelligence. At Microsoft, Eric helps guide the company's scientific direction in AI, advancing cutting-edge research while shaping how these technologies are developed and deployed responsibly at global scale. Eric, thank you for being here.
- It's great to be here.
- So, Eric, you're a Stanford alum. You received both your MD and your PhD degrees from Stanford. You've gone on to do many, many exciting and wonderful things. But walk us through your career. What brought you to Stanford? What did you do here, and what did you do right after graduation from Stanford, and how have things evolved?
- I came to Stanford with my deep curiosity focused on the foundations of cognition, the human mind. I had opportunities to go to many places, but looking at my acceptance brochures for MD-PhD programs at the time, my assessment was, boy, you know, I don't know where my interests will take me, but I think computation will be central. And even though I was aiming at the neurosciences, neurobiology, I reflected that artificial intelligence would likely be a tool for understanding the deep principles of cognition someday. And that made Stanford Medical School a top-ranking option on my list. I thought, "Boy, I could potentially do artificial intelligence research, if that becomes something I want to do to understand intelligence better, while pursuing my curiosity about neuroscience in the context of medicine." Also, Stanford was different from some other schools in that the main campus for scholarship in engineering was just a bicycle ride away from the anatomy classes, for example, in the first year of medical school. So I came to Stanford with that goal in mind, and in my first few months I started taking that bicycle ride over to main campus quite often, sometimes right after anatomy or neurobiology or my pathology classes, and began immersing myself in the world of artificial intelligence through graduate work, eventually taking it on as a very serious endeavor, which became my primary direction for pursuing intelligence.
- That's great. And after Stanford, what was your... What did you do right after Stanford and how did that lead you to Microsoft?
- Well, I should say I don't recommend this to my son, who's a PhD student in AI right now, nor to my colleagues or students that I work with. During my PhD work, while I was deeply focused on the research, I and some colleagues got so excited about what we were doing that we worked with the Office of Technology Licensing at Stanford and kicked off a startup company right in the midst of our research work. So here I was focusing on, you know, making strides in core research. In those days, it was harnessing probability and utility theory in medical diagnosis, but also building more fundamental principles of intelligence, models of bounded rationality. How could systems operate under limited resources like time and data? Of course, in those days, those resources were very, very significant constraints on how we designed those systems, as they still are. We'll never escape those constraints; you can't have infinite time or compute or data. I should say that standing up a company made things even more real for me, and again, for a couple of my co-founders. And as I finished my academic training, we heard that a large company up north, up in Seattle, was interested in the techniques we were using, reasoning under uncertainty and decision making under uncertainty in medicine and other areas by that time, and wished us to come join and to think about having our company acquired by that large entity up north. We looked at that with some skepticism. Microsoft told us they were building a research unit that someday our work could help with. Eventually we were impressed enough to join Microsoft, and I thought I'd stay about six months, meet people, learn the lay of the land, leave my co-founders there, and go back to academics to do more research, particularly in artificial intelligence at the time. And I ended up staying until today. It's now 32 years, and it seems like I'm just getting going.
- Fantastic. Eric, you spent decades at the forefront of AI and you've said that AI will change the trajectory of human existence. Where are we today on that trajectory, and what should we be paying closest attention to right now?
- It's such an exciting time. I mean, I think we're in the middle of a genuine phase transition with technology and its influence on people and society. You know, in AI we've moved from building narrow, task-specific tools to now interacting with broadly capable systems that I call polymathic in their abilities to reason, communicate, and collaborate, with influence and relevance to almost every human domain. And so the trajectory is no longer set by the prototypes in the lab that I had worked on for many years. It's, in some ways, unfolding in our pockets and on our desktops. So I look at this moment as navigating two different kinds of influences. There are the fast-moving surface waves of sorts, the rapid breakthroughs in what we see AI can do. Then there are the slower, deeper currents that may ultimately have even deeper consequences for humanity. On the surface waves, we're seeing issues like disinformation and its influence on democracy, where AI is generating persuasive synthetic narratives at scale. We're also seeing tension with the dual uses in the sciences. There's incredible opportunity with the sciences, with AI accelerating lifesaving discoveries. But those same tools can lower the barriers to misuse, for example, with AI-powered challenges in biosecurity. We have to face these dual-use risks with vigilance. On the deeper currents, though, there are potentially subtle long-term transformations of human cognition, human agency, our identity. And so we need to start thinking deeply as a society about these potential influences. I mean, think about education from K through 12 or the professional programs. The core question now is not just, you know, can students use AI to write essays? It's how is AI shaping their independence of thought? So we need best practices that ensure that AI provokes thinking and curiosity rather than replacing it. The same applies to how we think about the workforce and the economy.
We're seeing young people, I've talked to Stanford students, for example, fundamentally rethinking their career paths and their sense of-
- Sure.
- Self-identity based on what they see these machines doing. Whether it's in clinical medicine, where we need to pursue and design tools that support a doctor's judgment rather than black boxes that erode it, or in our broader social lives, I think we're still early in our ability to measure and monitor these longer-term impacts. But the question coming to the fore now is no longer just what can AI do. I think the question for this generation and for generations to come is how is or will AI shape us, and how do we guide and shape it? I think we have to ensure these systems remain in the service of our human aspirations, supporting our autonomy rather than diminishing it over time. That's gonna be a risk.
- You were a founder of Stanford's One Hundred Year Study on Artificial Intelligence, and the most recent report, in 2021, was entitled "Gathering Strength, Gathering Storms." It documented AI's rapid technological progress along with growing ethical, societal, and governance challenges. So now, five years later, what advances have most surprised you over this period of time, and what are you most excited about in terms of the future?
- Yeah, that title, "Gathering Strength, Gathering Storms," proved to be remarkably prescient.
- Yes.
- You know, when we released that report in 2021, the study panel was signaling a kind of threshold. AI was moving from the lab into the fabric of everyday life. The team saw potential, but also clouds forming around fairness, transparency, governance. I'd say what's surprised me the most since then is the magnitude and character of the leap that was driven by scale. You know, in early 2022, while exploring early versions of what would become known as GPT-4, it was clear we weren't just seeing incremental improvements of the kind I expected we'd see at the time that report was written by the One Hundred Year Study on AI. We were seeing a qualitative jump.
- Yes.
- And the fluidity with which the systems could operate across domains, medicine, law, programming, abstract reasoning, often without any task-specific training, was just remarkable. As for where we stand today, I'm in a state of guarded optimism. We've made meaningful progress in recognizing those challenges that were pointed out back then. You know, studying concerns about the rough edges of AI is no longer a niche academic conversation, whether it's the Partnership on AI, a nonprofit group I helped to co-found, bringing in civil society, or even the FDA shifting toward lifecycle-based oversight for medical AI. We're seeing some real, thoughtful engagement. For example, at Microsoft and with partners, we're working on media provenance to respond to the potential coming storm of misinformation we warned about. These efforts have raised my hopes that we can invest in technical and socio-technical solutions to address some of the concerns and also work on governance. The defining challenge of our era moving forward, I think, is what I refer to as the asymmetry of speed. You know, the technology is advancing in discontinuous, rapid leaps. I expect another few jumps ahead. I think 2026 will be a big year for what we'll see in the capabilities of AI systems. But the institutional responses remain more incremental, almost by necessity. You know, our organizations and government move more slowly than the technology.
- Sure.
- But I think we're in an interesting transitional phase. The awareness is at an all-time high, and the strength of the technology and its influences are undeniable. But there's gonna be a challenge with closing the gap in our understanding of how the technology reshapes human agency and social dynamics, and closing the gap between our scaling capabilities and our ability to guide the technology thoughtfully. That juncture is where some of the most exciting work is going on today. So I'd say, just quickly, what excites me the most is the sheer possibility of the space that is opening up here. You know, when I first came to Stanford decades ago, I was, as I mentioned earlier, driven by a fundamental curiosity about the science of cognition. Like, what are the foundations of intelligence? And today we're seeing AI systems that aren't just tools. They're becoming what I like to look at as new kinds of lenses through which we can study intelligence itself.
- We at Stanford Medicine have had the privilege of collaborating with you and your colleagues at Microsoft on a number of different applications and tools involving AI and the delivery of clinical care, and you've led many research efforts that have produced some extraordinarily impactful papers. And we've sort of moved from, if you will, the mundane, showing that generative AI can do quite well on medical licensing exams. And I guess that's not too surprising given that it's trained on basically all the digital information that's out there. But now, through some of the more recent work that you and colleagues here at Stanford have been doing, you've shown how AI can really augment and improve both the diagnostics being done by clinicians and the care being delivered in complex medical settings such as tertiary care hospitals like ours. A through line for you, Eric, throughout your career has been this enduring interest in healthcare and biomedicine, from the company you founded when you were a PhD student here to the work you've done in the decades you've been at Microsoft. And I'm sure that these last five years in particular have been incredibly exciting because now it's all happening, and it's happening at a really breakneck pace. So walk us through where we've come, particularly over the past five years, and where you see us going in the delivery of healthcare.
- Well, it's truly an exciting time. I mean, first of all, let me just say that the effort and innovation going on in some of the core directions I've been excited about, like clinical decision support, are happening much faster in research labs, in prototyping, and in startups than they're diffusing into the daily workflows of healthcare delivery. That said, the most meaningful impact we're seeing right now is AI lifting the cognitive tax of administrative work off clinicians. We're seeing fairly rapid adoption of ambient clinical intelligence. These are the systems that listen to, for example, the patient-physician conversation and handle documentation in real time. I think that's one of the number one drivers pointed out by physicians that leads them to burnout.
- Absolutely, it's a game changer. Yes.
- Game changer, I think. I would never have expected 25 years ago that that would be one of the big early applications of AI in healthcare. Another area that's less flashy but still powerful is that the older technologies, the older machine learning technologies, learning from data to build predictive models, are being fielded around the world. And this includes models that can predict readmission, for example, or predict patient deterioration in advance of it happening, sometimes hours to days in advance, to minimize unplanned ICU transfers, or predict sepsis or other kinds of bacterial infections in advance to guide antimicrobial stewardship. Again, these are happening in pockets right now, but you see the promise there. But the biggest untapped opportunity is how we can harness AI to support and help physicians with complex diagnostic reasoning challenges or the-
- Yes
- Longitudinal support of patients over time with their therapy. I think moving into this world where these systems are becoming part of the workflow is going to take longer than we thought, potentially. You know, I think I mentioned last year at the RAISE Health Symposium, and thanks for hosting me there, that in the late '80s, when we built systems that were running at expert levels, even if they were narrow at the time and hard to build, I had thought these systems would explode onto the scene immediately. But it just turns out that the realities of the practice and, I'd say, the long-grooved workflows aren't necessarily amenable to accepting new technologies like AI without careful thought and integration over time, and without the culture changing to accept these kinds of tools.
- Right.
- But I do see great opportunity there. And as you mentioned, some of my recent collaborations with teams at Stanford Medical School are very exciting in that they're pointing the way for how these tools can become more like teammates or collaborators than standalone tools that are called on like a search engine. And in that work, we're seeing a big upside and lots of headroom in the design realm. How do we design interfaces that can truly take input from clinicians and then have a rich dialogue back and forth, that synthesize and augment the clinician's expertise, that complement it in interesting ways, that come forward at the right time? Not just the kind of tool that is engaged like a search engine or a ChatGPT-style interface, but one that's there in the background, that's overseeing, that's monitoring, that's being proactive, for example, that understands the workload on physicians and how to fill the gaps when needed.
- That's fascinating. You know, many have spoken about how AI is gonna change the role of the physician, change the way medicine is practiced. You just mentioned ambient AI, which is being deployed extensively, evolving rapidly, and really is a game changer for physicians and other health professionals who have, in the past, spent far too much time, you know, documenting things. Documentation is important. None of us would go back to the days of paper medical records, but one of the unintended consequences is that it does require additional steps on the part of care providers in feeding the medical record. And now with ambient AI, of course, that's being assisted. But what will be the role of the physician? You are a physician. You trained as a physician. You also went through the medical education system at a time when there was still a whole lot of memorization. I'm not saying that's gone away, but we've gotten away from it to an extent. But what's the role of the physician going to be in the future, and how do you think medical education should be changing as we prepare the next generations of physicians?
- These are really interesting questions and this is gonna be part of an ongoing dialogue.
- Sure.
- Certainly the role of the physician in a world where we have brilliant automated reasoning systems may shift in a variety of ways, some we might not be able to predict now. I see the clinician as being central in care for the indefinite future. I mean, forever, in particular. I think that the human touch will always be critically important in healthcare delivery. I see expert clinicians having a real good sense for looking at a patient and seeing, in a very high-bandwidth way, from the point of view of human-human connection, what might be going on in that patient's life, both physiologically and psychologically. We ascribe responsibility to human clinicians as the key decision makers. And the borders might be changing as to what kinds of automation we might admit someday. We're seeing some kinds of automation coming to the fore, creating points of discussion, for example, in systems that are prescribing and re-upping prescriptions and so on. But I do think that there'll be a continuing centrality of the human in the delivery of healthcare.
- Sure.
- There's a question about how education should work. It's interesting, I've spent time at several medical schools. I was invited to speak to medical students and medical educators, to reflect with them about how medical education might change, or should change, in this world where the tools available to our graduating classes of medical students, going on to residency and then to practice and professional life, are changing. How do we prepare our students for worlds where they will be using these tools to augment their decision making? What are techniques for transforming what might be consultation tools into expert pedagogues? And thus the whole area, more generally, of leveraging AI in education is an evolving field with many, many open questions and very exciting directions. You can imagine AI systems that understand and model a medical student's comprehension of complex topics and fund of knowledge, for example, and that understand how to engage and work at that border to make learning and retention more effective.
- Eric, in healthcare and beyond, leaders are still figuring out how to govern and regulate AI. And you've spoken a little bit about that today. Maybe you could describe a bit further, in your role at Microsoft, how do you think about these challenges and where do you see opportunities for shaping AI responsibly?
- Well, at Microsoft, back in 2016, we stood up a committee that was looking at AI ethics and effects. And the first thing we did was think through what our basic principles were. We came up with six principles that have stood the test of time, and many groups have principles like that. For us, it was fairness, reliability, privacy, inclusiveness, transparency, and accountability. But principles are just words; you have to operationalize them. So thinking through how to operationalize is gonna be very important for organizations. You know, at Microsoft, we built out an office that creates a set of guidance for compliance. We have a document called the Responsible AI Standard that we follow. We have a sensitive uses review program. We have a series of studies, including one ready to go at any moment for a disruptive advance in AI. We do a report for the company, sometimes taking those public. And we have some review processes in place, some jointly with other organizations. For example, we have a joint Deployment Safety Board with OpenAI for any significant deployment of a new technology or a new model. I think there's gonna be an opportunity to think deeply about how to take frameworks like the one we've created at Microsoft, going from principles to review processes to operationalized programs for monitoring and responding to advances in technology, and to think through how we would scale these through healthcare and government, to do similar things to what we've done, I would say, in a self-regulatory manner at Microsoft. Certainly it's been, for me, a process full of learnings. It requires incredible coordination, reflection with multiple stakeholders, trying things out, seeing what works and what doesn't. And even with all our processes and our approach, we're always learning from experience.
Seeing a failure, for example, coming back and thinking through how we can do things better over time, I think will be important.
- Eric, maybe as a last question on AI before we turn to our concluding topics. Obviously you've been deeply involved in the AI revolution now for many, many years. You've tracked it, you know the technology really well, and you mentioned before in our conversation that you were surprised not by the incremental advances, but by the quantum leaps in the impact of the technology, the scope of the information and knowledge that the technology's been able to bring to users between different versions of GPT, for example. Looking into your crystal ball, then, if you project out the next five years, the next 10 years, how do you think AI technology is going to continue to evolve? Do you see these quantum leaps forward? Do you see many of them over the next five to 10 years? If so, what areas will they be in? I mean, most of what we've been talking about today are large language models, so with an emphasis on words, language, ideas, knowledge that's transmitted through words and language. Now, of course, we have images that are a part of the same sort of approach, and increasingly other information, other data, is being taken in. And so how will all of this come together? And you know, there's an old saying in technology that I think AI has proven to be naive: that we tend to overestimate what can happen in the short term and underestimate what can happen in the long term. But I think in this case, as you've said, we've perhaps been underestimating what can happen in the short term, at least in the evolution of the large language models and what they bring to us. So with that being the case, you know, what is the intermediate to long term gonna look like? There's a lot packed into that, but I'd love your reflections on where you think the future is gonna be.
- Yeah, I think we will see more jumps of the form that surprise us. We're truly working in a remarkable space where the name of the game is gonna be emergent capabilities.
- Yes.
- This year is gonna be a particularly interesting year in itself. Just to mention where I think we are right now, putting the thermometer in the technology, this is one of the first times we're starting to see a significant quantity of AI resources and effort focused on applying the powers of AI to make AI itself better. The phrase recursive self-improvement has been used, often as a speculative concept, even one from fiction. We're seeing that now. So as AI is applied to AI itself, it's creating an acceleration that's going to give us even bigger jumps than the AI that was guided more intensively by human AI scientists. There's an interesting set of questions that comes out of that, you know. For example, how can we preserve the intelligibility or the interpretability of AI as AI begins to build itself in complex, high-dimensional spaces? I think we need to preserve that as a first-class goal, keeping human scientists in the loop on how AI works, for example. And it's gonna be an interesting issue moving forward. I like to think that we're going to see more surprises than we expect on the front of the transformative influences of AI on the sciences. I think it's going to be so impressive. I think that we're gonna be applying these tools of artificial intelligence, this constellation of tools, to start making faster-paced progress into the magic of biology.
- Yes.
- And the implications of that will be enormous for biomedicine, for new kinds of therapies and for understanding illness much more deeply and coming up with remarkable measures that could extend healthy lives by decades. And we'll see similar opportunities in other areas of sciences and engineering, like material science, new kinds of materials, new ways to capture carbon from the atmosphere, for example.
- Right.
- Moving to the future, let me just get a little bit more cosmic.
- Okay.
- I really... My greatest hope is that we look back upon this era that we're in now, even from hundreds of years from now, as the moment we successfully thought deeply about what it would take to build a partnership with these tools for the ages. One where AI didn't just solve our problems, but elevated our humanity. I think this is possible. I'd like to think that in a world of rising automation, recommendation, and decision support, we fiercely preserve our human agency and autonomy. So I wanna see a future where we were able, as people and as a society, to harness these cognitive amplifiers, to democratize expertise, to tackle our hardest, most wicked global challenges, all while ensuring that the technology remained in the service of people, of individuals.
- Right, right.
- Because I think success is not, you know, about building more capable machines. It's about ensuring that as these AI tools, these tools of cognition, grow more powerful, we as individuals become more capable, you know, more curious, more empowered to pursue the goals that define us. And I wanna look back someday and say that we expanded human potential without eroding the values of dignity and self-efficacy that make us human. And if that happens, I think we'll have succeeded.
- Eric, that's such an inspiring vision. And I'd like to end with two questions that I ask all my guests. First, what do you think are the most important qualities for a leader today?
- Boy, I mean, I would say number one, a defining quality for a leader today is intellectual humility, you know. You need to have a deep, honest, even openly honest awareness of what we don't know, because the ground is shifting around us very quickly right now, and there are lots of questions and unexplored possibilities and challenges. I think that humility has to be paired, at the same time, with the courage to act under uncertainty based on the best intuitions and expectations we have. We can't wait for perfect data when the technology is moving around us in these discontinuous leaps. We have to lead from a place of what I would maybe refer to as principled intuition. Equally, I think in this day and age, with these polymathic systems around us and applying in so many ways, sometimes jointly in multiple ways, let's say in a healthcare system, a leader has to be a multidisciplinary bridge builder, assuming we're gonna work by drawing upon the expertise of multiple people, even people who might not have been part of the story in the past when we made decisions. So, back to my comment before, I think a leader needs to connect the surface waves of technical insights with the deeper, slower-moving currents of where things are going, as the technology sloshes in very fast ripples as well as deeper waves, and bring diverse perspectives to the table. So I think the best leaders won't just be managing teams. They'll be kind of stewards of human agency over time, whether in an organization or in thinking deeply about the implications of what we're doing now for the broader society.
- I love that description. I think that's so eloquent and it reflects the depth and breadth of your thought and your knowledge. And maybe in closing, last question, Eric, what gives you hope for the future?
- What gives me hope is the seriousness of the engagement that we're all having. I mean, the fact that the Dean of Stanford Medical School is asking me questions like this gives me hope. You know, we are seeing a seriousness of engagement in the discussion about AI and its influences, both the possibilities for how it might change the way things are done and the concerns and rough edges. It's no longer a niche conversation. We have clinicians, policymakers, even historians all in the same room, you know, tackling the challenges together. I'm particularly encouraged by the next generation of scientists and students. They're gaining a kind of bilingual fluency, able to think deeply about the technical power as well as the socio-technical implications. I can't tell you how exciting and rewarding it's been for me to work with past versions of myself, Stanford medical students, Stanford MD-PhD students, 'cause I'm living locally to Stanford for half the year now. And I've taken that opportunity to immerse myself with teams and to provide mentorship. And I often learn more from mentees than I provide. I'm just amazed at how excited, rather than anxious, even medical students are about the future ahead, where these tools will be part of their daily life. And that raises my hope for the future.
- Well, let me just say, Eric, we are so delighted that you are spending time with our students, with our faculty, and that you remain a very, very valued member of the Stanford community. And thank you so much for chatting with me today. And thank you for listening to "The Minor Consult" with me, Stanford School of Medicine Dean, Lloyd Minor. I hope you enjoyed today's discussion with Eric Horvitz, Chief Scientific Officer at Microsoft. Please send your questions by email to "The Minor Consult" at theminorconsult.com and check out our website, theminorconsult.com for updates, episodes and more. To get the latest episodes of "The Minor Consult," subscribe on Apple Podcasts, Spotify, or wherever you listen. Thank you so much for joining me today. I look forward to our next episode. Until then, stay safe, stay well and be kind.