We are here today with Dr. Yuan Luo, the chief artificial intelligence officer of the Northwestern University Clinical and Translational Sciences Institute. Dr. Luo is also an associate professor at Northwestern University Feinberg School of Medicine and is an expert in multimodal AI/ML for healthcare. Yuan, thank you so much for joining us today.
Thank you Jordan for having me.
Yes. And so there are a lot of topics I'd like to cover today. You've been publishing in peer-reviewed academic medical journals, and the first topic I'd like to discuss is disparities in machine learning algorithms. According to research you recently published in Circulation: Heart Failure, integrating social determinants of health into machine learning models has helped mitigate bias when predicting long-term outcomes for heart failure patients. And the best-performing model initially under-diagnosed several underserved patient populations, including female, Black, and socioeconomically disadvantaged patients. Yuan, would you please tell me more about this study, its purpose, and its conclusions?
Certainly. So this is a very interesting study. Initially we observed that there is a lot of disparity when applying machine learning models to different groups of patients, especially minorities. And there is conventional wisdom that when you want to improve the fairness of an algorithm, sometimes you need to trade off between the egalitarian perspective and the utilitarian perspective, meaning that you probably have to trade off the overall accuracy for equity. But what we have found is that by integrating social determinants of health together with clinical data, we were actually able to improve both the equity and the utility, the accuracy of the algorithm, at the same time. And so this is very encouraging in that we don't really need to trade one off against the other; we can simultaneously improve both of them.
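For readers who want a concrete sense of the kind of analysis described here, the sketch below is a minimal, hypothetical Python example (scikit-learn): it trains one model on clinical features alone and one with social determinants of health (SDOH) added, then reports overall and per-subgroup AUROC so accuracy and equity can be examined together. The file name, column names, and model choice are illustrative assumptions, not the study's actual variables or code.

```python
# Minimal, hypothetical sketch: compare a clinical-only model with a clinical + SDOH
# model, overall and by subgroup. File and column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart_failure_cohort.csv")                 # placeholder cohort
clinical = ["age", "ejection_fraction", "creatinine"]         # assumed clinical features
sdoh = ["area_deprivation_index", "median_household_income"]  # assumed SDOH features
label, group = "poor_outcome", "race_ethnicity"               # assumed outcome / subgroup columns

train, test = train_test_split(df, test_size=0.3, random_state=0, stratify=df[label])

def report(features, name):
    """Fit on the training split, then print overall and per-subgroup AUROC."""
    model = LogisticRegression(max_iter=1000).fit(train[features], train[label])
    scores = pd.Series(model.predict_proba(test[features])[:, 1], index=test.index)
    print(f"{name}: overall AUROC = {roc_auc_score(test[label], scores):.3f}")
    for g, idx in test.groupby(group).groups.items():          # equity check per subgroup
        print(f"  {g}: AUROC = {roc_auc_score(test.loc[idx, label], scores.loc[idx]):.3f}")

report(clinical, "clinical only")
report(clinical + sdoh, "clinical + SDOH")
```

In this framing, the finding Dr. Luo describes would show up as the second report improving AUROC both overall and within the previously under-served subgroups.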
I see. So I want to stay on this topic, but I want to mention that you're also involved with translational applications in medical research. Here you are: you've arrived at this new insight that by integrating social determinants of health and clinical data you can improve accuracy and fairness at the same time. Now that you've learned that and developed these multivariable models that can account for disparities, how do you translate that into clinical outcomes and practice at Northwestern Medicine?
Translation is always very important and high on the priority list for the team that I lead, and translation also needs the support and, frankly, the insights of clinical champions. So for example, in one study we were working with cardiology specialists on heart failure patients. One of the challenges is that in the primary care setting, a lot of patients were transitioning into advanced-stage heart failure without being noticed by their primary care physicians.
This is the unmet need we recognized, and we developed machine learning models to predict who is going to transition into advanced-stage heart failure based on their past medical history and the electronic health records that we have. And when we predict that someone is at higher risk, we alert the primary care physicians and refer those patients to the cardiology specialists for evaluation and for interventions. And for some of those patients, we actually got a left ventricular assist device implanted in their heart to reverse or slow the clinical course of advanced-stage heart failure. And this is deployed in the health system, and we have strong support from our clinical champions in cardiology. This really strengthens not only the methodology building but also the implementation and deployment of the model. And we also learned a lot throughout the course about how to best deploy the model in the clinical setting and how to best work with not only clinicians but also nurses and patients themselves.
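As an illustration of the predict-then-refer workflow Dr. Luo describes, here is a minimal, hypothetical Python sketch: a gradient-boosting model estimates each patient's risk of progressing to advanced heart failure from past EHR features, and patients above a chosen threshold are flagged for cardiology referral. The file names, feature columns, and threshold are assumptions for illustration, not the system deployed at Northwestern.

```python
# Hypothetical sketch of the predict-then-refer workflow; features, files, and the
# referral threshold are illustrative placeholders, not the deployed system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("ehr_training_features.csv")        # placeholder: one row per patient
labels = history.pop("progressed_to_advanced_hf")         # placeholder outcome column

model = GradientBoostingClassifier(random_state=0)
model.fit(history.drop(columns=["patient_id"]), labels)

def flag_for_referral(panel: pd.DataFrame, threshold: float = 0.2) -> pd.DataFrame:
    """Score current patients; return those whose predicted risk exceeds the threshold."""
    risk = model.predict_proba(panel.drop(columns=["patient_id"]))[:, 1]
    flagged = panel[["patient_id"]].assign(risk=risk)
    return flagged[flagged["risk"] >= threshold]           # would trigger the PCP advisory

referrals = flag_for_referral(pd.read_csv("current_panel.csv"))
print(referrals.sort_values("risk", ascending=False))
```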
So, Yuan, you said that you alerted PCPs about patients who are high risk so as to refer them to cardiology specialists for interventions. I wonder if you compared outcomes during the time period when you applied this new best practice advisory to a baseline prior to the intervention, to see if there actually were any improvements in outcomes at a subpopulation level?
Right. We actually compared the differences, and we have a publication in one of the technical journals. Most importantly, for those patients who would not have been referred unless we had the system, some of them actually got a left ventricular assist device implanted in their heart.
Sorry, what was the outcome for those patients?
For some of those patients, they actually got a left ventricular assist device implanted in their heart.
Yeah.
Oh wow, so it led to an actual cardiac intervention that may have prevented a heart attack.
Yes.
Well, that's, I guess, the ideal outcome. For listeners, I think that's an interesting takeaway. I'd like to transition to the next topic.
You're involved in the Institute for Augmented Intelligence in Medicine, which has established a Center for Collaborative AI in Healthcare with the mission of advancing artificial intelligence science, engineering, and translation throughout healthcare specialties to create a positive impact on precision medicine. Tell me about this institute. Some of our listeners may be thinking, well, you know, we have some ML algorithms deployed within our EHR; what's the benefit of establishing an institute?
Right. So this actually stemmed from our observations and also, frankly, some of the pain points that we have lived through when growing collaborations between AI/ML scientists and clinicians. So you have probably heard that some of the AI proponents have said multiple times that AI is going to replace pathologists, AI is going to replace radiologists, and of course none of this has come true. And on the other hand, the pandemic has really been a bubble-burst moment for some of the AI and machine learning models. In particular, there is a paper that reviewed hundreds of published diagnosis systems using imaging for COVID-19 outcomes, and virtually none of those were actually deployed into widespread clinical use. That has led to this sort of bubble-burst moment, and some clinicians were suspicious, and some of them may even have become distrustful of AI systems.
So, contrary to what we had hoped for, there might be barriers in terms of the collaboration between clinicians and AI/ML scientists. What we have observed works best is to engage the physicians and the AI/ML scientists from the beginning and guide them through this end-to-end journey, from model inception to model development, deployment, and actual use in clinical settings. This is why we have launched an initiative called the AI for Health clinics. This clinic is open to all the practicing clinicians in the Northwestern Medicine health system, and they can come in and pitch their clinical challenge, and the staff AI and machine learning scientists, and even the trainees who are very interested, can help them brainstorm the solutions, debate the implementations, iterate on those implementations, and actually help them deploy them into production in the health system. This has been going on for multiple years and has received wide acclaim from the clinical community.
And it has led to multiple federally sponsored grants and also deployments of the systems into the health system, and it has generated tremendous opportunities for education, because we actively engage trainees to work on those projects. And this was sort of the prototype for the Center for Collaborative AI in Healthcare. And once we realized the benefits and the interest across the health system and also the medical school and the engineering campus, we wanted to formalize and streamline the process, which is the purpose of this center. The mission of the center is to bring the communities of AI/ML scientists, engineers, and clinicians into this forum so that they can cross-pollinate and invite each other's perspectives to add on to their own. And most importantly, collaborate with each other from the beginning to the end of developing and deploying machine learning models for a concrete healthcare challenge.
Excellent. I think that's interesting. You're deploying models to bridge silos between different disciplines that are associated with the delivery of health care at Northwestern.
So on the topic of bridging silos and connecting different departments, you've also been connecting different institutions to build a flagship data set for healthcare AI. There's a project you're involved with that involves Northwestern University, the Massachusetts Institute of Technology, and multiple Clinical and Translational Science Award (CTSA) hubs, including those at Tufts University, Washington University, and the University of Alabama at Birmingham. The goal is to create a flagship data set that's large enough and diverse enough to capture all aspects of patient profiles before and after their intensive care unit stay, and what makes the data set useful is that it can be used to train a model to predict a patient outcome or to recognize certain less frequent diseases. Can you tell me about the impact that this collaboration has had on patients at Northwestern and elsewhere?
Yes. And this is really meant to address the gap that we observed between general-domain machine learning, which has seen tremendous progress over the last decade, versus the healthcare field, which seems to be lagging behind the general domain. We analyzed the differences, why there is such a gap, and we realized that in general-domain machine learning we really have these flagship data sets that are large and that are publicly shared with the research community, so that people can do benchmarking and improve on the state-of-the-art methods. But in the healthcare field, we do not have the counterpart of ImageNet or the Wikipedia corpus, a data set that is very diverse and also very large and publicly shared. And so this is why we set out to launch the mission of creating such a flagship data set, and we really hope that this could trigger a wave of similar efforts and initiatives across the nation.

And the idea is that by pooling together the patient cohorts from diverse geographic locations, from the Midwest to the East to the South, we can capture the geographic diversity, the racial and ethnic diversity, and also the practice diversity. So, for example, a procedure such as bronchoalveolar lavage that is routinely practiced in the Northwestern ICU for pulmonary critical care may not be the routine practice at another institution, and we want to capture such practice divergences, and that will lead the construction of the data set to cover a great deal of diversity. This is why we were inviting so many academic medical centers to join the effort and pool together their data. And when we have the data from their critical care cohorts, we want to capture everything, not only the critical care stay itself but also the encounters leading up to it and what happened after, the encounters after the critical care stay. By capturing such a comprehensive and longitudinal profile of the patients, we really hope that this could provide the flagship data set to power the next generation of machine learning development. And we want to share the data with the entire research community so that people can benchmark on this data set, use it to identify novel hypotheses, validate those hypotheses, and generate insights from the data, insights that relate to real-world evidence that you could use to power pharmaceutical and therapeutic development and even drug repurposing. So all those kinds of downstream applications will be very natural consequences of such a collection of data sets.
Yes.
So on the topic of generating large data sets: the purpose of developing large data sets is to support research and lead to new developments, like you said, to identify and validate novel hypotheses and generate new insights that may benefit patient outcomes and enhance the delivery of care. You also have been involved in the creation of something called the Spatial Transcriptomics
Yes.
Analysis Resource, the acronym would be SOAR, and it's used to evaluate spatial variability of genes in different tissues, to assess possible cell-cell interactions, and to visualize spatial gene expression across 1,700 samples. This is also receiving public funding, I think NIH funding, for the grant. It's meant to benefit research across the United States. So tell me a bit about the SOAR program and what your ambitions are there.
Right. So that's a great question. When we are working on clinical data, we realize that often you need other types of data to complement it. Clinical data tells you about the patient's journey and their clinical progression. On the other hand, we also need a deeper understanding of the disease mechanism, and molecular data, such as those provided by the emerging technology of spatial transcriptomics, can offer you that. And this is a really interesting technique in that it not only gives you single-cell-level gene expression but also tells you where such expression patterns happen within the tissue. So let's say you want to understand the interactions between CD4 and CD8 T cells and the malignant cells: when they are closer, how their expression profiles change, and when they are farther away, how their expression profiles change. Spatial transcriptomics technology can tell you that.

The issue with the current status of that field is that although there are many papers published, they each go through different quality-checking processes and batch-effect removal, and there are few consistent consensus standards that the different papers use to process their data, which makes meta-analysis and reuse of those data really difficult. So what we set out to do is take all the data sets that are published in mainstream journals and process them consistently, so that they are subject to the same quality-checking standards and the same batch-effect-removal standards. And we performed dimension reduction to get rid of some of those artifacts, and we also performed cell-type annotation so that you know which cell types have this or that kind of spatially variable gene expression, and how different cell types can interact with each other. Doing that really enables meta-analysis using this publicly shared resource, because if you think about it, when people publish papers, they often publish on the strongest signals from their data. That doesn't mean their data cannot support additional hypotheses or additional findings; it just means that sometimes they may lack enough samples to do so, or maybe the signal isn't the strongest. But if you pool together all those published data, that gives you an opportunity to generate new hypotheses that are impactful and that give you new insights into the disease mechanism. And when you combine those understandings with what you observe from the clinical side on the patients, you can really triangulate different sorts of evidence to look at complex diseases and also to inform targeted therapies.
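For readers curious what processing every published sample through one consistent pipeline might look like, below is a minimal, hypothetical sketch using the scanpy and anndata libraries: uniform quality-control filtering, normalization, ComBat batch-effect correction, dimension reduction, and clustering as a starting point for cell-type annotation. The file paths and parameter choices are placeholders, and this illustrates the general approach rather than SOAR's actual pipeline.

```python
# Hypothetical sketch: harmonize published spatial transcriptomics samples with one
# consistent pipeline (same QC, normalization, batch correction, clustering).
import anndata as ad
import scanpy as sc

paths = {"study_A": "study_A.h5ad", "study_B": "study_B.h5ad"}   # placeholder published data sets
adata = ad.concat([sc.read_h5ad(p) for p in paths.values()],
                  label="study", keys=list(paths.keys()), join="inner")

# Identical QC thresholds for every study, instead of each paper's own choices.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Same normalization and batch-effect removal across studies.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.combat(adata, key="study")

# Shared dimension reduction and clustering as input to cell-type annotation.
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_pcs=30)
sc.tl.leiden(adata, key_added="cluster")

adata.write_h5ad("harmonized_spatial_atlas.h5ad")  # pooled resource ready for meta-analysis
```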
So, Yuan, we are approaching the end of this podcast episode. When I look back over the topics we've covered, from socioeconomic disparities being addressed by machine learning algorithms, to crossing different departments and bringing engineers, clinicians, and AI scientists together to collaborate on a unified mission, to building flagship data sets across different healthcare organizations, and also taking published research at the cell level and making it available for different purposes, to be applied to different hypotheses, it almost seems as though there's a common theme of altruism. Most guests on this podcast speak about how their efforts are translating into improved lives for patients, sometimes improved delivery of care, but it's interesting that you seem to be improving things for researchers, creating a foundation upon which they can do their work to more directly benefit patients. Would you spend the last remaining moments of this episode speaking about what your purpose is, what your motivations are, what you're looking to accomplish, and how you see your role within the healthcare delivery system?
Right, Jordan, that's a great point. And so how I see my role in the health system is that I see multiple aspects where I could contribute. So first of all, I could contribute as a hub of AI expertise to support different strategies, whether it's clinical medicine or whether it's basic science; they could all leverage the AI and machine learning tooling and resources that we have built, and we are very happy to support them, because ultimately, whether it is advancement in terms of the basic science and basic biology or advancement in terms of clinical practice, it will eventually benefit patients like you and me. And so this kind of altruism notion is really deeply embedded in the team that I lead and in the colleagues that I work with, because we share a common goal. And a lot of the time we share our data sets, we share our tooling, and we make them consistently available to researchers, not only at Northwestern University but also across the nation, so that people can benefit from that.
And I frankly think this kind of sharing altruism can really accelerate the pace of advancement of the technology, because if you think about what happened in general-domain machine learning and AI development, much of that advancement happened because we have these foundations of shared data sets and shared tooling across the community, and that really drives the state-of-the-art forward. Similar things can happen here. And especially based on our experience of creating the Center for Collaborative AI in Healthcare, we think that people, when they share the same vision, can easily be brought together in this forum to work on their aligned goals and to collectively drive the field forward.
Well, thank you, Yuan. For our listeners, this has been Dr. Yuan Luo, the chief AI officer of the Northwestern University Clinical and Translational Sciences Institute and associate professor at the Feinberg School of Medicine. Yuan, thank you so much for joining us today.
Thank you very much for having me.