BTK-ASGBI_Ep2_part1
===
[00:00:00]
Hello everyone. Welcome back to Behind The Knife. We're back with another installment of our BTK and ASGBI collaborative series with our friends from across the pond. I'm Agnes Premar, one of the Behind The Knife Fellows. I'm a general surgery resident representing the US side, and I'll let our UK-based co-hosts introduce themselves.
Hi, I'm Jared Ott, a surgical resident based in the UK, and I'm the president of the Moynihan Academy, the resident group of the Association of Surgeons of Great Britain and Ireland, or ASGBI. It's great to be here.
Hi, I'm Gita Lingham.
I'm a surgical resident in the UK and I am the Vice President of the Moynihan Academy. So let's get started. So today we'll be delving into all things AI. What is it? What role does it have in healthcare? What role does it have in surgery? Are we all going to be made redundant by AI?
So without further ado, let's introduce our panel of experts joining us today.
It's my pleasure to introduce Dr. Eric Nelson and Dr. Ellen Larson from the US side for the [00:01:00] discussion. Dr. Nelson is a surgical oncologist working at the Brooke Army Medical Center in San Antonio, and he's specifically interested in expanding the role of AI within surgical education and beyond. Dr. Larson is a general surgery resident at the Mayo Clinic.
She's currently in her research time and finishing up her master's degree in AI and studying the role of machine learning within surgical practice.
Hi there, I'm Eric Nelson. Please consider me your audience insert for this episode.
I've made it a personal study to learn the inner workings of AI and how it can be used in my relatively nascent surgical practice. And I want this discussion to help you realize its strengths while also avoiding its pitfalls.
All right, welcome. I'm also honored to introduce Dr. Mukherjee. He is representing the UK side. He is a man with many accolades. He is an MRC and GSK surgeon scientist alongside an honorary consultant general and major trauma surgeon in Liverpool, England. His current research bridges academia with industry, he's published a lot and he's got a lot of strengths [00:02:00] in academia, but he has also won a lot of awards, and most recently he won the Hunterian Professorship 2024 from the Royal College of Surgeons of England. So you're very welcome, Dr. Mukherjee.
Thank you so much Jared, and thank you Behind The Knife team for inviting me. As you mentioned, I'm a surgeon scientist, or MD PhD principal investigator as our American colleagues refer to us, working here in Liverpool and currently focused on utilizing AI and ML approaches to integrate large data sets, to enable precision medicine for areas of huge unmet need in acute care surgery, namely pancreatitis and ARDS.
It's a pleasure to join you and explore how AI is reshaping not just surgery, but how we actually think about surgical science.
Incredible. Well, thank you all for joining us. Let's get started. So probably the most relevant question to the podcast: [00:03:00] what is AI? Dr. Larson, if you wanna kick us off.
Absolutely. So this question was beautifully addressed in a series of three BTK episodes on ai. I would highly encourage everyone to go listen to that series.
They do a great deep dive on the theory and the science. That's done by some incredible AI experts. But I'll summarize briefly here just so we can think about how important it is for our surgical practice. So first we have the idea of AI, which is just the concept of engineering something that mimics some aspect of human intelligence.
And then within AI we have a subfield called machine learning. And machine learning is essentially the science of taking complex data and using a computer to make predictions from that data.
Then providing a mechanism for the computer to train itself to improve the predictions. And then there's a subset even further of machine learning called deep learning.
And [00:04:00] this is what has really driven the AI explosion because these deep learning models have the mathematical ability to solve problems that are much more complex than traditional statistical models. So we divide AI problems really into three main classes based on the type of data that we have to train the computer on.
So first we have supervised learning. This means that we have some sort of defined ground truth process that we want the machine to learn to replicate. So for example, this would be us teaching a model to find pneumonia on a chest x-ray by training it on a set of x-rays that have been labeled by expert radiologists.
Next, we have a class of models that are called unsupervised learning. The model basically learns patterns from the underlying training data and then replicates these patterns to create material.
These are very popular in the media right now, and I think this is what a lot of people think of as AI in their mind's eye. This unsupervised learning is things like chatbots and [00:05:00] image generation.
And finally, there's something called reinforcement learning. And this algorithm type is trained by receiving rewards or punishments based on decision making. But there's no perfect right or perfect wrong, just navigating a complex decision environment. And honestly, this is kind of what all of us feel like when we're taking care of patients.
Really what we're doing is we're looking for ways to teach computers to recognize these complex patterns in radiology images, in medical records, in intraoperative camera images, so that they can help us with the decision making and pick up on connections that are too subtle for our brains to distinguish.
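To make the supervised class Dr. Larson describes concrete, here is a toy sketch of her chest x-ray example: learn a decision rule from expert-labeled examples, then apply it to new cases. The data and the one-parameter threshold "model" are invented for illustration; real clinical models are deep networks trained on thousands of labeled images.

```python
# Toy supervised learning: learn a "ground truth" labeling from
# expert-annotated examples. All numbers here are invented.

# (opacity_score, expert_label) pairs: 1 = pneumonia, 0 = no pneumonia
labeled = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]

def fit_threshold(data):
    """Brute-force the decision threshold that minimizes training error."""
    best_t, best_err = None, float("inf")
    for t in sorted(score for score, _ in data):
        err = sum(int(score >= t) != label for score, label in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

threshold = fit_threshold(labeled)

def predict(score):
    # The "trained" model: flag pneumonia when the score clears the threshold.
    return int(score >= threshold)

print(predict(0.8))  # a high opacity score is flagged as pneumonia -> 1
```

Unsupervised and reinforcement learning differ only in what the training signal is: patterns in unlabeled data for the former, rewards and penalties from decisions for the latter.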
Yeah. Thanks Dr. Larson. That was a fantastic foundation.
When we speak about AI and surgery, we are truly discussing augmentation, that is, the extension of our healing capability and not really its replacement. So consider how profoundly our profession has evolved throughout this technical partnership. You know, the microscope extended our vision [00:06:00] beyond human limitation.
Antibiotics empowered us to combat infections. Anesthesia transformed surgery from desperate torture to controlled healing. AI represents not a departure from this trajectory, but its continuation. It enhances pattern recognition while preserving the irreducible human elements that define surgical excellence.
Those were some brilliant insights and introductions from my American colleagues. The only thing I would add onto the discussion so far is something that I often tell my trainees or residents when I talk about AI, and that is that this is really pattern discovery at scale. So as surgeons, we've always traditionally relied on critical pattern recognition, and this is something that Dr. Nelson mentioned in his response. Indeed, if we look back in history, it's been said that some of the greatest surgeons [00:07:00] differentiated themselves not just by understanding how to operate, but crucially when to operate. And they would do this by assimilating and assessing multiple data points within their minds to come to a critical, potentially lifesaving decision.
So that whole concept of interpreting anatomy, trends in physiology, subtle clinical clues has been done for generations by surgeons at the peak of their profession. But for the patient, it has often been economics, resources, or chance that has determined which surgeon they'll actually have treating them.
So what AI gives us is the ability to capture this critical intuition mathematically, to learn from millions of data points rather than just a thousand cases over an individual's lifetime. However, one thing we must remember is that the goal should be [00:08:00] not to replace surgical judgment, but to extend it, so that we can turn surgical experience and expertise into reproducible, data-driven insights.
And that's where I think AI is going to be potentially game changing.
Oh, well, thank you very much. That was an incredible introduction. I think many people do get the two terms AI and machine learning confused, and sometimes they're used interchangeably, which is not entirely correct. Are there any differences in the approach or the role of AI and machine learning between the UK and the US?
So it's fascinating to compare these two ecosystems. Traditionally, if you took a broad brush and considered both countries alongside each other, both would clearly demonstrate their hunger and desire to implement and innovate [00:09:00] AI and machine learning approaches to improve healthcare. But from what I've experienced through my discussions with US collaborators and industry, US innovation often starts in startups or tech industry collaborations, and it's very much more fast moving and fast paced.
It's investment heavy and occasionally commercially driven, which is in a bit of a contrast to the UK, which takes a different tack because we have the National Health Service. So effectively the NHS offers a unique single longitudinal data environment, which I think is a real strength. On the UK side, our focus is very much more on safety and equitable scaling, and certain programs like the NHS AI Lab and the Alan Turing Institute bring academia, industry, and policy together into some sort of common collaborative and governance framework to allow this. [00:10:00] On the downside, it does mean that in the UK progress may be slower, but arguably, as a result of that, it's more reproducible and more ethical. And the aim is that ideally it should be easier to generalize across populations and thus be much more equitable for a range of resource settings and populations around the world. So these are the feelings that I have from the UK point of view. I'll hand over to my American colleagues to illuminate us about the US approach.
The real contrast between the American and the European or the UK approaches reflects not merely the regulatory frameworks, but also profoundly different philosophies. Now really consider that in America, over 60% of AI investment originates in very few places: that's Silicon Valley, that's Boston, that's this weird and beautiful city just over there called Austin. And the advantage is this [00:11:00] extraordinary creativity, energy, and velocity. But ultimately the challenge is the fragmentation, and that can sometimes border on dysfunction. You know, we create brilliant pilot solutions that don't really communicate with each other. Each one is optimized for marketplace success rather than systemic patient benefit.
I just wanted to add one small point in there too, which is a big sticking point for those of us who work in the development of AI algorithms, which is that in the US, there are sort of these two distinct populations that work on different aspects of AI.
So you have people at the healthcare and university level who actually have access to healthcare data that's generated by university hospitals and big hospital groups. And these are almost always completely separate from startup companies. And because of the way our data regulations work, we are not able to share data with startup companies.
So [00:12:00] there's a really big disconnect between the people who have the computer science know-how and the medical know-how to make AI discoveries and the people who have the marketing know-how and the startup know-how to actually deploy those discoveries into production. And right now there's an estimate in the business world and in the literature that it takes 17 years to bridge that gap in the US, from innovation at the university and university hospital level to final deployment of a product into consumer hands. So despite it seeming like it's a rapid pace in the US, sometimes I think because of this disconnect we end up sort of tripping over ourselves and having potentially these really long delays between creation and deployment of viable products.
That's really interesting. I think from my limited experience, we have a very similar disconnect in the UK.
So there are a number of issues that are going to be really important to address globally moving forward. [00:13:00] One of the ways we're trying to do this in the UK is by certain innovative schemes that are trying to bridge that gap between academia and industry and accelerate that pace of change. So, for instance, my current research award is an MRC Clinician Scientist Award, where I'm part funded by industry.
GSK is my major pharma partner, and I'm also supported by the Medical Research Council, which is central government or UK Research and Innovation funding. So considering the UK model in more detail, what it does, as has been quite rightly pointed out so far, is delegate oversight to sector regulators. So healthcare AI falls under the Medicines and Healthcare products Regulatory Agency, NHS digital ethics frameworks, and so on, and they really emphasize transparency, bias testing, and [00:14:00] explainability before deployment. Whereas things are slightly different, from what I understand, in the States, where it's more focused on FDA guidance and US executive orders, and so ultimately both systems across the pond need to converge.
In fact, one of the key points that is coming out, and Dr. Nelson has alluded to this, and Dr. Larson has mentioned this as well, is that innovation without trust is actually really, really brittle. But on the downside, regulation without agility and the ability to innovate rapidly is actually paralyzing. So it's finding the balance really.
And those academia industry bridges, I think are going to be vital to redress that balance.
Perfect, and I think that segues us right into the next topic, which was just a little bit more information regarding the regulation of [00:15:00] AI practices in both of these spheres, within the US and the UK.
So, the regulations surrounding medical AI in the US are fairly complex right now.
The reason is that the previous administration had directed a draft policy regulating medical AI use through an executive order. But as of right now, that unifying federal policy has not emerged as a single federal law. So there are lots of small state regulations for things like AI use disclosure.
There are some states that recognize that doctors need to disclose AI in certain scenarios. There are some state laws about algorithm bias. But this is very patchwork across the US. But the actual general process of getting any healthcare AI instrument approved in the US is actually quite well-defined, because right now we're still relying on an old regulatory framework, the regulatory framework for Software as a Medical Device.
So this is a longstanding technical [00:16:00] definition that came from the 21st Century Cures Act that essentially said that any software that helps with diagnosis, with cures, with mitigation, with treatment, or with prevention of disease can count as a medical device. So, for an example, an AI algorithm that helps you find open spots in your outpatient clinic day to schedule patients is not going to count as a medical device, but an algorithm that helps you triage your traumas in the emergency department is definitely going to fall under that umbrella. So if your AI reaches that medical device standard, it is assigned to a risk-based classification by the FDA, using the same system as any other medical device, whether a pacemaker, a tongue depressor, whatever it might be.
It looks at things like are there existing similar programs versus is this a totally new area of AI development? And then depending on which risk classification you get, it's an entirely [00:17:00] different pathway to approval. And I think that's a little bit out of the scope of what we're talking about today, but those are the very well-defined FDA pathways for approval.
So for example, if I was to create an AI based app centered around screening patients for colon cancer, what are some of the questions I should be thinking about when I apply for FDA approval?
There's a separate risk categorization system that's specific to software as a medical device that we are currently applying to AIs. And this is the IMDRF framework, from the International Medical Device Regulators Forum. And that framework looks at things like who the target user of the software is and how it's being used.
And then overall you get sort of this risk categorization from the IMDRF, and the FDA decides, basically, is it going to trust you to deploy your algorithm within your own quality management system, which is again regulated through the FDA, and that's called enforcement discretion.
Or if they're going to have oversight focus, which means you really need to have your ducks in a row because you know, [00:18:00] the FDA is going to be scrutinizing this very carefully.
So I guess in summary, I would say currently navigation is exactly the same for AI as for any other software, and that's through the FDA. But there are a few mechanisms that we can use to sort of escape that pathway for things that are low risk to our patients. And we're all, as a community, still waiting for federal direction on AI-specific policy while we navigate through state-specific policy.
You know, OpenAI started off welcoming federal regulation; that's not really the case anymore, right? Anthropic recently restructured into a more for-profit corporate framework, and I expect that many others can be moving in the same direction. The actual oversight fragments across all of these agencies, the FDA governing medical devices, the FTC overseeing consumer protection, and these vast territories remain totally unregulated.
And this creates a troubling dynamic for us [00:19:00] surgeons who are ethically driven to be conscientious, right? Innovation races ahead, even while regulation stumbles behind.
Now, we're all physicians first, and our primary obligation is to our patient welfare, not to some technological adoption or, God forbid, administrative efficiency. We must anchor our engagement with AI in whatever regulatory environment emerges.
Thank you, Dr. Nelson. I'm in full concordance with you and many of the points that have been mentioned so far. I think going back to some of the points I was making previously regarding the UK setting, the Medicines and Healthcare products Regulatory Agency, or MHRA, here in the UK is responsible for the implementation of AI as medical devices.
That's on the basis of reasonably old legislation devised in 2002, the UK Medical Devices Regulations. So this is being updated. And the whole [00:20:00] point about why updating it is incredibly important is that this is being approached through a different means now. And one of the ways that this is being done is that the MHRA is actively reforming regulations to specifically address the unique challenges of AI and try and develop an adaptive model. So they've started the AI as a Medical Device Change Programme in the UK, which is aimed specifically at ensuring appropriate evidence is generated throughout a product lifecycle, and then integrated to re-establish guidelines and approaches.
So it creates an evolving and adaptive regulatory framework. So I think the only other thing that I'd add to the exceptional descriptions provided by Dr. Larson and Dr. Nelson so far, specifically thinking about the UK and the US and more broadly in terms of development, [00:21:00] is that I think it's vital that the UK collaborates, in terms of the MHRA, with other international bodies such as the US FDA, to align best practices so that there should be a common language and an ability to adapt and adjust with regards to regulations across both sides of the pond.
If there is one thing that COVID has taught us, it is that the only way to solve global health challenges of the future is through collaboration, and basically, expensive, impactful innovation for the few is never as good as meaningful, effective innovation for the many.
Amazing. Well, thank you very much. That was a very comprehensive review of the regulations. But let's say, some companies and people have actually made it through this minefield of regulations and law. What are we actually seeing AI being used for?
I think the way I approach it and think of a [00:22:00] framework through which we consider how AI is being used and can be used in terms of surgery is by focusing on three key areas: basically triage, diagnostics, or treatment. So broadly, technologies fit into one of those three categories, or three stages of the patient journey.
So we're already seeing real world developments in, say, radiology, as you know, with AI triage tools for chest x-ray prediction; pathology, with digital slide scanning or automated cancer grading; as well as decision support in terms of risk stratification for, say, sepsis, where analytics and perioperative prediction is another great use of AI.
However, one thing that we also often neglect and don't consider with regard to the use of AI that is actively happening at this moment in time is education. So simulation, skill tracking, and adaptive [00:23:00] learning for residents and trainees. So there's a number of areas where obviously this is taking place, and the next frontier is really integrating that and making sure that these tools talk to each other across the healthcare delivery system, so they're not just being utilized in isolated silos.
Dr. Mukherjee, I would definitely agree. I've seen a lot of those applications, pathology, radiology, in the US as well. One thing that I didn't hear you mention that I've seen a lot recently is the use of clinical LLMs, that is, large language models, in a couple of different settings.
And so here in the US, some of the larger medical institutions have actually developed their own LLMs that are able to process their own medical records. And we use those for everything from medical record summarization to searches to doing data exploration for research. And then we [00:24:00] also have commercial products like OpenEvidence, which are medically focused LLMs that are trained on the body of data from PubMed and other peer reviewed databases, so that when we ask these models clinical questions in the same way that we would ask, you know, ChatGPT or Gemini or another commercial system, they're able to give us legitimate peer reviewed medical sources and can avoid some of these hallucination problems that a lot of commercial LLMs have.
So that's a big area. I know I personally use medical LLMs a lot in my research and training, and I'm starting to see them happen more and more in clinic, whether that's people using them to write their notes, having a model listen in while they're talking with a patient and then summarize the note at the end, or even just to do a medical record search.
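The "legitimate peer reviewed sources" behavior described here rests on retrieval: find a supporting document first, then constrain the answer and citation to what was retrieved. A minimal sketch of that idea, with an invented two-document corpus and naive word-overlap scoring (real products use far more sophisticated search over licensed databases):

```python
# Toy retrieval step behind a "grounded" medical LLM answer.
# The corpus, IDs, and scoring are invented for illustration only.

corpus = {
    "pmid-001": "Appendectomy remains the standard treatment for acute appendicitis.",
    "pmid-002": "Antibiotic-first management is an option in selected uncomplicated cases.",
}

def retrieve(question):
    """Score each document by word overlap with the question; return the best match."""
    q_words = set(question.lower().split())

    def overlap(item):
        _, text = item
        return len(q_words & set(text.lower().split()))

    return max(corpus.items(), key=overlap)

doc_id, passage = retrieve("what is the standard treatment for appendicitis")
print(doc_id)   # the citation that accompanies the answer
print(passage)  # the passage the model is constrained to answer from
```

Because the answer is tied to a retrieved passage, the system can always show its source, which is exactly what distinguishes these tools from a general-purpose chatbot.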
And in terms of surgical education, the domain holds transformative potential that's only beginning to be realized. You know, a 2024 study [00:25:00] showed that ChatGPT-4 can simulate the American Board of Surgery oral exam scenarios and do pretty damn well at it. Residents can now practice clinical reasoning with unlimited patients and accessible software anywhere and at any time.
And even hot off the press now, there's a new beta version out there, the Behind The Knife oral board simulator.
It's designed by this brilliant surgeon, Dr. Matthew Swenson, and it acts as a board scenario tester and coach using either text or audio input. And then you get personalized feedback on your own personal performance for a board scenario. Now keep in mind, that's all still in beta, but if I were studying for my boards right now, that's exactly the kind of tool that I would be using, and I think it's a worthwhile use of your time.
So shout out for Behind The Knife. You start off with some free tokens. Give it a scenario or two. See if you like it.
Thanks Dr. Nelson for that free advertisement. Appreciate it. But in all [00:26:00] seriousness, LLM models bring education to a new frontier. It'll definitely be interesting to see how we can utilize that to foster better surgeons and a better training environment overall.
I think the realm of LLMs in medicine is very interesting actually, and I'm sure patients will, if not already, instead of googling, be asking ChatGPT to diagnose them based on their symptoms. And I think this is a really great opportunity for good, but also potentially harm. I'm not sure if there's any way to regulate that or ensure proper information.
We cannot and must not delegate this responsibility to manufacturers, to regulators, or God forbid, the institutional administrators. And what I clearly believe here is that we urgently need a comprehensive federal AI legislation that balances innovation and patient protection.
You know, I came in as a trainee thinking, oh, I'm skeptical enough. None of us are skeptical enough until we've seen things mess up in our own practices. We must never [00:27:00] abdicate our clinical judgment to algorithmic outputs, and we must always advocate for our patients when the AI recommendations seem inappropriate. I think we need to hold ourselves to the highest standards of professional integrity, and that means that we must demand transparency on AI training data and the validation methods. This is a partnership. It's not a replacement.
Clearly AI has many functions, many forms, many uses. I do know that there's a systematic review published not too long ago that showed, of all the trials utilizing AI in medicine, a huge proportion of them are AI-powered colonoscopy lesion identification. I thought that was hilarious, because, you know, clearly the regulations to get a study through ethics and at the same time get your AI through the MHRA, the FDA, or whatever the equivalents are in China and India are so insurmountable that we have such a small volume of [00:28:00] RCTs evaluating the efficacy of AI technology. So I'm sure more than just polyp detection in colonoscopy is gonna come out in the next 10 years, but right now that is what the literature is showing.
But beyond that, you are all successful researchers. You have extensive experience in utilizing AI within the research field, not just clinical work, but also clinical research. How has it been used, and how do you expect its use within the research field to be moderated?
It is a great question, Jared. So I'll kick off by giving my specific example from my ongoing research and the groups I'm working with, along with the projects I'm working on. Our work focuses on utilizing AI and machine learning to analyze multi-omics data and develop AI/ML models of lung injury as a result of inflammatory conditions such as acute pancreatitis or polytrauma or [00:29:00] sepsis.
So through this we're able to utilize AI to develop a means of analyzing the molecular sub-phenotypes of organ injury. So essentially it's game changing precision medicine for acute disease. Now we've built up a wealth of data, and there's been a revolution in discovery science in preclinical medicine over the years that has provided us with a goldmine of information.
So what AI is doing, or rather what we're doing, is developing approaches that can better analyze and integrate this big data. So I'm talking about not just clinical data, where we have the genomics based on extensive digital clinical libraries, electronic healthcare records, and radiomics, but also matched transcriptomics, proteomics, and genomics that can all provide snapshots on their own, but an amazing in-depth understanding if that wealth of information is put together, [00:30:00] so that we're able to much better interpret and apply this information; it becomes much more relevant to the human condition. And this is where I think AI really excels. And even with a strong background in molecular biology and traditional approaches, that's why I've adopted these approaches as part of my current research.
'Cause effectively what we've been trying to do for years in translational science and clinical research is bridge the gap between bench and bedside, and this bridge that we've been constructing over the years, in many cases effectively, in other cases not, is quite rickety and traverses what we call the discovery science valley of death, where many observations lie after the preclinical stage, having simply failed to make it all the way to the patient.
So the way I see [00:31:00] AI is that it provides a scaffold to support the bridge and allows the ability for key discoveries to translate to patient benefit. So AI has its real use for better hypothesis generation, better real-time, real world validation to allow the better use of data to accelerate discovery.
That's where I see its clear benefit. So in the future, I think what we'll see more of is digital twins, which are very topical at the moment, but are basically virtual replicas that could simulate, say, the response to surgery or therapy before we intervene as surgeons. This way we can actually understand what the impact of particular interventions will be before they actually happen, which is ideal.
We're some way off from there at the moment, and we're in the process of learning how best to analyze the data through developing the [00:32:00] best AI algorithms and then identifying the best way to implement these approaches. But I think we'll get there rapidly, as the pace of change is much greater than anything we've ever experienced in clinical research and translational medicine up until this point.
Well, we've covered quite a bit of ground here today, everything from what AI is to the regulation differences between the two countries and current uses. There are a lot of similarities in the way that AI is being utilized, but also there are some key differences in the way it's being approached and implemented.
Yeah, Jared, I completely agree. I think the main takeaway thus far is that while each system is different, we should actually approach AI within surgery in a spirit of collaboration and try and create more sustainable solutions and safe programs for all our patients. And actually, I think we've got a few more topics regarding [00:33:00] disparities, under-resourced settings for AI, and the future of our jobs, which luckily we have lined up for our next episode.
So stay tuned.
Perfect. Well, thank you again to our speakers, Dr. Mukherjee, Dr. Nelson, and Dr. Larson. Thank you so much for joining us today and for your insights. And thank you to all of our listeners as always. We're curious to hear your thoughts on which side you think has a better approach.
And as Gita mentioned, we have another upcoming episode, so be sure to look out for that. And as always, dominate the day.