ASGBI Ep 2_part2
===
[00:00:00] Hello everyone and welcome back to our ASGBI series here at Behind the Knife. My name is Agnes Premar. I'm one of the Behind the Knife Education Fellows, and I'm a general surgery resident. I'll be representing the US side for this discussion, and I'm joined here again by our UK-based cohosts, Jared and Gita.
Hi, I'm Jared Vol, a surgical resident based in the UK.
And the current president of the Moynihan Academy. We are the resident group of the Association of Surgeons of Great Britain and Ireland, or ASGBI.
Hi, I'm Gita Lingham. I'm a surgical resident in the UK, and I'm the vice president of the Moynihan Academy.
We're here to pick up where we left off last time regarding AI. We had a few more points to discuss, namely: what's the role of AI in under-resourced settings, and what are some ethical dilemmas in AI?
For example, are we all gonna lose our jobs to the machines?
We hope not, Jared, but I'm sure it'll be an interesting discussion. So let's meet the team that's helping us explore this topic. Again, [00:01:00] it's my pleasure to reintroduce Dr. Eric Nelson and Dr. Ellen Larson from the US side. Dr. Nelson is a surgical oncologist working at the Brooke Army Medical Center in San Antonio, and Dr.
Larson is a general surgery resident at the Mayo Clinic, who's currently in her research time and finishing up her master's degree in AI and studying the role of machine learning within surgical practice.
I also wanted to introduce Dr. Mukherjee, who's representing the UK side. He's an MRC and GSK surgeon scientist, alongside being an honorary consultant general and major trauma surgeon in Liverpool.
Okay. So our discussion of AI thus far has focused on the US and the UK, which are both high-resource countries, but how do we see AI fitting into lower-resource areas? Maybe Dr. Nelson, you can kick us off with a military perspective. Do we see this being used on battlefields?
So that question really addresses one of the original visions for surgical robotics: battlefield trauma care and rural surgical [00:02:00] access.
Reality has unfortunately proven much more complex than that well-meaning and inspiring mission. And I can address both contexts, and honestly, they deserve a couple of different interpretations. So as a military surgeon working in a military medical center, this carries personal weight for me.
So the original DARPA vision was trauma surgery on a wounded soldier performed remotely from safety, but the battlefield realities of that vision unfortunately resulted in different and less ideal scenarios. So current military trauma emphasizes damage control. Okay, stop the bleeding, put your finger on the hole, control contamination, and evacuate to definitive care.
These interventions really save lives. You know, we're talking about early tourniquet application, hemorrhage control, airway management. These are relatively simple, but they demand rapid [00:03:00] decision making amid chaos and danger. You know, AI and robotics don't really meaningfully address that core, direct technical challenge.
Now, that doesn't mean that they're worthless, though. Where AI could genuinely help is in predictive triage. So imagine wearable sensors on every soldier continuously streaming physiologic data, with AI algorithms detecting early hemorrhagic shock before clinical signs manifest. That can trigger early evacuation, especially in the event that the casualty is incapacitated and can't tighten their own tourniquet.
AI-guided ultrasound can help a combat medic identify intra-abdominal hemorrhage without radiology training. These augmentation tools serve human judgment. They don't replace that human judgment, or those human hands out there on the field.
I believe, in part, the robot was also envisioned to fit into these low-resource [00:04:00] areas, such as military zones and rural areas, in order to help provide essential surgical and expert care. So with the integration of AI and robotics, I'm sure the reach could extend even further.
That, unfortunately, faces insurmountable physics: a surgeon versus the laws of physics. My money's on the laws of physics.
Battlefields aren't known for reliable internet connectivity, although we're getting better at that. So autonomous robotics becomes the only option, and that returns us to the technical limitations that we discussed earlier. And I'm gonna say candidly, we're not close to developing autonomous surgical robots for combat situations.
The reliability requirements exceed our current capabilities by orders of magnitude. You know, while operating, and we do this, I train for this, while operating in some makeshift facility, or while somebody is ballistically trying to right-size my own existence, you know, what I need is a better retractor.
Better lights and a sucker that [00:05:00] actually sucks.
There's a different paradigm in rural access to surgery, and that presents more realistic challenges and opportunities. Now, the problem isn't emergent trauma, really, it's access to specialized expertise, right? A general surgeon in rural Montana might encounter a complex pancreatic mass maybe once every few years, and they're not really gonna go after that.
But AI-powered decision support could show them similar cases, optimal approaches to perform a palliative operation, and pitfalls to avoid.
Telementoring is another great tool. It already exists. An expert surgeon can remotely guide a local surgeon through a slightly more complex procedure. And there is an even more ambitious vision: semi-autonomous surgery for straightforward procedures, supervised remotely by a specialist. A rural hospital with a surgical robot performing a routine hernia repair [00:06:00] could be monitored in real time by a surgeon halfway across the country who can intervene on an as-needed basis.
There is no shortage of surgeons who would love to post up at home with their own personal da Vinci console to help from afar. I think that'd be a great way to work. Nice thing to do in retirement. But here's the uncomfortable truth: what we're proposing is extremely expensive and technologically sophisticated.
Rural hospitals right now, especially in America, are struggling to maintain even basic surgical capability. They're not going to be able to afford a $3 million robotic scalpel, and they're not gonna be able to support that AI infrastructure.
You know, while the military certainly could, battlefield medicine requires rugged, simple, reliable technology, and that's basically the opposite of cutting-edge AI systems. So perhaps the question isn't, can we deploy AI in those settings, which we probably could, but the real [00:07:00] question is, should we? And in my opinion, probably not.
AI can help, but it's not the solution to surgical access disparities. Those are fundamentally problems of healthcare infrastructure, workforce distribution, and resource allocation. Technology alone is not going to fix the systemic injustice.
To second Dr. Nelson, after that really eloquent description of the principles that we should utilize for developing AI, I think the key point is that technology looking for an indication is philosophically flawed when it comes to clinical development. I think it should be the other way around.
You know, our clinical practice as physicians and surgeons has always been about identifying indications and then utilizing the resources around us to actually benefit patients in a much better way. And so the whole principle of AI has to be considered [00:08:00] within that context: it's not about AI surgeons or automated surgery or AI bots or the replacement of surgeons, but about creating an augmented surgeon, where we utilize AI to raise our situational awareness or increase our precision when it comes to operating. Those are the instances where I think things are clearly justified, and that's what I think we need to focus on.
Thank you both for that. I think that the framework for understanding where AI can be utilized in a justified manner in surgery is a really interesting topic. Just to pivot a little bit, I remember reading back in July that Johns Hopkins actually trained a robot to successfully perform a cholecystectomy.
The robot obviously took a longer time to complete the operation, but the results were pretty remarkable. What do you all make of this? Do you think our jobs as surgeons are endangered, that we should find another career to [00:09:00] invest our time in?
Great question. Uh, topical, and I loved the story when it came out. In fact, the work behind this innovation is really great, and it captures the promise of AI, but it also basically displays the hype around the technology itself. And to actually understand things properly, I think you need to understand that the common consensus amongst most experts is that robots will not be replacing us as surgeons in any way or form.
And I'm sure that's a big relief to many who are starting their surgical careers. But what I think will happen is that the use of AI will basically redistribute and reform what we do as surgeons. So the surgeon of the future will be orchestrating robotics, imaging, data streams, and so on and so forth, rather than just simply operating manually.
So a very different skillset [00:10:00] that will become much more diverse. And the whole thing about the cholecystectomy and having an automated robot made me think and consider that the real question isn't actually, will AI replace us as surgeons? The point is more, will surgeons who use AI replace those who don't?
And I'm sure in the future, the answer to that question is going to be, if you don't know how to utilize the technology and apply it appropriately, you'll probably be replaced by someone who's better at it.
I'm gonna challenge you on that one, just playing devil's advocate.
I mean, many, many years ago people said that nobody's gonna want a personal computer in their home. That's ridiculous. And then people said, well, nobody's gonna want a telephone in their pocket. Like, you've got one at your house for that. Why would you want that? And then there are, you know, famously people saying that [00:11:00] a worldwide web would not be of interest to anyone.
And obviously, like 10, 20, 30 years down the line from each of these hilarious quotes, we can laugh at them. So for somebody today saying a robot will never do an operation, I don't know, maybe they will. How close are we? And maybe the question is, in how much time could a robot perform an independent basic operation?
So what's really important to consider is that the study published from Johns Hopkins was on ex vivo organ models, not a real-time clinical setting. I think the capability is clearly there; it requires development. It was great as a proof-of-principle or proof-of-concept study, but you are right that in terms of scaling things up, there are gonna be some inherent challenges, and I think those are predominantly gonna be twofold.
Firstly, dealing with patient variation, and then dealing with patient acceptability. So if we consider patient variation, the question remains: how is an [00:12:00] AI robot going to adapt to anatomical variation or inflammation or fibrosis when operating, which might be entirely unique to the patient in front of it and something it has never encountered before in the dataset it has been trained on?
Secondly, would patients accept a robot operating on them? And that's one of the issues that the automotive industry is tackling with Tesla's automated cars: a person trusting an automated machine to perform a function that carries a significant risk. At this moment in time, I don't think the patient acceptability is simply there.
Clearly, we do require certain safety standards and regulations that can prove that these technologies can be safely implemented, validated, and then scaled up. But with technological innovations, you are quite [00:13:00] right to point out, Jared, that at the initial stages there is always an at-risk group that takes the leap.
So, for instance, in my own field of drug discovery, you know, we have first-in-human trials on healthy volunteers who have never experienced the drug before, simply there to establish a lack of toxicity or a good side-effect profile. So I think it's inevitable that we will have to develop such studies to really scale up the approaches that you are proposing and that Johns Hopkins has proposed in terms of new innovation. It's very difficult to put a number on it in terms of timescale, because at this moment in time, certainly within the UK and Europe, general surveys suggest that the public remains skeptical and untrusting of such technology to perform such surgery. [00:14:00] So, I don't know if our American colleagues would like to elaborate on this further, but I'll hand over to them.
I love what Dr. Mukherjee said, because I think he's honing in on what in AI ethics we call the accountability paradox, which is that when we have AI systems that make decisions, make choices, and take actions without input that is directly understandable to us, we have this problem of who is held accountable for what the AI has done.
From a moral perspective as physicians, from an ethical perspective, as well as from a legal perspective, having a robot perform actions on a patient independent of a human operator, there's no ethical way to reconcile that at this time, because if the robot makes a mistake and harms a patient, it's difficult to assign accountability for that. [00:15:00]
And as we all know, even the most routine operation, cholecystectomy, appendectomy, central line placement, these can all go catastrophically wrong within moments. And I know from my own personal experience, and I'm sure from the experience of everyone here, that when that does happen, the best possible outcome for the patient is to sit down with the patient and their family or their loved ones, explain to them what happened, and take accountability for that.
In the case where we have AI performing surgical operations on a patient, that accountability is not there. There's been a lot of ethical debate about this, and at this point I do not see any progression in the ethical literature toward a solution that allows us to do things like this in an ethical manner.
So I think that's going to be a huge hangup for developing any type of autonomous decision making in the actual [00:16:00] operating room that is done by an AI. Now, that being said, I also agree with the idea that we're all going to be seeing AI in our surgical practice, whether or not we like it, but I think it's going to creep in in ways that are a little bit more subtle.
I think we're gonna start seeing things like structure identification popping up on our robot screens. I think we're going to get intraoperative decision aids. I think all of those things are gonna be widely used within the next few years, and definitely within our lifetimes. But I really struggle to envision, from an ethical standpoint, any future where the surgeon's hands are not on the console at all times and on the patient at all times. I think that's really hard to see in the future.
Yeah, Dr. Larson really hit the nail on the head there. As primarily a robotic surgeon, I recognize that even right now there are times when the robot assists in a way that means my hands are only indirectly connected.
[00:17:00] So, for instance, if I am suturing something extremely fine, I can change the sensitivity of the movements of my own hands on the robot. And even though it is still my motion, the computer is calculating exactly how much the instrument moves on the inside compared to how much my hand moves at the console.
Now, that is a computer sitting between me and the patient in the same way that the scalpel is. And so the way that AI comes into our practice is going to be very gradual, with little steps that people get used to one at a time. When I first started off on the robot, the patients would ask me, what, you're not going to be, like, scrubbed in next to me?
And I'm like, no, I'm not. I'm gonna be sitting right over there. My assistant's gonna be scrubbed in with you. And it took a while for them to come to realize that. But now it's so common that people don't mind it at all. And it's those little tiny increments that are going to get us through this. Now that does not [00:18:00] address Dr.
Larson's very astute ethical conundrum as to, okay, who's actually responsible? And the answer is, the surgeon's responsible. And anybody who disagrees can hang up their white coat right now. In philosophy, and in mathematical philosophy, if there is no answer to that, then it becomes an axiom.
It is something that must be true, even though we cannot prove it. And in the age of technology-assisted surgery, the axiom must be that the surgeon is responsible. Now, you know, going back to the autonomous cholecystectomy, that was very fun. I liked the paper. It was a great achievement. They used another program called the Smart Tissue Autonomous Robot that operated on top of the da Vinci, and it demonstrated, you know, certain tools that are going to be able to assist us in the next generation of robotic surgery.
But, like, say, even in the da Vinci 5, that haptic feedback solution is a way for the computer, the intelligence that computer has, [00:19:00] to interfere with the surgeon.
It will push back on you if you're putting too much tension on the tissue, and that's a good thing. But it is ultimately the robot standing in the way of the surgeon, because the robot thinks, or has been programmed to say, hey, no, you can't actually pull that hard. And ultimately the surgeon is still in control of that, and that is the first step on the long pathway towards a true surgeon-robot-AI collaboration.
And it represents a remarkable engineering achievement. These researchers overcame technical challenges around real-world tissue characterization and dynamic mapping, and they were able to use the device relatively safely. It demonstrates that AI can do way more things than we think it can right now.
Again, I think that a lot of this [00:20:00] is going to be much akin to, you know, the paradigm of nuclear fusion: it's always 10 years down the road. That autonomous surgical robot, for many decades, is just going to be 10 years down the road. And so, because of those caveats, I'm not concerned about losing my job, and I don't think you really should be either.
And one of the real reasons why, I think, is philosophical. Surgery is not a single procedure on one organ in a healthy pig under controlled laboratory conditions. It is an art based on infinite anatomical variations across multiple pathologies, infinite patient factors, and, unfortunately, unexpected findings.
It's the complications that demand surgical judgment. What happens when that autonomous robot encounters inflammation, aberrant arterial anatomy, suspected malignancy, unexpected bleeding?
AI excels at pattern recognition but struggles with [00:21:00] genuine novelty. And that's why I love surgery, especially surgical oncology. That's why everyone who's listening to this podcast loves surgery: because even the most routine operation is novel. And I envision AI as a surgical copilot rather than an autopilot.
So as far as my prediction, you know, AI will transform how we operate, but surgeons will not be obsolete any more than pilots became obsolete when the autopilot was invented.
The aviation analogy here is a pretty good one. AI can handle the routine tasks, which frees me as the surgeon for high-level judgment and high-level interventions when the systems really fail. In aviation, that's called threat and error management. In surgery, it's just surgical judgment.
Thank you, everyone. That was an amazing, amazing discussion. We've learned a lot on this podcast already, but just quick-fire: one point that you think the listeners should take [00:22:00] home from this podcast. Dr. Mukherjee, if we start with you.
Thank you. I think that, basically, our responsibility as practicing surgeons, and for all trainees and residents and any other surgical colleagues listening to this, is to shape the future of our practice with AI, putting what I term the three Es at the forefront: that's evidence, ethics, and equity. We all need to learn this new way of doing things and partner with data scientists to ensure that AI is deployed within the settings of the patients we treat, truly for their benefit and not for profit margins. As clinicians, I think that has to be our imperative.
Yeah, amazing. Thank you, Dr. Larson. One take-home point for the listeners?
I think AI is a great opportunity [00:23:00] for us to have a new era of creativity in surgery, for us all to think about how enhanced decision making and information processing can be incorporated into clinical practice. And if no solution exists, go out and find people to help you create it. Don't be afraid to approach your partners in computer science and mathematics.
These people are looking for ways to apply their knowledge in high impact clinical ways. So we are really the driving impetus behind our own AI future as surgeons.
Go on, Dr. Nelson.
I'd like to build on what Dr. Larson said: that this really is a unique moment in surgical history.
The next decade will see AI integration into nearly every aspect of our practice. And if we do this thoughtfully, it'll make surgery safer, more precise, more accessible. But if we do it carelessly, it'll introduce new errors, it will exacerbate inequalities, and it will damage that sacred patient-surgeon [00:24:00] relationship.
Alright. The difference between those two futures isn't primarily technical, it's human. It depends on whether we as a surgical profession guide AI's development and deployment and insist that it serves our patients rather than merely serving institutional efficiency or commercial profits.
So I challenge every surgeon, and especially every trainee listening: don't be a passive recipient of AI technology. Be active, be critical, and be engaged in your participation in creating the future of surgery. I want you to demand transparency, insist on validation, maintain your healthy skepticism, and most of all, preserve your humanity.
Your patients deserve nothing less. Our profession demands nothing less. You too deserve to benefit from this extraordinary new technology, so advocate for yourselves. To our [00:25:00] residents: you must be so loud that when widespread AI adoption comes to academic medicine, saving your time with a bit of AI investment will be seen as a worthwhile endeavor.
So remember, the needs of the patient come first, but you know, when you're a trainee, your needs can sometimes come first too. So thanks for your attention, and dedicate your heart to the healing of all that we serve, now and in the future.
Wow. Those are just some great tips and takeaway points for us all to mull over. Once again, thank you, Dr. Mukherjee, Dr. Nelson, and Dr. Larson, for your time and for your wisdom. And thank you all for that super interesting discussion. I'm sure we'll be thinking about all the points that were raised moving forward as well.
We truly appreciate it.
Yeah. And thank you to the listeners for tuning in. I know in this episode there were a lot of similarities and agreement between the approaches to AI, both clinically and academically, between the US and the UK, but be on the lookout for more of our [00:26:00] episodes in the future, where you, as the listener, can judge who does it better, the UK or the US. And remember to
Dominate the day.