Ethics
===
[00:00:00] Hello, listeners. Welcome to the Behind the Knife series about artificial intelligence. I'm Ayman Ali, a general surgery PGY-4 at Duke. In today's conversation, I'm joined by a fantastic panel of guests to discuss the ethics of artificial intelligence in surgery, in medicine, and in healthcare in general.
First, I'd like to introduce Dr. Ryan Antiel, who's an associate professor of pediatric surgery here at Duke and whose research addresses ethical challenges surrounding the care of seriously ill fetuses and neonates. In addition, he's the founder of The Good Surgeon, a project that focuses on character development in surgical trainees and faculty.
Welcome, Dr. Antiel.
Thanks so much. It's great to be with you all. I am also joined by members from Oregon Health & Science University, who have been on this podcast before. First is Phil Jenkins, a PGY-4 research fellow. Hey, thanks for having us back. We also have Dr. Ruchi Thanawala, a thoracic surgeon with a background in AI and informatics.
Thank you for [00:01:00] having me. We also have Dr. Steven Bedrick, an associate professor of biomedical informatics and a machine learning researcher. Hey, thanks so much for having me back. And finally, we are very lucky to have Professor Kayte Spector-Bagdady, a lawyer and bioethicist at the University of Michigan Medical School.
As we were thinking about this episode, it became clear that we would benefit strongly from having somebody with an understanding of the legal principles. And so again, thank you for taking the time to join us, Professor Spector-Bagdady. Yeah, it's good to be here. So before we start with the general discussion, I just want to start with a case.
As we often do in medicine, I want you to imagine that your hospital just signed an agreement with a computer vision company that specializes in real-time laparoscopic analysis. It's 2:30 in the morning, you get called to do an urgent lap chole on a 30-year-old woman, and the real-time overlay highlights the cystic duct.
You place a clip, you divide the common bile duct, [00:02:00] and it's a very long road ahead for the patient. At the end of the day, it is the culture in medicine that most of us will take responsibility. But from each of you, I want to hear your immediate thoughts about this case. How does it make you feel with respect to each of your backgrounds and perspectives?
I think maybe we can start with Dr. Bedrick. How do you feel as a builder of tools? First of all, that is the nightmare scenario for anyone building any kind of automation tool used in a medical context, right? The idea is to help avoid errors and make things easier for providers, not cause those kinds of problems.
So my first thought hearing that is, first of all, to think about what went wrong from the tool's perspective, but also to think about, when it was deployed and put in the hands of the people using it, what kind of training did they have to understand how much trust to put in that overlay, right?
So I think about automation bias, and I think about [00:03:00] what the developers maybe could or should have been doing differently in their testing. And it also keeps me up at night worrying about that kind of stuff.
Dr. Bedrick, I think you bring up some really important points, which is what was in the minds of the people using the tools. And me being a surgeon, I think about all these times where we actually have uncertainty, like, what is this structure? Rarely does the anatomy just pop out
and say, this is what I am and I'm nothing else. These new technologies and computer vision tools can be beneficial, but they can also add to the complexity of decision making in already complex situations, and how do we get those things to align? So I think almost all of us as surgeons can empathize with what has happened in this case, and then try to figure out how we really leverage these amazing tools to help us make better decisions in hard [00:04:00] circumstances, and try to mitigate what the algorithm perhaps couldn't have known.
The algorithm only had so much information, and as a surgeon I'm now having to piece together all these additional pieces of information on top of what we're trained to do, which is just look at the anatomy as we see it. I wanted to advance the scenario a little bit, especially for an attending surgeon who has trainees: how do you view this when your trainee makes the mistake as opposed to you?
Because as we start to train on these sorts of technologies, residents are going to train on them. So now we have that extra layer of complexity: not only did the algorithm make the mistake, but it's the trainee who actually physically clipped and cut. Does that change the way you view the scenario, or does it elevate your level of fear about these sorts of things?
I think this is a super interesting point, and it's something we think [00:05:00] about: the impact on education and learning when we use these artificial intelligence tools in clinical practice. I stand by the position that we need to know how to make these decisions without the adjunct of the tools, because how will we know whether the recommendation is right or not if we don't have a baseline
knowledge of what to do and what not to do, when to believe and when to disbelieve? So, Phil, you bring up an excellent point. We are at a place where not only do we need to teach the anatomy and the approaches to these operations, but also how to effectively use these tools and know when to go with what they're saying and when not to.
In that situation, even knowing that the trainee may have had their hands on the tool clipping the duct, me as the attending, I still hold the responsibility. I am the responsible faculty in the room, and I should be knowledgeable about how to integrate all these tools, guide the learner, and guide the [00:06:00] clinical decision making and practice as well.
Definitely. Yeah, go ahead. I'll also add, as a lawyer, obviously my first thought is the liability here. It's interesting: many of the questions that I would ask if that doctor then came into my office and said, please defend me in this lawsuit, are whether they had other visibility or lines of sight to the anatomy that weren't
directed by the AI, and whether they were supposed to be checking, right? Was the AI tool just an enhancement or an add-on, where you're supposed to confirm with visual or other tools and they were using it incorrectly, or did the AI analysis actually blind them to being able to see the anatomy in a different way?
And those are the kinds of things that would really, I think, dictate who holds the liability in this kind of case. Sort of on a broader picture, when you're talking about tools like this, I think [00:07:00] inevitably we are going to have these cases, and so, broadly thinking, how do you choose which tool gets deployed?
Because ultimately, by definition, they're all going to have sensitivities and specificities; they will all have positives and negatives. I'm interested maybe first in you, Professor Spector-Bagdady, and then Dr. Antiel: what if a patient comes to you now and says, I just got injured, and I think it has to do with them using AI?
How do you go about that? Does that change at all how you approach the situation? Yeah, I mean, when you talk about choice, I don't think there's actually going to be a lot of choice in the system, right? When you work at a hospital, the hospital picks out the tools you have access to, and some hospitals have access to better tools and some hospitals have access to less highly advanced tools: whether you have a robot doing surgery, whether you have
one person trained on it versus ten people trained on it. All of these are decisions that hospital systems make every day. [00:08:00] And I don't think AI is going to be an exception to that. The hospital is either going to buy the tool or not, and expect certain people to use it or not.
And so the flexibility might actually be quite low at the individual practitioner level. If you then go down a deeper level and talk about the patient: will the patient be able to opt out of or opt into using this tool? One, patients never have the right to demand any specific care if the practitioner feels it's inappropriate, or any particular tool
if the clinician feels that's not the right tool to use, so they won't have a right to demand AI. I think a more complex ethical question is going to be: do they have the right to say, I don't want AI used in my care, if that's what people typically are using? Right? So it's like the patients who always come in and say, I want the chair of surgery to do my surgery.
And the residents are like, oh no, you don't, right? And I don't mean that at Duke in particular, just generally speaking. But do you really want your surgeon who's trained with an AI tool to do it without? That in and of itself could be another kind of liability issue.
Yeah. I think one thing that's really interesting is that we're constantly using various tools, as well as relying on each other, to try to prevent these sorts of errors and see more clearly. And the patients are oftentimes not privy to that. For example, I don't know that a lot of surgeons are consenting patients to use ICG to better illuminate the bile ducts and things like that, right?
And so we're constantly using various tools. But what's interesting in this case is that despite the use of those tools, whether it's a robot or a laparoscopic technique, whether it's an adjunct like ICG to help us better see the bile ducts, [00:10:00] the final decision of where to clip and where to cut still lies within
the surgeon's purview, which is why, for example, we don't place blame on a trainee when the attending surgeon is standing right there and says, yep, go ahead and clip there. We expect that that individual's demonstrated knowledge and expertise is such that they take on that responsibility for the trainee.
So this makes for a really interesting new area when the recommendation is not from, say, a colleague or your attending surgeon or faculty or whatnot, but you're getting input that might be inaccurate. And again, the point that everyone has really been making, which I think is so important, is that at the end of the day, patients are coming to you as [00:11:00] the surgeon not just because you can employ a particular technique to solve a particular medical problem, but because they're trusting that you have some ability to make judgments,
to adapt, to improvise as needed. And so they're trusting, at the end of the day, that your reputation and your judgment are sound enough to try to mitigate those sorts of errors or mistakes. And so when you're getting input from a source that directed you in the wrong way, it becomes a really interesting problem.
I just want to add, it's a fascinating thing to think about how these tools are applied. If we think about software versus hardware, I'll create that division first. Software has this capacity for high-throughput, rapid scalability and implementation, much faster than hardware.
If we think about hardware in [00:12:00] the surgical setting, stents, implants, all these sorts of things actually require a representative from the device company to be in the room and to actually train the user, the surgeon, on the tool. One of the challenges in the conversation we're having is that our computer vision AI models will have excellent performance, but we are not
able to keep up with the dispersion rate of how they're being implemented to make sure that the users actually know how to use them. And I think this is the challenge: we will have excellent models, excellent performance, and we will have surgeons and clinicians who don't know how to use them. It's the fundamental difference between software deployment in the healthcare environment and hardware implantable device development in the healthcare environment.
Another angle to that is that software can evolve and change, right? I've heard people who use ambient scribes talk about how they can tell when the ambient scribe companies have changed the [00:13:00] model, because their notes suddenly sound different, or they're more formal or less formal or whatever.
Normally, in medical software, that is a giant no-no, and users will come to you with pitchforks and torches if your software does that to them. But we're now seeing that for AI-powered software, people are way more tolerant of those kinds of changes than maybe they historically have been.
That's another piece to it, right? Not only can the software be put out there a lot more quickly, it'll have the capability of evolving and changing. Why is that important? Because as users get accustomed to one modality of use, the kinds of visual or anatomical situations where it does or doesn't accurately identify the right bile duct,
their mental model of how the software works could actually become inaccurate as the software changes, and they might not even realize that happened. So I think that's a particular angle on AI in this space that we need to think about managing or training for.
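As a purely illustrative aside, and not a tool any of the panelists described, one way a health system could notice that kind of silent model update in an ambient-scribe pipeline is to track cheap statistics of the generated notes over time and flag sudden shifts. The metrics, thresholds, and function names in this minimal sketch are assumptions made for the example.

```python
# Hypothetical sketch: watching for a "silent" model update by tracking
# simple statistics of generated notes over time. Metric choices and the
# z-score threshold are illustrative assumptions, not a vendor API.
from statistics import mean, stdev


def formality_proxy(note: str) -> float:
    """Crude stand-in metric: fraction of words longer than six characters."""
    words = note.split()
    if not words:
        return 0.0
    return sum(len(w) > 6 for w in words) / len(words)


def drift_flags(baseline_notes: list[str], recent_notes: list[str],
                z_threshold: float = 3.0) -> dict[str, bool]:
    """Flag metrics whose recent mean drifts far from the baseline mean.

    A metric is flagged when the recent mean sits more than `z_threshold`
    baseline standard deviations away from the baseline mean.
    """
    metrics = {
        "word_count": lambda n: float(len(n.split())),
        "formality": formality_proxy,
    }
    flags = {}
    for name, metric in metrics.items():
        base = [metric(n) for n in baseline_notes]
        recent = [metric(n) for n in recent_notes]
        sigma = stdev(base)
        if sigma == 0:
            flags[name] = False  # no baseline variation to compare against
            continue
        z = abs(mean(recent) - mean(base)) / sigma
        flags[name] = z > z_threshold
    return flags


# Toy example: a sudden jump in note length raises the word_count flag,
# prompting a human to ask the vendor whether the model changed.
baseline = (["Patient tolerated the procedure well."] * 15
            + ["Doing fine, no acute issues today."] * 15)
recent = ["The patient demonstrated satisfactory postoperative hemodynamic stability without acute events."] * 30
print(drift_flags(baseline, recent))
```

A raised flag proves nothing on its own; it is simply a prompt for a human to ask the vendor whether the underlying model changed.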
I think one interesting question that [00:14:00] you raise is, in terms of training and regulation, to what extent should the hardware and software paradigms follow each other? I'm thinking, for example, to use the gallbladder example further: when laparoscopy was first introduced for removal of the gallbladder, the cholecystectomy, we saw,
as the uptake increased, enormous rates of bile duct injuries and things like that. So the problem wasn't the hardware; the problem was folks weren't properly trained to use it, to understand its limitations, et cetera. And it really required moratoriums, for example, before people stopped and recognized what was needed in terms of training in order to be competent in this new modality, even though you were doing the, quote, same procedure.
I wonder if any similar sort of regulation, at the hospital level or at a higher level, [00:15:00] will take place in terms of the utilization of software as well. I think all those points are incredible, especially the point about software versus hardware and actually having reps. For those of us who are in the OR, we're very familiar with having reps just stand there, and it's actually something you don't see for things like overlays or AI models and other scenarios.
And I kind of wonder whether we maybe should have that. I think the companies do it for multiple reasons, and liability is a part of that. They always say, you know, you can use that stapler, but it's really not meant for that, and we do not advise that, right? But nobody does that for this stuff.
And we see it deployed so fast. We talked about this on a prior episode, but people don't quite understand what these models don't do, and I think that becomes a problem. So you do a CT scan of the head, and maybe the model is not trained to pick up a tumor and it never will.
Maybe it will only ever look for your subarachnoids or your subdurals, and it will never mention a tumor. And so if you don't [00:16:00] understand that concept, then you can't use the tool properly, regardless of how good it is at the task it's marketed for. Yeah, I'll also add that that is assuming the sales rep has any idea what the device is doing and could explain it to you, right?
So I used to be an attorney for a medical device company and train the reps who stand in the ORs, help surgeons, and explain how the tools are used. But these tools, and maybe we'll talk about this a little bit more later, many of them are built on black-box models, and they start acting in ways that are potentially unpredictable, or they get trained on data and people are not really sure why they're coming out with certain outcomes.
And when you go back and try to go spelunking through the analysis to figure out what happened, you realize the device is being trained on data that you wouldn't have wanted it to be. So I think we're already making an assumption that if we plunked somebody from one of these companies in the operating room, they would have any more idea than we do, which is [00:17:00] definitely not always true.
That's a really important point. And Dr. Antiel mentioned regulation; I think this is actually a place where we do need to have some kind of regulatory framework in place. People have had a lot of different ideas of what regulation for AI software in this space might look like.
There are a number of different models, and I don't pretend to know which is the best approach, but I think this is exactly the kind of scenario where we do need that regulation, if for no other reason than, to the earlier point, so that hospitals that buy the thing have a lot of transparency into what the vendors think it can do,
what they think it cannot do, and what they are willing to actually stand by in terms of use cases. I would love it if there were some enforced transparency on what sort of data was used for training. I don't know that it always needs to be a super heavy-handed, you-need-a-phase-three-trial-or-RCT kind of scenario.
But something where people actually can reason about what kind of data went into this, I think, would be key to avoiding some of [00:18:00] these kinds of things that we're talking about. Yeah, I think that despite loving this space and really loving AI and deployment, I almost fall conservative in that
I almost think that maybe there should be a phase three trial for every AI tool that impacts a patient. I think conservative is good in this space, to be honest. And when you have a whole industry and zeitgeist pushing so hard in one direction, it's worth asking why, and wondering whether the interests of patients are being best served by that push.
And I think we're seeing very significant pushes to adopt all of the things now: why stop and think about it, just do it, you don't want to miss out. We need to push back against that inevitability, I think. So conservative is good. Yeah. I think it's also worth mentioning that other industries, in my experience in aviation, go through extreme measures to not change things too much, because pilots have to be re-certified.
So the reason there's a 737-900 [00:19:00] and a MAX, and it's still a 737, is that airlines don't want to have to send their pilots back through retraining on a new platform, even though most Boeing aircraft fly very similarly. In this world, we are now introducing something completely and radically new to a surgeon's toolkit.
We've shown in our lab that most surgeons are not informaticians, have not been trained in this, and don't even have the appropriate background to be able to identify these tools. So, for the panel: we're asking surgeons not just to use a new stapler, we're asking them to completely retrain, basically, in how the future of surgery looks.
And with that being led by industry in particular, Dr. Bedrick, I know that you're at AMIA and we had a lot of conversations about this: where does that fall on everyone's, I suppose, fear radar, I guess would be a word for it, but scope? I'll just make a small point, but it's not even retraining, because we still aren't [00:20:00] training people in medical school yet on how to use these tools.
So everybody is going to need to train from the ground up. I was just on my first panel on integrating AI into clinical medicine at the University of Michigan Medical School, and it's not because we're behind.
I think it's really underappreciated, and now becoming more appreciated, how far behind we are on the education of the people who are entering healthcare. And we're just talking about physicians; AI is within our ICU care, for the nurses, for all of the care teams that really constitute our entire healthcare system.
And I think we actually need a very unified approach for how to explain, understand, and, most importantly, question these algorithms, and, just as we said, outline what they don't know, which we may not get from industry. And to be very [00:21:00] honest, one of the challenges we have is that because these algorithms and these software tools are so expensive to develop, a lot of the power lives within industry.
I think we often very successfully partner with industry, but in this situation we are honestly somewhat at odds, in the sense that fast, rapid deployment is a benefit to industry but runs counter to our responsibility to the individual patient and to the transparency and explainability that we may demand in the academic healthcare environment.
For the algorithms we develop, how do we bring that counterbalance and have that sort of accountability in industry? I think the only way is to educate all of us who are in healthcare even more, to ask the right questions of industry-driven algorithms. The partnership is important;
I think we just need to be a balance to each other.
And I also think that goes to, as I mentioned, I used to be an FDA attorney, and a favorite [00:22:00] pastime of many people is to say, well, FDA should do this better. But FDA has gotten thousands of applications involving AI, and it would take its entire workforce not only to evaluate all of the novel applications of AI in the tools themselves, but think of all the AI that's being used to develop the data that then, as research, informs new tools being developed that are not necessarily AI in and of themselves.
I mean, there's no way FDA can control it, and its purview is somewhat narrow, right? So if you're not submitting a new drug or device or exemption to FDA, it doesn't necessarily have authority over you to ensure that you act in certain ways. Its purview is not all of the AI that might be used in a hospital.
So really, just as Dr. Thanawala was saying, it is going to require [00:23:00] both academia and industry and private hospitals to think more holistically about how we're going to govern, as opposed to just regulate, this space. And I'm glad you brought up the governance piece, because there's that gray area of stuff. It's easy to say, okay, here's the stuff that needs to be FDA approved;
if it's not approved, we don't use it. But then there's this whole gray area of things that maybe don't need to be, where the hospital might still have ideas about whether they want to be the ones to try this for the first time or let somebody else try it for the first time. So we have an AI governance committee here at OHSU:
in theory, when somebody wants to try a new AI-powered thing or buy a new AI-powered thing, we have an intake process. We've worked out what questions we have. It's fairly basic things that we try to find out from vendors, or rather that we try to get whichever department wants to buy the thing to find out from the vendor.
And we decided early on that we could not [00:24:00] be the AI police and keep people from doing AI stuff, and we didn't want to be. But we did want to make sure people were asking questions like: hey, how did they evaluate this? When they say it's 99% accurate, how did they come up with that number?
Did they bother to look to see if it has differential performance on people with different colored skin, or different accents, or whatever? And nine times out of ten the answer is no, they didn't, and then we get to have more
conversation about, well, in this use case, how big of a concern is that? And we can make a more informed decision as an organization. It's not perfect; we're still trying to figure out our process and work out all the kinks. But I think over time there may come to be more standardized norms for how to do that, as this goes on longer and more places start to do things like this.
So maybe there's some kind of middle ground of governance, because having the FDA do all of it is just not feasible. Yeah, and I'm really glad Dr. Bedrick brought up this point, because I think it's really important that even
And the whole point of these AI tools is to distribute them globally, or at least nationally and in particular to communities that don't otherwise have access to some of the basics of care that we do in our, you know. Well educated, wealthy college towns. Um, and what does it mean? And, and. When you take a tool that's trained in one population, like we know currently 72% of AI tools being used in medicine are trained in academic medical centers from Massachusetts, New York, and California alone.
What happens when we take those tools and we embed them in rural America? Or in highly urban areas, what does that mean? And we've seen this happen also in like pharmacogenomics. 'cause most of my work is in genetics where we would take, a drug that was optimized for the study population, [00:26:00] which often was largely European, and then distribute it in other populations like Hawaii, , which has a much lower, , European ancestry rate.
And what they found actually was that genetic variation associated with being a Pacific Islander or other, , Asian ancestries meant that this drug was actually less effective in these populations.
And the governor actually sued the drug manufacturer to say you were advertising and tried to get on our Medicaid list. Just as any other drug does, but your drug doesn't work in our population. , And so I think we're gonna have many such moments in ai.
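As a purely hypothetical illustration of the kind of local audit a deploying site or governance committee could run before trusting an aggregate accuracy claim, the sketch below computes sensitivity and specificity per subgroup (per site, demographic group, or population) on locally labeled validation cases. The record format, group labels, and function name are assumptions for this example, not anything the panel or a specific vendor described.

```python
# Hypothetical sketch: auditing a model's claimed accuracy by computing
# sensitivity and specificity per subgroup on a locally labeled validation
# set. The record format and group labels are assumptions for illustration.
from collections import defaultdict


def subgroup_performance(records):
    """records: iterable of (group, y_true, y_pred) tuples with 0/1 labels."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        cell = counts[group]
        if y_true and y_pred:
            cell["tp"] += 1
        elif y_true and not y_pred:
            cell["fn"] += 1
        elif not y_true and y_pred:
            cell["fp"] += 1
        else:
            cell["tn"] += 1

    report = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        report[group] = {
            "n": pos + neg,
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
        }
    return report


# Toy validation set: group_B's sensitivity comes out far lower than
# group_A's, a gap that a single aggregate accuracy number would blur.
validation = [
    ("group_A", 1, 1), ("group_A", 1, 1), ("group_A", 0, 0), ("group_A", 0, 0),
    ("group_B", 1, 0), ("group_B", 1, 0), ("group_B", 1, 1), ("group_B", 0, 0),
]
for group, stats in sorted(subgroup_performance(validation).items()):
    print(group, stats)
```

In the toy data, the overall numbers look passable while one subgroup's sensitivity is much lower, which is exactly the kind of gap the panel is describing when a tool trained on one population is deployed in another.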
Thank you so much for that excellent example, Professor Spector-Bagdady. I think that is a really strong point. This reminds me a little bit of OpenAI. For those who don't know, they originally started off as a nonprofit, and their entire mission was to democratize artificial intelligence, to make it available to everybody, and [00:27:00] to really focus on the ethical challenges of artificial intelligence as they deployed it.
What it ended up with is ChatGPT and a for-profit organization that has sort of put those ethical concerns to the side. And we've seen that with our first AI liability case, which Professor Spector-Bagdady and I were chatting about before this podcast started. So what I'd like to discuss next is data itself.
I want to start with Dr. Antiel and Professor Spector-Bagdady, because your work is focused on the genome, and at least to me, somebody's DNA is a very valuable and very intimate part of their data. So many companies are collecting this and using it, and we don't have much of an idea of where that data goes. Down the line, you can foresee a data breach,
or something else where a [00:28:00] company sells off this data and it becomes consequential to the individual whose data it originally was. So I'd like to hear: what are your thoughts about that, Dr. Antiel and Professor Spector-Bagdady?
Yeah, that's a great question. To what extent do individuals consent to their genetic information being stored? What responsibilities do the institutions or the companies who are using it have down the road when new hypotheses are generated?
And is there any way for individuals to even understand what's learned from their own information? In other words, once you sign that paper and that information is entered into these large databases, what does the company, what does the hospital, what do the investigators owe you
down the road? These are really, really hard questions, and people are struggling with them in the use of genomic information. And I think using [00:29:00] AI could raise some really interesting questions as well, like: to what extent will patients consent for their information to be part of these large learning models?
And to what extent are we then responsible for relaying what we learn from these things back to those individuals? Will there be, to Dr. Bedrick's point in terms of governance, will consent for this be mandated, and who's going to do the mandating? How does one opt out versus opt in? Lots of really difficult and important questions ahead of us.
Yeah, and I think there are so many companies using data, and it exists in all forms. Even Dr. Thanawala, for those who don't know you, you have Firefly, and that is access to, if you had an evil mind, access to every single surgical resident's portfolio, their EPAs, and every single case they've done and how they've performed.
And you can imagine that if somebody had all [00:30:00] this data with bad intentions, it could go very poorly: insurance reimbursements or things like that down the road. So I'd love your perspective as well. Yeah, I think a lot of it has to do with the intention of the developers.
So, going to Dr. Antiel's point and what you're asking, the data is so foundational to these models. Some would say you make the patient, you make the resident, you make someone buy back information that is founded on their data. How ethical is that or not? That data was required to derive the knowledge.
Should we be charging patients, or making people pay, to know what we learned from their information? No. So for me, as the developer of Firefly, holding what is sensitive information, you have to really take care of it, and it has to go back to the right individual and it has to benefit the right people.
And there has to be a lot of intentionality within [00:31:00] that. This is where there's a lot of power in the people who are developing this, and this is where there is a true power imbalance. And how we mitigate that, navigate that, I think, is where all the success and all the failure is actually going to be.
Just to emphasize something you just said: when you said, should patients have to pay for products that were trained on their data? No. I felt like all of the major AI manufacturers just rolled over in their graves, right? I mean, that's not how any of this has ever been structured.
It's also not how drugs or devices have been structured, right? Not only do patients then have to pay for future access to the drug, sometimes there's a donut hole between the clinical trial and when the drug becomes approved where they can't even pay for it. I think it's a really interesting concept.
I'm not sure how it would be rolled out, but I think intellectually it's very interesting. [00:32:00] I've done some research where we were interviewing a bunch of hospital systems about how they were rolling out AI, and we interviewed one person who said, you know, if patients are part of these trials, it's really benefiting them if we share their data with industry, because then they have access to products that are less expensive.
And we're like, really? That's interesting. We've never heard of anybody being able to proceduralize that. How do you do it? And he's like, well, they're just cheaper. And we're like, well, isn't it usually an insurance reimbursement package kind of thing? How is this cheaper? And at the end of the day, the punchline, of course, is that they weren't.
It was just sort of wishful thinking, because that is very complex to proceduralize, but I agree it's a really interesting goal. And while we may never be able to proceduralize it, I think if we at least empower the patients, the healthcare providers, to know that their data is part of the [00:33:00] system, we can at least have the conversation.
We may not be able to get anything out of it, but we can then have balance and say: how much of my data went into developing this? How are you using it? How do I make sure that it's being done responsibly, since I'm essentially part of your system now?
And I think framing it that way is really important, because there are a lot of things where there's this sort of very diffuse benefit. It's easy to look at it and say, well, in some big-picture cosmic sense, yes,
the existence of a better predictive model for whatever is beneficial in some way that might trickle down to the patient, and that's kind of diffuse and fuzzy and hard to pin down. It's very easy to say the existence of that model makes that company a bunch of money, right? It's really easy to see where the benefit is, and it's really easy to see where the costs are.
So in those kinds of situations where there's that mismatch, that's where we do need a regulatory [00:34:00] framework of some kind, even if that regulatory framework is largely built around protecting people from harms rather than making sure they accrue benefits, right? We do that sometimes in other realms of medicine.
And maybe we need something like that here, right? We have a lot of genetic privacy laws, which in theory are supposed to make it so your insurance company can't buy information that you might get about your genome from someone else who's not supposed to have it. Maybe there could be some analogs here for these kinds of things.
I personally have twitched a little bit when I've heard hospitals say the same thing: oh yes, by leveraging our archive of patient data, we're going to make a bunch of money, and that's going to keep us in business, and so that's beneficial for our patients. And I always twitch a little bit about that. Maybe we could give patients a discount on their
premium or something if they agree to have their data shared in this way. But I also would like a pony. [00:35:00] So we all have dreams. But maybe there's some opportunity. Well, there's precedent for that in wellness programs, right? Ah, so with wellness programs, you can get a discount on your employer-based insurance if you do things like go to the gym or share data or quit smoking.
So there's a little bit of a precedent for that. Although I will say, in terms of regulating and requiring anti-discrimination, I'll show my cards and just say that I used to work for the Obama administration in health policy for his bioethics commission, and under the Biden administration I also
helped consult on his executive order, which did a lot of things in the space of AI, including trying to prohibit discrimination in at least the models being put out by the executive departments. That is not the direction the government is headed right now, and I think the most recent executive order, called Preventing Woke AI,
was actually prohibiting assessments for discrimination. So that's something that industry and academia are [00:36:00] going to have to do themselves right now if it's going to happen. Just to add, there's a new one this week that revived Ted Cruz's attempt to ban state regulation of AI and algorithmic decision making, which is, I don't really have words to say how catastrophic that would all be.
This is a space where regulation is needed, and it might be something that does need to happen at the state level, so federal preemption would essentially take that off the table. This is where professional norms and community norms within industries and healthcare fields come in. We've seen norms hold up in how different parts of the healthcare system have done certain things over the last year to protect patients and protect our integrity as a field, and maybe this is a space where we should be developing more industry-wide and field-wide norms.
I completely agree. I think that we need, at least within our hospitals and in industry, to take that responsibility upon [00:37:00] ourselves. Now, unfortunately, the penalty of having such an incredible panel is that some people have to go in a few minutes. So just to wrap it up, maybe we can each go around and, in a sentence,
say what gives you hope about the future of AI in medicine, and then what keeps you up at night? I really wish we had another three hours on this podcast. To start with myself personally, what keeps me up at night is, as Dr. Bedrick was just saying, the regulation, and the lack thereof.
That really worries me. And as for hope, I really hope I never have to do another administrative task again. So, Dr. Antiel?
In terms of the education space, which is obviously important to this podcast and its audience, I'm really excited about the applications for education: for analysis of, for example, the procedures you do, and the sort of feedback you can get to continue to grow and push yourselves. [00:38:00]
What keeps me up at night, or what worries me, is very similar to the first case that you posed to us: that with the adoption of these really amazing and powerful models, one might not take the time to gain the experience, to know when not to use them.
And so ultimately our trainees need to develop the wisdom and prudence to know when to deviate from the model, when the algorithm's not right.
Who's next? Professor Spector-Bagdady? Sure. Not to be too tongue in cheek, but what keeps me up at night is people who think regulations are going to save us from this. It's just far too complex and far too vast, it's in way too many private hands, and the politics aren't there. We are going to be so far past the horse being out of the barn, right?
The horse is going to be halfway to Europe by the time we get around [00:39:00] to regulating this. So I do think we are going to have to look at different kinds of internal and industry governance models. Something that gives me hope is that, oh my God, I love it for research. You are going to have to pry my AI tools out of my cold dead hands for research.
I think the first time I sat there with some of these literature review tools that are built into the U of M library, that go straight to my Zotero, that input into my article, I literally cried at the amount of time it can save, doing structured abstracts and all of that stuff.
It's an amazing tool, and I'm so excited to keep using it and to help people figure out how to use it in safe ways, and that's what gives me hope.
I'll go with the trend of training. What keeps me up at night, as someone who's been [00:40:00] invested in training new people, whether it was flying airplanes or now as a trainee in surgery, is seeing these tools take off at a speed at which we don't have the ability to train. We are way behind the power curve.
That horse is almost to Europe, probably most of the way through Russia right now. So what keeps me up at night is that we have a lot of very powerful tools being disseminated at a rapid rate without the training to support them. But what gives me hope is that there's excitement about it, and it is permeating places that I never thought I'd see.
I never thought that being someone who knows a lot about AI would be a desirable skill to have in the surgical world, but yet it is. And I think people are more interested in learning how it works, and that enthusiasm, that curiosity, is what gives me hope that people will want the training when we can get around to making it.
So, I am kept up at night by many of the same [00:41:00] things that people have brought up, but also by the enthusiasm with which I've seen people jump onto tools that they'll admit they don't understand, that they've seen make errors, that they don't even want to use, but they feel like they have to.
Right? The psychology of that keeps me up at night; I do not understand it. And so I worry about things being adopted because people feel like they have to, or everyone else is doing it, or if they don't, someone else will, or whatever, and that turning off of the part of your brain that does critical thinking.
So that keeps me up at night. I'm excited by a lot of the same things other people said. Using this for research is amazing; in my research on clinical records and medical notes, for so many years, every project people came to me with ideas for, I would say: great, give me a research assistant and a year to do data labeling,
and maybe we'll have something that'll kind of work after a year. And now I can say: yeah, awesome, we can probably bang out a prototype this week; the tools are so much better. And that just [00:42:00] opens the door to a lot of science that we couldn't do before. So that gives me a lot of hope.
I guess that leaves me last. What gives me hope is the fact that we're actually having these conversations, and that we know there are people who are interested in hearing and being a part of them. But at the same time, what keeps me up at night is the fact that we need to have more of these conversations, and we might be the leading edge that's actually trying to converse about this, whereas others are willing to let automation bias and the hype drive application.
Well, thank you, everybody. Again, I mean it when I say I wish we had five more hours, and if I could only use ChatGPT to get a copy of each of your brains and summarize it, it would be incredible. So thank you again, and from Behind the Knife, dominate the day.