AI in Healthcare: A Promising Future with Ethical Considerations

Released Friday, 21st June 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.

0:00

Speaker 0: Hello, everyone. This is Erika Spicer Mason with Becker's Healthcare. Thank you so much for tuning into the Becker's Healthcare podcast series. We're thrilled to have you join us today for an exciting conversation about the intersection of artificial intelligence and healthcare. So to talk to us about this, I am joined by Dr. Joshua Tamayo-Sarver, the Vice President of Innovation at Vituity. Dr. Tamayo-Sarver is a leader in leveraging tech to enhance healthcare delivery and outcomes. He's been at the forefront of integrating AI into various aspects of care, from improving clinical documentation to identifying gaps in care and more. In this episode, we'll explore some of the most promising ways that AI is being integrated into healthcare systems. Dr. Tamayo-Sarver will share his insights on balancing AI capabilities with physician autonomy, the critical role of physician engagement in AI development, and the necessary training to ensure effective use of these technologies. And finally, we'll also touch on ethical considerations, steps to promote equity and accessibility, and the future of AI in healthcare. So without further ado, Dr. Tamayo-Sarver, thank you so much for being here, and welcome to the podcast.

Speaker 1: All right, Erika. Thank you so much for having me. It's a pleasure.

Speaker 0: We're thrilled to have you here and to talk to us about such an important and top-of-mind subject in healthcare today. So to get us started, I'm wondering if you can share: what are some of the most promising ways that you've seen AI integrated into healthcare systems?

Speaker 1: Thanks. And I think when we talk about AI, or artificial intelligence, it's such a broad category that a few years ago, before large language models, we used to say that the CEO calls it AI, the Chief Technology Officer calls it machine learning, and then the data scientist with their hands on the keyboard calls it a logistic regression. And, you know, things have changed.
We actually do have more advanced models that we're using now, but it's still... You know, if you've got a calculator and you hooked it to a computer, you'd call it AI, because it increases your valuation and sounds so much sexier. That being said, when we look at tech that could fall under that AI umbrella, there are areas where it's doing amazing things, like helping me with my work, and we see that in some AI documentation solutions. To things where it's identifying things that I could identify if I spent all my time doing it, like gaps in care, or people who... You know, they've actually got some prediabetes that isn't treated that no one has had the time to notice and bring to our attention. To where it gets really cool in my mind, where it starts seeing things that the human can't see. And, you know, we've got one where it can detect what we call occult sepsis. Right? Someone who's really sick, septic, but actually all their vital signs and everything look fine at the time that you're looking at them, and within 72 hours this AI tool can identify who is going to become septic. So I kind of think of it in those buckets: it helps me do mundane work; it does things that a human could do with, you know, enough time; and then the cool bucket, where it does things that a human can't do.

Speaker 0: Yeah, I really appreciate that overview. So it sounds like there are pretty promising applications in documentation, gaps in care, predicting who is going to become ill. Sounds like really great and promising opportunities for the tech. And I wanted to also kind of address that in this environment of, you know, we're seeing health systems become larger entities through consolidation and partnerships, mergers, things like that. And there's this really big focus today on physicians staying autonomous.
And so I'm wondering, especially with your experience as a clinician as well, how do you strike the right balance between leveraging AI capabilities and also preserving physician autonomy?

Speaker 1: Yeah, that's a really good question, and I wish there were a right answer, because then I would say it and feel really smart. But I don't think there is one right answer other than a balance. And the way that I think of it is, there's a framework of what a product does, a jobs-to-be-done framework. And it's basically, you know, what are you paying for? And if we think of it in terms of the physician, what is the physician's job? And I think that if it falls under that job, then you really don't want AI doing it. So for example, identifying who's going to get diabetes: I think it's my job to oversee it, but it doesn't mean that I need to be the one who's actually, you know, trawling through every medical record to see who it is. I think that's a fine thing for AI to identify for me. I'm comfortable giving up that part of the autonomy. On the other hand, if it's sitting at the bedside and making someone feel seen and heard and understood, and like they're not on their healthcare journey alone, I think that's another thing that I'm paid to do as a physician, and I don't think that's something that I can outsource. And so if we think of it as outsourcing, you know, maybe we could reframe the question as: what is it okay for a physician to outsource, and what do they really still need to have their hands in the pot for?

Speaker 0: I think that's a really good way to put it, so I appreciate that explanation. And taking that a little bit further, as we're, you know, seeing more organizations adopt AI tools and really try to get physicians on board with using AI tools, what are you seeing... I should say, which strategies are you seeing that can really help to engage physicians in developing and implementing those tools?
And are there any trainings that you find are necessary to ensure that they can effectively use AI?

Speaker 1: So that's another good question, and I would actually break that into two different things. One is: what's a great way to get physicians engaged and using a tool? And I'd say the biggest barrier is: make it useful. Right? You know, so many of these tools that we see address a problem from the patient's perspective, or a perception that the patient has is a problem, or it's a problem for the technologist or the payer. But from the physician's perspective, it's not really a problem. Right? Like, as an emergency physician, if I see someone with chest pain, I know what to do. I'll hope that I know what to do, especially if you're one of my patients. If I see someone with abdominal pain, I know what to do. And yet you see all of these things in the AI space that are working on helping me know what to do for someone who comes in with chest pain. Well, I already know what to do. It's kind of like, let's develop an AI that helps you know how to start your car. Right? You'd probably say, yeah, that kind of interferes with my workflow, and I'm pretty comfortable with the button over the key. And so making sure first and foremost that it's actually solving your user's problem, in this case your user being the physician. For some reason, we lose that frequently in healthcare, because we forget that it's actually a human activity in healthcare. So that's a big part of how you get physicians engaged: first, you actually make sure it's solving a problem for the physician. The second part of it, you asked about training. And I would say, think about other analogs in your life of when the training is worth it. Right? When is the run worth the leap? You know, going back to the car example: you wanted to drive a car, driving a car was so important to you and was so helpful to your life, that it was worth the training to drive a car.
On the other hand, if I introduced a new technology for how you start your car, and I'm like, trust me, it's better, it's an easier technology once you get used to it, here's this 80-hour training program you need to go through in order to start your car in a better way... You're probably gonna tell me, yeah... no. Right? Like, the run is not worth the leap for you. Mm-hmm. And so I think with a lot of the tools and technologies we're introducing, it's just not good user experience. Right? It's not invisible. And for us, when we're creating these technologies and we're involved in how you implement them, for one, we always start with implementation. Right? How are we going to implement this, before we build anything? Right? Even as a concept: how are you going to implement this so that, at the right point in that workflow, the right decision is changed, altered, modified, the right behavior is changed, to get a return? Right? So that someone benefits somehow from it. And if you haven't figured that out, you're not ready to start building your technology, because that drives a lot of the requirements, the features that you need for that technology to work.

Speaker 0: Yeah. Thanks, Dr. Tamayo-Sarver. It sounds like intention is really the necessary foundation of having a great application or use of an AI tool, and then, to get staff on board, it has to be useful, has to solve problems, and at the end of the day, that effort needs to be worth it. So I think those are really practical steps. Appreciate it.

Speaker 1: I had this experience... I guess this was actually during COVID. And, you know, I've heard so many times about how, like, the elderly don't like to use technology. Right? And physicians don't like to use technology. But, you know, how many doctors do you know who don't have a smartphone? Right? Well, we had developed this artificial intelligence virtual front door system with one of our partners.
And it was really nice from a user-experience standpoint. They spent, you know, a lot of time to make sure that it was a great user experience. And I went down for the first rollout at one of the clinics. And, you know, I'd like to think of myself as this wonderful, empathic person, whether that's true or not. But I'm gonna think of myself that way anyway. And in the waiting room, you know, first day, we're rolling out this new technology in a real-life clinic with real-life patients. And this elderly French woman comes in to check into the urgent care, and she's, like, 93, 94 years old. And I look at her and I make all these stereotypical assumptions immediately. And I run over to help her, because I'm like, oh my gosh, this is gonna be so miserable for her, I feel so bad about this. And her French, and my understanding of her French to, you know, fill in her name and such, was so poor. And she was so patient and nice, and after me, like, bumbling it for the first 3 or 4 minutes, she's like, oh, let me do that. And she grabs her phone, and 30 seconds later she's done with the whole registration process, and all the questions were answered, everything. And it reminded me that it's not that the elderly or doctors don't like technology. They're just less tolerant of bad technology.

Speaker 0: Mm-hmm. That's such a powerful example. And, you know, I think it also highlights the work that all of us can do in examining our assumptions, and how we can apply that to technology strategies. It's really wonderful. Thanks so much for sharing that. And I think it's also... you know, when we're talking about AI, it's hard to have a conversation about AI and health without, of course, talking about the ethical considerations of these tools. So I'm wondering if you can share, in your view, what are the top ethical considerations that providers should really be keeping in mind when they're implementing these solutions?

Speaker 1: Yeah. For me, the...
Again, I'm a bucketer, if you haven't noticed. You know, I have two big buckets for that. One is how the AI was created, and the second is how it's used. In the creation of AI, I think we've all heard: is it appropriate that we are leveraging the fact that someone else's blood, sweat, and tears created the content that the AI then learned off of, without compensating the content creator for their blood, sweat, and tears? And that's a big question, but it also applies to healthcare. Anytime we're creating a model that was derived from data that was created by people who were not compensated, necessarily or generally, for the AI model that was then based on their data... I don't know that they should be, but it's certainly something that we shouldn't just blindly go through. It's worthy of a discussion, and of making sure that we come to something that seems like a reasonable answer to that as a society. The second is the application of AI. And one of the problems I think that we have with AI is that we allow it to substitute for too many things. And it's our own fault, because we call it AI. Right? We call it artificial intelligence, and that really pushes us to anthropomorphize it. So it becomes a human; we talk about regulating it like a human. Right? If I said, how do we regulate a physician? Well, we go through scope of practice, and we talk about behavioral expectations, and we talk about reasons why behaviors are not acceptable, and we regulate a human in a certain way. If we talked about how we regulate a medical device, that's very different. It's very specific to the use case. And because we call it artificial intelligence, and because it can use words, it seems human, and so we keep getting sucked into this gravitational pull to think of it as a human, and to regulate artificial intelligence as if it's a human or a human actor. When in fact, it is math. Right? It's really cool.
Amazing math, but it is just math, and it doesn't have intention. It doesn't have desires, feelings, responsibilities, obligations. It's just solving a math problem. There are amazing things it can do by solving the math problem; I don't mean to downplay its potential. But when we think of how you regulate it, it's kind of like, how do you regulate a calculator? And that depends on what the use of the calculator is. And one of the things that frankly scares me a lot is that I see kind of a push to regulate AI writ large, and that's kind of like, how do we regulate transportation? Well, I think with transportation, you should have a drug test, you should have to sleep 9 hours and document that you slept 9 hours, and there's a whole list of medications that you can't ever use, and you can't have a list of diagnoses. Well, all those things are actually probably appropriate if you're regulating a pilot between flights, but may not be so appropriate if you're a 21-year-old dude trying to get a taxi home, or ride the subway. And so my concern is that if we get overly broad and anthropomorphic, we're going to introduce regulations that really keep us from finding important solutions, without protecting us in the very discrete cases where that regulation needs to be much, much stronger than what may otherwise be proposed.

Speaker 0: Really interesting perspective, and I appreciate this lens of not viewing AI as a human that needs to be regulated per se. It's math. And so I think that leads me nicely into what I was wanting to get your thoughts on next, which is, you know, despite us thinking of AI in this mathematical term, we know that many health leaders are concerned about potential biases in the data informing AI tools, and possibly perpetuating health inequities. So I'd love to know your perspective on any steps that can be taken to make sure that we're developing AI tools that promote equity and accessibility.
And how can we really monitor and adjust those systems to prevent unintended consequences?

Speaker 1: Yeah. So I think one of the things that we need to do, and I think it's just a maturity model... You know, there was a time when you could have a medical device, and you sold your medical device, and it just went out there, and people used it or didn't use it, and it helped people or killed people, but it was a cool device. And there was no regulation around it. Now we've gotten to the point where we very carefully regulate medical devices, and that, I believe, has helped substantially in making medical devices better, and in making sure that they do what you think they're going to do. I think we went through a very similar trajectory with pharma, and I think we also went through a very similar trajectory with CLIA, with laboratory testing. I think early on, the technology that we were using in healthcare was not very important and didn't do things that were huge. Right? Like, lots of interfaces, lots of business-work sorts of things, back-office work, but it wasn't really making clinical decisions. As we start getting into the clinical-decision aspect of AI, like seeing things that the clinician can't see, I think we need to start having the more mature model, where it's regulated like laboratory testing is regulated, or regulated like medical devices are regulated, if it's doing one of those things. Right? Like, I do a lab test to diagnose you with a heart attack, for example. I want that lab test to be tightly regulated, so I don't misdiagnose you. If I'm diagnosing you with a heart attack because the AI is figuring that out for me, it should have similar regulation around it. When it comes to the accessibility question that you had brought up about health equity, I think one of the interesting things is, I wonder how much accessibility comes into conflict with equity.
And what I mean by that is, if we say that, in the future, AI is slowly going to take on a lot of the clinical work and the clinical categorization, I would imagine that we're unfortunately going to see disadvantaged populations who don't currently have access... Right? So who's the accessibility for? I'm going to presume it's the accessibility for those who have limited access now, so disadvantaged populations. So now disadvantaged populations are going to get preferentially routed to an AI solution versus a human solution. Maybe that actually improves our overall health equity, or maybe it doesn't. I don't know that I've got a clear answer for that one, but I can see a conflict there.

Speaker 0: Mm-hmm. No, I appreciate your candor, and I think that the industry at large is really looking for answers there. And it's okay to not have those answers now, but to be talking about them. So I really appreciate that response.

Speaker 1: That's good. I have a lot of non-answers.

Speaker 0: Yeah. Gosh, well, I know we're winding down with our time together. A lot of these subject areas we could keep going on, I'm sure, but I guess just to round out for our listeners, kind of looking ahead, looking toward the future: what are you seeing as the most significant advancements on the horizon, or changes that will shape the future of healthcare in regards to AI?

Speaker 1: So I think that we are slowly moving to where AI can see things that we can't see. And then we have to kind of figure out what to do about that. Which I think we will get to, but I think there's gonna be a very messy middle for a while. Things like, you know, the Apple Watch detecting AFib. Right? That's really cool, but we don't actually know what you do with that. Right? And we don't have settled science around that. Is it important that someone has a problem that we never knew about and wouldn't have entered our studies previously? Right?
So we don't really have science for what to do now that our detection is getting better and better, and a lot of that is secondary to AI. The other thing is, I think it's going to change how medicine feels quite a bit, and we've already found this with a tool that helps our documentation. The documentation happens as kind of a byproduct of working, rather than my having to spend any time documenting. And I still do 6 shifts a month; I do night shifts. And it's amazing, because now the documentation is just magically done. It's incredible. On the other hand, documentation actually used to be cognitive rest for me. Right? I would get to take a break and kind of be documenting, and I hated it, I complained about it a lot, but I'm not interacting with patients, I'm not making hard decisions, I don't have the same cognitive load when I'm doing that. And as that gets taken away from me, my job actually gets more concentrated at the higher end of my license, but that's also a cognitively harder place to be. And so I don't know if we're gonna have to change the way we do workloads or shifts; it'll be interesting. I don't have answers for that yet, but it's a problem that I can see being created on the horizon.

Speaker 0: I think that's such a helpful forward view, considering how, yes, we're taking away administrative burdens with this technology, but what does that mean for the heavier and more, you know, emotionally intelligent parts of your brain that you have to use for your job on a more consistent basis throughout the day? It's a lot to consider. And I appreciate, Dr. Tamayo-Sarver, how you've brought a lot of these things to light, and I think our listeners are going to walk away with a lot to think about. Thank you so much again for your insights today.

Speaker 1: It was my pleasure. Thank you for having me and for listening to me ramble.

Speaker 0: Of course. It wasn't rambling. It was very informative.
So thank you again, Dr. Tamayo-Sarver, and we would also like to thank our podcast sponsor today, Vituity. You can tune into more podcasts from Becker's Healthcare by visiting our podcast page at beckershospitalreview.com.
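As an aside on the quip in the episode that the data scientist's "AI" is often just a logistic regression: the model class really is simple enough to sketch in a few lines. The sketch below is purely illustrative; the two features, the toy data, and the training loop are invented for the example and do not represent any Vituity model or the occult-sepsis tool discussed above.

```python
import math

# Illustrating the quip that the "AI" under the hood is often a plain
# logistic regression. All data here is invented for the sketch: two
# scaled features per hypothetical patient (think: age and a lab value),
# and a 0/1 label for whether the condition was present.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y = [0.0, 0.0, 1.0, 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that the condition is present for feature vector x.
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Plain gradient descent on the log-loss; no deep learning required.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for i, x in enumerate(X):
        err = predict(w, b, x) - y[i]
        w[0] -= lr * err * x[0] / len(X)
        w[1] -= lr * err * x[1] / len(X)
        b -= lr * err / len(X)

# Risk score (between 0 and 1) for a new, clearly high-feature patient.
risk = predict(w, b, [0.85, 0.85])
```

A production clinical tool built on the same idea would of course involve far more features, rigorous validation on real patient data, and the kind of regulatory oversight the episode argues for; the point of the sketch is only that the underlying math can be this plain.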
