Computer Vision for Health: Living Longer

Author: Beverly Wright


In this podcast episode, Dr. Beverly Wright, Vice President of Data Science & AI at Wavicle Data Solutions, speaks with William Figueroa, Chief Information and Technology Officer at Integrated Oncology Network. Together, they explore how computer vision is driving innovation in healthcare, particularly in oncology, by enhancing diagnostic imaging, increasing efficiency, and improving patient outcomes.

 

  • Dr. Beverly Wright, Vice President of Data Science & AI, Wavicle Data Solutions
  • William Figueroa, Chief Information and Technology Officer, Integrated Oncology Network

 

Watch the full podcast here or keep scrolling to read a transcript of the discussion between Beverly and William:

 

 

Beverly: Hello, I’m Dr. Beverly Wright, and welcome to TAG Data Talk. With us today, we have William Figueroa, Chief Information and Technology Officer at Integrated Oncology Network, and we’re talking about computer vision for health: living longer with AI, which is a topic that’s very intriguing. A lot of people I know who are in healthcare are talking about some of the negative sides of AI, or even in other industries, maybe job replacement and that sort of thing. And so it’s refreshing to hear something that’s super positive. So, before we jump into it, tell us, why are you so cool, William?

 

William: Well, I like to think that I’m cool because I’m pretty athletic. I snowboard (don’t ski), mountain bike, cycle, and surf. In California, some of those sports are possible.

 

Beverly: Ok.

 

William: But really in the context of this, I’ve been in technology for 20 years, been a data nerd for all of those 20 years. I love analysis. I love leveraging data to answer questions and prove thesis statements.

 

Beverly: Me too.

 

William: And I really feel like we’re nowhere without it in technology because it’s at the core of everything that we do.

 

Beverly: Amen to that. But do you pickleball?

 

William: Oh, I’ve tried it, I’ve tried it, and I’m OK at it. But for context, I lost to a 7-year-old.

 

Beverly: It can be like that.

 

William: Yeah, yeah.

 

Beverly: Or a 70-year-old or, you know, yeah.

 

William: Yeah, and decided that I’d rather spectate and drink.

 

Beverly: Yes, I understand. So we’re talking about this computer vision for health and living longer with AI. And so let’s think about this. When we talk about computer vision in the healthcare space, are we talking sort of like the clinical side? Are we talking about the operational side? Like, let’s first, you know, backdrop. What do we mean by that?

 

William: Well, we mean processing radiology images.

 

Beverly: Radiology images.

 

William: So not in the surgical side, more in the clinical side as we assess patients, and in my case, in the oncology scenario, so lung images, lung MRI or CT images, colon screenings, breast cancer screens, those kinds of things.

 

Beverly: OK, OK. And walk us through a day in the life of like 5-10 years ago, what did this look like?

 

William: So 5-10 years ago, I wouldn’t have been in oncology, but I can tell you, from the perspective I see in our practitioners and our practices, that back then it was very manual work, right? Scheduling was manual. The patient experience is actually almost the same. Unfortunately, access to cancer care is almost the same too. It’s an experience that a lot more of us are having, but very few of us have any experience navigating. So healthcare is hard to navigate as it is.

 

And when you’re in a traumatic situation like cancer, it’s even harder. So that’s the context, and I think that’s an opportunity where we can use data to improve the life and the outcome of a patient with cancer. But as far as the question is concerned, it’s very much the same. You’re diagnosed and referred to a place for your care. There are orders for imaging depending on your cancer, and you’re referred for imaging, where you go get an MRI or CT scan, and then that gets passed on to a radiologist who analyzes the images.

 

Beverly: A single person.

 

William: A single person that manages whatever number of case logs they have. And then the patient waits, and we get maybe two- or three-day turnarounds, maybe longer, to see those results.

 

Beverly: Yeah, meanwhile sitting at home stressed out or trying to live your life or whatever. So put computer vision into that same scenario. If you would tell us how it’s different now.

 

William: Well, it could be. It’s emerging. There are practices that are leveraging computer vision to accomplish a 40% improvement in turnaround times. Maybe there are two different ways to think about it: radiologists can do 40% more work, meaning they can actually take on 40% more load, and also reduce the turnaround times of these image assessments.

 

So that’s significant. But what’s more important is anomaly detection with computer vision. We’re seeing studies where a radiologist could miss a very small anomaly in an image, and a computer can actually detect it, and detect it significantly ahead of a cancer diagnosis. So think colon screenings or breast cancer screenings, mammograms. Sometimes we miss things as humans, but that’s where computers can help us detect those things very early and therefore have a significantly better outcome.

 

Beverly: Yeah. So there are two ways to look at it. It sounds like you’re saying that number one, there’s a 40% increase in the number of cases, I think is what you call them?

 

William: Number of cases or images that can be processed.

 

Beverly: Oh, gotcha. By a single radiologist. OK, so that’s huge. I mean, 40%. And then secondly is the accuracy, like you’re able to detect certain things more often, if you will. Does the reverse hold true too, that if they think they see something, but computer vision says no, no, no, you don’t see anything? Or does that not matter as much, right?

 

William: I think it’s more of an approach, right? So yes and no. I think the reality is that in those situations you do want humans involved, and that’s the exception process. Regardless, the resulting analysis isn’t just a circle around an image of what was found. With the recent technology in generative AI, the AI is actually adding words, sentences to its analysis. I can’t repeat them because I’m not a doctor.

 

Beverly: Sure.

 

William: I didn’t memorize them, but there is a description of what was discovered. And in most cases that description, the note, is not changed by the radiologist. They either agree or disagree. And if they disagree, then they change it. But in reality, whether there’s a detection they don’t agree with, or a detection they do agree with on something they would have missed, it is still a positive, right?

 

So essentially, in a few cases that we’ve discovered, we found and detected these anomalies in colon screenings that could have resulted in a significant cancer diagnosis had they waited two more years, which is why the title is what it is, right? If you find cancer early, you have a significantly better survival rate than if you catch it much later.

 

Beverly: Yeah, yeah. Like in manufacturing, as an example, just to draw a parallel: there are porcelain products, and before they get glazed and go in a kiln, you can melt them back down and rebuild them. And sometimes there’s a thumbprint or something really small that a human can’t detect. Doesn’t matter how many angles you have, how much light you shine, but computer vision can.

 

And so it can save the product, especially if it’s something super expensive. But in this case, we’re talking about people’s bodies. This is humans. This could save them. So the two paths are: number one is way more efficiency, heavier caseloads per radiologist (I may not use that term right), more images; and number two is more accuracy and better detection early on. So does this just make what we’re doing better, or is this transformational?

 

William: This could be transformational, right? Especially in an industry where we’re predicted to lose, or be deficient by, 140,000 providers in the next 10 years.

 

Beverly: Oh, wow.

 

William: So when you think about radiologists, oncologists, doctors, we’re losing that resource, right? That resource to retirement, and we’re not graduating enough of these providers over time. So to your point earlier, when you consider what AI can do for us, it does sound scary, sure, especially when we watch movies where the robots take over, right? Terminator.

 

But there is a perspective here that it’s more about augmenting our capabilities using these technologies and really being very responsible in how we’re using them. And that’s what can really become a launchpad for us. I use the term “as humans” because we can actually do and create more, right? Create better experiences for patients and do greater things as humans as well.

 

Beverly: Well, sign me up. I’ll take two. I mean, so what’s the barrier? I mean, you hinted on it just a little bit, I think, or hinted about it. But this sounds almost too good to be true. Like you’ve got a workforce that’s disappearing. You’ve got, you know, presumably maybe even more sick people, especially with people aging and they’re living longer and cancer happening like it does. And so where’s the negative? Like, what’s the hold back? Is it funding? Is it culture? Is it data? Is it the tools?

 

William: It’s a combination of all of those things, but I think the bigger one is really our ability to navigate regulation. In a lot of cases, these solutions can be considered medical devices, so they have to go through FDA approvals. That is not a significant hurdle, though. I think more of it is getting comfortable with the solution itself, and comfort comes to all of us when we feel like there’s oversight, right?

 

The scariness of it, again, is that AI could take over, real or not. You know, I’m a skeptic. I think there’s still a human constraint to AI because we’re the ones building it. I haven’t really seen anything that tells us that we’re going to surpass that. So we’re always going to have supervision of some kind; in this case, radiologists are always going to receive the results, assess the results, and make their own interpretation. So that to me isn’t scary. I think it’s really more about the industry adopting more of this, and this becoming more mainstream.

 

Beverly: So tell me about the parties, like who do we need to buy into this? Like how much does a patient matter? Or is it going to be like, you know what, Henry, just let us do our thing. We’re trying to take the best care possible. Or do you think patients are going to have a say? Or is this mostly regulation or is this the doctors? Like what do you think?

 

William: Well, I think awareness is a big deal, right? So all of us as consumers have to be aware of what’s possible.

 

Beverly: Like the literacy in general, yeah.

 

William: Absolutely. And that’s also where AI can help us. I like to think of this opportunity as creating a better human experience, and that is what we need to start leveraging. How do we actually humanize the world that we’re in as technologists, so that we can expose more information to the patient, especially as they enter a diagnosis, and be specific to what they’re dealing with?

 

Whether they’re dealing with prostate cancer, breast cancer, whatever that might be, ensure that they understand how to navigate within their own healthcare system, right, and inside of their insurance provider, and what their capabilities are. And I like to think about it that way because, unfortunately, my sister was diagnosed with leukemia, and it took a whole army of family to do research, understand what was possible, and then challenge our doctors to find the best path forward. And that was a significant effort. I mean, you’re talking airplane rides spent doing research instead of working, staying up late nights.

 

And that is what’s been near and dear to my heart: how do I take everything that I understood about navigating this and synthesize it into something that’s consumable for everyone, right? Maybe not as general education for cancer awareness and how to navigate a healthcare system, but more in the moment when you get your first diagnosis: quickly educate patients on what’s available. And if something isn’t a direct path to the care they need to survive this, then find alternate paths, like clinical research studies or anything that may help give them another chance at survival.

 

Beverly: Right, I got you. And this is tricky. I mean, no one health system or doctor or individual or technician is responsible for AI literacy. I was talking to a dear colleague and friend, and she was talking about how, when you’re afflicted with something like cancer, you can’t even think straight. And this particular person is a fighter. I mean, she will fight, fight, fight, and it shouldn’t take that, right? It shouldn’t take all that fighting to kind of get the answer, right?

 

William: I would be a wimp in that situation.

 

Beverly: You can’t be a wimp. You can’t be.

 

William: No I would be. I think about my sister. Yeah, like how are you so strong? I thought I was the strong one, but no, I’m totally with you.

 

Beverly: Yeah.

 

William: The reality is that in that situation, we have to kind of synthesize it, reduce it to its most understandable core, right? We can’t say “AI literacy” to anyone that’s being introduced into the situation.

 

Beverly: I mean, computer vision, they might think it’s like these robots and lasers and you know, is this on the schools? How do we improve on this?

 

William: Well, I don’t have a clear answer to that. But what I do know is that it’s no different than our historical experiences, no different than the first time we opened a computer, right? And you got the phone call from someone saying, how do I do this? Right?

 

It really is more about awareness of capability, awareness of data. And actually, maybe it’s more intuitive. Maybe I do have an answer. Maybe it’s repurposing what we do today with our phones, right? Instead of it being a social media app that we open, maybe it’s a different approach to engage people, and maybe it’s on both our providers and the healthcare systems. Because I have a healthcare app on my phone; I open it to set up appointments, do those kinds of things. And maybe it’s in the context of that interaction, at that level of engagement, where we’re doing proactive work. As I go into, let’s say, my yearly physical and a prostate screening: hey, here’s what to expect if you do get a diagnosis.

 

Here are the next steps, right? Because that is the first opportunity. And the contrast there, and this is another topic we can talk about at some other point, is that I do feel like our technology superpowers have been leveraged in ways that are not conducive to the best human experience, right, in social media and engagement and marketing. So the real challenge I think I’m posing to everyone is to rethink how we’ve approached technology and engagement and repurpose that to serve us in better ways.

 

Beverly: I love that. Oh, it’s fantastic. I mean, I have a friend who still won’t use an air fryer, you know, because she’s worried about things. But I sort of look at this.

 

William: Maybe not that extreme.

 

Beverly: I mean, really, I look at this kind of like what it must have been like when people moved from horses to vehicles. You know, now you’ve got an engine. Most people don’t understand exactly how a vehicle works. Like, it’s small explosions; that’s all I know. And that sounds kind of scary compared to, you know, Chip the horse. So I would imagine that people kind of feel this way.

 

So that brings up one of our last questions, which is the ethical side. And we could talk a long time about this, but the one dimension I kind of want to explore just a tad is: is it unethical not to use computer vision when you can do things better? Or is it unethical, you know, to use something that’s so important with people? You know, I mean, there’s that debate, right? Like, where does it stand?

 

William: That is a good way to pose the question. Yeah, we have the capability; why not use it? I agree with the question and the underlying message there. I think it’s unethical to not use it.

 

Beverly: To not use something at your disposal.

 

William: Right, especially to save a life or to avoid a situation. And that’s probably the other paradigm shift we have to take in healthcare: stop treating episodes of care, or reacting to those episodes of care, and instead start proactively managing care. And I think that’s the other area where we can actually leverage AI, right? To help us do that.

 

AI has the potential to be a game-changer in healthcare, especially in fields like oncology where early detection is critical. Computer vision can boost diagnostic accuracy, reduce turnaround times, and help bridge workforce gaps. But realizing this promise requires thoughtful regulation, greater awareness, and a commitment to designing human-centered solutions that prioritize both trust and impact.

 

Explore the full catalog of TAG Data Talk conversations here: TAG Data Talk with Dr. Beverly Wright – TAG Online. 

 
