AI Struggles and Opportunities in Life Sciences and Healthcare

April 9, 2025

By Allison Proffitt 

April 9, 2025 | At the opening plenary panel at last week’s Bio-IT World Conference & Expo, speakers from pharma, healthcare, and biotech took on the future of AI in their industries. Abbie Celniker, Partner, Third Rock Ventures, moderated the conversation, which included viewpoints from Per Greisen, President, BioMap; Sofia Guerra, Vice President, Bessemer Venture Partners; Subha Madhavan, Vice President and Head, AI/ML, Quantitative and Digital Sciences, Pfizer; and Sonya Makhni, MD, Medical Director, Mayo Clinic Platform.

“I think it’s really exciting to talk about the wins [with AI] as much as it is to talk about what we are worried about,” said Mayo Clinic’s Sonya Makhni. She opened by sharing that some of the Mayo Clinic Platform partners have seen “amazing outcomes in radiology, reducing the time to diagnosis by anything from 25% to 35% on certain conditions.”

She also expressed optimism about the future of patient care with AI help. “You almost can’t think of any aspect of healthcare or life sciences where AI isn’t already impacting it,” she said. But she was also measured in her enthusiasm. “What we saw in the last couple of years is sort of a democratization of some of the tools. In the clinician world, all of a sudden, these large language models are now part of our everyday vernacular because they’re more accessible, but that doesn’t imply [that they are] reliable, etc.,” she said.

In fact, Makhni flagged the disconnect between the development of clinician-facing AI tools—some presented with impressive marketing materials outlining possible benefits—and the complementary strategies to assess performance and patient outcomes. “I think our ability to monitor the safety and reliability of these tools has lagged behind a bit,” she said. “As a clinician, I don’t know if I can trust this yet because maybe I don’t fully understand it... and at the end of the day, my actions impact patients.”

Pfizer’s Madhavan agreed, breaking down two areas of risk: scientific and technical. From a scientific standpoint, she advised taking steps to classify the risk level of an AI tool or model from the outset. At Pfizer, Madhavan said, AI models are classified into high, medium, and low risk categories with corresponding levels of governance and subject matter expertise at play. AI models that are part of the drug discovery process and impact the efficacy and safety of drugs going into patients are the highest risk.

“This is where I think human-in-the-loop becomes extremely critical,” she said, and recommended the NIST AI Risk Management Framework for evaluating AI tools. Lower-risk tools are those that are internally facing, not tools for patients or external stakeholders. “You can be a little more exploratory because there’s going to be a team of experts that will review this.”

Explainability is key for understanding technical risk, Madhavan continued. Explainability requirements may change based on the use case. For example, if a tool is helping delineate cell margins in pathology slides, it may not need to be as explainable as a tool making care pathway suggestions.  

“We still need to be very careful on what [tools] can do at this time and moment,” added Per Greisen of BioMap, warning against messaging that claims AI could eliminate the need for experiments or human thinking while missing what the tools can actually do. “I think one of the risks is that we don’t oversell a technology that could potentially change the world that we live in and, at the end of the day, deliver much better medicines.”

Integration Challenges 

The panel discussed how organizations struggle to integrate AI solutions into existing workflows. Madhavan stressed the importance of viewing AI not as isolated technology but as part of broader processes: “One of the primary challenges for digital transformation... is not to think of it as a siloed product or a solution that operates in [a] vacuum, but it has to integrate into these larger business processes.”

Sofia Guerra, approaching the question of AI from an investor’s perspective, emphasized that successful technologies must solve concrete problems with measurable outcomes: “To me what’s most interesting is, what is your wedge in a market? What’s the problem you’re solving?” She purposefully does not ask what the technology is. Technologies, including AI, are just different “pearls on the necklace,” she explained, part but not the whole.

“What we look at for a Series A investment... is, are you solving a core problem? How big of a problem is that to your user and to the person who holds the budget? And what value are they deriving from what you’re providing?” She continued with several other questions investors ask when assessing a tool: How often do users use it? What steps in the drug discovery process are you guiding? Is the tool a revenue generator, or is it offering cost savings? What value does the customer place on the solution?

“I think the companies that are most successful for external investment are the ones that can measure that quickly,” she said.

Data Sharing and Standardization 

All of the panelists agreed that high-quality, standardized data remains fundamental to AI advancement, yet mechanisms for sharing it remain underdeveloped.

Mayo Clinic has embraced a federated data model, as Makhni explained. “We de-identified our data for millions of patients. And we’re part of a global federated network... By having a de-identified model, and federating to different nodes, we can more easily provide access to data to everyone from large pharma companies to smaller solution developers.”

Madhavan highlighted the unique challenges of real-world evidence data: “Most real-world evidence data is not really very big. They were not collected for regulatory purposes; they were collected for clinical care or billing purposes.” But those datasets have grown and been harmonized in recent years. “The beauty of the situation in which we are in is that we have large amounts of data that we didn’t have even a decade ago,” she said. “Creating regulatory-grade real-world data is going to occupy a lot of our minds for the next few years,” Madhavan predicted.

Greisen advocated for greater emphasis on sharing negative results, which he acknowledged will require a mindset shift. “All of a sudden, negative data are actually also very important for these algorithms. And I think there's been a tendency, especially in publications or public data sets, to hide it a little bit away as being embarrassing that it didn't work. I think for algorithms, that is exactly what they need.” 

The Future of AI in Healthcare 

Looking ahead five years, the panelists expressed optimism about AI’s potential to transform healthcare work. Makhni shared her excitement about AI enabling clinicians to “operate at the top of [their] license” by reducing administrative burdens: “I spend eight hours before I start a shift just reading every note on a patient in my panel just so that I can feel confident that I have the certain, reliable sort of data. That time can be better spent.”

Madhavan expressed enthusiasm about AI as an educational tool: “I’m super excited about the concept of AI as a teacher... If you actually look at OpenAI’s reasoning models... you can actually just simply add an additional prompt that says, ‘Show me your work,’ and it actually starts to walk you through the stepwise details of how it arrived at an answer.” The best way to help develop and ensure guardrails for AI, Madhavan said, is to engage with the models.

Guerra envisioned AI adoption becoming more practical: “How do you make AI adoption less emotional? For any role that you’re in, how do you make it an extension, another arm to what you do... It’s going to... give me superhuman powers to accomplish that.”

Finally, Greisen expressed excitement about the new products he believes will be coming to the therapeutic space. “I think creativity will be at a completely new level with what we can actually do.”