Avoiding the Traps and Pitfalls of Artificial Intelligence Development

By Irene Yeh 

June 19, 2024 | Artificial intelligence (AI) was the main topic at the 2024 Bio-IT World Conference & Expo. While there was much excitement over new developments, there were also many discussions covering the obstacles that block the path to further progress—many of which have nothing to do with technology or data. During the Digital Leadership Lessons: Reflecting and Correcting panel and the Data Readiness for AI panel, speakers delved into the human errors that hindered the progress of AI, as well as methods to improve the development process. 

Rodney Marable, executive director of scientific computing & informatics at Flare Therapeutics, highlighted the misunderstandings that still surround AI—mainly, the misconception that AI is a magic solution that can generate data and results with a simple push of a button.  

“Just because the model spit something out, doesn’t mean it’s actually right,” said Marable. “There’s a difference between the AI Wile E. Coyote uses and the AI that Predator uses. There’s a difference in how we deploy our technologies.” 

On Different Pages 

But perhaps the biggest obstacles are ones of communication. Marable shared his own struggles in developing AI models.

“It’s really hard sometimes to watch people misinterpret what the documentation says, decide that their interpretation is actually correct… and then proceed to build their infrastructure on top of it,” said Marable. “It’s just difficult to sort of be on the sidelines and watch things that could be prevented through better education or sort of getting people to sort of not believe their own narrative.” 

He's not alone in this experience. Rania Khalaf, chief information officer at Inari Agriculture, commented, “We often encounter people that are very used to being the smartest person in the room.” Being an expert in one field does not make someone an expert in another, but that does not stop them from taking on a project without first understanding its full scope.

“There’s a big knowledge gap,” she added. As an example, she described a past project in which her team built a data layer that enabled analysis from genotype to phenotype as plants were being edited. When someone from a different team needed access, Khalaf’s team proposed how that access would be governed. The proposal was misinterpreted as a restriction on a data owner’s access to their own data. “In my mind… that was so far from my reality that I never could imagine someone could misconstrue the request in that way,” she said.
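
As a purely illustrative sketch, not Inari’s actual system, the snippet below captures the idea Khalaf’s team was proposing: granting another team governed read access never removes a data owner’s access to their own data. The team and dataset names are invented.

```python
# Hypothetical illustration only: a tiny access-governance model in which the
# data owner always retains access to their own data, and access for other
# teams is an explicit, governed grant. Team and dataset names are invented.
from dataclasses import dataclass, field


@dataclass
class Dataset:
    owner: str
    readers: set = field(default_factory=set)

    def grant_read(self, team: str) -> None:
        """Governed access: extend read permission to another team."""
        self.readers.add(team)

    def can_read(self, team: str) -> bool:
        """The owner can always read their own data, regardless of grants."""
        return team == self.owner or team in self.readers


geno_pheno = Dataset(owner="plant-editing")  # hypothetical owning team
geno_pheno.grant_read("analytics")           # hypothetical requesting team

assert geno_pheno.can_read("plant-editing")  # owner access is never restricted
assert geno_pheno.can_read("analytics")      # governed grant for the new team
assert not geno_pheno.can_read("finance")    # everyone else stays out
```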

Eric Zimmerman, principal of healthcare & life sciences business development at Amazon Web Services, agreed that there is a knowledge gap, recalling a past project where the team ran into this issue. Not everyone on the product side understood what a VPC, VPN, or NAT Gateway was, and not everyone was up to date on audits, logs, or configs, Zimmerman relayed. “And so, we tried to make it as simple as possible, but that assumed a baseline knowledge that a lot of people didn't have.”
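
For readers encountering those terms for the first time, the hypothetical sketch below, assuming an AWS account with Python and the boto3 library configured, lists the VPCs (isolated virtual networks) and NAT Gateways (managed outbound internet access for private subnets) in a region; the region name is a placeholder.

```python
# Hypothetical sketch: listing the basic AWS networking pieces Zimmerman
# mentioned. Requires an AWS account with boto3 credentials configured;
# the region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A VPC is an isolated virtual network that workloads are deployed into.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(f"VPC {vpc['VpcId']} with CIDR {vpc['CidrBlock']}")

# A NAT Gateway lets resources in private subnets reach the internet
# without being directly reachable from it.
for ngw in ec2.describe_nat_gateways()["NatGateways"]:
    print(f"NAT Gateway {ngw['NatGatewayId']} in state {ngw['State']}")
```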

Lack of Synergy 

Miscommunication was also raised during the Data Readiness for AI panel, where Siping Wang, founder, president, and chief technology officer of TetraScience, shared an observation about the three groups behind the creation and production of data-related projects and initiatives: R&D IT and informatics, the scientists, and the data teams.

“I honestly don’t see a lot of good synergy among these three teams,” Wang noted. “Rarely do I see a scientific stakeholder at the center of the triangle,” he continued. “They’re usually brought in out of necessity, and I rarely see them jump in and lead the discussion, which confuses me. If that’s their data, if that fundamentally improves the science that they want to achieve, why wouldn’t they be the center of this triangle?” 

Given the miscommunications and misunderstandings that occur, scientific stakeholders seem to belong at the center of that triangle: grounding the R&D IT, informatics, and data teams, keeping the focus on mutual goals, and ensuring that everyone is on the same page and understands each other’s priorities and perspectives.

Communication, Considerations, and More Communication 

So, what is the solution to these issues? It’s simple: clear communication and deliberate empathy. 

In the Data Readiness for AI panel, the subject of deliberate empathy was extensively discussed and was defined as trying to understand the perspectives, priorities, and constraints of the other parties and then coming up with a solution that makes things easier for everyone.  

Over at the Digital Leadership Lessons panel, Khalaf recalled an incident in which she and her company partnered with a business doing text extraction from images. The model was not performing well, so the product owner acquired $100,000 worth of new data and told the team to use it to improve the model. The data turned out to be receipt data, which led to more dead ends with the model. This sort of incident could have been avoided with clearer communication and a better understanding of each party’s goals, strategies, and visions.
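
A lightweight sanity check along the following lines might have surfaced the mismatch before the purchase. This is a hypothetical sketch, not the team’s actual process, and the document types and samples are invented for illustration.

```python
# Hypothetical sanity check: compare the document types in a purchased dataset
# against the document types the model sees in production. The samples below
# are invented; in practice they would come from labeling small batches.
from collections import Counter

production_sample = ["lab_report", "invoice", "contract", "lab_report"]
vendor_sample = ["receipt", "receipt", "receipt", "receipt"]

production_types = Counter(production_sample)
vendor_types = Counter(vendor_sample)

# If the purchased data covers none of the document types the model must
# handle, buying more of it is unlikely to improve the model.
overlap = set(production_types) & set(vendor_types)
if not overlap:
    print("Warning: new data shares no document types with production traffic.")
else:
    print(f"Overlapping document types: {sorted(overlap)}")
```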

Another way to avoid the pitfalls of miscommunication is to establish a foundation from the beginning and explain what needs to be done before starting any project. Parul Bordia Doshi, chief data officer at Cellarity, recounted a project from her early days at the company. Their AWS bill was “out of whack” for the amount of data they had and the number of people in their organization. Doshi set up a roadmap for fixing the infrastructure, in terms of both compute and data, and for ensuring that no duplicate copies of the data were being created. The CEO and leadership agreed that this fundamental issue needed to be tackled before jumping into other projects for the platform. Because the problem was clearly defined and a goal was set, Doshi was able to explain to leadership what needed to be done first. And because leadership listened to and understood Doshi’s perspective, they were able to prevent setbacks and misunderstandings.
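
As one illustration of the “no duplicate copies” goal, the minimal sketch below, assuming the data sits in Amazon S3 and boto3 credentials are configured, flags objects stored more than once by grouping them on ETag and size; the bucket name is a placeholder.

```python
# Hypothetical sketch: flag objects stored more than once in an S3 bucket by
# grouping them on (ETag, size), a cheap if imperfect duplicate signal.
# Requires boto3 credentials; the bucket name is a placeholder.
from collections import defaultdict

import boto3

s3 = boto3.client("s3")
copies = defaultdict(list)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-research-data"):
    for obj in page.get("Contents", []):
        copies[(obj["ETag"], obj["Size"])].append(obj["Key"])

for (etag, size), keys in copies.items():
    if len(keys) > 1:
        print(f"{len(keys)} copies of the same {size}-byte object: {keys}")
```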

"It's such a simple concept, but most of the time, companies fail because they're not talking to the other function units,” added Doshi. 

Going Forward 

Though new AI models are being developed every day, none of them can be used effectively until the human-related obstacles are solved. There needs to be better communication between teams, and deliberate empathy is the starting point. Stakeholders will also need to be more hands-on in their projects. Every expert plays an important role, but each also needs to recognize the knowledge gaps, as well as their own team’s strengths and limitations. Only then can AI progress further and innovate faster.

“When you communicate, you really have to keep in mind the audience and where their mind could go in assumptions,” warned Khalaf. “Because it’s very new, this way of working.”