All The Gear But Still No Idea? Why Machines Alone Can’t Cure Cancer

June 11, 2018

Contributed Commentary by Michael Kaplun

No one doubts that data, our continuously growing reserve of “new oil”, will underpin the vast majority of future decision making in everything from government, to business, to health. But there is a problem. Despite the hype and fanfare that have accompanied massive supercomputers and AI platforms in recent years, there is still no tool capable of delivering answers to questions such as, “How do I cure a particular patient’s type of cancer?”

Granted, AI platforms have an impressive array of party tricks. Google’s AlphaZero taught itself chess in just four hours and beat a champion chess program. But translating these feats into the world of oncology, science, and drug discovery, where there is a pressing need to crunch enormous volumes of genomic, proteomic, and molecular data, remains some way off.

Initiatives such as the US Cancer Moonshot and funding efforts from the UK Government are keeping the mission to “cure” cancer top of the international agenda, and projects like these are throwing the question of how technology can help into sharp focus. As a result, oncology has seen large investments in platforms such as IBM’s Watson. Results, though, have been disappointing to date. The well-documented suspension by specialist cancer hospital MD Anderson of its flagship $62 million Watson project to create an Oncology Expert Advisor has been a notable setback.

Further, other non-Watson initiatives have shown how technology at present can augment, but not replace, humans in cancer care. In a proof-of-concept at Personalized Hematology-Oncology of Wake Forest in North Carolina, Elsevier’s Anton Yuryev worked with physician Francisco Castillos to demonstrate this. They generated expression data from patient biopsies using a microarray gene chip, and then used a large knowledge database (Pathway Studio) to interpret the results.

The identified pathways pointed clinicians either to FDA-approved drugs effective against the molecular mechanism of action observed in the tumor or to an appropriate clinical trial. All three stage-four patients outlived the overall survival estimates based on standard-of-care treatment. While a patient-by-patient approach is unlikely to be deliverable at scale, grouping similar patients makes it a more realistic proposition. In practice, this would mean research experts working alongside technology experts.
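For illustration only, the sketch below shows the general shape of such a workflow in Python: a set of over-expressed genes from a tumor profile is scored against a small knowledge base of pathways and the drugs that target them. The gene sets, pathway names, and drug mappings are invented for the example; they are not Pathway Studio data, its API, or clinical guidance.

```python
# Hypothetical sketch of an expression-to-drug matching workflow.
# All pathways, genes, and drug mappings below are invented for
# illustration and carry no clinical meaning.

# Toy knowledge base: pathway -> (member genes, drugs targeting that pathway)
KNOWLEDGE_BASE = {
    "EGFR signaling": {
        "genes": {"EGFR", "KRAS", "MAPK1"},
        "drugs": ["erlotinib", "cetuximab"],
    },
    "VEGF angiogenesis": {
        "genes": {"VEGFA", "KDR", "FLT1"},
        "drugs": ["bevacizumab"],
    },
}

def rank_pathways(overexpressed_genes):
    """Score each pathway by the fraction of its member genes that are
    over-expressed in the patient profile, and list candidate drugs."""
    results = []
    for pathway, entry in KNOWLEDGE_BASE.items():
        hits = entry["genes"] & overexpressed_genes
        if hits:
            results.append({
                "pathway": pathway,
                "matched_genes": sorted(hits),
                "score": len(hits) / len(entry["genes"]),
                "candidate_drugs": entry["drugs"],
            })
    return sorted(results, key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    # Genes flagged as over-expressed in a made-up tumor biopsy profile
    patient_profile = {"EGFR", "MAPK1", "TP53"}
    for hit in rank_pathways(patient_profile):
        print(hit["pathway"], hit["matched_genes"], hit["candidate_drugs"])
```

Even in this toy form, the point of the Wake Forest example stands: the software narrows the search, but a clinician still has to judge whether the suggested mechanism, drug, or trial fits the patient in front of them.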

Looking at these examples, and considering the nature of cancer, it should not be surprising that systems like Watson are not yet able to support R&D and treatment. Complex, open-ended questions such as “How do I develop a drug that will cure this type of cancer?” are not Jeopardy! or chess; they cannot be solved in a set number of moves or a series of correct answers. Watson, in short, is not a brain. It is a powerful black box capable of generating results according to a finite set of rules or possible outcomes. But when it comes to the life sciences, we are far from having defined the rules for disease biology and therapeutics.

Had IBM considered these factors fully, it might have foreseen and overcome some of the issues that currently afflict Watson and other AI platforms, such as limited adoption by physicians and researchers. Trust is another challenge: scientists and clinicians are often case-driven, and need confidence that suggested care pathways match their own expectations and empirical evidence. This means any platform providing decision support in cancer care must be designed specifically for that field, and overlaid with complex logic capabilities and systems integration, by scientists who understand cancer.

Going beyond the clinical delivery of healthcare, much talk around AI has also suggested it would uncover novel R&D approaches. These assertions are hard to substantiate when placed in the context of the modern drug development process, which requires many kinds of data to be combined in ways that machines like Watson are not yet built for. They are harder still to credit given that breakthroughs happen at the boundaries of disciplines such as chemistry and biology, with expert input from data scientists, among many others.

On top of that, there is the issue of contextualizing data in each case. Different disciplines can look at the same dataset and come away with entirely different views: where a chemist might see the potential for molecule synthesis, a biologist could see the impact on a disease’s pathway. It is therefore futile to expect that one tool providing a singular experience, as Watson does, can meet the needs of many different researchers. Nor does it help that Watson is aimed not only at scientific problems but also at financial, automotive, and engineering ones.

The reality is that, despite societal expectations, complex problems like curing cancer require complex solutions. The rest of our world might be plug-and-play, but that simply isn’t the case in science, where there is so much we still don’t know. This lack of knowledge means the evidence needed to draw conclusions is nearly always incomplete. In the future, AI will augment humans by helping us make leaps in discovery, but it won’t make those decisions in isolation. The most successful platforms will ultimately speak the language of the users they serve, making scientists powerful partners rather than mere consumers of black-box data.

Michael Kaplun is VP of Digital Solutions at Elsevier. Michael helps life science companies build data and analytics tools that deliver relevant, accurate insights. He is a PhD candidate (ABD) in mechanical engineering at Columbia University and has been at Elsevier since 2012. He can be reached at elsevier@sparkcomms.co.uk.