AI Could Assist in the Quest for New Antibiotics
By Deborah Borfitz
July 23, 2024 | Antimicrobial resistance (AMR) is “on every thinking individual’s list of existential threats,” and also one of the cheapest to solve relative to other threats such as hunger and climate change, according to James Collins, Ph.D., professor of medical engineering and science and biological engineering at Massachusetts Institute of Technology (MIT). The answers will come by melding human and machine intelligence in the search for new antibiotics.
The world surely needs fresh options, and fast. Unless proactive measures are taken to slow the rise of drug resistance, AMR is expected to claim 10 million lives annually by 2050, surpassing cancer, he says.
Among the critical priority pathogens on the radar of the World Health Organization are Acinetobacter baumannii, the so-called “Iraqi bug” that has developed resistance to carbapenem, and Mycobacterium tuberculosis increasingly nonresponsive to first-line treatment with rifampicin. Both have been subjects of AI-aided quests for new drugs pursued by Collins, founder of the new field of synthetic biology.
“It truly is a broken market,” he says of the antibiotic industry. Antibiotic approvals by the U.S. Food and Drug Administration (FDA) are on the decline, and since the 1980s the medicines have all been variations of previously discovered drugs.
The problem is economics, says Collins: the exorbitant cost of developing an antibiotic is tough to recoup given the drugs’ low price point and small sales volume relative to treatments that might be required for months if not a lifetime. Many large pharmaceutical companies have pulled out of the business entirely.
Against the backdrop of this longstanding “discovery void,” Collins has spent the past 20 years harnessing the power of artificial intelligence (AI)—mostly standard machine learning techniques—to reengineer organisms to treat antibiotic-resistant infections. In biology, he points out, AI models are “not as data hungry as people say.”
An AI feat in his lab a few years ago was accomplished with a compound library so small as to be deemed “lunacy” by every known expert, he adds. Researchers trained a deep learning model on only 2,500 compounds (a library of roughly 1,700 FDA-approved drugs supplemented with a collection of 800 natural products) to pick out those that could inhibit growth of Escherichia coli.
Subsequently, the model was applied to a larger library of well over 100 million molecules in the ZINC15 database to pick out those with antibiotic potential based on their structure. Notable among these was halicin (named in homage to Hal, the fictional AI character in the film 2001: A Space Odyssey), which exhibited broad-spectrum efficacy against bacterial infections in mice.
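The workflow Collins describes has two stages: train a model on a small labeled screen, then rank a much larger virtual library by predicted antibacterial activity. The sketch below illustrates that idea in miniature; it substitutes Morgan fingerprints and a random-forest classifier for the graph neural network described later in the article, and the file and column names are hypothetical placeholders.

```python
# Two-stage workflow sketch: train on a small labeled screen, then rank a
# larger unlabeled library by predicted antibacterial activity.
# NOTE: Morgan fingerprints + random forest stand in for the graph neural
# network used in the actual study; file names are hypothetical.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list, radius=2, n_bits=2048):
    """Convert SMILES strings into Morgan fingerprint bit vectors."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            fps.append([0] * n_bits)   # placeholder for unparseable SMILES
            continue
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        fps.append(list(fp))
    return fps

# ~2,500 training compounds with a binary growth-inhibition label
train = pd.read_csv("training_set.csv")      # columns: smiles, inhibits_growth
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(featurize(train["smiles"]), train["inhibits_growth"])

# Score a much larger unlabeled library and rank by predicted probability
library = pd.read_csv("zinc_subset.csv")     # column: smiles
library["antibiotic_score"] = model.predict_proba(featurize(library["smiles"]))[:, 1]
print(library.sort_values("antibiotic_score", ascending=False).head(20))
```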
Bootleg Project
As Collins and his team uncovered early on, antibiotics are more complicated than originally thought in terms of their interactions with a target protein. In the case of penicillin, he says, the downstream effects are stress responses that can help boost antibiotic resistance.
The first wave of AI in biomedical research began back in the 1950s with a workshop at Dartmouth, followed by the development of early versions of neural networks at MIT through the late 1960s, says Collins. The second wave hit in the 1980s with the emergence of rule-based systems, language programs that followed a programmed set of rules, a lineage that includes the first chatbot (ELIZA), created at MIT.
The third and current wave of AI, introduced in the 2010s, has been driven by big data and data science highlighted by deep learning AI and newer large language models, he continues. While MIT was temporarily “asleep at the wheel,” Collins quips, leadership positions were assumed by China as well as Stanford University and the University of Toronto.
In March 2018, leadership at MIT woke up to launch an institute-wide initiative around AI, reports Collins, adding that he took up work with Regina Barzilay, professor of electrical engineering and computer science, whose ambition was to use machine learning to tackle cancer deaths. She subsequently became a breast cancer patient herself and shifted her focus to the application of AI in diagnostics to improve healthcare decision-making.
The “golden age of antibiotics” of the 1940s, 50s, and 60s, when new medicines were mainly being found in dirt, was long since over. With essentially zero funding available for antibiotic development, Collins says, the bootleg project was born with the 2,500-compound library.
The small size of the library made it possible to experimentally test each compound and quantify its effect on bacterial growth. The threshold for a compound to be labeled antibacterial was at least 80% inhibition of E. coli growth, he says.
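To make that labeling step concrete, the readout from the screen can be binarized at the 80% growth-inhibition cutoff, producing the label used in the earlier sketch. A minimal sketch, assuming the readout lives in a CSV with hypothetical column names:

```python
# Label construction sketch: compounds achieving at least 80% inhibition of
# E. coli growth are marked antibacterial (1), the rest inactive (0).
# The CSV and its column names are hypothetical.
import pandas as pd

screen = pd.read_csv("growth_screen.csv")    # columns: smiles, pct_inhibition
screen["inhibits_growth"] = (screen["pct_inhibition"] >= 80).astype(int)
print(screen["inhibits_growth"].value_counts())
```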
Discovering Hal
When it came to examining the chemistry-based structure of potentially antibiotic compounds, Collins and his team worked with an AI model called Chemprop developed by Barzilay and Tommi Jaakkola. It is a multilayer graph neural network (GNN) that learns representations of the atoms and bonds in compounds with antibacterial properties.
The trained GNN model was then applied to 6,100 compounds in various stages of development in a drug repurposing library at Harvard, and its top 99 predictions were carried forward for validation, he says. Chemprop judged whether a compound was a likely antibiotic by assigning each drug a number between 0 and 1 and setting a threshold.
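That score-and-threshold step can be sketched as below. The 0.5 cutoff and the file name are illustrative assumptions rather than values from the study; the top-99 ranking mirrors the validation set size mentioned above.

```python
# Score-and-threshold sketch: each compound gets a prediction between 0 and 1;
# a cutoff plus a top-k ranking selects candidates for wet-lab validation.
# The 0.5 threshold and the file name are illustrative assumptions.
import pandas as pd

preds = pd.read_csv("repurposing_hub_predictions.csv")  # columns: name, score
THRESHOLD = 0.5
candidates = preds[preds["score"] > THRESHOLD].sort_values("score", ascending=False)
top_hits = candidates.head(99)   # top-ranked predictions carried into validation
print(top_hits)
# After lab testing, the true positive rate is simply the fraction of tested
# predictions confirmed as growth inhibitors (discussed next in the article).
```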
That produced a 52% true positive rate—exceptionally good in the drug discovery space where drug screens generally land at something less than 1%—but not particularly helpful in any practical sense. “When we first presented this to our colleagues in the antibiotic space, they were intrigued but unimpressed with what we actually found,” says Collins. “We rediscovered a bunch of antibiotics that we already had [e.g. new versions of penicillin].”
Taking it to the next level, Collins and his colleagues went back to the 6,100-compound library to ask which drugs predicted to make good antibiotics didn’t look like an existing one. One molecule came out—the “incredibly powerful” bacteria-killing halicin, which was originally developed to treat diabetes but never reached the clinic.
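A common way to operationalize “doesn’t look like an existing antibiotic” is to compare each candidate’s fingerprint against those of known antibiotics with Tanimoto similarity and keep only compounds whose nearest known neighbor falls below a cutoff. The sketch below illustrates that idea; the 0.4 cutoff and the file name are assumptions made for illustration, not the study’s actual parameters.

```python
# Structural-novelty filter sketch: keep only candidates whose most similar
# known antibiotic falls below a Tanimoto similarity cutoff.
# The 0.4 cutoff and file name are assumptions made for illustration.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048) if mol else None

known = [fingerprint(s.strip()) for s in open("known_antibiotic_smiles.txt")]
known = [fp for fp in known if fp is not None]

def is_structurally_novel(smiles, cutoff=0.4):
    """True if the candidate's nearest known antibiotic is below the cutoff."""
    fp = fingerprint(smiles)
    if fp is None:
        return False
    nearest = max(DataStructs.TanimotoSimilarity(fp, ref) for ref in known)
    return nearest < cutoff
```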
Halicin rapidly killed E. coli (along with about three dozen other multidrug-resistant clinical isolates), but in doing so also killed the rat in which it was growing, says Collins. Still, the study demonstrated the utility of the deep learning approach by identifying eight antibacterial compounds structurally distant from known antibiotics (Cell, DOI: 10.1016/j.cell.2020.01.021).
And the news emerged amid rising deaths from COVID, which was typically treated with a combination of three or four drugs, he notes. Hal was working solo.
Over time, all antibiotics lose effectiveness against their targeted bacteria, stresses Collins. “Any person who tells you that their molecule does not develop resistance is lying to you and lying to themselves.” It can happen in a matter of days.
Library Expansion
Libraries that contain hundreds of millions of molecules (e.g., the ZINC15 dataset) are simply “too large for human exploration,” says Collins. But looking at them in silico using AI can significantly accelerate and expand the search for novel antibiotics against problematic pathogens.
This realization gave birth to the Antibiotics-AI Project at MIT, launched in 2020 in conjunction with the nonprofit social venture Phare Bio that is funded by the Audacious Project housed at TED, he continues. Phare Bio is partnered with the Collins Lab at MIT and the Broad Institute of MIT and Harvard for the antibiotic discovery work. The idea here is to use AI to advance the most promising antibiotics toward the clinic through partnership and philanthropy.
One of the first efforts was expansion of the 2,500-compound training library to the current 39,000 molecules, which have to date been used for AI-based screening of seven dangerous human pathogens. The compounds were “hand selected” by Jonathan Stokes, Ph.D., with whom Collins co-founded Phare Bio. Stokes, previously a postdoctoral fellow in the Collins Lab, is currently assistant professor in the department of biochemistry and biomedical sciences at McMaster University.
The first major project yielded the narrow-spectrum antibiotic abaucin as a treatment for A. baumannii, which was afflicting soldiers deployed to Afghanistan and was unresponsive to existing drug treatments, he says. Stokes effectively repeated the halicin study after training the neural network on the expanded compound library (Nature Chemical Biology, DOI: 10.1038/s41589-023-01349-8).
The issue is that abaucin is not a money-making “magic bullet” that kills everything; it kills only A. baumannii and lacks a good companion diagnostic, adds Collins. But attitudes toward narrow-spectrum antibiotics are shifting along with growing fascination with the microbiome, since targeting specific bacteria minimizes harm to the gut.
All of this comes “on the heels of the world getting quite anxious about the AI models,” Collins says, specifically a push among lawmakers for more regulation of fast-moving developments in the field. Overregulation is also a danger, he says, because it could stall efforts underway at MIT, Harvard, and elsewhere to use “AI for good.”
Most recently, Collins and his colleagues have published on the deep learning-guided discovery of structural classes of antibiotics and demonstrated that the GNN models can be explainable—that is, provide insights into the chemical substructures that underlie selective antibiotic activity as well as human cell cytotoxicity (Nature, DOI: 10.1038/s41586-023-06887-8). The study identified one new class of compounds that is selective against methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci, evades substantial resistance, and reduces bacterial loads in mouse models. “It is now being advanced in the clinic,” he reports.
After news of the study spread on X (formerly Twitter), Collins says he was visited by “the guys in black suits” worried about how bad actors might use the AI models to discover and design toxic molecules that could be deployed against humans. It’s a possibility, unfortunately, he says. “But I’m more concerned about what nature has in store for us [e.g., another pandemic].”
‘Human Enhancer’
“It’s early days for AI for those of us in the life sciences,” says Collins. Despite being in the big-data phase, most groups and companies are still working with small datasets. And the people using them aren’t always trained properly. “They all want to do AI models, but they don’t appreciate the complexity of biology and chemistry.”
Moreover, bioethicists don’t like black box systems. “They want to get after explainable AI” in terms of what’s happening with the biology of the organism, Collins continues, which is what he and his team most recently did with Chemprop.
One interesting question about that recent study is how the model learns to identify a novel compound with a new mechanism of action while avoiding bias from the training set, Collins says. He believes this stems from the way the model builds up its calculations in layers rather than looking at an entire compound all at once. He further maintains that “all small molecule drugs are dirty,” meaning they hit more than one target and are effective against something.
The five key chemical substructures that Chemprop identified as being associated with antibiotic activity across multiple compounds—and thus the basis for meaningful development of a new set of drugs—aren’t necessarily conclusions an intuitive chemist would have reached. “I think AI generally is a human enhancer.”
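As an illustration of how a flagged substructure can be tied back to compounds, SMARTS matching in RDKit can report which predicted actives contain a given rationale once an explainability method proposes one. The pattern and molecules below are hypothetical placeholders; the article does not specify the five substructures from the study.

```python
# Rationale-matching sketch: given a substructure proposed by an explainability
# method, SMARTS matching reports which predicted actives contain it.
# The anilide pattern and the two SMILES below are hypothetical placeholders,
# not the substructures or compounds from the study.
from rdkit import Chem

rationale = Chem.MolFromSmarts("c1ccccc1NC(=O)")               # hypothetical substructure
actives = ["CC(=O)Nc1ccc(O)cc1", "O=C(Nc1ccccc1)c1ccncc1"]      # example SMILES

for smi in actives:
    mol = Chem.MolFromSmiles(smi)
    print(smi, "contains rationale:", mol.HasSubstructMatch(rationale))
```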
What’s Ahead
Moving forward, Collins says he sees a “big role for generative AI” in the design of new compounds—to deal, for instance, with pan-resistant strains of gonorrhea now circulating in the community. The future integration of generative AI with organ-on-a-chip technologies—including a vagina-on-a-chip device famously developed by Harvard biologist and bioengineer Donald E. Ingber—is also expected to advance drug evaluation in terms of data analysis and automation.
Diffusion models of generative AI, which can generate new data resembling the data on which they are trained, will additionally be useful for compound discovery purposes, he adds. Collins and his team prefer their diffusion-like GNN approach, in which molecular fragments rather than individual atoms are the starting point.
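To give a flavor of “fragments as the starting point,” the sketch below decomposes a couple of example molecules into BRICS fragments and recombines them into new candidate structures. This is a deliberately simple stand-in, not the diffusion-like generative approach Collins’s team uses, and the seed SMILES are arbitrary examples.

```python
# Fragment-recombination sketch: decompose example molecules into BRICS
# fragments and enumerate recombinations as candidate structures. This is a
# simple illustration, not the diffusion-like generative models mentioned
# above; the seed SMILES are arbitrary examples.
from itertools import islice
from rdkit import Chem
from rdkit.Chem import BRICS

seeds = ["CC(=O)Nc1ccc(O)cc1", "O=C(Nc1ccccc1)c1ccncc1"]

# Collect the BRICS fragments of the seed set
fragments = set()
for smi in seeds:
    fragments.update(BRICS.BRICSDecompose(Chem.MolFromSmiles(smi)))

# Recombine fragments into new candidate molecules and print a handful
frag_mols = [Chem.MolFromSmiles(f) for f in fragments]
for candidate in islice(BRICS.BRICSBuild(frag_mols), 5):
    candidate.UpdatePropertyCache(strict=False)
    print(Chem.MolToSmiles(candidate))
```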
The “interesting challenge,” says Collins, is the shortage of facilities able to produce even a portion of the promising molecules being identified. His lab has been relying on the world’s top shop for chemical synthesis, namely Kyiv, Ukraine-based Enamine.
To help fill the gap, Collins has been working with AI as well as an RNA sensor that serves as a synthetic biological control for targeted therapies. RNA chips can be synthesized relatively quickly, he points out, meaning three days versus 10 weeks.
Collins is not exactly bullish on large language models in the chemical space, at least based on work he sees coming out of other labs. The other problem is that the best libraries house no more than 10¹⁰ compounds, just a fraction of the number of chemical compounds (10⁶⁰) in existence, he adds.
The deep learning methods favored by Collins aren’t limited to the quest for new antibiotics. Not long ago, he was applying GNNs to the discovery of small molecule senolytics (Nature Aging, DOI: 10.1038/s43587-023-00415-z). In this case, the targets are “zombie cells” thought to underlie tumors as well as neurodegeneration and aging (e.g., skin wrinkling and scarring).
Of the 2,500 compounds screened, three were flagged as potent senolytics, he says. Collins is now in talks with several companies about the discovery, including Google life science spinout Calico and folks in the $8 billion cosmetics business.