Reporter’s Notebook: Molecular Medicine Tri Conference 2015
By Aaron Krol
February 27, 2015 | Last week offered some brief sanctuary from snowy, subzero New England as Bio-IT World hit the 22nd annual Molecular Medicine Tri Conference in San Francisco, joining dozens of other Boston-area professionals who navigated the ever-shifting Logan Airport flight schedules to reach their sister biotech hub on the west coast. Tricon is one of the year’s largest industry events for understanding human health on the level of genes, proteins, and cells, and like last year, when patient stratification was front and center, this year’s conference tracked the latest trends in personalizing therapies and research programs with sharp attention to individuals’ unique molecular profiles. That clinical approach received a further boost in the wake of the Obama administration’s proposal for a national Precision Medicine Initiative.
Rolling into the Moscone Center in Yerba Buena Gardens late Tuesday morning, I wasn’t able to catch the Monday and Tuesday keynote presentations. The first was given by Eric Schadt, the systems biology pioneer who leads the Icahn Institute for Genomics and Multiscale Biology at Mount Sinai, on integrating genomics with patient-contributed data in mobile health apps ― if any readers caught that talk, I’d love to hear about it in the comments! (I do know this subject to be something Schadt’s institute is actively working on, in partnership with LifeMap Solutions.) The second was delivered by Matthew Wilsey, whose story of mobilizing scientists around the world to make sense of his daughter’s rare genetic disorder is by now a standby in the rare disease community. While I sadly missed this one, you can find coverage of Wilsey’s presentation at Biospace. Together, Schadt and Wilsey underscore the degree to which patients who take an active role in their own care can be as vital a resource for biologists as genomes or cell lines.
Luckily, there were plenty of gems in the Tricon schedule beyond the keynotes. On Tuesday afternoon, I sat in on a call to arms from Atul Butte to make the most of our growing wealth of public data. Butte, who is transitioning from a systems biology post at Stanford to become chief of a new Institute for Computational Health Sciences at UC San Francisco, is known for his systematic approach to cycling from public data through ideas for new clinical tools all the way to commercial spinoffs from his academic lab. At Tricon he shared his recipe of piggybacking off reams of microarray experiments in free online databases, as well as the market for discarded blood and tissue samples from hospitals, to cheaply form and test hypotheses about biomarkers that might be associated with a wide variety of diseases. Once one of these hypotheses has been validated in early studies, it becomes possible to raise venture money for a new diagnostic company — letting the whole process essentially pay for itself. Butte also shared some sharp observations on the current data sharing climate, at one point noting that data from failed clinical trials, the most expensive experiments in biology, never even sees the light of day.
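To make Butte’s recipe concrete, here is a minimal sketch in Python of the kind of first-pass screen it implies. Everything here is illustrative: the file name, the sample labels, and the simple t-test stand in for whatever dataset and statistics a real biomarker hunt would use, and a serious analysis would also correct for multiple testing.

```python
# First-pass biomarker screen over a public expression dataset.
# The file name and sample labels are hypothetical; any downloaded
# microarray series with case/control annotations would do.
import pandas as pd
from scipy import stats

# Rows are genes or probes, columns are samples.
expr = pd.read_csv("series_matrix.csv", index_col=0)
cases = [c for c in expr.columns if c.startswith("case_")]
controls = [c for c in expr.columns if c.startswith("ctrl_")]

results = []
for probe, row in expr.iterrows():
    t, p = stats.ttest_ind(row[cases], row[controls])
    results.append((probe, t, p))

# Rank candidates by p-value: these are cheap hypotheses to carry
# into validation on banked samples, not findings in themselves.
for probe, t, p in sorted(results, key=lambda r: r[2])[:20]:
    print(f"{probe}\tt={t:.2f}\tp={p:.2e}")
```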
I also used my Tricon visit to get up to speed on the Global Alliance for Genomics and Health (GA4GH), a massive consortium trying to promote universal standards for storing, interpreting, and exchanging DNA data, so that today’s data bottlenecks don’t stifle innovation in genomic medicine. David Haussler, the Scientific Director of the UC Santa Cruz Genomics Institute, gave talks on two projects under the GA4GH umbrella. The first is working on new ways of representing genetic variants, both large structural variants and small SNPs and indels. Haussler warned that the current reference genome structure, which represents one set of alleles as a sort of base truth and all other alleles as “alternate loci,” is both arbitrary and biased toward variants more common in European populations. This not only makes it difficult to capture many large structural variants, but also gives scientists in other regions an incentive to build their own reference genomes, fragmenting one of the most valuable shared resources in biology. GA4GH is working on a new data structure that treats all possible “paths” through the human genome equally, based on short DNA fragments’ possible adjacencies in a variety of structural contexts.
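Haussler’s graph idea is easier to see in miniature. The toy below, with invented sequences, treats a reference as fragments of DNA joined by observed adjacencies; every haplotype, “reference” or not, is just a path through the graph, so no allele is privileged as the base truth.

```python
# Toy graph reference: DNA fragments as nodes, observed adjacencies
# as edges, haplotypes as paths. All sequences are invented.
nodes = {
    1: "ACGT",  # shared flanking sequence
    2: "G",     # allele common in one population
    3: "T",     # allele common in another; neither is "the" reference
    4: "CCAT",  # shared flanking sequence
}
edges = {(1, 2), (2, 4), (1, 3), (3, 4)}

def haplotype(path):
    """Spell out the sequence along a path, checking each adjacency."""
    assert all(step in edges for step in zip(path, path[1:]))
    return "".join(nodes[n] for n in path)

# Both alleles are equally valid paths through the same structure.
print(haplotype([1, 2, 4]))  # ACGTGCCAT
print(haplotype([1, 3, 4]))  # ACGTTCCAT
```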
Haussler also spoke about the Beacon Project, a GA4GH effort to send minimalist queries to genome databases. A beacon asks, simply, for a database to report whether it includes any genomes with a certain base call at a certain chromosomal position. “That’s a very atomic unit of information,” said Haussler. A beacon could potentially be used to find genomes that share a suspected rare disease variant during diagnostic odysseys, but more fundamentally, the Beacon Project shines a light on data sharing practices across institutions. “We’ve defined it formally so that anybody can, quote, ‘light up a beacon’ for their database,” said Haussler. “And if you can’t do it for reasons of compliance, then we ask you why. Why is your institution so unwilling to share even this one quantum of information? It gives us a chance to kind of get in your face.”
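Stripped of its web plumbing, the logic of a beacon is almost trivially small, which is the point. Here is a toy version in Python; the stored variants are invented, and a real beacon would answer the same question over HTTP against a genome database.

```python
# A beacon reduced to its single yes/no question. The stored
# variants are invented examples, not real patient data.
class Beacon:
    def __init__(self, variants):
        # variants: set of (chromosome, position, base) tuples
        self.variants = set(variants)

    def query(self, chrom, pos, base):
        """Say only whether any genome here carries this allele."""
        return (chrom, pos, base) in self.variants

beacon = Beacon({("17", 41245466, "A"), ("7", 117199644, "T")})
print(beacon.query("17", 41245466, "A"))  # True
print(beacon.query("17", 41245466, "G"))  # False: nothing else leaks
```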
Multiple presentations at Tricon dealt with the vexing challenges of point-of-care diagnostics. Barry Lutz, a bioengineer at the University of Washington, showed some endearingly low-tech solutions to the mechanical problems of conducting fine-tuned laboratory processes in the microenvironment of a handheld testing device. His suggestions included using sponges and water-soluble paper to open and close valves predictably as liquid travels through the system, so that a swelling sponge or a dissolving slip of paper both carries out a reaction and triggers the next step. Later, a panel discussion with Christine Hara from Purigen Biosystems, Adithya Cattamanchi of UC San Francisco, and Philip Felgner from UC Irvine took on the added challenges of point-of-care diagnostics in countries with high disease burdens. The panelists agreed that many setbacks in creating these tests could be avoided if developers put themselves in the field and learned the basic limitations facing their products, which can include a lack of electricity, cold chain storage, or clinical and transportation infrastructure; social stigmas around diseases; and the confounding presence of other disease agents, which limits the utility of narrowly targeted tests.
A highlight of this year’s Tricon for me was a symposium on the subject of “New Frontiers in Gene Editing.” To no one’s surprise, the focal point for these sessions was CRISPR-Cas9, the revolutionary genome engineering tool that has upended expectations for how much, and how quickly, gene editing could impact medicine. Fascinating insights into the speed at which CRISPR-based tools and therapeutic ideas are developing came too fast to count, but here are a few pearls from the symposium:
Eric Olson is working on CRISPR therapies for muscular dystrophy in mice at the University of Texas Southwestern Medical Center, sending Cas9 to modify the dystrophin gene, the single longest gene in both humans and mice. Olson noted that his treated mice had unexpectedly high levels of mosaicism, with some cells expressing the repaired dystrophin gene and some the unrepaired mutant gene. This, he said, got him thinking about a question that will be fundamental to all gene editing therapies as they creep closer to the clinic: what is the minimum penetration needed to see a real therapeutic effect? Later in the symposium, Charles Gersbach, a Duke University biologist working on the same gene, also noted that dystrophin offers an interesting test bed for CRISPR. Most approaches to treating Duchenne muscular dystrophy have zeroed in on exon 51 of the dystrophin gene, but many more patients could be treated by excising the entire stretch of the gene from exon 45 through exon 55, a region spanning over 300 kilobases. In principle, he added, with CRISPR these two approaches call for delivery of exactly the same elements: a Cas9 molecule, and two guide RNAs to direct Cas9 to sites on either end of the targeted region (see the sketch below).
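A rough sketch shows why Gersbach’s two strategies look identical from a delivery standpoint. The coordinates below are placeholders rather than real dystrophin annotations; only the parts list matters, and it is the same either way.

```python
# Both excision strategies reduce to the same parts list: one Cas9
# plus two guide RNAs flanking the region to remove. Coordinates
# are placeholders, not real DMD gene annotations.
def excision_plan(name, cut_upstream, cut_downstream):
    return {
        "strategy": name,
        "components": ("Cas9", f"gRNA@{cut_upstream}", f"gRNA@{cut_downstream}"),
        "deletion_bp": cut_downstream - cut_upstream,
    }

plans = [
    excision_plan("remove exon 51 region", 1_200_000, 1_203_000),
    excision_plan("remove exons 45-55", 1_000_000, 1_330_000),
]
for plan in plans:
    print(plan["strategy"], plan["components"], f"{plan['deletion_bp']:,} bp")
```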
Gersbach also shared an anecdote from his lab that might be familiar to many gene editing biologists who have worked through the discovery of CRISPR. The story dealt with the process of building gene editing tools that could remove exon 51 from the dystrophin gene. “The same student started the project with zinc finger nucleases, and it took him about two years to get it up and going,” Gersbach said. “He then moved to TALENs, which took him about two months. And then he generated all these CRISPR results in about two weeks.” The process was so fast that the student worked up extra guide RNAs to target every exon from 45 to 55.
Both Fei Ann Ran, a postdoc in Feng Zhang’s lab at the Broad Institute, and Alexandra Glucksmann, COO of Editas Medicine, spoke about results from Cas9 molecules derived from different bacteria. While Streptococcus pyogenes Cas9 is the lab standard, Ran and her colleagues have been diligently profiling homologous molecules from dozens of species in search of new molecules that are comparably sensitive and specific. One of these, derived from Staphylococcus aureus, has stood out for having similar activity to S. pyogenes Cas9 while being encoded by a much smaller DNA sequence. In fact, S. aureus Cas9 is so small that its coding sequence can be packaged in an adeno-associated virus (AAV) vector, an industry favorite for delivering genetic payloads to human cells because AAV causes no discernible symptoms and inserts its DNA very predictably into the human genome at a safe site. Glucksmann added that Editas is actively experimenting with AAV and S. aureus Cas9 to get CRISPR systems into cells — something the company has identified as a key obstacle to CRISPR-based therapies.
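The appeal of the smaller enzyme comes down to arithmetic. Here is a back-of-the-envelope check using approximate coding lengths for the two Cas9s and an assumed one kilobase for the promoter, polyA signal, and guide RNA cassette; exact figures vary by construct.

```python
# Why SaCas9's smaller gene matters for AAV delivery. Sizes are
# approximate, and the regulatory allowance is an assumption.
AAV_CAPACITY_BP = 4700   # rough single-vector packaging limit

cas9_orf_bp = {"SpCas9": 4100, "SaCas9": 3160}  # approx. coding length
REGULATORY_BP = 1000     # assumed promoter + polyA + gRNA cassette

for name, orf in cas9_orf_bp.items():
    payload = orf + REGULATORY_BP
    verdict = "fits" if payload <= AAV_CAPACITY_BP else "exceeds capacity"
    print(f"{name}: ~{payload:,} bp payload -> {verdict}")
```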
One more point I want to highlight was made by Matthew Porteus, from Stanford University School of Medicine, who is interested in using CRISPR to address severe combined immunodeficiency (SCID). SCID has a tragic place in the history of gene therapy, because clinical trials in the early 2000s inadvertently gave multiple patients leukemia when viral vectors inserted their genetic payloads into oncogenes. Porteus addressed the safety profile of gene editing head on, saying that scientists will almost certainly not be able to eliminate all off-target cuts made by Cas9, and the specter of dangerous mutations will always hover over this work. But he urged a practical, results-oriented look at off-targets. To understand the safety profile of gene editing, Porteus said, we need to ask “not just where the mutations are occurring, but how are they affecting cellular behavior?” He suggested that experiments need not only to locate off-target cuts, but also to raise cells carrying those cuts and see whether their survival is diminished. After all, he observed, we know that gene editing can be done safely even with some level of off-target cutting: our own immune cells edit their genomes through processes like V(D)J recombination.
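The first half of Porteus’s program, locating off-target cuts, is often framed computationally as a search for near-matches to the guide sequence. The sketch below uses invented sequences and a bare Hamming-distance scan; real off-target searches run genome-wide and also weight mismatch position and the PAM site.

```python
# Enumerate candidate off-target sites as near-matches to a guide.
# Sequences are invented; real searches scan a whole genome.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def candidate_off_targets(guide, dna, max_mismatches=3):
    """Slide the guide along a sequence, keeping near-matches."""
    hits = []
    for i in range(len(dna) - len(guide) + 1):
        window = dna[i : i + len(guide)]
        mm = hamming(guide, window)
        if mm <= max_mismatches:
            hits.append((i, window, mm))
    return hits

guide = "GACGTTACCGGAATTCCAGT"  # invented 20-nt protospacer
dna = "TTGACGTTACCGTAATTCCAGTAACCGACGATACCGGAATTCAAGTGG"
for pos, site, mm in candidate_off_targets(guide, dna):
    print(pos, site, f"{mm} mismatches")
```

The second half of his point, whether cells carrying those cuts actually behave differently, is the part no search algorithm can answer; it takes growing the cells out and watching their survival.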
Those were by no means all the interesting topics covered at last week’s conference. On the Golden Helix blog, Andreas Scherer notes that circulating cell-free DNA, whether for cancer diagnostics or prenatal testing, was attracting a lot of buzz this year, and Ted Slater’s Tricon takeaway message at the Cray blog was that the yawning data demands of next-generation sequencing are still far from a solved problem. For those readers who made it to Tricon this year — what were your favorite presentations? What big issues do you see developing in the world of molecular medicine?