BIOVIA Launches Biologics Informatics Program
February 17, 2015 | BIOVIA, the life sciences-oriented subsidiary of Dassault Systèmes formerly known as Accelrys, today introduced a new informatics platform for companies developing biologic drugs, in an announcement at the Molecular Medicine Tri-Conference in San Francisco. BIOVIA Biologics is designed as a central data repository and workflow management platform for programs that discover and develop biologics, navigate the regulatory process, and prepare these complex biological products for manufacture at scale.
BIOVIA CEO Max Carnecchia spoke to Bio-IT World correspondent Aaron Krol about the challenges of working with this rapidly growing class of therapies, and the need for IT systems tailored to biologics. This interview has been edited for length and clarity.
Bio-IT World: The pharmaceutical industry has been investing much more in biologics in recent years. How much would you say this is about the market incentives that exist — less generic competition, longer exclusivity periods — and how much is the genuine medical promise of these therapies?
Max Carnecchia: Not to be controversial, but it’s definitely a balance of both. The science is there to be able to satisfy unmet clinical needs. You can point to examples of therapies that have been delivered in the course of the last four or five years that prove that point quite effectively. At the same time, biosimilars are very difficult and expensive to develop, and we’ve yet to see what that means from an approval perspective. From a business perspective, certainly our customers in the industry want to have the most effective, safest therapies available, but they also want to have the protections that come with biologics.
When a company turns from small molecule programs to biologics, is it going to look at kinds of data it’s not used to dealing with?
It’s a new technology, a newer area of science. Small molecule drug discovery and development, while it’s very hard, is a process that is well understood. The tools are there to help automate that with high-throughput screening. I think it’s a much more open field to develop effective therapies by way of the large molecule approach. The types of data, the sources of that data, and the variability of its structures have expanded dramatically. There’s a wide variety of data types associated with large molecules, with DNA and genetic sequencing, and with the robust imaging that’s now being used in these processes, and data with that much variability does not process well in a traditional enterprise software system.
The more conventional enterprise software companies, like Oracle, Microsoft, and SAP, can do collaborative data management, but they don’t truly understand the science. Creating tools dedicated to understanding high-volume antibody sequence data, assay data, cell line data, has just not been the history of these businesses. We have the chance to be on the forefront of some of these scientifically aware technologies, to help take these inefficiencies out of the system, and help bioinformaticians spend their time on the science and not trying to hack together IT systems.
Does the complexity of some of this data mean that companies should expect to turn more analysis over to computational systems, and maybe away from the traditional decision-making process?
Ultimately we’d like to get to systems biology, and be able to basically design large molecule therapies completely in silico and get 90% of the way there, the same way you do when you design an automobile or an aircraft today. The first time the new Camry is tested for a crash, it’s tested electronically and that’s what’s approved by the regulators.
We’re a long way off from being able to do that with large molecule drug discovery, but even today we know there are models and algorithms and codes that can help inform and direct the experimentation, so we can reduce the amount of wet lab work that has to be done. And there are all kinds of benefits associated with that. You need fewer reagents, fewer tests, fewer cell lines, less of your scientists’ time, and there’s also the benefit of being able to advance the work more rapidly.
Moving into biologics calls for a lot of expertise that companies built around small molecules are unlikely to have in-house. Are we seeing more dispersed strategies for drug development as a result of that?
The idea that you’re going to acknowledge, as a large pharmaceutical company or a large biotech, that you do not have the market cornered on great ideas and science, and you’re going to collaborate with smaller startups in some sort of joint venture, or you’re going to outsource work to be performed on a contract basis — those are trends that have been well underway in the industry for the last decade. Having said that, I think the large molecule challenge, where the science isn’t as well understood, creates a bigger impetus for this concept of distributed or collaborative work with external partners. The challenges associated with scale-up and tech transfer with large molecules are much more significant than what we’ve had with small molecules.
If you’re going to do distributed collaboration — with CROs, academic institutions, smaller biotech partners — you’ve got to have a technical infrastructure that will allow and facilitate that. You need to have an environment that’s secure and flexible relative to business rules, and we’re watching a lot of these collaborations become very agile. With a contract research organization, you need them to work in certain confines of business rules and a secure environment for 90 days, and then you want to harvest whatever the results are and be able to basically shut that relationship down and no longer allow them access.
One of the most distinctive challenges for biologics is manufacturing. How early in the development process do companies need to be thinking about how they’re going to produce a new biologic at scale?
I would say it needs to start at what would typically be thought of as the lead identification stage. From an information management perspective, you’re doing sequence analysis, activity analysis, you’re doing calculations as to whether it’s developable very early on. We find ourselves spending much more time today with our customers in pharm-dev around scale-up and tech transfer than we necessarily do on the discovery side, because even when you’ve got something that’s safe and effective, you’ve got to get it to scale. It’s a multidisciplinary science problem. It’s biology, but it’s also chemistry and physics. Over time we believe models will emerge to help make that more effective, and informatics is already a critical element of that today.
Do these manufacturing challenges call for new kinds of coordination across separate parts of the drug-making enterprise? Do the process people have to be more aware of the early lead candidate identification people, and vice versa?
There’s a series of political systems and boundaries within these organizations. Most of them are set up where manufacturing is different from safety and preclinical, which is different from development, which may even be different from quality. And historically each one of those organizations has its own systems, and its own people, and they talk to each other via documents that are handed over the wall without a lot of knowledge that goes along with them. We believe that from early discovery all the way through to manufacturing, and to the point of care, ultimately we will need to have an environment that allows for all of the stakeholders to have the appropriate visibility throughout that entire cycle.
If you went to another industry, that is how innovation, manufacturing, and delivery of value to an end customer is done. Boeing does not develop the airplane on its own. It’s working with the airlines, it’s working with passengers, and it’s working with a supply chain of thousands of companies that develop the six million engineered parts that make up a Dreamliner. You’re not going to do that by passing Word documents and PowerPoint presentations and Excel spreadsheets around, because every time you create another version of a document, it’s just another place to have failure. You need to have a bulletproof enterprise software system, an orchestration layer for all those stakeholders.
Within our team, BIOVIA does the science, and Dassault Systèmes does the orchestration layer. When we came together a year ago, we asked, how do we take what Dassault Systèmes has done so successfully for discrete manufacturers like the automotive and aircraft industries, and bring that to the science-driven process and formulation industries — the life sciences, pharma and biotech, and energy?
An interesting feature of biologics is that their structural complexity is so great that it’s difficult to even have a common language for the molecular makeup of a drug candidate. Is that just a semantic issue, or does this have real repercussions for drug discovery and development?
It has real repercussions for drug discovery and development, and it also has real repercussions for intellectual property ownership. If you step into any pharmaceutical company today, there’s a generally accepted body of knowledge relative to how to describe a chemical compound, either through a chemical structure or a sketch or a chemical formula. That level of standardization and agreement does not exist with a biological entity. A small molecule drug might have a hundred atoms in it, whereas you could end up with thousands of atoms in a biological therapy. Working in a consortium with four very large biopharma companies over the last five years, we’ve done a tremendous amount of work around this concept of biological registration, but it is a multi-headed beast, and that domain is still unfolding.
Many of the unique challenges for biologics involve entirely new wet lab processes, screening programs, and production and clinical processes. How much can we expect from informatics-oriented solutions when it comes to tackling these real-world problems?
For every one of those processes, there’s an opportunity and a challenge for informatics. With those new lab techniques there will be new instruments, and those devices need to be supported by an informatics system. For the information that’s coming off those devices, there are all kinds of new calculations that may need to be rendered. And then there’s the analysis and prediction side of it.
And ultimately, as these best practices emerge, and it becomes clear that there is a better way to do something, having a calcified system, having a system that is inflexible and rigid, can really work against you. You know what the right thing to do is, but your systems won’t allow you to do it. And then it becomes very expensive to adjust and make changes. Historically, there haven’t been a lot of dedicated tools to deal with the scientific data types that come with this kind of discovery.
Obviously there has been biologics research and development going on for a long time, but today it’s typically done with very specialized resources, whether those are the researchers themselves or the bioinformaticians who support them. Part of the challenge, and part of our opportunity at BIOVIA, is to democratize participation in that process. How do we take software tools and models that are crafted by bioinformaticians, and free them to travel within a biopharma in such a way that you do not have to be the bioinformatician to run the model and understand the results? That doesn’t mean there will no longer be a need for a bioinformatician, it just means we will be able to get leverage out of what they do, and the models they create, and do that within a framework where you have traceability and auditability.