Quinten’s Savoir Faire Approach to Data Mining
August 3, 2012 | Founded in early 2009, a small but rapidly expanding French CRO called Quinten is quietly building an impressive client roster as it brings an innovative data-mining approach to a broad expanse of the drug development pipeline – from early-stage target identification to its sweet spot in clinical trials and patient stratification.
Kevin Davies recently spoke with Quinten CEO and co-founder Alexandre Templier.
Bio-IT WORLD: Alexandre, please tell us about your background.
TEMPLIER: I’m an engineer by training. I specialized in biomedical engineering and got my PhD working on spinal implants. I quickly got interested in how to improve decisions in surgery. It was striking to me, as an engineer, how the surgical world is – and is not -- a science.
Alexandre Templier
The subjectivity of surgeons can lead to decisions that can have good or bad outcomes, but I noticed a total lack of systems to capture this information on outcomes in order to improve decisions. A lot of innovation is necessary for doctors to capture information about the treatment, pathology and outcomes of their patients to provide the appropriate treatment for the appropriate patient.
After my PhD, I joined an implant company and set up a subsidiary called SurgiView. We raised money and did some European projects, which lasted 7-8 years. We developed some efficient solutions to help surgeons collect experience in networks. But the chief obstacle was not technical; it was a market problem. In developed countries, no social medical system is ready to invest in preventing complications or indirect costs – they’re too busy trying to face the challenges of handling increasing costs. It is always surprising to see that for 99% of patients, no information is collected to help address future questions.
[After SurgiView] I decided to go where the data were already available – instead of helping surgeons collect information, I joined a pharma consulting company in Paris doing data mining and developing a healthcare business. After 18 months, I had the opportunity to build a new company with some friends. From the beginning, we focused on providing biopharma companies with high-end services to help them realize the full potential of the data they generate. Our first application sold was helping companies transition from Phase II to Phase III trials.
Where does the name ‘Quinten’ come from?
Quinten is a character in a Dutch novel! When I read this book, I felt this character was so great! I had the name long before the idea of the company.
What is unique about your technology or approach to data mining?
The failure rate in drug development between early- and late-stage clinical trials is 90%: only one in ten drugs entering Phase I reaches Phase III. This explains the high cost of drugs and the ever-increasing expenses paid by public and private health payers… Cutting costs while developing safer, more effective drugs is not just about gaining a competitive advantage, it’s the only way forward for the entire industry! They have to change the paradigm.
From the beginning, we thought that the solution lies in the huge amount of unexploited information in clinical trials. Those trials are primarily meant to ensure drugs produce at least the expected result and an acceptable rate of adverse events… But it is important to keep in mind that the average efficacy of a given product in a given study is just the mix of several groups of patients, some of which show much higher or lower response rates than average. What are the key characteristics of those patients? Who are they?
Most think they are addressing the question – they say that’s what biomarker discovery and translational research is about. Traditionally, they try to anticipate patient response or adverse events via a top-down approach based on predictive modeling, using machine learning technology and so on. Those technologies are expected to produce one model to help predict response or adverse events. What will be the response of any given patient?
But this is not what people need. This top-down approach does not detect the patient subgroups – the optimal responders or non-responders. That explains the poor validation rate of current biomarkers. The industry hopes biomarkers will help reduce attrition rates, but the failure rate between biomarker candidates and validated biomarkers is even higher than that of the drug development process itself.
So what makes Quinten different?
We thought we had the skills to bring something new, adapted to the industry’s needs. We have high-level mathematicians and software developers. The whole idea of the company is basically to mimic the way a medical doctor gains experience. It’s a bottom-up approach. Doctors don’t learn from patients by building a predictive model! Each patient is unique and deserves personalized care. So doctors build up many mental groupings as the patients accumulate in their minds. The more patients they have, the more they aggregate common features such as response to treatment, and the more clearly they see groups forming with key characteristics.
Our goal was to extract the most valuable information from any dataset, in the form of specific profiles of patients to target for treatment (or avoidance of treatment). Once such groups are identified, they’re quite easy to recognize in other studies.
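The bottom-up idea Templier describes – enumerating patient profiles and keeping those whose response rate deviates most from the study average, rather than fitting one predictive model – can be sketched in a few lines. The data, feature names, and thresholds below are invented for illustration; this is a minimal toy, not Quinten’s proprietary algorithm.

```python
from itertools import combinations, product

# Toy patient records: binary traits plus a 0/1 response flag.
# All names and values are illustrative.
patients = [
    {"diabetic": 1, "smoker": 0, "response": 1},
    {"diabetic": 1, "smoker": 0, "response": 1},
    {"diabetic": 0, "smoker": 1, "response": 0},
    {"diabetic": 0, "smoker": 0, "response": 0},
    {"diabetic": 1, "smoker": 1, "response": 1},
    {"diabetic": 0, "smoker": 1, "response": 0},
]

def subgroup_rates(patients, features, max_size=2, min_support=2):
    """Enumerate feature-value combinations bottom-up and rank each
    sufficiently large subgroup by how far its response rate deviates
    from the overall average."""
    overall = sum(p["response"] for p in patients) / len(patients)
    found = []
    for size in range(1, max_size + 1):
        for combo in combinations(features, size):
            for values in product([0, 1], repeat=size):
                group = [p for p in patients
                         if all(p[f] == v for f, v in zip(combo, values))]
                if len(group) < min_support:
                    continue  # skip profiles matching too few patients
                rate = sum(p["response"] for p in group) / len(group)
                found.append((dict(zip(combo, values)), len(group), rate))
    found.sort(key=lambda t: abs(t[2] - overall), reverse=True)
    return overall, found

overall, ranked = subgroup_rates(patients, ["diabetic", "smoker"])
print(f"overall response rate: {overall:.2f}")
for profile, n, rate in ranked[:3]:
    print(profile, f"n={n}", f"rate={rate:.2f}")
```

On this toy data the diabetic subgroup surfaces as an extreme responder profile (rate 1.0 against an average of 0.5) – exactly the kind of "optimal responder" group a top-down average would blur away.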
What sort of data are you using?
Any kind of data – we’re data agnostic. We have no doctoral biologists -- we don’t need that [expertise]. We take as much data as possible. Typically we use demographics, clinical information, biological information; scores (pain, functional); genomic data; and all the omics data. We don’t deal with images per se, but we do use measurements.
We can handle multiple questions. In eight weeks on average, we deliver actionable recommendations.
How did you engage with customers early on, without a proven track record?
We don’t try to impress people with our track record, but we have a powerful approach to commit very quickly once we understand the client’s problem. We make sure the data is rich enough to generate the expected value. We didn’t ask people to believe us and invest – we explained we’d work with their data. Second, we didn’t ask them to commit big money, only a small pilot to see the first results to ensure we could deliver in full.
Our first customer was Transgene. Our head count has doubled each year. We’re now 15 people, self-financed, profitable since the first year. We’ve performed about 50 engagements for 20 or so clients, including 4-5 pharma companies among the top ten. Our customers include Sanofi, Roche, Novartis, Abbott, Bayer Healthcare, AstraZeneca, Servier, and Ipsen. We have 100% client satisfaction – every engagement was successful!
We hope to raise our profile in the US. Over the past six months, we’ve been contacting many companies, and several are now in advanced discussions to get started.
Could you describe some specific examples where your approach has helped a customer?
A common occurrence is that a Phase II study is not powerful enough: the treated patients don’t respond much more than placebo. So our customers use stratification and decide to move forward on a narrower indication – say, 15% of their initial target population – in order to be 10% more effective than placebo. After 1-2 weeks, we can tell them whether it’s possible to identify a bigger population responding more than 10% above placebo, with higher chances of validation. That’s the way we approach our clients.
Another example is when a Phase II study is very successful – nobody will try to identify subgroups in that study because it has a positive outcome. But why should those specific subgroups – the very good responders and the non-responders – be present in the same proportions in Phase III? It won’t generally be the case.
We were hired by a top 20 pharma to study a given adverse event associated with a drug on the cardiology market. One of the subgroups we identified was diabetic patients with normal cholesterol (most diabetics have abnormal cholesterol)… Nobody would have thought to test the interaction between diabetes and normal cholesterol, since neither feature had any individual influence on the adverse event. But our algorithm tested every interaction and found that this [relationship] explained a third of the global adverse events. This resulted in a change in the label of the drug.
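The key point in this anecdote is that neither trait predicts the adverse event on its own; only their conjunction does. An exhaustive pairwise scan of that kind can be sketched as below – the records, feature names, and lift threshold are invented to mirror the example, and this is not the actual algorithm used.

```python
from itertools import combinations

# Illustrative records: binary traits plus an adverse-event (ae) flag.
# Chosen so each trait alone is uninformative but their conjunction is.
records = [
    {"diabetic": 1, "normal_chol": 1, "ae": 1},
    {"diabetic": 1, "normal_chol": 1, "ae": 1},
    {"diabetic": 1, "normal_chol": 0, "ae": 0},
    {"diabetic": 1, "normal_chol": 0, "ae": 0},
    {"diabetic": 0, "normal_chol": 1, "ae": 0},
    {"diabetic": 0, "normal_chol": 1, "ae": 0},
    {"diabetic": 0, "normal_chol": 0, "ae": 1},
    {"diabetic": 0, "normal_chol": 0, "ae": 0},
]

def ae_rate(rows):
    """Adverse-event rate of a set of records (0.0 if empty)."""
    return sum(r["ae"] for r in rows) / len(rows) if rows else 0.0

def interaction_effects(records, features, min_lift=0.2):
    """Flag feature pairs whose joint subgroup has a much higher
    adverse-event rate than either feature taken alone."""
    flagged = []
    for f, g in combinations(features, 2):
        joint = [r for r in records if r[f] == 1 and r[g] == 1]
        if not joint:
            continue
        # Lift over the best single-feature subgroup rate
        lift = ae_rate(joint) - max(
            ae_rate([r for r in records if r[f] == 1]),
            ae_rate([r for r in records if r[g] == 1]),
        )
        if lift > min_lift:
            flagged.append((f, g, ae_rate(joint), len(joint)))
    return flagged

flagged = interaction_effects(records, ["diabetic", "normal_chol"])
print(flagged)
```

Here each trait alone carries a 50% adverse-event rate, but their conjunction reaches 100% – the individually silent interaction the scan is designed to surface. In practice the number of pairs (and higher-order combinations) grows combinatorially, which is why exhaustive interaction search is computationally demanding at scale.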
If you consider the drug development pipeline, how broadly applicable is your technology?
There are about a dozen applications. The earliest in drug development we’ve done so far is lead optimization – we help chemists leverage the wealth of information they’ve acquired about the compounds they’ve synthesized. We also have programs in ADME to identify compounds with the structural features most likely to be active, selective, non-toxic, etc. We’ve provided valuable insights that are very complementary to classical QSAR approaches.
Further downstream, there are Phase I data applications – people are interested in understanding response mechanisms and signatures. These studies have small numbers of patients but large numbers of variables. Here we look for signatures that help segregate treated patients from non-treated patients.
We’ve discussed Phase II data already. In Phase III, our approach is also very useful. Most people in pharma think Phase III data can’t help much in the real world, even though companies invest heavily and authorities and payers base reimbursement on real-world results. Once a drug has passed Phase III studies and obtained approval, it becomes the problem of the health economics department, which looks at how the drug behaves in the real world and how much will be reimbursed on that basis. People think Phase III data can’t be used for this, but we can actually extract optimal patient profiles from Phase III studies to help anticipate real-life studies…
In one real case, there was an oncology product released on the market where [typically] one patient out of five responds to treatment… Nobody knows how to characterize those 20% responders. We’ve faced situations where people say it’s not possible.
We also work with academic research teams: for instance, we were asked by Institut Gustave Roussy to process the data gathered from a cohort of lung cancer patients, all of whom had undergone surgery and half of whom had received adjuvant chemotherapy. Surprisingly, these patients had the same rate of relapse whether they had had chemotherapy or not. After analyzing the data, we identified two sub-groups characterized by specific transcriptomic signatures (involving 4 and 5 genes respectively):
One sub-group (30% of the patients) was found to have a rate of relapse of 13% after chemotherapy, compared with a rate of 86% without chemotherapy. A second sub-group (35% of the patients) had an 80% rate of relapse after chemotherapy, compared to a rate of only 4% without chemotherapy. These signatures are currently being validated.