A Way To Increase Drug R&D Success Rates


It’s pretty well established that the process of going from an idea to an FDA-approved new drug is long, arduous, and expensive, taking anywhere from 10 to 20 years and costing at least a billion dollars. It’s no wonder, then, that the biopharmaceutical industry is continuously looking for ways to gain efficiencies and improve the odds. A great example is the pact formed this week between the NIH and ten drug companies. In this five-year collaboration, called the Accelerating Medicines Partnership (AMP), the NIH and these collaborators will share scientists, tissue and blood samples, and data in order to understand the underpinnings of the disease processes in Alzheimer’s, Type 2 diabetes, rheumatoid arthritis, and lupus. In doing so, it is hoped that better drug targets will be identified, thereby leading to new experimental drugs that have a higher chance of successfully treating these diseases.

This is a great initiative and perhaps bodes well for further such collaborations. But it must be pointed out that it impacts only the very earliest part of the drug R&D process: idea and target selection. One still has to find a compound that selectively hits the target, demonstrate the safety of that compound both in vitro and in vivo, and then begin the three stages of clinical trials. The knowledge generated by the AMP will help, but it will only impact the first step in the long R&D endeavor. Improvements are still needed across the entire process.

Perhaps a clue to improving compound survival rates lies in an excellent paper in Nature Biotechnology. In “Clinical development success rates for investigational drugs,” the authors provide a comprehensive study of clinical success rates across the drug industry. They analyzed 2003–2011 data from 835 companies, including large pharma/biotech, small to mid-sized pharma/biotech, and emerging biotech. Their data set includes 4,451 drugs with 7,372 independent clinical development paths (the larger number of paths reflects companies exploring a single compound for multiple indications, such as different types of cancer). They have analyzed the data in many ways, including survival rates by therapeutic area, by molecule type (small molecules versus biologics), and by lead or non-lead indication. For a drug R&D aficionado, the data are fascinating.

What was especially interesting to me was the breakdown of survival rates by clinical stage. The stage of greatest compound attrition was in phase 2 – the place where compounds are first tested in people with the disease you’re trying to treat. The data show that only 32% of compounds entering phase 2 make it into phase 3. This is not surprising. While a compound might have been tested in animal models of disease, phase 2 is the first real test of the original concept of the research program. For me, phase 2 was the most important step in the development of a potential new drug, and positive results from these clinical proof-of-concept (POC) studies were always the most memorable. I can tell you exactly where I was when I heard the successful results of the POC studies from a number of Pfizer programs that led to drugs including Viagra, Chantix, Tarceva, and Xeljanz. These are exciting happenings in a research organization. But they are all too rare.

Survival gets better once a compound makes it to phase 3, where 60% advance to regulatory filing. However, my sense is that 20 years ago phase 3 survival was much higher. In the 1990s, there were multiple entries in major drug classes such as statins, ARBs, and antibiotics, and the path to approval tends to be easier for follow-on programs. Plus, safety and differentiation hurdles were not nearly as high then as they are now. Finally, there are some therapeutic areas where a phase 2 POC is not definitive and phase 3 is where the true proof of efficacy lies. Alzheimer’s disease is such an example in that phase 2 studies tend to measure a compound’s ability to lower markers of disease, while phase 3 is where the full-blown, multi-year study in patients occurs. In a sense, the risk in such therapeutic areas is shifted to phase 3. Thus, one can understand why phase 3 survival rates are lower now than in the past.

One could be discouraged by the survival rates for phase 2 and phase 3 across the biopharmaceutical industry. But these are the stages that are the most challenging, where the science behind your theory really gets tested and where the greatest risk lies. There are high hurdles to overcome in these two stages and, for now, I don’t see easy ways to beat these odds. However, the main surprise to me in reading this paper was the high rate of attrition in phase 1. Only 64% of compounds are making it through this stage. Yet phase 1 should be pretty straightforward. Normally, all you are doing at this point is testing a compound in healthy volunteers, first in single escalating doses, then in multiple doses for some weeks, to confirm that the compound is well tolerated in humans before moving on to patients. Theoretically, before any compound has been dosed in humans, it has gone through extensive preclinical evaluation. It has been tested in a battery of in vitro assays to determine selectivity and safety. It has been screened against a panel of enzymes to understand its metabolism. Major metabolites have been synthesized and screened to determine whether they are safe. It has been tested at high doses in animals (usually rodents and dogs) for 30 days to measure potential risks in vivo. Only after extensive testing should a compound enter human trials.

Why, then, don’t more compounds clear phase 1? At times, the in vitro and in vivo preclinical models don’t predict what will happen in humans. But this shouldn’t happen 36% of the time. This should be a relatively rare occurrence if the candidate selection criteria are rigorous and enforced. I can only speculate as to why this number is not higher. As this survey includes a wide variety of R&D organizations, it could be that there are major differences in the rigor of the preclinical studies conducted by different organizations. For example, small companies and start-ups might not have the resources, expertise, or time to conduct a lot of upfront studies. I do recall that in my time in big pharma, some early development compounds that we reviewed as potential licensing candidates were missing studies that we would have routinely done at that point.

So why this focus on phase 1? Well, unlike phase 2 and 3, one can envision a drastic improvement in phase 1 survival rates. I don’t think that you can ever get 100% survival, but I do think that 85–90% is achievable. That would improve overall survival rates from the 1 in 10 reported in this paper to roughly 1 in 7. How can this be done? I have already noted the importance of the NIH’s AMP initiative. Perhaps a similarly conceived precompetitive initiative could be created whereby this same group gets together, studies the leading causes of compound attrition in phase 1 based on their decades of experience, and then crafts guidelines, a “how-to” book so to speak, that would establish a set of best practices to be shared throughout the biopharmaceutical world. The AMP is a great step. Wouldn’t it be nice to create a similar effort designed to improve phase 1 survival?
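For readers who want to check the arithmetic behind those 1-in-10 and 1-in-7 figures, here is a minimal sketch in Python. It assumes the stage transition rates cited above (64% for phase 1, 32% for phase 2, 60% for phase 3) plus an assumed filing-to-approval rate of about 83%, chosen only so the baseline reproduces the paper’s roughly 1-in-10 overall figure; the 87% phase 1 rate is simply a hypothetical point inside the 85–90% range, not a number from the paper.

```python
# Back-of-the-envelope check of the overall clinical success rates discussed above.
# Phase transition rates for phase 1 -> 2 (0.64), phase 2 -> 3 (0.32), and
# phase 3 -> regulatory filing (0.60) come from the article; the filing-to-approval
# rate (0.83) is an assumption chosen so the baseline matches ~1 in 10 overall.

def overall_success(phase1, phase2=0.32, phase3=0.60, approval=0.83):
    """Probability that a compound entering phase 1 reaches approval."""
    return phase1 * phase2 * phase3 * approval

baseline = overall_success(phase1=0.64)   # reported phase 1 survival
improved = overall_success(phase1=0.87)   # hypothetical 85-90% phase 1 survival

print(f"Baseline: {baseline:.3f} (about 1 in {1 / baseline:.0f})")
print(f"Improved: {improved:.3f} (about 1 in {1 / improved:.0f})")
# Baseline: 0.102 (about 1 in 10)
# Improved: 0.139 (about 1 in 7)
```

Under those assumptions, lifting phase 1 survival into the 85–90% range moves the overall probability of approval from about 10% to roughly 14%, i.e., about 1 in 7.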