Predictive tool could save pharma billions

By Mike Nagle



A tool that predicts which experimental drugs will ultimately be
approved for release could cut development costs by nearly 40 per
cent, saving pharma companies hundreds of millions per new drug.

Dr Asher Schachter and Dr Marco Ramoni, from the Children's Hospital Boston Informatics Program (CHIP), developed Phorecaster, a model that could calculate the probability that a given drug would pass successfully through Phase III clinical trials and receive New Drug Application (NDA) approval from the US Food and Drug Administration (FDA). Their model is based on populations of drugs rather than the more conventional populations of patients.

It now costs more than $800m (€615m) to develop a new drug, with a large proportion of the cost coming from late-stage clinical failures. If drug developers could predict which compounds will succeed, it would give them the confidence to invest or the knowledge to cut their losses early. The difficulty is achieving this without stifling innovation.

Terminating a failing drug earlier in development saves pharma money, which improves revenues and frees up resources for more promising drugs in development. This could empower the industry to take more risks on truly innovative medicines rather than just take the easy option of developing 'me-too' drugs.

"We're seeing a trend of rising costs for fewer breakthrough drugs. We have shown that we can reverse this trend, and reduce drug costs without hampering profitability," Dr Schachter explained to DrugResearcher.com.

Perhaps most importantly, the ultimate winners from a successful predictive tool would be the patients. The technology could ensure that patients are only ever exposed to safer and more effective drugs.

But pharma companies wouldn't just be investing in a predictive tool, no matter how useful that tool may be.

"The model is a single component of what Phorecaster is. Marco and I are key elements of what we are offering to pharmaceutical and biotechnology companies, big and small," he said.

Schachter and Ramoni built the model using safety and efficacy data from over 500 successful and failed new drugs divided into therapeutic categories. The model was then tested on a group of cancer drugs whose fate was already known - including Novartis' Gleevec (imatinib) and Genentech's Rituxan (rituximab, marketed elsewhere as MabThera). It was found to be 78 per cent accurate in predicting clinical trial success and eventual approval.

"Our reported model is centered on safety and efficacy because drugs that are unsafe and/or ineffective are not only bad for patients, but will also inevitably end up costing more in terms of legal implications as well as public loss of trust and faith in pharmaceutical companies," said Dr Schachter.

The model also contains a function to account for drug candidates that fail for more than one reason.

This model differs from previous techniques: instead of focusing on patient populations to give overall success rates, it can address go/no-go decisions for a specific New Chemical Entity (NCE), as classified by the FDA.

Dr Schachter continued: "I would also argue that market economic forecasts assume that the drug in question will actually reach marketing approval by the FDA. Our model predicts the probability of FDA approval, and therefore empowers market forecasts."

To input a drug into the model, characteristics such as therapeutic class, development source (in-licensed vs in-house), and preclinical and clinical data are all needed.
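As an illustration only, a population-of-drugs prediction of this kind can be sketched as a lookup over historical outcomes for drugs with a matching feature profile. Phorecaster's actual method is not published in this article; the feature names and records below are invented for the sketch:

```python
# Hypothetical historical records: (feature profile, approved?).
# Feature names and outcomes are illustrative, not Phorecaster data.
historical = [
    ({"class": "oncology", "source": "in_house", "tox_signal": False}, True),
    ({"class": "oncology", "source": "in_licensed", "tox_signal": True}, False),
    ({"class": "oncology", "source": "in_house", "tox_signal": False}, True),
    ({"class": "oncology", "source": "in_house", "tox_signal": True}, False),
]

def approval_probability(candidate, data):
    """Share of past drugs with the same feature profile that won approval."""
    matches = [approved for features, approved in data if features == candidate]
    if not matches:
        return None  # no comparable drugs on record
    return sum(matches) / len(matches)

p = approval_probability(
    {"class": "oncology", "source": "in_house", "tox_signal": False}, historical
)
print(p)  # 1.0 for this toy data set
```

A real model would smooth these estimates and weight features rather than demand exact matches, but the principle - conditioning on a population of drugs rather than a population of patients - is the same.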

Schachter estimates that the model could save developers up to $283m per successful new drug (from $727m to $444m). The model also predicts revenues would increase by an average $160m per Phase III trial (during the drug's first seven years on the market). Dr Schachter explained that this reflects the model's ability to reduce the erroneous termination of would-be successful drugs, thereby increasing the likelihood such a drug will make it to market.
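The headline figures allow a simple back-of-envelope check; the numbers below are taken from the article itself, not from Phorecaster's output:

```python
cost_without = 727  # $m per successful new drug, as reported
cost_with = 444     # $m per successful new drug with the predictive model
savings = cost_without - cost_with                   # the quoted $283m
reduction = round(100 * savings / cost_without, 1)   # relative reduction
print(savings, reduction)
```

The result, a 38.9 per cent reduction, matches the "nearly 40 per cent" figure quoted at the top of this article.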

As well as using data from successful drugs (true positives), the model also takes into account false positives - drugs that were not approved despite making it through trials. These figures can then be used to predict drugs that will fail (true negatives) and false negatives - drugs that were terminated early and might have ultimately proved successful.
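The four outcome classes described above form a standard confusion matrix, from which an accuracy figure like the reported 78 per cent can be computed. The counts below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical counts for a validation set of drugs with known fates;
# "positive" follows the article's usage: predicted to succeed.
tp = 30  # predicted success, actually approved
fp = 6   # predicted success, but not approved
tn = 9   # predicted failure, and indeed failed
fn = 5   # predicted failure, but would have been approved

accuracy = (tp + tn) / (tp + fp + tn + fn)
print(round(accuracy, 2))  # 0.78 with these illustrative counts
```

Note that the two error types carry very different costs for a developer: a false positive wastes a Phase III budget, while a false negative discards a would-be blockbuster.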

The tool has also flagged real risks in hindsight. Dr Schachter revealed: "The model predicted a high likelihood of failure for approved drugs that later demonstrated significant safety and/or efficacy issues."

However, the model is not perfect. If the input data aren't up to scratch, the predictions suffer accordingly. This weakness is exacerbated by the pharmaceutical industry's general reluctance to share data on clinical trials, particularly on failed drugs. Although Schachter approached several pharma companies, he was unable to access their data.

He said: "One of our most urgent messages is that increased data transparency would empower predictive models and would reduce industry wide failure rates, benefiting companies, patients, insurers, investors, and even the FDA which would have a lessened burden in terms of the number of questionable drugs to review for approval."

Schachter found that there was insufficient data to model combination therapies, a particular problem in cancer treatment, where monotherapy is rare. Schachter also tried to build the model using data where every drug candidate was evaluated for a specific indication. This proved impossible because many drugs treat more than one type of disease.

The data-sharing problem has recently been addressed by the Pharmaceutical Research and Manufacturers of America (PhRMA), which has launched a publicly accessible database of clinical trial results.
