How to reduce the odds of failure in late phase clinical trials

By Melissa Fassbender


(Image: Getty/masterzphotois)


Improving the prognostic value of early phase data is one of several ways to increase the probability of success in drug development, a process that has become more expensive as failure rates have risen, says an industry expert.

Lack of efficacy has long been considered one of the “big killers” of drug development, and while pharmacokinetic (PK) measurements have improved, failure rates have not, as other operational issues, such as patient recruitment, continue to challenge clinical trials.

“Where PK improved and where it actually got better through adapting clinical pharmacology, it didn't make the impact we'd hoped for in terms of Phase III failures,” said Adrian Wildfire, scientific director, SGS Life Sciences. In fact, Phase III failures increased between 1990 and 2015.

“So just because you can account for some of these reasons, some of these drivers of failure, it doesn't mean you're going to improve within the industry,” he told us. “What tends to happen is you still see high failure rates, but often those deviate into new areas of vulnerability.”

Additionally, while spending has been increasing, the number of new chemical entities (NCEs) and new medical devices has fallen, said Wildfire, noting that it has become more expensive to develop new drugs: the cost has jumped by almost a billion dollars in the last 20 years. “The cost of failure has gone up dramatically,” he added.

“It’s still extremely expensive to take any of these [NCEs] through to fruition, and whether it be drug or vaccine, and unless we are going to see some extensions in patent life, or whether we're going to see better pull mechanisms being promoted, we may see some of the promising new drugs for infectious diseases suffering the same fate as antibiotics and anti-microbial resistance,” said Wildfire.

Biomarkers and adaptive trial designs

To avoid suffering such a fate, researchers need to spend more time and money looking at early phase data to ensure covariates are actually correlates, “and also of real prognostic value when measuring markers of disease,” Wildfire explained.

“With or without biomarkers, you see different ratios of success,” he added, “but also it's quite clear that especially in the early phase, biomarkers can have a big effect on whether you're confident to proceed.”

Still, when examining an infectious disease state, measuring biomarkers can be difficult when it is not known at which point a patient became infected. This is why controlled human infection modeling has become a popular way to follow disease progression “right through from A to Z,” Wildfire explained, and to observe how those biomarkers change, which can be valuable for discovery.

Adaptive clinical trial designs are also driving the industry forward by improving and accelerating timelines. “Adaptive designs where you can change your dosing regimens or dosing formulations … and adjust PK timing in terms of sampling can also be very useful,” Wildfire explained.

Non-adaptive designs, by contrast, are becoming less popular because they do not give researchers the flexibility to follow clues that emerge during a trial.

“You don’t know it all when you go in,” said Wildfire, explaining that part of a clinical trial is having a null hypothesis: “You're saying nothing's going to happen, so when something does happen, you may not have the correct pathway to follow the piece of string to the end to say what exactly is happening. And we've seen that numerous times.”

However, an adaptive trial is not always needed when endpoints are simple, specifically when the molecule was well characterized in preclinical studies and no changes are expected.

“Sometimes you want to keep your trial very well characterized and structured because you have some very simple questions to ask and you do not want to add extra noise through unnecessary complexity,” explained Wildfire. “Each time you add an arm to a trial design you're decreasing the power of your study.”
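Wildfire's point about arms and power can be illustrated with a standard normal-approximation power calculation (a minimal sketch, not SGS's method; the enrolment and effect-size figures are hypothetical): if total enrolment is fixed, every extra arm shrinks the per-arm sample and with it the power of each pairwise comparison.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(n_per_arm, effect, sd=1.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test,
    where `effect` is the true mean difference between arms."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)           # significance threshold
    ncp = effect / (sd * sqrt(2 / n_per_arm))    # standardized signal
    return nd.cdf(ncp - z_crit)

# Fixed total enrolment of 200 patients, moderate effect (0.5 SD):
# each added arm cuts the per-arm n and, with it, the power.
for arms in (2, 3, 4, 5):
    n = 200 // arms
    print(f"{arms} arms, n={n}/arm: power = {power_two_sample(n, 0.5):.2f}")
```

Running the loop shows power falling monotonically as arms are added, from roughly 0.94 with two arms to about 0.61 with five, which is the trade-off the quote describes.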

This is less of an issue in larger studies, but it poses a challenge in smaller trials, which may also be restricted in terms of funding. Still, one of the biggest problems Wildfire sees is researchers being too ambitious in their study design and trying to answer too many questions at once, which results in cloudy data that may not reach statistical significance.

His advice? Think more about the problem and less about the solution: “If you understand the problem well enough, you probably won't have to try and find so many solutions.”

Additionally, Wildfire said researchers should spend more time looking at preclinical data before considering trial design, because the most common reason for failure is lack of efficacy.

“There should not be a 50% failure rate at Phase III on efficacy,” he said. It should be lower. “So, somebody somewhere is not thinking about [the preclinical data] hard enough.”

Optimism bias and outliers

High failure rates point to a number of challenges, such as a lack of sufficient correlates and analytical tools that can predict efficacy in large populations. Not using the right animal models, or not using human models early enough, may compound these issues, Wildfire said.

Another challenge is optimism bias: people are frequently too optimistic about their data, he said.

Taking an evidence-based approach also requires the right number of patients, and insufficient enrolment is another common issue. Said Wildfire, “People don't want to put the money into numbers and they don't want to put the money into recruitment because it is seen as the easy part, an operational issue and not a scientific challenge.”

Where the money isn’t there, people may think they will get more value for their money by doing a study with fewer subjects and then trying to do “some fancy stats,” he explained.
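The trade-off described above can be made concrete with the textbook normal-approximation sample-size formula for a two-arm comparison (an illustrative sketch; the effect sizes are hypothetical): the patients required grow with the inverse square of the effect, so halving the expected effect roughly quadruples the enrolment needed, a gap that post-hoc statistics cannot close.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect, sd=1.0, alpha=0.05, power=0.80):
    """Patients per arm needed for a two-sided two-sample z-test
    to detect a mean difference `effect` with the given power."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = nd.inv_cdf(power)           # power requirement
    return ceil(2 * ((z_a + z_b) * sd / effect) ** 2)

print(n_per_group(0.5))   # moderate effect (0.5 SD): 63 per arm
print(n_per_group(0.2))   # small effect (0.2 SD): 393 per arm
```

A study recruiting well below these numbers is underpowered from the outset, regardless of how the data are later analyzed.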

Ignoring outliers also is a “quick route to failure,” he said. “Outliers tell you something ... So, they might be minimal in your dataset, but they might be maximal in another dataset – nature is full of noise, it’s there for a reason. Populations are heterogenic, careful selection of one group is not a guarantee of reproducibility in another.”

Inadequate investigation of adverse events and the use of surrogates, which Wildfire said can be a big mistake, can also lead to incomplete datasets or inappropriate endpoints. Irreproducible data is another big problem: only 17% of data is reproducible, according to several reviews.

“You only know where you’ve been when you look behind you,” he added. “Unfortunately, that’s when you see some of the errors and by then it is too late to correct for them in the study.”

Said Wildfire, “Learn from other people’s mistakes, look back at your data, ensure that you have a better understanding of what the likely pitfalls are and also a critical understanding of where the preclinical data took you, so you have a better idea of where you are going next time.”
