A recent study led by researchers at Hannover Medical School, Germany, in cooperation with researchers from McGill University, Canada, analyzed the descriptions of animal studies found in investigator brochures (IBs).
According to the study, which was published recently in the open access journal PLOS Biology, fewer than one-fifth of the animal studies referenced in investigator brochures were peer-reviewed. Additionally, fewer than 20% of the animal studies described the use of techniques such as randomization, blinding, or sample size calculation.
“The past five years saw an increasing debate on whether poor quality and publication bias of preclinical efficacy studies might contribute to the high attrition rates in clinical research,” said Dr. Daniel Strech, professor for bioethics at Hannover Medical School and senior author of the study.
Concerns of industry representatives align with results from earlier meta-research studies, which found poor reporting on quality issues in animal research protocols and in peer-reviewed publications of animal research, Strech told Outsourcing-Pharma.com.
To better understand the quality of the subgroup of preclinical efficacy studies – “that finally inform and justify clinical trials” – Strech said the researchers needed to look into IBs.
“We thought that the quality of preclinical studies in IBs must be better (more valid, more confirmatory in the design) than the quality of the more explorative preclinical studies published in the peer-reviewed literature,” he added.
In their review, the researchers found that about 90% of the reported preclinical efficacy studies had not been published.
“This is maybe not surprising for many stakeholders because these studies might be commercially sensitive,” Strech explained. “However, this fact demonstrates the importance that IBs do not only report the results of the relevant preclinical studies but also their design and quality features.”
According to Strech, those involved in risk-benefit assessment, including investigators, institutional review boards (IRBs), agencies, and data and safety monitoring boards (DSMBs), “cannot simply look at the results of the relevant preclinical efficacy studies but need to appraise their validity/credibility first.”
“This leads to the second finding that these quality information are more or less lacking completely,” he said. The third finding? Only 6% of all preclinical efficacy studies reported results that do not demonstrate the desired effect.
While the finding might initially seem obvious, Strech said a second look raises two major concerns:
“First, we need negative preclinical studies to better understand where the ‘window of opportunity’ for dosing, time of treatment etc. is. If everything is positive how can we know that the best dosage was selected and that the time of the intervention is not too early or too late in the disease state?”
Additionally, Strech said the researchers would expect more negative results “just by chance,” given that the mean sample size per group was eight animals. “These negatives are maybe false-negatives,” he added. “We then do these experiments again and then we have a positive result.”
More than 80% of all IBs presented only positive preclinical efficacy studies, all with low sample sizes, which Strech said is puzzling and raises concerns about selective reporting of animal studies.
Developing preclinical efficacy study standards
For the researchers, Strech said it is “really unclear why industry is grounding their costly trials on these low quality preclinical studies.”
Why is quality seemingly not important? Strech said there must be other reasons, which the researchers aim to understand better in future studies.
As such, the researchers are not yet ready to provide recommendations for industry. However, Strech recommends a discussion of the issues and clarification of “whether this is really the best way to conduct effective and efficient drug development.”
Recommendations for the regulatory agencies are clearer: “They should more explicitly discuss the need of guidance for the reporting of preclinical efficacy studies in IBs. At least minimum standards should be developed.”
Reference: Wieschowski S, Chin WWL, Federico C, Sievers S, Kimmelman J, Strech D (2018). Preclinical efficacy studies in investigator brochures: Do they enable risk-benefit assessment? PLoS Biol 16(4): e2004879.