Jack Scannell, from Oxford University's Centre for the Advancement of Sustainable Medical Innovation, and Jim Bosley, from Clermont Bosley LLC, are scrutinizing the drug discovery process using mathematical tools more typically employed by economists to study decision making.
“It became clear to me that the cost of R&D had increased spectacularly – it cost 100 times more to bring a drug to market in 2010 than in 1950 – even though technology had vastly improved,” Scannell told us. “In some cases, drug discovery technologies have become billions of times more cost efficient.”
According to Scannell, there are two possible explanations: either we are running out of things to discover (in which case, better technology just delays the inevitable), or researchers are making inappropriate science and technology choices (in which case, they are doing the wrong things more efficiently).
“The problem for most people in academic and industrial drug discovery is that the more one disagrees with one of the explanations, the more one must agree with the other,” he added. “I don’t see other ways to explain how input efficiency can increase so much while output efficiency declines.”
Through their research, Scannell and Bosley have shown that the chance of discovering an effective drug is surprisingly sensitive to the validity of the experimental methods.
Scannell told us that “small changes in the validity of models can be more important than doing 10 times, even 100 times, more projects, or screening 10 times, even 100 times more drug candidates.”
For the researchers, this may begin to explain why R&D efficiency can go down when technological efficiency inputs seem to go up.
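The "quality beats quantity" effect Scannell describes can be illustrated with a toy signal-detection simulation. This is our own sketch, not Scannell and Bosley's actual model: it assumes a screening model's validity can be treated as the correlation between the scores it assigns and candidates' true quality, and that we always advance the top-scoring candidate. The function name `expected_quality` and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_quality(validity, n_candidates, n_trials=500):
    """Mean true quality of the top-scoring candidate, when the screening
    model's score correlates with true quality at `validity` (rho)."""
    true_q = rng.standard_normal((n_trials, n_candidates))
    noise = rng.standard_normal((n_trials, n_candidates))
    # Score = validity * signal + residual noise, so corr(score, true) = rho.
    score = validity * true_q + np.sqrt(1 - validity ** 2) * noise
    picked = true_q[np.arange(n_trials), score.argmax(axis=1)]
    return picked.mean()

# A model twice as valid, screening 100x FEWER candidates, still wins:
high_validity_few = expected_quality(0.6, 100)     # valid model, small screen
low_validity_many = expected_quality(0.3, 10_000)  # weak model, huge screen
print(high_validity_few, low_validity_many)
```

Because the best score among N candidates grows only like the square root of log N, multiplying the screen size by 100 buys little, while doubling the model's validity scales the expected quality of the pick directly.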
The researchers have two hypotheses for why model validity may have fallen. The first: very effective screening and disease models are retired.
“The best models yield cures so we stop using them,” said Scannell. For example, modeling stomach acid secretion in dogs to discover new drugs related to excess stomach acid is no longer done. The model worked, and resulted in H2 antagonists and proton pump inhibitors.
“What happens is we end up working on diseases where, over the past 50 years, the models have not yielded cures,” explained Scannell, “probably because the models are bad.”
More controversially, he added that there has been a fashion for highly reductionist models, which have intrinsically low validity in many cases. However, “We haven’t done the work to partition the blame.”
Currently, there isn’t a common language for communicating validity between different biomedical disciplines, which is problematic. Scannell described this lack of communication as “validity leakage.”
“A lot of implicit knowledge on validity isn’t effectively communicated, which means people make bad R&D investment decisions,” he added. “CROs [contract research organizations] need to develop better methods to evaluate model validity. They also need the right language to communicate the validity of their models to their clients.”
Additionally, Scannell added, “If you’re a CRO and you have a demonstrably valid model, you probably aren’t charging enough for it. A model that is only marginally more valid than a competitor’s model should be able to command a substantially higher price.”
A model future
Scannell hopes to move forward with two new projects. The first aims to develop training, tools, and work processes to help the diverse groups who make R&D investment decisions get better at identifying and backing the most valid screening and disease models.
“I think that it is terribly difficult to evaluate the validity of screening and disease models on a prospective basis,” he said. “However, it is terribly important, and a lot of organizations are starting from a very low base.”
The project aims to start with biomedical charities and public sector funding agencies, before considering commercial organizations.
“At a minimum, decision making groups need to understand the quantitative importance of qualitative validity judgements. They need a lingua franca with which to talk about validity-related concepts. They need to ask for the right data from project proposals, and they need to understand that quality will often trump quantity.”
The second project aims to produce a quantitative history of screening and disease models, in which Scannell would map the installed base of models that have been used in biomedical research over the past 40 or 50 years.
“The ultimate aim is to get a better sense of what a good model looks like on a prospective basis,” explained Scannell, “so we can do a better job of avoiding bad experimental models.”