The new system, developed at Carnegie Mellon University, can determine the effects of a large number of drugs on various proteins, using an approach that predicts drug interactions more accurately while also reducing overall drug discovery costs.
“Pharmaceutical and biotechnology companies already know that they need to do two apparently conflicting things: try to reduce the number of experiments that they do, and try to test for more side effects of potential drugs during early development,” Armaghan Naik and Robert F. Murphy, lead authors of the research, told us.
“Our approach permits the second to be done by minimizing the first: it cuts out experiments that are not really needed.”
According to the researchers, while some companies are trying to reduce or replace experiments using predictive models, the models are generally fixed (or rarely updated) and only predict a small number of “hits” to follow up on. By contrast, Naik and Murphy’s proposed model is continuously updated, which could drive large-scale experimentation.
At its simplest, the new system is a computer program that uses data to figure out which cell lines and drugs are similar to each other. It does this by observing which cell line–drug combinations give the same phenotype.
“It can then predict what phenotype would be observed for experiments that haven’t been done, and estimate the confidence that it has in its predictions,” explained Naik and Murphy.
The algorithm’s final step is to choose a batch of experiments whose outcomes it predicts with low confidence, while also testing the similarities it has found. “Then the robots and the automated microscope take over, collect the data, and the algorithm begins again,” they explained.
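The loop described above can be sketched with toy data: predict, estimate confidence, run the least-certain experiments, and repeat. Everything in this sketch is hypothetical; the grid size, the hidden phenotype rule, and the simple agreement-based similarity measure are stand-ins for the actual model, which learns from real imaging data.

```python
# Toy sketch of an active-learning loop over a cell line x drug grid.
# The "ground truth" below is a hidden formula standing in for real imaging.
N_CELLS, N_DRUGS, BATCH, ROUNDS = 8, 8, 4, 10

# Hidden structure the learner must discover: each cell line and drug
# belongs to one of two groups, and the phenotype depends on both.
cell_group = [i % 2 for i in range(N_CELLS)]
drug_group = [j % 2 for j in range(N_DRUGS)]

def true_phenotype(i, j):
    """Ground truth; the learner sees it only for pairs it 'experiments' on."""
    return (cell_group[i] + drug_group[j]) % 2

observed = {}  # (cell, drug) -> phenotype, filled in by simulated experiments

def cells_agree(i, i2):
    """Two cell lines look similar if they agree on every shared drug tested."""
    shared = [d for d in range(N_DRUGS) if (i, d) in observed and (i2, d) in observed]
    return bool(shared) and all(observed[(i, d)] == observed[(i2, d)] for d in shared)

def drugs_agree(j, j2):
    """Two drugs look similar if they agree on every shared cell line tested."""
    shared = [c for c in range(N_CELLS) if (c, j) in observed and (c, j2) in observed]
    return bool(shared) and all(observed[(c, j)] == observed[(c, j2)] for c in shared)

def predict(i, j):
    """Predict a phenotype and a confidence by polling observations made on
    similar cell lines (same drug) and similar drugs (same cell line)."""
    votes = [observed[(i2, j)] for i2 in range(N_CELLS)
             if i2 != i and (i2, j) in observed and cells_agree(i, i2)]
    votes += [observed[(i, j2)] for j2 in range(N_DRUGS)
              if j2 != j and (i, j2) in observed and drugs_agree(j, j2)]
    if not votes:
        return 0, 0.0
    guess = max(set(votes), key=votes.count)
    return guess, votes.count(guess) / len(votes)

# The loop: run the least-confident experiments, update the model, repeat.
for _ in range(ROUNDS):
    untested = [(i, j) for i in range(N_CELLS) for j in range(N_DRUGS)
                if (i, j) not in observed]
    if not untested:
        break
    untested.sort(key=lambda ij: predict(*ij)[1])  # least confident first
    for i, j in untested[:BATCH]:                  # "robots" collect the data
        observed[(i, j)] = true_phenotype(i, j)

total = N_CELLS * N_DRUGS
accuracy = sum(predict(i, j)[0] == true_phenotype(i, j)
               for i in range(N_CELLS) for j in range(N_DRUGS)) / total
print(f"experiments run: {len(observed)}/{total}, prediction accuracy: {accuracy:.2f}")
```

Because the learner exploits the similarity structure, it ends up predicting the whole grid accurately while physically running only a fraction of the possible experiments.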
According to the researchers, the most innovative aspect of the design is the integration of phenotype determination, model construction, and experiment selection into a single algorithm.
A new approach
The researchers recognized the need for a new approach several years ago and began working out the algorithm through a series of discussions.
“We then tested it on datasets that were simulated to be similar to the kind of data we expected,” said Naik and Murphy. This work was published in 2013 in PLoS ONE.
To test it in practice, the researchers had to carefully work out standardized protocols for the cell culture and drug addition using liquid handling robots. According to Naik and Murphy, the process was similar to what is done in high content screening, but with one big difference:
“In typical screens, all of the wells of the culture plates have the same cell line and the robots then just add drugs by ‘copying’ a master plate of drugs,” they said. “In our case, the active learner might ask for any one of 96 cell lines and 96 drugs, so we had to write a customized program to allow the robots to do that.”
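One way to picture that customized program is as a translator from the learner's requested (cell line, drug) pairs into per-well robot instructions. This is only an illustrative sketch: the names, the instruction format, and the `build_worklist` helper are all hypothetical, since the article does not describe the actual robot control software.

```python
# Hypothetical sketch: instead of copying a fixed master plate, each requested
# (cell line, drug) pair gets its own well and explicit pipetting steps.
ROWS, COLS = "ABCDEFGH", range(1, 13)          # standard 96-well plate layout
WELLS = [f"{r}{c}" for r in ROWS for c in COLS]

def build_worklist(requests):
    """Map requested (cell_line, drug) pairs to wells and pipetting steps.
    `requests` is a list like [("line_3", "drug_41"), ...] (names hypothetical)."""
    if len(requests) > len(WELLS):
        raise ValueError("batch exceeds one 96-well plate")
    worklist = []
    for well, (cell, drug) in zip(WELLS, requests):
        worklist.append(("seed", cell, well))   # robot seeds the cell line
        worklist.append(("dose", drug, well))   # then adds the drug
    return worklist

batch = [("line_3", "drug_41"), ("line_3", "drug_7"), ("line_88", "drug_41")]
for step in build_worklist(batch):
    print(step)
```

The point of the sketch is that a batch can mix arbitrary cell lines and drugs well by well, whereas copying a master plate fixes one dimension across the whole plate.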
They also had to design the image analysis and clustering software to be fully automated, so it took some time before they could start actual experiments.
Carnegie Mellon has filed a patent application covering two variations of the approach. Additionally, a startup company, Quantitative Medicine, LLC, has licensed the intellectual property and is offering to recommend which experiments to do for a company’s particular system, so companies that want to make use of the approach can do so right away.