Artificial intelligence stands to ‘revolutionize’ research: Bioclinica
The use of advanced analytical technologies like artificial intelligence (AI) and machine learning (ML) is not exactly new to the field of clinical trials. However, the use of such tools has exploded in recent years, due to several factors; according to a recent industry report by KPMG, the pandemic has accelerated AI adoption so sharply that the effect has been termed “COVID-19 whiplash.”
Bioclinica chief innovation officer Dan Gebow, who has years of experience harnessing AI, spoke with Outsourcing-Pharma about the evolution of AI in clinical research, and his own experience in helping shape the tech.
OSP: Could you please share the ‘elevator presentation’ version of Bioclinica—who you are, what you do, key specialties, and what makes you stand out from the crowd?
DG: I'm a research scientist by training and over the course of my career I’ve worked on several clinical trials where I was really frustrated with the lack of effective technology. About 15 years ago, I was fed up enough to set out on my own to modernize the tools that are used in clinical trial research and bring them up to the level of sophistication that I was seeing in Silicon Valley at the time.
I’ve parlayed that mission into what I do today at Bioclinica, which is doing just that on a very large scale, with a team of more than 300 engineers whose sole job is to invent the future of disease discovery research.
Bioclinica’s primary business is our imaging core lab. In a clinical trial, medical imaging is an important endpoint. It's estimated that around 60% of trials have medical images collected as part of the patient's review, and in oncology, it's nearly 100%.
All the images that are collected during a clinical trial must be sent to a team of independent imaging experts for timely and accurate analysis. This process is a critical factor in getting a new pharmaceutical or medical device through regulatory approval and into the hands of patients.
Several of our imaging experts read the images at the same time, but they're not allowed to compare answers, to keep the process independent and free from bias. That’s important because working with medical images is very different from working with standard data points like height, weight, and gender. Medical images are very large and fragile, and there is a lot of variability in their interpretation.
Accurate image analysis is a tough job, but Bioclinica has nearly 30 years of experience, and we employ leading MDs and PhDs who form one of the world's largest brain trusts of imaging science. A lot of what I work on is infusing the expertise of these doctors, technicians, scientists, and engineers into technology tools, including AI.
While I don't believe that AI will ever replace medical imaging experts, I do think it can help them be more consistent, as well as allow them to spend more time with their family or doing things they enjoy, which I think should be the primary goal of any technology that assists in human functions.
OSP: From your perspective, how has understanding and use of AI evolved in life sciences over the past few years?
DG: Like other industries, the ready availability of powerful tools like AI is just starting to revolutionize how biomedical research is performed. I think biomedical research data is a perfect use of AI. The old adage of “junk in, junk out” is even more relevant to AI than to other data systems: feed in messy data, and the resulting AI models will be less accurate and less predictive.
If you think about it, the overall goal of a clinical trial is to try to control as many of the variables as possible to get the most accurate and validated datasets in a consistent manner. A high level of consistency from structured data sets the best baseline to build AI models from.
In practice, AI can help build very pragmatic tools that do things like keeping data compliant with privacy regulations, screening patients for trial eligibility, or simply supporting doctors by acting as a second set of eyes during data review.
A good example is a trial I worked on in which toddlers with a neuromuscular condition that inhibits their gait and movements were given a pharmaceutical to see if it helped improve how they move. Experts reviewed hours of video of these tots standing up, sitting down, chasing a ball, doing all these different motions. But the patient's face was visible in the video, which is a privacy violation that carries a very steep fine.
Redacting the details of, in this case, a very small face required a black bar drawn across the eyes and nose, but not the upper lip, as that was needed for the analysis. We tried doing the redaction by hand, but that took upwards of 30 hours. Then we tried a semi-automated method, using software that can perform facial redaction, but that was still taking eight to 10 hours.
However, using AI we were able to build a model that can follow the subject and redact their eyes and nose appropriately with 99.9% accuracy. It took five minutes to run! AI isn’t perfect, so we had a human quality check it afterward, but the most intriguing part for me is that the AI learns from its mistakes and doesn’t make the same one twice (unlike humans).
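To make the redaction step concrete, here is a minimal sketch of what blacking out the upper face looks like once a face-landmark model has located the eye/nose region. The detector itself is out of scope; the `redact_upper_face` function and its box coordinates are illustrative assumptions, not Bioclinica's actual implementation:

```python
import numpy as np

def redact_upper_face(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Draw a black bar over the detected eye/nose region of one video frame.

    frame : H x W x 3 uint8 image.
    box   : (top, bottom, left, right) pixel bounds of the eyes-and-nose
            region, assumed to come from a landmark detector. The upper lip
            is deliberately left outside the box so it remains visible for
            the motion analysis.
    """
    top, bottom, left, right = box
    out = frame.copy()
    out[top:bottom, left:right, :] = 0  # black bar over eyes and nose
    return out

# Example: a dummy 100x100 white frame with the "face" in rows 20-40.
frame = np.full((100, 100, 3), 255, dtype=np.uint8)
redacted = redact_upper_face(frame, (20, 40, 30, 70))
```

In a real pipeline this would run per frame, with the box tracked across frames by the model, followed by a human quality check as described above.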
Another way that AI is used in life sciences is to look for hidden patterns that are not visible to the human eye. This form of AI is typically used in early-phase development of pharmaceuticals today, but I seriously believe that we're going to see an acceleration of personalized treatment of cancer and other diseases, just based upon the illumination of hidden trends in the data.
OSP: The KPMG analysis on AI attitudes/adoption is fascinating—what do you think the key takeaways might be regarding life sciences?
DG: The analysis stated that 77% of life science business leaders said that they're already using AI in their organization and about 45% believe that AI is moving at the appropriate speed. I agree with that; there tend to be early adopters, and largely we’re still figuring out how to use it. You don't want to rush in, but you don't want to be behind the eight ball either.
Additionally, about 37% said that COVID-19 accelerated AI adoption, and they credit technology for how they responded to it. I have a great story about how it impacted us. It forced us to invent new technology right at the beginning of the pandemic, and that new technology changed how we look not just at infectious disease in the future, but at cancer and Alzheimer's and all the other things we study.
In cancer-related clinical trials, the patient is given a cancer therapeutic that might have a side effect of causing interstitial lung disease (ILD), or in layman’s terms, really bad pneumonia. It’s our job to collect data on the patients who had that side effect and determine whether the drug being tested was possibly the cause.
Rewind to last March, when we suddenly have all these patients showing up in emergency rooms with what looks like pneumonia, and they're cancer patients in these trials. We determined later that many of the symptoms of COVID overlap with the symptoms of that interstitial lung disease, so that made the pharmaceutical being tested in the trial look more dangerous than it was because it wasn't drug-induced ILD, it was COVID.
Drawing from that experience, we implemented an AI model that could quickly review a patient’s CT scan of their chest and lungs to help us answer the question, “does this look like COVID?” And, if it did, our technology instantly sent an email back to the research site stating that this person might have evidence of COVID and asking if there were any COVID test results or COVID status on this patient.
All of that happens within minutes of the CT scan being uploaded to our servers, so researchers can monitor that patient more closely and we can distinguish COVID cases from interstitial lung disease cases and prevent that confounding.
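The triage logic described above can be sketched as a simple threshold-and-notify step. The classifier score, the 0.8 cutoff, and the message wording here are hypothetical stand-ins for whatever Bioclinica's actual model and workflow use:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScanResult:
    patient_id: str
    covid_likelihood: float  # 0-1 score from a (hypothetical) chest-CT classifier

def triage_scan(result: ScanResult, threshold: float = 0.8) -> Optional[str]:
    """Return a query to send back to the research site if the model flags
    the CT as COVID-like; otherwise return None (no action needed)."""
    if result.covid_likelihood >= threshold:
        return (
            f"Patient {result.patient_id}: CT findings may be consistent "
            "with COVID-19. Please confirm any COVID test results or status."
        )
    return None
```

In practice the returned message would be emailed to the site automatically, so the query goes out within minutes of upload rather than after a scheduled review.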
OSP: How do you think biomedical research will evolve as a result of AI in the future?
DG: Predicting the future is always tough, but what I foresee is akin to the analogy of doing math by longhand and then all of a sudden being able to use calculators. I think that AI is going to be like that—an accelerant to speed up how work is done. Just another tool in the tool chest.
I can give you a real-world example that we use in the cancer research realm, to give you an idea: when a patient is in a clinical trial, say for a solid tumor of some type, images of their chest and lungs are sent to a radiologist to measure the tumor size – area, diameter, etc.
Usually, there are multiple tumors, and it is simply too time-consuming to analyze all the tumors in the body, so the radiologist picks three to measure and then monitors how those tumors change over time. The problem is, there is a lot of variability between what the radiologist noticed two weeks ago and what they will notice two months from now. And because the radiologist only checks three tumors, they may happen to be measuring three tumors that did, or did not, shrink in response to the treatment.
Using AI, the radiologist could have the computer check all tumors in the body at once, at every single visit at which imaging is done. Now they have the ability to look at how all tumors change over time, rather than just those three, which may or may not be representative of the patient's outcome.
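The sampling problem described above is easy to demonstrate with numbers. In this sketch, the lesion IDs, measurements, and the three-lesion subset are all made up for illustration; the point is only that the change in three tracked tumors can disagree with the change in the whole-body tumor burden:

```python
def burden_change(visits: list) -> float:
    """Percent change in summed tumor measurements from first to last visit.
    Each visit is a dict mapping lesion id -> longest diameter in mm."""
    first = sum(visits[0].values())
    last = sum(visits[-1].values())
    return 100.0 * (last - first) / first

# Hypothetical data: five lesions, but the radiologist tracks only three.
baseline = {"L1": 20, "L2": 15, "L3": 12, "L4": 30, "L5": 25}
followup = {"L1": 18, "L2": 14, "L3": 11, "L4": 45, "L5": 40}  # L4, L5 grew

tracked = ["L1", "L2", "L3"]
subset_change = burden_change([
    {k: baseline[k] for k in tracked},
    {k: followup[k] for k in tracked},
])
total_change = burden_change([baseline, followup])
# The three tracked lesions shrank, yet total burden actually increased.
```

An AI model that segments and measures every lesion at every visit would report the total-burden trend, avoiding this kind of selection bias.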
In that way, AI is not just pushing past the limits of what humans can see, but rather pushing into a whole new science altogether. It's enabling a holistic approach to studying the entire manifestation of a disease.