AI use in life science not yet at full potential: IBM Research

By Jenni Spinner


(Jolygon/iStock via Getty Images Plus)


Leaders from the R&D tech company weigh in on how use of artificial intelligence has grown, ongoing challenges to effective adoption, and what lies ahead.

The use of artificial intelligence (AI) to advance drug development and healthcare is on the rise. According to Vantage Market Research, the global AI healthcare market could be worth more than US$95 billion by 2028.

IBM Research is one organization working on projects aimed at putting AI to work on a range of health-related puzzles. The company’s various projects and collaborations include:

  • Research published in Nature Communications, conducted in partnership with the Juvenile Diabetes Research Foundation (JDRF), on a new machine learning model combined with a unique data visualization tool that discovered three distinct progression trajectories of islet autoantibodies associated with different risks of Type 1 diabetes onset.

  • Models developed in partnership with the Michael J. Fox Foundation to identify symptom patterns of early-stage Parkinson’s disease and predict the progression of these symptoms’ severity and timing.

  • A partnership with Boston Scientific that discovered new methods of using modeling to better track and understand chronic pain.

Outsourcing-Pharma checked in with IBM Research to discuss the use of AI in life sciences and where the field might be headed. The following experts offered insights:

  • Ajay Royyuru, chief science officer of healthcare and life science research
  • Jianying Hu, global science leader of AI for healthcare
  • Michal (Schreiber) Rosen-Zvi, director of AI for healthcare and life science
  • Joshua Smith, research staff member
  • Jeff Rogers, global research leader, distinguished research scientist, and senior manager of digital health

OSP: Could you please talk about the evolution of the use of AI in life sciences, particularly in preclinical research and clinical studies?

AR: Digitization is a trend that is transforming all industries. In the biomedical research domain, digitization is happening in the form of platform technologies (e.g., -omics) that now allow for systematic observation of the entire system, from the molecular, single-cell level to the other extreme of metagenomic and ecosystem-scale observations. This presents challenges and opportunities for increasing the sophistication of data integration and reasoning.

Meanwhile, AI techniques have rapidly progressed from classification and inference to increasingly complex tasks of multi-modal data representation, prediction, and reasoning. These AI techniques, together with simulations of biochemical and biophysical mechanisms enabled by high-performance computing, are now applied to biopharmaceutical preclinical research problems such as target identification, target validation, and generative modeling to create novel therapeutic entities, as well as to estimating and predicting downstream safety and efficacy outcomes.

Suboptimal patient selection and recruiting techniques, paired with the inability to monitor and coach patients effectively during clinical trials, are two of the main causes of high trial failure rates. These failures contribute substantially to the inefficiency of the drug development cycle: fewer new drugs reach the market despite increasing pharma R&D investment, a trend that has been observed for decades and is ongoing.

AI techniques have advanced to a level of maturity that allows them to be employed under real-life conditions to assist human decision-makers. AI has the potential to transform key steps of clinical trial design from study preparation to execution towards improving trial success rates, thus lowering the pharma R&D burden. (Harrer S. et al., Trends in Pharmacological Sciences, 40: 577-591, 2019)

OSP: What are some of the challenges professionals have faced as the technology available to gather and process data has evolved?

MRZ: Modern technology has resulted in the collection of unprecedented amounts of diverse data. In addition, as pharmaceutical companies tackle increasingly complex medical conditions, attempt to broaden the research to include more diverse populations, and assess how a new medication will interact with existing ones and with co-morbidities, biomarkers that are used as outcome measures have become more complex as well.

Many novel biomarkers now involve richer data types that are inherently more complicated to analyze, such as co-morbidities, polypharmacy, and genomic data. At the same time, recent technological advances have enabled researchers to overcome longstanding barriers such as the volume, variety, and velocity of big data, as well as security, provenance, and compliance with evolving regulatory requirements.

However, key challenges not related to technology remain. These include, but are not limited to, poor data quality, a general lack of understanding of big data, and a shortage of professionals with big data technology skills. More importantly, a good understanding of the hidden structures of data is critical to extracting value from big data by generating novel, useful insights through prediction and inference.

Currently, big data is too complex, multimodal, and noisy for human experts to leverage without appropriate technology. Furthermore, traditional methods and technologies for analyzing big data may not be sufficient to address the challenges associated with discovery acceleration. Consequently, as we enter the new era of accelerated discovery, there is an urgent need to develop innovative technologies that enable fast, scalable scientific discovery while providing rational explanations of variability and uncertainty in data.

OSP: How can automation of various processes help in drug discovery and development?

JS: Acceleration of the drug discovery and development process essentially requires the integrated application of the right set of automation capabilities to long-standing bottlenecks within the pipeline. This requires the identification of processes that rely on complex networks of parallel layers of information and feedback loops, which exist in everything from drug target identification to therapies and outcomes.

For example, IBM Research created a generative modeling approach that can access large target-ligand interaction datasets, leveraging that information to simultaneously predict activities for novel kinase-ligand combinations and thereby help identify new molecular entities needed to modulate new disease targets. Similar time- and cost-saving benefits are achievable along the entire pipeline, from drug repurposing and improved safety and efficacy to trial enhancement and disease staging.
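The approach itself is generative, but the predictive half of the idea — scoring activity for kinase-ligand pairs never measured together — can be sketched with standard tools. The snippet below is a minimal illustration, not IBM's model: the random features, their sizes, and the random-forest regressor are all assumptions standing in for real target-ligand interaction data.

```python
# Illustrative sketch: predict activity for unseen kinase-ligand pairs.
# Synthetic features stand in for real target-ligand interaction datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs = 2000
kinase_feats = rng.random((n_pairs, 64))   # e.g., target-derived descriptors (assumed)
ligand_feats = rng.random((n_pairs, 128))  # e.g., molecular fingerprints (assumed)
X = np.hstack([kinase_feats, ligand_feats])
# Toy activity label with some structure, so the model has signal to learn.
y = X[:, :8].sum(axis=1) + 0.1 * rng.standard_normal(n_pairs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
# Held-out pairs play the role of "novel kinase-ligand combinations".
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

In a real setting the held-out set would contain kinases or ligands entirely absent from training, a substantially harder generalization problem than this random split suggests.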

These automation capabilities are supported by a confluence of key technologies at our fingertips, including AI, hybrid cloud, and HPC, together with knowledge integration, simulation, and generative model toolkits, as well as other HCLS-specific accelerator technologies we have developed in this space, leveraging, for example, causal inference, multimodal data fusion, molecule generation, and disease progression modeling technologies.

OSP: Please talk about how experts at IBM Research are working to harness AI to advance health-related discoveries.

JH: At IBM Research, we are working on accelerating the discovery of new therapeutics and biomarkers using AI. We believe the recent advancements in AI, hybrid cloud, and quantum computing present an unprecedented opportunity to address long-standing bottlenecks responsible for slow, costly drug discovery and development with high attrition rates.

Building on AI technology foundations such as deep knowledge integration, generative models, and AI-enriched simulation, we are developing reusable accelerator technologies targeting every step in the drug discovery and development pipeline: from the design of new molecules, to hypothesis generation for novel indications of approved drugs, to AI-enriched disease and therapeutic mechanistic models informing lead optimization, to the discovery of biomarkers used to enhance clinical trials and generate hypotheses for new targets.

OSP: Could you provide more detail on your work with JDRF?

JH: In collaboration with JDRF and five other academic research teams from four countries, we formed the Type 1 Data Intelligence (T1DI) Study Group to advance scientific knowledge of autoantibody progression and to identify persons at risk of developing Type 1 diabetes (T1D). Specifically, the goal of our work is to use AI to discover and validate biomarkers indicative of the onset and progression of T1D as early as the pre-symptomatic phase.

Last year, we published in Diabetes Care a breakthrough study that found the number of islet autoantibodies present at seroconversion reliably predicts the risk of T1D onset in young children. Additionally, we recently published in Nature Communications the results of a follow-up study establishing that the progression of T1D from the appearance of islet autoantibodies to symptomatic disease is predicted by distinct autoimmune antibody trajectories.
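The published model and visualization tool are more sophisticated, but the core idea — grouping children by the shape of their longitudinal autoantibody trajectories — can be sketched with off-the-shelf clustering. Everything below (the synthetic trajectory shapes, the visit schedule, k-means) is an illustrative assumption, with the cluster count chosen only to mirror the three trajectories reported in the study.

```python
# Illustrative sketch: cluster longitudinal autoantibody trajectories.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_children, n_visits = 300, 10
t = np.linspace(0, 5, n_visits)  # years of follow-up (assumed schedule)

# Three toy trajectory shapes standing in for real antibody dynamics.
shapes = [
    lambda t: 1 / (1 + np.exp(-3 * (t - 1))),  # early, rapid rise
    lambda t: t / 5.0,                         # gradual accumulation
    lambda t: np.exp(-((t - 2) ** 2)),         # transient positivity
]
true_groups = rng.integers(0, 3, n_children)
X = np.array([shapes[k](t) + 0.05 * rng.standard_normal(n_visits)
              for k in true_groups])

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c in range(3):
    print(f"cluster {c}: mean trajectory {X[clusters == c].mean(axis=0).round(2)}")
```

Each recovered cluster's mean trajectory can then be examined against downstream outcomes — in the study's case, different risks of progressing to symptomatic T1D.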

OSP: Could you provide more detail on your partnership with the Fox Foundation?

JH: Collaborating with the Michael J. Fox Foundation, we developed novel AI models that learn from longitudinal patient data to group typical symptoms of Parkinson’s disease and to predict the progression of those symptoms in terms of timing and severity, while accounting for intra- and inter-individual effects. The findings from this model were published in The Lancet Digital Health last year.

We discovered that the progression of Parkinson's disease is heterogeneous and characterized by nonsequential, overlapping trajectories, which has implications for patient stratification and management. Such disease progression models are useful for clinical care and for the enhancement of clinical trials. They can, for example, be used to identify and select participants at the right disease stage, or belonging to a given disease subtype, who are likely to develop endpoints or benefit from new treatments, effectively increasing the chances of clinical trial success.
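As a rough sketch of that trial-enrichment use — not the published Lancet Digital Health model — the snippet below estimates each patient's progression rate as the slope of a least-squares line through synthetic longitudinal symptom scores, then selects likely fast progressors. The data, the linear-slope estimator, and the top-quartile cutoff are all illustrative assumptions, far simpler than the actual model.

```python
# Illustrative sketch: use estimated progression rates to enrich a trial cohort.
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_visits = 200, 8
t = np.arange(n_visits)  # visit index, e.g., one visit every six months (assumed)

# Inter-individual variation in true progression rate, plus visit-level noise.
true_rates = rng.gamma(shape=2.0, scale=0.5, size=n_patients)
scores = true_rates[:, None] * t + rng.standard_normal((n_patients, n_visits))

# Estimate each patient's rate as the slope of a per-patient linear fit.
slopes = np.array([np.polyfit(t, s, 1)[0] for s in scores])

# Enrich with likely fast progressors (top quartile by estimated slope),
# who are more likely to reach an endpoint within the study window.
cutoff = np.quantile(slopes, 0.75)
selected = np.flatnonzero(slopes >= cutoff)
print(f"selected {selected.size}/{n_patients} patients; "
      f"mean estimated rate {slopes[selected].mean():.2f} vs cohort {slopes.mean():.2f}")
```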

OSP: What do you think might be on the horizon regarding advancements and developments with AI use in healthcare?

MRZ, JH, JR: The future of AI in healthcare is likely to be characterized by the convergence of data, models, and technology. On the data side of drug discovery, for example, it would be possible to seamlessly integrate evidence from multimodal patient data regarding candidate drugs for repurposing with computational methods applied to target identification, generating stronger, more reliable evidence for potential new indications for existing drugs.

On modeling, we are likely to see advancements in multimodal representation learning and in fusion methods that combine mechanistic modeling, simulation, and AI applied to data. Furthermore, the combined use of AI, high-performance computing, quantum computing, and hybrid cloud will unlock insights from empirical research and real-world data, enabling breakthrough discoveries at unprecedented scales.
