DIA 2019
AI and pharmacovigilance: ‘We can cut out a lot of this work and focus our energies,’ says MHRA
The number of adverse events (AEs) reported is trending “dramatically” upward, said Bruce Donzanti, PhD, senior group director, global pharmacovigilance innovation policy, Genentech, A Member of the Roche Group.
“When we’re looking for the needle in the haystack, we’re running out of human bodies to do this efficiently and accurately,” said Donzanti during a session at the DIA Annual Meeting in June.
Lisa George, RPh, advisor, global patient safety, case management, Eli Lilly and Company, said there is a 10-20% increase in AE reports every year – a number “not going anywhere but up.” This is being driven by an increasing number of diversified intake sources, she explained.
George is currently the global patient safety (GPS) case management business advisor responsible for Lilly's initiative to use artificial intelligence (AI) to accelerate case intake and reduce manual entry into the safety database.
In a case study she presented, George explained that the technology has cut the end-to-end cycle time of this work by 60-70%; however, she cautioned that machine learning is not going to fix everything and is not always needed.
“From an automation perspective, don’t try to do everything,” she added, noting that “human intervention” is always involved.
The regulator perspective
Speaking from the regulator perspective, Robert Ball, deputy director, Office of Surveillance and Epidemiology, CDER, FDA, said: “We want to improve the efficiency and validity of our analysis and think our experts’ time is best spent on complex tasks and not moving data around.”
There must be scientifically rigorous procedures in place, he said, to make sure AEs are not missed and that reports misidentified as AEs are not submitted.
Addressing “the promise of AI,” Mick Foy, head of pharmacovigilance strategy, Vigilance Intelligence and Research Group, Medicines and Healthcare products Regulatory Agency (MHRA), said it’s a promise the agency is “very much looking forward to.”
“We can cut out a lot of this work and focus our energies,” he said, explaining that having a human “in the loop” is always going to be “very important.”
“Before letting the machine loose, it needs to be well validated and trusted,” Foy added. He also pointed to the need to refine definitions and continue discussions on questions such as what constitutes an acceptable error rate for AI systems.
“How do we know that the AI system knows what it doesn’t know and flags appropriately?” he asked rhetorically. Looking ahead, Foy said the industry will have to “think carefully” about the skillsets needed to address these and other questions posed by advancing technologies.