Authenticx CPO Eric Prugh on the impact of AI technology in pharmacovigilance

Artificial intelligence has emerged as a potential solution to inconsistent reporting of adverse events, which remains one of the main challenges in pharmacovigilance.

Eric Prugh is the chief product officer at Authenticx, a US-based company that recently launched an AI solution that detects adverse events in conversations to support pharmacovigilance. Prugh leads product strategy, design, and product marketing at Authenticx and has spent more than 15 years building and scaling software companies.

In this Q&A with Outsourcing Pharma, Prugh discusses the challenges of adverse event reporting in healthcare and how AI could change this landscape in the years to come.

How does AI technology enable the detection of safety events that are otherwise left unreported?

Any gaps in reporting safety events (an umbrella term covering adverse events, product quality complaints, and other special situations impacting efficacy) can generally be traced back to one thing: volume. So many conversations flow through the patient service touchpoints of pharmaceutical and medical device manufacturers that manually capturing, reviewing, and reporting every potential event is not scalable. The FDA estimates that 90-99% of adverse drug events go unreported.

Additionally, agents may over-report events out of an abundance of caution, diverting resources to reviewing and following up on events that ultimately do not need to be reported. AI can ingest unstructured conversation data at scale, apply a consistent set of evaluation standards to those conversations, and quickly surface the ones with potential risk or reportable events.

Typically, when adverse events are reviewed to audit compliance, only a small sample (1-3%) of customer interactions is audited to determine whether the adverse event reported was accurately conveyed. Missing a statistically significant number of adverse events can result in costly fines, additional source data validation projects, and more. Our AI solution can flag potential safety event occurrences in conversations and eliminate the need for the agent to document the event. Once identified, the AI can automatically escalate that conversation to a patient safety or pharmacovigilance team.
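In broad strokes, a pipeline like the one Prugh describes ingests transcripts, scores each one for safety-event risk, and escalates anything above a threshold. The sketch below is illustrative only; the cue list, scoring logic, and threshold are hypothetical stand-ins, not Authenticx's actual model.

```python
# Illustrative safety-event triage sketch. The cues, scoring, and
# threshold here are made-up placeholders, not a production system.

from dataclasses import dataclass

@dataclass
class Interaction:
    interaction_id: str
    transcript: str

# Hypothetical trigger phrases; a real system would use a trained model.
SAFETY_CUES = ("side effect", "reaction", "hospitalized", "stopped working")

def score_interaction(interaction: Interaction) -> float:
    """Return a crude risk score: fraction of safety cues present."""
    text = interaction.transcript.lower()
    hits = sum(cue in text for cue in SAFETY_CUES)
    return hits / len(SAFETY_CUES)

def triage(interactions, threshold=0.25):
    """Surface interactions whose score crosses the escalation threshold."""
    for interaction in interactions:
        if score_interaction(interaction) >= threshold:
            yield interaction  # hand off to a pharmacovigilance review queue

if __name__ == "__main__":
    calls = [
        Interaction("c1", "Patient reported a severe reaction and was hospitalized."),
        Interaction("c2", "Caller asked about refill timing."),
    ]
    for flagged in triage(calls):
        print("Escalate:", flagged.interaction_id)
```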

How effective is AI at detecting safety events?

All in all, AI helps eliminate the unnecessary initial note-taking agents do manually between calls, saving 10-15 minutes per interaction; it decreases the number of “false alarms” escalated for review; and it expedites the final review before any required reporting by clearly highlighting the relevant content in the conversation.

One major health organization used our AI solution to analyze over 800,000 interactions. They rapidly found that approximately 28,000 of those interactions (about 3.5%) used incorrect agent language, which carried risk implications for identifying and documenting safety events. By automating routine monitoring, the company could focus its human analysts on high-risk interactions, significantly reducing the likelihood of future complaints or audit findings.

What role does human oversight play in this process?

Human oversight is a non-negotiable part of responsible AI development. We deploy a “human in the loop” approach to oversee AI performance, ensuring reliable results without increasing risk exposure.

We measure the effectiveness of the AI using a confusion matrix, which ultimately yields an agreement score. The agreement score tells us what percentage of the time the humans labeling and auditing conversations agree with the AI. This model has allowed us to deploy successfully and to iterate on various technical aspects of the AI, continually improving the agreement score.
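In confusion-matrix terms, an agreement score of this kind is typically the share of audited conversations where the AI's label matches the human reviewer's, i.e. (both flag + both clear) divided by the total. A minimal sketch, assuming that definition (Authenticx has not published its exact formula) and using made-up counts:

```python
# Illustrative agreement-score calculation from a confusion matrix.
# Assumes agreement = (AI and human both flag, or both clear) / all reviews;
# the counts below are invented for the example.

def agreement_score(true_pos: int, true_neg: int,
                    false_pos: int, false_neg: int) -> float:
    """Share of audited conversations where the AI label matches the human label."""
    total = true_pos + true_neg + false_pos + false_neg
    return (true_pos + true_neg) / total

# Example: of 1,000 audited conversations, humans and AI agree on 940.
print(agreement_score(true_pos=120, true_neg=820,
                      false_pos=35, false_neg=25))  # 0.94
```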

What types of data are typically overlooked, and why is it critical to analyze this information?

“Dark data” is the term we use for all the information buried in everyday conversations that goes completely unused. Too often, organizations are sitting on data that is simply not in a format that can be easily digested or distributed. This source is valuable because it often contains first-person experience data that speaks to an organization’s processes, brand awareness and loyalty, the customer experience, and more.

Analyzing this data not only answers basic business questions you are already investing significant time and money trying to answer in other ways; it also centers the voice of your customer.

What challenges do pharmaceutical companies face when integrating AI solutions into their existing systems for monitoring and reporting adverse events?

One of the biggest challenges is risk appetite. AI is still a “new” and often misunderstood technology, and handing something that so intimately affects patient experience and safety over to AI is not a decision to be taken lightly. We offer clients a deliberate, strategic path toward gradually introducing more AI into their process as they see the results, understand the capabilities, and work with us to scope the best places to plug the solution in.

We recognize the challenges of innovating in an industry that generates nearly a third of the world’s data, which is why it’s important that all stakeholders intentionally consider what problem they are aiming to solve and the impact they hope to see.

What do you see as the future of AI in pharmacovigilance?

Our vision is to eliminate manual note-taking, improve accuracy in identifying adverse events, and ultimately improve outcomes for patients through faster reporting. That really is the broader promise of AI: to do things more consistently and efficiently, ensuring positive outcomes for both patients and the healthcare organizations serving them.