Interview: Mike Connell, COO at Enthought, on the trials and tribulations of AI

By Liza Laws


Enthought - Mike Connell


OSP had a great conversation with Mike Connell, chief operating officer at Enthought. The discussion centered on artificial intelligence (AI), which has been the buzzword in pharma and a number of other industries for at least the last year.

There is still a general sense of wariness from some sectors within the life sciences, and we discussed why this might be, what businesses can do to reassure clients and staff, and how machines are not quite set to take people's jobs yet.

OSP: A lot of people are wary about artificial intelligence, including within the life sciences industry – is this justified?

MC: Any time there is a new or emerging technology, people should absolutely be cautious about its implications. Two primary generative AI risks to be aware of involve intellectual property (IP) and bias. Safeguarding your IP—your R&D data—from being used to train models outside your company should be your first and foremost priority when integrating AI. Bias comes into play when a model’s algorithms systematically produce biased results due to biased inputs and assumptions.

Automation bias, or the tendency for people to ignore conflicting information and trust the output of a computer more than a person, is another concern, especially when AI models are not transparent or explainable. However, both biases can be mitigated by remaining sensitive to bias potential in the first place, and then balancing with additional inputs, proactive training, workflow checkpoints, and QC.
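
To make the idea of a workflow checkpoint concrete, here is a minimal Python sketch (an illustration of the general pattern, not Enthought tooling; all names are hypothetical) in which the default path for a model's output is human review, and only high-confidence results skip it:

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float  # the model's self-reported probability, 0.0 to 1.0


def checkpoint(pred: Prediction, threshold: float = 0.9) -> str:
    """Accept high-confidence outputs; escalate everything else to a human.

    Making review the default path counteracts automation bias: the
    machine's answer is only trusted when it clears an explicit bar.
    """
    if pred.confidence >= threshold:
        return f"auto-accepted: {pred.label}"
    return f"queued for human review: {pred.label} ({pred.confidence:.2f})"


print(checkpoint(Prediction("active compound", 0.97)))
print(checkpoint(Prediction("active compound", 0.62)))
```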

And while we should be wary of these risks, we shouldn’t let them prevent the scientific community from embracing the positive and powerful impact AI can have on R&D. The human-centric nature of traditional drug discovery processes means variability and errors are inevitable, but AI offers a new opportunity to fine-tune these processes and enable a more precise approach.

OSP: What do you think needs to be in place from a regulatory perspective to allay these concerns?

MC: As you’ve likely seen or heard, President Biden recently announced a new executive order to manage the risks of AI. While it’s good the government is calling for AI regulations to be imposed, particularly regarding privacy and safety, some items outlined in the order are going to be difficult to execute.

For instance, we don't really have frameworks for evaluating the safety or level of bias of generative AI in the usual sense, and regulations can be developed and applied clumsily, counterproductively, or in an overly heavy-handed manner, stifling innovation and commercial benefits.

Today's QC frameworks were designed based on human-engineered systems. We need to develop new QC frameworks for these new kinds of systems—frameworks designed to work with a system like a transformer model that is neither transparent nor formally verifiable.
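
One direction such a framework could take is behavioral QC: if a transformer cannot be formally verified, its input/output behavior can still be tested empirically. The sketch below is purely illustrative, using a stand-in model and two toy property checks (determinism, and invariance to irrelevant edits):

```python
def model(prompt: str) -> str:
    # Placeholder "model" so the sketch runs end to end; in practice this
    # would be a call to the transformer being QC'd.
    return prompt.strip().lower()


def test_determinism(prompt: str) -> bool:
    """Same input should yield the same output across repeated calls."""
    return model(prompt) == model(prompt)


def test_invariance(prompt: str) -> bool:
    """Output should not change under irrelevant edits (extra whitespace)."""
    return model(prompt) == model("  " + prompt + "  ")


for check in (test_determinism, test_invariance):
    status = "pass" if check("Summarize assay results") else "FAIL"
    print(f"{check.__name__}: {status}")
```

A real framework would need many more properties and statistical treatment of nondeterministic outputs, but the shift in stance is the point: from verifying the mechanism to continuously auditing the behavior.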

OSP: Many believe AI will eventually take their jobs, or perform their jobs better and faster than they can. How would you reassure those who are worried?

MC: With new technologies, people tend to think in "all-or-none" terms—either AI does the whole thing, or a human does the whole thing. That's typically not a great approach. There's a whole continuum of possibilities where the work of humans and AI is blended. What is still preserved is the need for human evaluation, verification, and judgment of the work produced by AI.

In other words, the most responsible use of generative AI will be to use it for what it’s great at: generation. It must be avoided for decision-making. At this stage, generative AI is too unreliable and opaque to be trusted. Humans must consistently be in the loop of a model’s generation to check its output, or its output must be validated using physical experimentation.
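
A minimal sketch of that "generate, then verify" separation, assuming a hypothetical generator and a stand-in verification step (a human sign-off or a physical experiment), might look like this:

```python
def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for a generative model call.
    return [f"{prompt}, variant {i}" for i in range(n)]


def verified(candidate: str) -> bool:
    # Stand-in for the verification step: a human reviewer signing off,
    # or a physical experiment confirming the result.
    return candidate.endswith("variant 0")


def pipeline(prompt: str) -> list[str]:
    # The model generates freely; only outputs that pass independent
    # verification ever leave the pipeline or inform a decision.
    return [c for c in generate_candidates(prompt) if verified(c)]


print(pipeline("buffer formulation"))
```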

OSP: Could it be said there is no replacement for the human touch, particularly in trials, patient centricity, diversity, and recruitment? What are your thoughts on this?

MC: Some applications of AI will and should be scrutinized. It may never be safe to let AI write experimental protocols affecting humans and animals without strict human oversight. Also, with the current state of generative AI and the sometimes unclear practices of different providers, there are significant privacy concerns every company—and individual—must consider.

For example, a whole host of note-taking apps powered by generative AI are currently available to transcribe and summarize meetings. These apps need the data people provide to continue training the underlying AI models. This means the app companies are storing and working with a lot of private data. The hazards with that are twofold—leaking private data that could violate ethical norms and/or laws (e.g., leaking patient meeting transcripts from a clinical trial would violate HIPAA regulations), and leaking sensitive data that could be used to train a model other people might later have access to indirectly.
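
One defensive measure, sketched below purely as an illustration, is to scrub obvious identifiers before a transcript ever leaves the company. Real de-identification under HIPAA requires far more than pattern matching, plus human QC:

```python
import re

# Patterns for two obvious identifier types; a real de-identification
# pipeline needs many more, plus review of what the patterns miss.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


print(redact("Reach the coordinator at jane.doe@example.com or 555-123-4567."))
```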

OSP: What new job titles do you anticipate emerging at companies that use the technology, and how will these changes affect day-to-day roles? Can you give specific examples of new titles and the corresponding changes?

MC: For the first time ever, it’s possible LLMs like OpenAI’s GPT-4 have operationalized within a machine a fundamental mechanism at the core of human intelligence. If this possibility is in fact true, then what we have is the analogue of a chunk of digitalized brain matter. The applications and capabilities we see now are impressive, but they are only the beginning of what will be possible in the future.

Determining how to leverage the capabilities of this digitalized brain matter for other applications will require more than just specialized training of models with the same architecture. It will require engineering new cognitive architectures using the digitalized brain matter itself. This will likely give rise to a whole new category of jobs we might call 'synthetic neuroscience'—the purpose of which will be to engineer such systems at every level to develop new applications that can solve new kinds of problems. Job titles within the category may include 'synthetic neurochemist', 'synthetic neuroanatomist', 'synthetic systems neuroscientist', 'synthetic behavioral neuroscientist', and so on.

At the same time, this digital brain matter could be used to more deeply understand the human brain and mind—to model mental wellness/illness, developmental differences and educational processes in a brand-new way, giving rise to new job categories such as 'computational psychology', 'computational psychotherapy', and so on, with applications at the interface between analysis and intervention.

As AI-related costs come down and access to the necessary computational resources expands, we will likely see an expansion in specialized engineering jobs—the focus of which will be to create curated data sets to train LLMs and other models, and to engineer the models themselves for specific applications.
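
Such curation can begin with steps as simple as deduplication and filtering out uninformative records. The following is a hypothetical sketch, with illustrative field names:

```python
def curate(records: list[dict]) -> list[dict]:
    """Drop near-useless and duplicate records, keeping provenance."""
    seen: set[str] = set()
    curated = []
    for rec in records:
        text = rec.get("text", "").strip()
        if len(text) < 20:   # too short to teach the model anything
            continue
        if text in seen:     # exact-duplicate removal
            continue
        seen.add(text)
        curated.append({"text": text, "source": rec.get("source", "unknown")})
    return curated


raw = [
    {"text": "Assay protocol: incubate at 37 C for 30 minutes.", "source": "eln"},
    {"text": "Assay protocol: incubate at 37 C for 30 minutes.", "source": "eln"},
    {"text": "ok", "source": "chat"},
]
print(curate(raw))
```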

OSP: Things can, and often do, go wrong with technology. What do you say to a company anxious about adopting it based solely on fears about its reliability or validity?

MC: Anything is possible, given the hazards and the potential for bad actors to use technology for ill.

The good news is that, by partnering with domain-specific experts such as scientists and engineers who are ingrained in the technology and know how to deploy it responsibly, pharmaceutical companies can feel more confident adopting AI for R&D. When these partnerships take a holistic approach to transforming people, processes, and technology, digital transformation becomes a catalyst for value: it yields market differentiation while accelerating the advancement of innovations and their global impact.

The right partner can also help organizations build internal digital R&D capabilities responsibly, resulting in better decision-making in the lab and new products being brought to market faster than ever before.
