He is a neuroscientist by training. He spent ten years in academic and clinical research before moving into the clinical software industry, where he has spent the last 18 years. He has worked in most areas of healthcare and technology.
In one life he worked in hospital and healthcare delivery, developing large software systems. He has always supported meaningful, useful initiatives such as clinical survey software and patient portals for hospitals. In another life he worked in pharma, for Novartis.
After Novartis, he spent eight years at Roivant. He has helped launch several start-ups, including some out of Israel focused on hospital healthcare.
He is very much focused on research and development (R&D) and life sciences.
OSP: Have you been to DIA before, and what was your main purpose for attending this year?
This is my first DIA, even though I have worked across the board in life sciences. I have heard from people who have been coming since as long ago as 2003 that it is about 50-60% as populated as it was ten years ago, which is interesting to me.
We wanted to present some new work that we've done combining real-world data with the trial site data we've been growing. We have 2,000 studies' worth of operational trial data, and we wanted to share what I believe to be the first quantitative correlation between real-world data and trial data to predict enrollment in studies, based on the actual healthcare data showing where patients are going to access care.
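To make that concrete, here is a minimal sketch of what a quantitative correlation between real-world patient volumes and trial enrollment could look like; all values below are hypothetical illustrations, not his actual data or method.

```python
# Minimal sketch: correlating real-world patient volumes at sites with
# observed trial enrollment. All values here are hypothetical.
from scipy.stats import pearsonr

# Hypothetical per-site figures: eligible patients seen per month (from
# real-world healthcare data) and patients actually enrolled at that site
# (from operational trial data).
rwd_patient_volume = [120, 45, 300, 80, 210, 60]
trial_enrollment = [14, 5, 31, 10, 22, 6]

r, p_value = pearsonr(rwd_patient_volume, trial_enrollment)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A strong, significant correlation would support using real-world patient
# volumes to predict enrollment when selecting sites for a new study.
```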
I think that number one goal was achieved. Our number two goal was to get a lot more interactions with the folks we're talking to about collaboration and our business, and that was solidly achieved every day.
I had some great conversations with different global data vendors, which is good because we have just started down a path with quantitative analysis and we're going to expand across multiple forms of real-world data. DIA has a strong orientation towards real-world data.
OSP: At DIA there's always a buzzword and a buzz theme. Last year it was very much decentralized trials, and I'm finding that this year it is all about machine learning and AI.
I spent a lot of time developing the foundational models that are now considered AI, and my view comes first from the perspective of providing value in clinical research. I feel that terms like AI and predictive models are thrown around too much. Frankly, our approach, and my philosophy, is to always focus on the solution that solves the problem and on the value delivered, regardless of the tool we use.
I think it's also really hard with clinical research especially, because the consumers of our applications on the study team side are in the evidence-generation business; if they can't understand the value or how a system works, it's really hard to gain adoption. So oftentimes the perspective we take is: let's not jump all the way to a predictive model at the outset, even though that's buzzy and cool right now. Let's start with bringing data together, because, let's be honest, all models are only as good as the data you have.
Bring the data together; understand and interrogate that data from a descriptive and analytic perspective; build a common understanding, a foundation of partnership and trust, between a vendor and a sponsor or CRO. And then let's build the roadmap towards predictive analytics.
You need to develop an understanding of the standard operating process before you build a model. One size doesn't fit all with AI. So, I'm a proponent, but I'm also wary of the fact that we jump to these things too quickly and don't create the foundations of trust and credibility that actually foster that innovative approach and the path to the future.
OSP: I think trust is a key word there, because nobody can properly justify exactly where the AI's information comes from, and I am not sure how I feel about that. What about you, in your field of work?
That's a good point. In relation to trust, I think that's key, and there are two ways we focus on fostering it.
Number one is engagement and trust. When we go to predictive models, we tend to focus on explainable models, and I don't think clinical research right now has big-data problems; it has very small-data problems. There are certain models that are best for small data, and there's a Venn diagram of those models and the explainable ones. We tend to use Bayesian machine learning. Why do we use Bayesian machine learning? Because there are a priori expert inputs that make it explainable how you've actually used which data for what, and that helps explain the output of the model.
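As an illustration of why a Bayesian approach stays explainable on small data, here is a minimal sketch of a conjugate Gamma-Poisson update for one site's monthly enrollment rate; the prior values and observed counts are hypothetical, and this is a generic textbook construction rather than his company's actual model.

```python
# Minimal sketch of explainable Bayesian inference on small data: a
# conjugate Gamma-Poisson model of one site's monthly enrollment rate.
# The expert prior and the observed counts are hypothetical.

# A priori expert input: "sites like this enroll about 2 patients/month".
# Gamma(alpha, beta) has mean alpha/beta, so alpha=4, beta=2 encodes a
# mean of 2 with the weight of roughly two months of pseudo-observations.
alpha_prior, beta_prior = 4.0, 2.0

# Observed monthly enrollment counts at the site so far (small data).
observed_counts = [1, 3, 2]

# Conjugate update: every term is auditable, which is what makes the
# model's output explainable to study teams.
alpha_post = alpha_prior + sum(observed_counts)
beta_post = beta_prior + len(observed_counts)

posterior_mean = alpha_post / beta_post
print(f"Posterior mean enrollment rate: {posterior_mean:.2f} patients/month")
```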
Number two, we have historical data that's timestamped over the course of trials across 2,000 studies. What can we do when we create a prediction for a study that says there's an issue? It's really hard to know what would have happened if no operator had addressed that issue; it's human nature that when you see an issue, you want to address it.
What we've done is take, let's say, a subset of studies. We roll them back through our platform as if the platform were seeing them for the first time. Then we see if the prediction matches reality, in a sense blinded for the platform, and we can present that as evidence to our evidence-generation colleagues. We can say not only is the model explainable, but we can actually do a blinded analysis and show what it predicts and that the actual events did occur.
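To illustrate the rollback idea, here is a minimal sketch of a blinded replay over historical studies; the timestamped data and the threshold-based "model" are hypothetical stand-ins, not the platform's real data or prediction logic.

```python
# Minimal sketch of a blinded rollback: replay each historical study only
# up to a cutoff date, predict as if seeing the study for the first time,
# then compare the prediction with what actually happened afterwards.
from datetime import date

# Hypothetical timestamped events per study: (date, cumulative enrolled).
STUDIES = {
    "STUDY-001": [(date(2021, 1, 31), 10), (date(2021, 2, 28), 18), (date(2021, 6, 30), 41)],
    "STUDY-002": [(date(2021, 1, 31), 4), (date(2021, 2, 28), 6), (date(2021, 6, 30), 12)],
}
TARGET_AT_CUTOFF = 15  # plan: 15 enrolled by the cutoff
TARGET_FINAL = 40      # plan: 40 enrolled by study end

def backtest(cutoff: date) -> None:
    for study_id, events in STUDIES.items():
        # Blind the "model" to anything after the cutoff.
        visible = [count for day, count in events if day <= cutoff]
        predicted_issue = visible[-1] < TARGET_AT_CUTOFF  # simplistic stand-in prediction
        actual_issue = events[-1][1] < TARGET_FINAL       # ground truth from the full record
        print(f"{study_id}: predicted issue={predicted_issue}, actual issue={actual_issue}")

backtest(cutoff=date(2021, 2, 28))
```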
OSP: It sounds like you're getting the right balance - this can be quite a slow industry when it comes to adopting new things. If you just go from A to Z, do you feel you're going to leave people behind?
I think a key differentiator for us is, number one, that having data as an analytics provider means we can actually do things that others can't. We can develop models without first working with partners, and we can actually roll back those data and show how precise, how specific, and how sensitive our models are.
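For completeness, here is a minimal sketch of how precision, sensitivity, and specificity would be computed from the (predicted, actual) pairs that such a rollback produces; the pairs below are hypothetical.

```python
# Minimal sketch: precision, sensitivity, and specificity from hypothetical
# (predicted_issue, actual_issue) pairs produced by a blinded rollback.
pairs = [(True, True), (True, False), (False, False),
         (True, True), (False, True), (False, False)]

tp = sum(1 for p, a in pairs if p and a)      # flagged an issue that occurred
fp = sum(1 for p, a in pairs if p and not a)  # flagged an issue that never occurred
fn = sum(1 for p, a in pairs if not p and a)  # missed a real issue
tn = sum(1 for p, a in pairs if not p and not a)

precision = tp / (tp + fp)    # how often a flagged issue was real
sensitivity = tp / (tp + fn)  # how many real issues were caught
specificity = tn / (tn + fp)  # how many non-issues were left alone
print(f"precision={precision:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```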
I think the other piece, to your point, is starting with the data, not starting with AI, right? Starting with what data we need to bring together and why, and explaining that to our partners and customers, etcetera, so they share a common understanding. There may be things we haven't thought of that they're experts in, and in their indication they may say, 'oh, actually I've thought about this, and we should bring that data in as well before we do any fancy analysis'.
OSP: Have you walked around and seen what others are doing? Did anything you saw challenge or worry you, or have people challenged you?
I think the big thing I have a challenge with at this point is the amount of hysteria around generative AI models, especially LLMs (large language models), as they are applied to clinical research - I feel like everybody is claiming that they are deploying an LLM right now. That is categorically untrue, and it makes me worry. First of all, it creates a lot of noise, and it also makes me very anxious about the market's understanding, because from our perspective we need to educate and provide information and awareness about what we can do with AI in a step-by-step process.
Just claiming you have LLMs without any foundation, or claiming that you are doing generative AI when you're actually not doing anything of the sort, concerns me. We then have to step back and re-educate at that point. That being said, we also know that AI is hot, and so everybody is trying to get a little bit of that limelight.
OSP: Where do you think we will be this time next year? Do you think things will have changed? Will the industry have prematurely jumped even further ahead?
I'll think about this from both the optimistic and the pessimistic perspective, maybe starting with the pessimism. I get worried about jumping ahead too quickly, without understanding the foundational value and without taking a gradual, iterative approach that shows value along the way; that worry comes from looking back at others' failures.
And I think if we go down that path, we could end up, to some degree, with something similar to what happened with decentralized trials (DCTs). DCT was the buzz of the last three years or so and everyone was talking about it; now all of a sudden, while I wouldn't say it's going away, it's less prominent, and there's a lot of angst in the industry about all these companies and their layoffs, etcetera, because everyone jumped on that bandwagon. That is why I'm a little worried, and I think the same thing is possible with AI in the next year, depending on how we present it and adopt it across the industry.
On the optimistic side, back to the point I was making before, I think there's quite a bit of opportunity right now to show milestones of value along the path of deploying models, showing value specifically related to time, cost, or data-quality savings in clinical research, and presenting that evidence.
I think if we engage along those dimensions in a gradual manner, by the end of next year there could be two handfuls of really solid, foundational case studies of value generated around AI use cases in clinical research.
But I think it's really about two handfuls' worth that we can achieve as an industry by the end of next year, without claiming that it's an LLM and the best thing since sliced bread.
OSP: So how are you going to proceed?
The way that we've been thinking about clinical research - and there's obviously study planning, like feasibility and site planning, and then there's execution - what we found is that while human nature likes to deconstruct things and compartmentalize them into groups, frankly, you're always going to be planning and you're always going to be executing in a study. Even in a live study, when you're executing, you're replanning for 'what happens if I close these sites that are not performing and open these sites here', etcetera.
The approach we're taking is to use the same foundational data analysis and predictive analysis across both planning and execution.
At the moment, the planning team builds the model, ships it over the hill to the execution team, and it's like they're done; execution is then compared to the plan, but nobody knows what the foundational objectives are or why they were created, the what and the why of it all.
I think for us it's about creating models that connect both planning and execution with the same foundational data, and that model is shared not only with planning groups but also with execution groups. So there's that common understanding, and that's the approach we've taken.
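As an architectural illustration of a single model shared across planning and execution, here is a minimal sketch; the class names, fields, and projection logic are hypothetical, not his company's actual design.

```python
# Minimal sketch of one study model shared by planning and execution, so
# both teams read and update the same foundational data and assumptions.
# All names, fields, and formulas here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SitePlan:
    site_id: str
    expected_rate: float      # planned patients/month
    rationale: str            # the "why" that usually gets lost over the hill
    actual_enrolled: int = 0  # filled in during execution

@dataclass
class StudyModel:
    study_id: str
    enrollment_target: int
    sites: list[SitePlan] = field(default_factory=list)

    def projected_shortfall(self, months_remaining: float) -> float:
        """Used by planners for feasibility and by operators when replanning."""
        projected = sum(s.actual_enrolled + s.expected_rate * months_remaining
                        for s in self.sites)
        return max(0.0, self.enrollment_target - projected)
```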
This is to reduce the fragmentation among different stakeholders, to ultimately improve trial performance and get those therapeutics to patients faster.