Interview: Andrew MacGarvey, COO of Phastar, on opportunities for big analytical data

By Liza Laws




Andrew MacGarvey is chief operating officer (COO) of Phastar. We caught up with him earlier in the summer at DIA Global in Boston to discuss the company’s origins, opportunities for big analytical data, decentralized clinical trials (DCTs), and artificial intelligence and machine learning, among many other topics.

Could you tell me a little about Phastar’s background?

Phastar was founded many years ago by a statistician called Kevin Kane. He was working as a contractor in a pharma company but getting really frustrated with the quality of the work the CROs were giving him. He kept saying to people, ‘I could do this better, I could do a better job.’ A couple of people said, ‘Go and do it then’ – and he did, and faster. He was very entrepreneurial: he built it, got a couple of major clients and then did some really good work.

He built a really good reputation and rescued a major program – the sponsor had outsourced the work and it didn’t go well, so they turned to Kevin and asked him to sort it out. That was the jumping-off point, and from there a substantial amount of work followed over a long period of time.

I joined about five and a half years ago. I knew Kevin from being in the same space; what he was building was growing really quickly, and he wanted me to come and help him with the organization of things – it was a bandwidth issue for him. I came in as a managing director and did some organizational work, then we did a transaction with private equity two and a half years ago, and not long after that I moved into the CEO position.

We’ve now grown to more than 500 people, and of course we are global, with offices in the UK, Boston and Durham, North Carolina, Nairobi, Copenhagen, China, Japan and Australia.

What opportunities do big data analytics offer healthcare and pharma, in your opinion, and how do you see that developing in the future? It's really snowballing.

It's a really good point. The importance of data has become apparent to everybody, and we have been able to work out ways of interrogating massive datasets to extract value. There are lots of examples of how we can access historical data, going back quite a few years – big, big datasets which can then deliver added value to the trial.

A really good example is the FDA requiring diversity in the patient population. What you can do now is use analytics to go in, interrogate massive datasets and extract the value. You can use analytics to interrogate metadata from all sides to give you insight – where you should run your study, for example; it can show you where to get the diverse population you need. Then there are statistical models, statistical tests and signal detection, so what we are now doing is marrying that knowledge with the tech and the data.

We run visualizations on all our studies looking at the metadata, so we can work out whether a site is recruiting correctly and how it is performing. We can run patient profiling simply, so there are jobs we faced as programmers back in the day which can now be done by almost anybody – you don’t need to be a programmer anymore.
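
As a rough sketch of the kind of metadata-driven site check described above – not Phastar’s actual tooling, and with hypothetical column names – a few lines of Python with pandas can compare each site’s average monthly enrolment against a target:

    # Illustrative sketch of a site-level recruitment check.
    # Column names (site_id, screening_date, enrolled, target_per_month)
    # are hypothetical, not a real clinical data model.
    import pandas as pd

    def site_recruitment_summary(metadata: pd.DataFrame) -> pd.DataFrame:
        """Summarize how each site recruits against its monthly target."""
        df = metadata.copy()
        df["month"] = pd.to_datetime(df["screening_date"]).dt.to_period("M")

        # Count enrolled patients per site per month (months with activity only).
        monthly = (
            df[df["enrolled"]]
            .groupby(["site_id", "month"])
            .size()
            .rename("enrolled_count")
            .reset_index()
        )

        # Compare average monthly enrolment with each site's target.
        summary = (
            monthly.groupby("site_id")["enrolled_count"]
            .mean()
            .rename("avg_per_month")
            .reset_index()
        )
        targets = df.groupby("site_id")["target_per_month"].first().reset_index()
        summary = summary.merge(targets, on="site_id")
        summary["on_track"] = summary["avg_per_month"] >= summary["target_per_month"]
        return summary

    # Tiny synthetic example.
    example = pd.DataFrame({
        "site_id": ["S01", "S01", "S02", "S02", "S02"],
        "screening_date": ["2024-01-10", "2024-02-05", "2024-01-15", "2024-01-20", "2024-02-01"],
        "enrolled": [True, True, True, False, True],
        "target_per_month": [2, 2, 1, 1, 1],
    })
    print(site_recruitment_summary(example))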

When we made our Danish acquisition, a big part of what they do is visualization, which was really interesting. We’d always worked with clin-ops, and now we work with clin-ops, HR, finance – we can work with many different groups, and our sponsor companies are starting to pull data together and get the insights, which is obviously powerful for them.

I think you’ll see that more and more; it's already very prevalent. These products have been around for a long time, but the difference now, I think, is that we've got access to the big datasets, so everything’s lined up now.

When we were trying to pull some of this stuff together, the eye-opener for me was getting the data into a form that was compliant and ready to use. We can do that ourselves through various processes: we can anonymize it, generalize it and use various other techniques on the data.
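
To make the anonymization and generalization idea concrete, here is a minimal, illustrative sketch in Python; the field names, hashing and age-banding choices are assumptions for the example rather than a description of any validated clinical process:

    # Illustrative sketch only: pseudonymize direct identifiers and coarsen
    # quasi-identifiers. Real clinical-data anonymization follows formal,
    # validated processes.
    import hashlib

    def pseudonymize_id(patient_id: str, salt: str) -> str:
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256((salt + patient_id).encode("utf-8")).hexdigest()[:12]

    def generalize_age(age: int, bin_width: int = 10) -> str:
        """Generalize an exact age into a coarser band, e.g. 47 -> '40-49'."""
        low = (age // bin_width) * bin_width
        return f"{low}-{low + bin_width - 1}"

    def anonymize_record(record: dict, salt: str) -> dict:
        """Drop direct identifiers and coarsen quasi-identifiers."""
        return {
            "subject_ref": pseudonymize_id(record["patient_id"], salt),
            "age_band": generalize_age(record["age"]),
            "country": record["country"],        # kept as-is in this sketch
            "lab_values": record["lab_values"],  # clinical measurements retained
        }

    record = {"patient_id": "PAT-00123", "age": 47, "country": "UK", "lab_values": {"ALT": 32}}
    print(anonymize_record(record, salt="study-specific-secret"))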

A lot of the time we're looking at metadata, so you're not actually looking at the patient data itself. So you’re right, there is a tension there, and to get the full value there is going to have to be some sort of permissions-based approach.

A lot of people tell me about blockchain technology – we all have this data, we own it, and we can then determine whether we want to effectively sell it, and there will be some sort of incentive. It is interesting to me that, generation after generation, with this concept of privacy, people are getting much less worried than I would be.

I think you can see how, over time, those stars will align in the right way, and people will probably either monetize their data or do it for the greater good and give others permission to have control of that data.

As I say, it's either metadata, where we still have to be extremely careful, or preparing datasets that you're then going to use going forward, or getting the consent – but that's definitely been a challenge.

Phastar COO Andrew MacGarvey with Liza Laws

Has data analytics from DCTs and remote patients been a big challenge too?

Not as much; it's a challenge in the diversity of data sources, so we need to be cognizant of that. The volume of data has definitely been a challenge, because you're getting way more data than you did back in the day when I was doing my paper-based studies 25 years ago. So that volume has been an issue, but it's that volume of data that's now giving us the insight. Then there's the technology. People weren't talking about statistical environments 5-10 years ago; now it's everywhere. We must think about the environment we're working in so we can bring that data in and then get the use out of it.

Throughout many recent interviews, we have come across a mixture of reactions to machine learning and artificial intelligence – people either shudder or get excited. How is that shaping developments for you?

We're actively using machine learning, and we use natural language processing to review documentation. As a programmer, I could program, I'd say, 95 to 99% of all the checks, but some things are just not programmable. You would always have somebody review the data, and it might not be economical to invest loads of time in programming.

So back in the day you would look at listings; now we train a program to read that data and then determine whether a query should be raised. It very quickly gets good at what it's doing. We've got supervised and unsupervised approaches – we're using both. From an unsupervised perspective, and from a regulatory perspective, that's where I am really watching to determine what we are using this for and where it sits in the trial process.
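
A minimal sketch of the supervised flavour of that idea – training a small text classifier on historical review notes labelled by whether a query was raised – might look like the following in Python with scikit-learn; the notes, labels and model choice are purely illustrative assumptions, not Phastar's actual pipeline:

    # Illustrative sketch only: learn from historical data-review comments
    # whether a data query was raised (1) or not (0).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "AE end date before start date",
        "dose recorded as 0 mg, confirm with site",
        "all values within expected range",
        "visit completed on schedule, no issues",
        "lab unit missing for creatinine result",
        "concomitant medication dates consistent",
    ]
    labels = [1, 1, 0, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(notes, labels)

    # Score new review notes; high probabilities suggest a query may be needed.
    new_notes = ["start date missing for adverse event", "visit window respected"]
    for note, prob in zip(new_notes, model.predict_proba(new_notes)[:, 1]):
        print(f"{prob:.2f}  {note}")

An unsupervised variant, which the interview also alludes to, might instead flag review notes that look unusual relative to the rest of the dataset, but that is beyond this sketch.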

There is some work to do around how the regulatory bodies would view some of this stuff that we're doing. But if you think about what we're doing there with the labs, it's just an extra quality control step with a machine that can do the work.

We're certainly using it for things like predictive analytics, and that is really important. Our view is that our jobs will change, and we will use those tools to do what we're doing more efficiently. People won't necessarily be replaced; we think we'll just be able to do more, and we'll use those tools to do a better job. We're not worried about it, but we're also keeping a close eye on it.

I have a stat here from CRO industry reports that says the sector is worth $76.6 billion this year and is projected to reach $127.3 billion by 2028. What do you think is driving this growth?

One of my customers said to me that they're placing fewer brands, but they're much bigger. If you're looking at things like T-cell therapies or gene therapies, the modality of trials is evolving and getting much more complicated – and you've got many more specialist needs now.

I think a lot of people will end up doing the work internally and not outsourcing it. But because things are moving so quickly, you want the agility of a service provider. So I suspect that's part of what is driving the growth.

You either increase the outsourcing market, because there's more of it to do, or you do it internally – but supported by a CRO.

For a sponsor to leverage that data, they want control of it; they want it in-house. If you think about a big phase 3 trial and how many people you would need to source it, there's lots of different tech and loads of different data sources. So if I want that control, and I want to be able to get access to it, I'd bring all of the technology in-house.

There's big expansion in that part of the market, where we're asked to provide expertise, going into the data lakes and systems. And that's logical to me. We're seeing a shift in what is being outsourced. And as I say, these are complex things we're dealing with.

In terms of emerging markets and activity in places like Nairobi, do you find that the way CROs and sponsors work differs across global markets, and that this can also be useful for the American market? And what about Asia Pacific?

They have a big DIA in Asia and those who attend want to see what’s happening and benchmark against it. All the tech will be in one place and most of the big players will be there. I took my CFO because he said he really wanted to understand everything, understand where the market is at, who’s doing what and see the latest developments.

From that perspective it was very good. Market-wise, I don't know how much you've been watching the press recently, but there was a really interesting AstraZeneca China piece. What they've said is, actually, we're going to split off from the global organization as a separate entity and do research and development (R&D) and drive that in China, because they are looking at doing things very quickly and making the regulations slick – and you're beginning to see that. It’s a positive step.
