Large clinical trials for neurodegenerative diseases: Complex design, but demonstrative data

By Maggie Lynch


(Image: Getty/ Jason Butcher)


Large clinical trials can be particularly useful in the early treatment and prevention of neurodegenerative diseases because of the amount of data collected, though patient recruitment and retention are challenging, says a biostatistician.

Neurodegenerative diseases become harder to treat as they progress. Diseases like Alzheimer’s, ALS, and Parkinson’s can be hard to spot early, but early treatment may slow their progression.

According to a report by the American Neurological Association (ANA), neurological diseases affect roughly 100m Americans per year, and the cost of treating dementia and stroke is expected to exceed $600bn by 2030.

David Schoenfeld of Harvard Medical School and Michael Benatar of the University of Miami School of Medicine wrote an editorial suggesting that large trials, while expensive, could reduce disease prevalence and morbidity. OSP spoke with co-author Schoenfeld (DS), a biostatistician, about large clinical trials for the early treatment of degenerative diseases.

OSP: What are the benefits of a large clinical trial? Particularly for this type of research?

DS:​ If you think about prevention or early treatment, the rate at which people begin developing these diseases [Alzheimer’s, ALS, Parkinson’s], in people who are particularly susceptible to them, is still rather slow.

If you really want to treat a disease, or want to determine whether you can prevent a disease or treat it early before it has symptoms, you need a lot of patients. You don’t need a lot of data on each patient, but you need a lot of patients.

My experience with very big trials has to do with cancer screening trials. We did a trial with CA125 in which we had 2,000 patients, but in the end, we were looking at six cases of cancer; that’s the kind of thing that happens. It’s not very hard to analyze the data even though there’s a lot of it.

I was interested in ALS, particularly the genetic group, because it’s fairly rare but there’s a group of people who know they have a good chance of getting ALS. We could treat them now and possibly prevent them from getting ALS. So, we might treat tens of thousands of patients, but in the end, after a year or two or three, we’d be counting ten people in one group and none in the other.
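The order-of-magnitude point above can be sketched with hypothetical numbers (these rates are illustrative, not from any actual trial): when the per-person event rate is low, even tens of thousands of participants yield only a handful of countable events.

```python
# Hypothetical event-count sketch for a prevention trial.
# The rate below is an assumption for illustration only.
n_per_arm = 10_000
annual_rate = 0.0005   # assumed 0.05% chance per person per year of developing the disease
years = 2

expected_events = n_per_arm * annual_rate * years
print(expected_events)  # 10.0 expected events in an untreated arm
```

With only about ten expected events in the control arm, the trial’s conclusion rests on comparing very small counts between groups, which is why the number of patients, not the amount of data per patient, drives the design.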

Early treatment in cancer tends to be more effective than later treatment; this also tends to be true in AIDS. It’s possible that with neurodegenerative diseases, the reason we haven’t been really successful is that we’re treating too late.

OSP: How do these large trials impact trial design?

DS:​ Two things that are important in any given trial are the duration of the trial and the number of patients – those are the two most important variables.

One of the biggest issues in trial design is that these large trials will only be effective for drugs that are not particularly toxic. A really toxic drug isn’t suitable for a prevention trial, or an early treatment trial. If you’re going to treat 10,000 patients and there’s a 1-2% toxicity rate, you’re going to have a lot of people get sick.
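The arithmetic behind that concern, using the figures quoted above, shows how a small relative toxicity rate becomes a large absolute number of harmed patients:

```python
# Illustrative arithmetic from the interview: in a 10,000-patient
# prevention trial, even a modest toxicity rate harms many people.
n_patients = 10_000

for toxicity_rate in (0.01, 0.02):  # the 1-2% range quoted above
    harmed = int(n_patients * toxicity_rate)
    print(f"{toxicity_rate:.0%} toxicity -> {harmed} patients harmed")
```

One to two hundred people made sick by the drug itself, in a population that was healthy at enrollment, is generally an unacceptable trade-off for prevention.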

So it [this type of trial] is mainly for drugs that are relatively easy to administer (oral drugs) and relatively non-toxic.

From an industry perspective, a prevention drug is really quite profitable. I mean the trial is expensive to do, but such a drug will be quite profitable because you will be treating a large number of patients.

OSP: What is the biggest challenge to conducting such a large trial?

DS:​ The biggest problem with doing large trials is how you access the patients.

The standard method of accessing patients is doing the trials at major universities and having the patients come there to participate, and that is quite difficult for a really large trial, especially for a trial of a rare disease – which would be the case for ALS.

What we’d like to do, if we had a fairly non-toxic drug, is a trial that could be done with primary care physicians. So basically you’d send patients blister packs with the drug or placebo; they would take them under their doctor’s supervision and report their symptoms back over the internet, keeping track of when they progressed. That would be the ideal design.

Now, this has been done – the most famous trial like this was the Physicians’ Health Study, where they sent physicians and nurses blister packs with either vitamin D or placebo, or aspirin or placebo in other trials; basically a large simple trial.

OSP: How is data managed?

DS: The best example is actually the original trial of AZT for AIDS, which I was involved with. In the first trial, we just counted the deaths. When the first 12 or so patients who died were all in the placebo group, we knew that AZT was effective.

At a certain point, when all of those who had died were placebo patients, [we] knew that AZT was working, and you would do the same thing for a large simple trial of ALS.
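The statistical logic behind counting deaths this way can be sketched as a simple sign test (an illustration, not the trial’s actual monitoring procedure): under 1:1 randomization and a null hypothesis of no drug effect, each death is equally likely to come from either arm, so a run of deaths all in the placebo arm quickly becomes implausible by chance.

```python
# Sign-test sketch: probability that, by chance alone, the first k
# observed deaths all fall in the placebo arm of a 1:1 randomized trial.
# (Illustrative only; real trials use formal sequential stopping rules.)
def prob_all_placebo(k: int) -> float:
    return 0.5 ** k

print(prob_all_placebo(12))  # 0.000244140625
```

A probability of roughly 1 in 4,000 for 12 consecutive placebo deaths is why so little data management is needed: the conclusion emerges from a handful of counts.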

That’s often how data in trials such as this can be managed. It’s less about data management and more about the number of patients to contact.

OSP: Can this be applied in a virtual or remote clinical trial?

DS: A virtual clinical trial has been attempted in ALS for patients with the disease. However, one of the biggest problems is that you have to keep people engaged, and what ends up happening is that people stop participating.

People’s attention span is rather short, so you end up following patients for a period of time, but not for the full length of the trial.

OSP: How does early identification through clinical trials affect treatment?

DS: I think that the effect of an effective early treatment would be very profound on all of these diseases, because right now there’s a big diagnostic delay in all of them. It takes about a year, between first symptoms and diagnosis, to figure out whether someone has ALS. ALS is called Lou Gehrig’s disease; Lou Gehrig was playing baseball with it – it wasn’t until it was really advanced that he noticed something was really wrong.

Alzheimer’s is a little bit more problematic [to diagnose], as age-related cognitive decline is hard to differentiate from Alzheimer’s disease, so it’s often not diagnosed until it’s severe.

If we could show that early treatment is effective it would give [the industry] motivation to diagnose it earlier, and that would change the whole way we approach these diseases.

David Schoenfeld is a Professor in the Department of Biostatistics at Harvard University; he also serves as a Professor of Medicine at Harvard Medical School. Schoenfeld directs the Clinical Coordinating Center for the ARDS Clinical Network, a group doing clinical trials on Adult Respiratory Distress Syndrome.
