2021 year-end wrapup

Signant Health: consider DCT design factors from the start

By Jenni Spinner


(Hilch/iStock via Getty Images Plus)
While the pandemic saw many trials shift to decentralized formats midstream, an expert from the trial tech firm suggests a ground-up design approach is preferable.

Decentralized trials offer a range of benefits for patients, sites, and sponsors alike. However, as trial professionals have learned in the face of the pandemic, conducting a successful, smooth decentralized clinical trial (DCT) requires thoughtful planning and the right tech tools.

Bill Byrom, vice president of product intelligence at Signant Health, spoke with Outsourcing-Pharma about some of the key ingredients to a successful DCT recipe, the benefits of well-run trials, and why considering DCT at the design stage offers advantages over converting from onsite.

OSP: Could you please share your thoughts on how the decentralized format benefits patients?

BB: First, maybe we should define what we mean by a decentralized trial (DCT) as different people have different definitions. DCTs represent a spectrum of alternatives for how, when, and where patients can participate in a clinical trial.

I define DCTs as studies where one or more components are conducted away from the site. This could include a fully remote trial with no site visits, or a traditional, site-based trial with some at-home elements (such as eConsent and electronic patient-reported outcomes, or ePRO). We’ve been doing many things like ePRO remotely for years, so it’s helpful to consider elements beyond ePRO as we think about the move to decentralized models.

The benefit to patients is making participation easier. Reducing the need to travel to the site for in-clinic appointments can make it much easier to join and take part in a clinical trial.

Some patient groups find travel difficult for physical reasons, others rely on caregivers, while some patients have a rare disease that may require assessment at a specialist site a very long distance away. But beyond these, patients often have busy lives, families to care for, and jobs to maintain, so reducing travel and waiting times associated with on-site appointments can make it much easier to integrate trial participation with other aspects of their lives.

OSP: Then, please tell us about some of the challenges and disadvantages that might emerge when a trial takes place at a patient’s home. Feel free to talk about anything from communication challenges to the learning curve associated with using unfamiliar devices to self-test, etc.

BB: There is a perception that the increasing use of technology solutions for remote communication and assessment presents a challenge for some patients, particularly older people. This is important, but not insurmountable. It is true, for example, that some older people interact with technology infrequently in their daily lives (although we have seen greater uptake of certain mobile solutions through the pandemic), and this can represent a challenge when we provide a technology for use in a clinical trial. Our own research among older users has found that they are sometimes less confident learning how to use a new technology by experimentation, for fear of breaking it or doing something wrong.

Aside from good, simple solution design, the key here is training and equipping sites to provide robust training before patients use technology solutions on their own. Performance anxiety and low confidence in using a solution can be mitigated through a hands-on training system that can be repeated more than once to confirm readiness. Our research has shown that a second, supervised run-through of the training system is effective for less confident users.

A second, related challenge is ensuring a simple workflow and experience for patients when they use more than one technology solution together. As an industry, we have provided multiple, disparate technologies to sites – all with a different look and feel, requiring different logon credentials, and each performing only a discrete part of the overall site workflow. While this is being addressed through unification, integration, and single sign-on, for example, we cannot afford to impose a similarly fragmented experience on patients as we expand the sources of data captured and the solutions used in the unsupervised home setting.

Finally, we face the challenge that our clinical trials must be robust enough to provide high-quality clinical evidence with which to understand the safety and efficacy of new treatments. Moving measures from standardized, supervised settings to at-home collection brings the possibility of increased data variability, bias, and compromised data integrity.

OSP: How can trial teams work to make sure risks associated with problems/obstacles are minimized on decentralized studies?

Bill Byrom, VP of product intelligence, Signant Health

BB: In terms of assessing technology challenges, teams should consider feasibility research in addition to understanding the usability of individual solutions. Feasibility research considers how the technology will be used in the context of the clinical trial protocol, to determine whether, alongside all other trial activities, it represents a practical approach.

Trial teams should also consider how multiple technologies might be used together within a study. Our approach is that the ePRO app is the interface with the patient, and we use it not only to collect ePRO data but also to deliver engagement content, orchestrate sample collection with couriers, connect with telemedicine, direct the use of sensors/wearables, and manage their data, etc.

OSP: How does designing with the decentralized format in mind offer an edge over retrofitting?

BB: We have retrofitted, and continue to retrofit, traditional, site-based study designs to enable decentralization. Examples include the rapid adjustments made to studies in light of movement restrictions and social distancing during the pandemic. However, it is certainly optimal to design studies up-front with decentralization in mind.

Designing for decentralization means that measurement strategies can be considered in terms of what is optimized for remote assessment. When I consider the protocol’s schedule of events, some of the assessments needed may already be conducted remotely – such as many clinical outcome assessments, like daily diaries and quality-of-life instruments.

For others, such as clinician-reported outcomes (ClinROs) and performance outcomes (PerfOs), we may be able to consider remote approaches. In these cases, I may be able to select the measure with a video-based or technology-delivered assessment in mind.

Routine follow-up assessments, such as reviewing changes to concomitant medications or adverse events, may be possible to conduct virtually – for example, using a video visit. Other measures and assessments may simply not be suitable for at-home assessment.

In these cases, the optimal approach is a measurement schedule in which such assessments are grouped with other measures that need to be done at the clinic. In terms of the frequency of these site-based assessments, what is actually necessary, compared with what has traditionally been done in site-based studies? For example, how frequently do clinical laboratory samples need to be taken to monitor safety?

Of course, in some cases, decentralization may mean that these traditionally site-based components may instead be conducted at home via a home nurse visit, or at a more local healthcare facility such as a pharmacy or primary care physician’s office. There is no doubt that we have much greater potential to leverage increased decentralization if we design our studies with that in mind.

OSP: How can trial professionals consider measurement frequency to limit the potential effects of greater measurement variability affecting the power of comparisons?

BB: This is an important element we need to consider as we drive greater decentralization and leverage technology solutions to do this.

Let’s consider an example of a postural stability test that we might perform using the ePRO app on a smartphone provided to the patient. In this test, we may ask the patient to stand upright, feet shoulder-width apart, and hold the smartphone to their chest with both hands. We may use the device’s built-in gyroscope and accelerometer to measure movement and sway as well as derive a measure of postural stability.
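As a rough illustration of the kind of derivation Byrom describes – and not Signant’s actual algorithm – the Python sketch below shows one simple way a sway metric could be computed from the phone’s accelerometer stream; the sampling rate, the detrending step, and the two summary metrics are assumptions made for the example.

```python
# A minimal, illustrative sketch (not Signant's implementation) of deriving a
# simple sway metric from smartphone accelerometer samples recorded while the
# phone is held against the chest during a postural stability test.
import numpy as np

def sway_metrics(accel_xyz: np.ndarray, fs: float) -> dict:
    """accel_xyz: N x 3 array of raw accelerations (m/s^2); fs: sampling rate (Hz)."""
    # Subtract each axis's mean to remove the static gravity/posture component,
    # leaving only the fluctuations (sway) around the held position.
    sway = accel_xyz - accel_xyz.mean(axis=0)
    resultant = np.linalg.norm(sway, axis=1)          # magnitude of sway per sample
    rms_sway = float(np.sqrt(np.mean(resultant ** 2)))
    # "Jerkiness": mean absolute change between consecutive samples, scaled by fs.
    mean_delta = float(np.mean(np.abs(np.diff(resultant)))) * fs
    return {"rms_sway_ms2": rms_sway, "mean_delta_ms3": mean_delta}

# Example with 30 s of simulated 50 Hz data (gravity on the z axis plus noise).
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.05, size=(1500, 3)) + np.array([0.0, 0.0, 9.81])
print(sway_metrics(samples, fs=50.0))
```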

When conducted unsupervised, we may see more variability due to less standardization of test administration. Patients may have feet apart by variable amounts between tests, or they may not hold the phone in the correct position on the body.

The app we use may enable test conduct to be assessed – for example, light and proximity sensors may provide some indication of whether the phone is held against the body – and careful instructions delivered through the app can help standardize the implementation of this performance outcome test. Even so, some sources of variability cannot be eliminated when measures are taken this way rather than in a supervised, in-clinic test.

However, the ability to measure more frequently not only has the potential to provide richer insights but also can lead to greater power in statistical comparisons and endpoint estimates. So additional variability, if contained, can be mitigated by increased measurement frequency.
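The statistical point can be illustrated with back-of-the-envelope numbers (the standard deviations below are assumed, not taken from any study): the standard error of a patient’s average over n repeated tests shrinks roughly with the square root of n, so a noisier at-home measure repeated several times can match or beat a single supervised measurement.

```python
# Illustrative numbers only: how averaging repeated at-home measures can offset
# the extra per-measurement variability relative to a single in-clinic test.
import math

within_sd_clinic = 1.0   # assumed SD of a single supervised, in-clinic test
within_sd_home = 1.5     # assumed (larger) SD of a single unsupervised home test

def standard_error(per_test_sd: float, n_tests: int) -> float:
    # Standard error of a patient's mean over n independent repeats of the test.
    return per_test_sd / math.sqrt(n_tests)

print(f"1 clinic test:        SE = {standard_error(within_sd_clinic, 1):.2f}")
print(f"1 home test:          SE = {standard_error(within_sd_home, 1):.2f}")
print(f"Mean of 4 home tests: SE = {standard_error(within_sd_home, 4):.2f}")
# With four at-home repeats the SE (0.75) already falls below the single
# in-clinic value (1.00), despite each home measurement being noisier.
```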

OSP: Not all wearables are created equal—do you have any advice on picking the right device types and features for the job?

(Hilch/iStock via Getty Images Plus)

BB: There are some good recommendations on this topic published by industry groups: the ePRO Consortium, the DIA study endpoints community, CTTI, and the Digital Medicine Society (DiMe). I have led the working groups for the first two.

Essentially, we need to consider a number of factors in picking a fit-for-purpose device: safety and accuracy, technical device characteristics, country coverage, usability, and cost.

Safety data are typically provided by the manufacturer so long as the context of use is not changed. Appropriate accuracy and reliability evidence is vital to ensure data are suitable to support regulatory decision-making, and the industry groups mentioned above provide good recommendations on the evidence that should be collected to support device selection.

There may also be specific devices that need additional validation evidence to apply to diverse populations. Pulse oximeters, for example, should be selected to operate accurately across the range of skin colors we aim to include to meet diversity goals in today’s clinical trials.

Technical device characteristics might include how the data are provided. For example, is it possible to connect directly to the device via Bluetooth and simplify the patient experience by requiring only a single study ePRO app?

There may also be other considerations, such as battery life and storage capacity, that determine whether the device is suitable for a specific protocol. Wrist-worn device straps and blood pressure cuffs may need to come in different sizes to enable all patients to use them.

Country coverage is an important consideration. We need to understand the ease of shipping devices into different countries when operating multinational clinical trials.

And finally, usability. At Signant Health, our eCOA scientists often conduct our own usability testing in small groups of volunteers or patients to provide reassurance that devices can be operated and used effectively in unsupervised conditions by patients in clinical trials. This is a vital step in device selection and is often overlooked.

OSP: What advice do you have regarding selection and evaluation of clinician assessments for remote evaluation via telemedicine or other tools?

BB: This is really important and involves working with scale authors and license holders, in addition to evaluating the suitability of different scales. Consider depression rating as an example: using the Hamilton Depression Rating Scale (HDRS), investigators conduct a 20-25-minute interview with the patient to score severity against 19 scale items. The sum of these scores, the total HDRS score, represents the overall severity of depression.

All the items can be rated through questioning over video, and a number of studies confirm the validity of this approach compared with in-person assessment. Non-verbal cues, such as facial expression, intonation, and speaking rate, can be assessed via video as readily as in person and may influence the rating of some items.

Compare this to the Unified Parkinson’s Disease Rating Scale (UPDRS). Elements such as mental activity, behavior, mood, and activities of daily living can be assessed as easily in a video visit as during a regular face-to-face visit. But other elements, such as rigidity, may be less feasible to assess without face-to-face contact.

It is of primary importance to select assessments that can measure the concepts of interest of the protocol. Once established, it is then possible to evaluate measurement options to determine which may be suitable for conduct via self-assessment or a video visit, or which would require assessment during a home visit or traditional clinic visit.

OSP: Then, please talk about proper novel endpoint development and validation to drive new and enhanced insights using health technologies.

BB: Using digital health technologies brings the opportunity to measure constructs more frequently, to measure constructs more accurately, and to measure constructs that were difficult or impossible to measure previously. There are instances where we can obtain richer insights using digital health technologies.

As an example, consider a timed up and go test. In this test, the investigator uses a stopwatch to time how long it takes a patient to get up from a chair, walk 3 meters, turn 180 degrees, return to the chair and sit back down. The time taken is related to gait and balance issues, and the risk of falling.

Using an accelerometer on each ankle, the same test can be instrumented with the time taken calculated automatically. But the sensor data itself provides richer insights into balance and fall risk than simply the total time to complete the task. The number of steps taken to perform the 180-degree turn, and aspects of balance in this maneuver, provide much greater information about these constructs. This leads us towards the possibility of developing new, more informative endpoints based on the richer data that sensors and wearables can generate.
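As an illustration of the idea rather than a description of any validated algorithm, the sketch below derives two such summaries – total completion time and the number of steps taken during the turn – from an ankle accelerometer’s magnitude signal; the movement and step thresholds, and the assumption that the turn window is already known, are simplifications for the example.

```python
# Hypothetical sketch of instrumenting a timed up-and-go (TUG) test from an
# ankle accelerometer: estimate total completion time from when the movement
# signal starts and stops, and count steps within a known turn window.
# Thresholds and the pre-identified turn window are illustrative assumptions.
import numpy as np

def tug_summary(accel_mag: np.ndarray, fs: float,
                turn_window: tuple[float, float],
                move_thresh: float = 0.3, step_thresh: float = 1.0) -> dict:
    """accel_mag: acceleration magnitude with gravity removed (m/s^2); fs: Hz;
    turn_window: (start_s, end_s) of the 180-degree turn within the recording."""
    # Total time: span between the first and last samples exceeding the movement threshold.
    idx = np.flatnonzero(np.abs(accel_mag) > move_thresh)
    total_time = (idx[-1] - idx[0]) / fs if idx.size else 0.0

    # Turn steps: local maxima above a threshold within the turn segment.
    t0, t1 = int(turn_window[0] * fs), int(turn_window[1] * fs)
    seg = accel_mag[t0:t1]
    peaks = (seg[1:-1] > step_thresh) & (seg[1:-1] >= seg[:-2]) & (seg[1:-1] >= seg[2:])
    return {"total_time_s": round(float(total_time), 2),
            "turn_steps": int(np.count_nonzero(peaks))}
```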

As an industry, we see significant innovative work in this area, including leveraging smartphone sensors and components to generate new, digitally derived endpoints, such as measures of tremor, balance, gait, phonation, range of motion, cognition, etc.

We know enough about the rigor needed to validate new endpoints – whether collected by digital health technologies or otherwise. Endpoint developers need to provide evidence to support the use of new endpoints including, for example, elements such as content validity, and endpoint properties such as reliability, sensitivity to detect changes, and interpretability (responder definition). These are well described in the aforementioned work by the ePRO Consortium, DIA, and DiMe.
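To make two of those properties concrete, the sketch below computes a test-retest reliability estimate and a common distribution-based responder threshold (half the baseline standard deviation) on simulated data; real validation work would typically use an intraclass correlation coefficient and anchor-based methods as well, so treat this only as an orientation.

```python
# Simplified, hypothetical example of two endpoint properties mentioned above:
# test-retest reliability (approximated here by a Pearson correlation between
# two stable-period administrations) and a distribution-based responder
# threshold (0.5 x baseline SD heuristic).
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, size=100)            # hypothetical patient scores
test = true_score + rng.normal(0, 3, size=100)       # administration 1
retest = true_score + rng.normal(0, 3, size=100)     # administration 2 (stable period)

test_retest_r = float(np.corrcoef(test, retest)[0, 1])     # reliability estimate
responder_threshold = 0.5 * float(np.std(test, ddof=1))    # 0.5 * baseline SD

print(f"test-retest r ~ {test_retest_r:.2f}")
print(f"distribution-based responder threshold ~ {responder_threshold:.1f} points")
```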
