3 Steps for Reducing Costs and Errors in Clinical Trials

Clinical trial costs are trending upward, driven by a variety of factors, such as increasing trial complexity and rising travel costs for training, meetings and monitoring. While some of these factors are largely fixed, costs can be controlled by focusing on areas that lie well within the sponsors’ and research sites’ control.

And much of that focus should be on things that can lead to errors, which can significantly increase the time needed to complete a clinical trial, require work to be re-done and require corrective actions, such as staff re-training, that also add to costs.

But three simple steps can help clinical researchers reduce both errors and costs during studies. The trick is to focus on those parts of clinical trial preparation and conduct that can most affect key drivers of cost increases.

Clinical trials are expensive. There is no way of getting around that fact. A March 2022 blog on the My Onsite Healthcare website estimated that the cost of bringing a new drug to market runs between $161 million and $2.6 billion. This can translate to an average cost of $41,000 per patient, according to a 2019 article in Clinical Leader.

Myriad factors go into the cost of a clinical trial. The costs to manufacture, transport and store the investigational product, for instance, are important and immutable. Staff salaries, travel costs and the cost of equipment and supplies can also play a role. But some factors have a greater impact than others.

Errors in enacting the protocol can add significantly to the cost of a clinical trial, primarily by adding time and person-hours, as well as requiring work to be repeated for errors egregious enough to threaten the validity of data gathered.

And protocol deviations or errors are common.  According to FDA data, protocol deviations remained the top BIMO inspection observation for 2021, as they have for the last several years.

The Tufts Center for the Study of Drug Development (CSDD) agreed, noting in an early 2022 report that the mean numbers of protocol deviations and substantial amendments—another cost and time driver—have increased across all clinical trial phases. The typical Phase 3 trial has 119 deviations, the CSDD report said.

Amendments are particularly troublesome. Protocols with three or more substantial amendments tend to require nearly three more weeks of treatment for enrolled patients and see close-out delayed by nearly four weeks, the CSDD estimated.

 

Step 1: More complex trials call for clearly written protocols


Recent attention has focused on growing clinical trial complexity as a cost driver. More complex trials tend to take longer, which adds to the cost of a study. More complex protocols also tend to offer more opportunities for errors, and correcting those errors adds still more time and expense.

A 2021 CSDD study noted that protocols have been growing increasingly complex since 2009, with no end in sight to that trend. For instance, the report said that Phase II and III protocols as of 2020 generally had about 20 endpoints, with an average of 1.6 primary endpoints, up 27% since 2009. And the mean number of distinct procedures required for a typical protocol rose 44% during the same timeframe.

So, the first step to cost control is to focus on the protocol, ensuring that it is written in a way that is clear and easy to understand, both across sites and across jobs at any given site. As noted above, the growing complexity of clinical trials has been tagged by experts as one of the top reasons for protocol deviations or errors to occur. Protocols that include many individual procedures, require novel procedures that differ from standard of care and/or have a larger number of endpoints can be particularly prone to errors.

Including site staff early in protocol development can help ensure that protocols are written in a way that is clear to all employees. For larger trials that involve global sites, consideration of how language and cultural differences may impact protocol adherence is also important. And site resources and expertise also need to be considered when developing a protocol; not all sites will have access to the same equipment and supplies or have the same levels of expertise in all areas.

 

Step 2: Training to ensure predictable performance


The next step then must be to provide thorough, effective training designed to ensure that all staff fully understand how to do their jobs in line with the protocol. And this can be challenging to achieve with a traditional lecture- or slide-based training session.

A 2019 Harvard study indicated that slide-based training and education is not effective in imparting knowledge in a way that alters performance, calling the combination of slides and lecture “worse than useless.” This is important for clinical research because the entire purpose of training on a protocol is to ensure that behavior is changed for the better—“better” meaning accurately carrying out tasks as described in the protocol.

For example, failure to correctly apply inclusion and exclusion criteria during patient screening can lead to a mismatch in patients and the protocol. The end result of this is likely time and money spent collecting data that ultimately cannot be used as part of an application for new product approval.

Examples of the value of simulation can be found in other industries, such as aviation, where crashes are greatly reduced when pilots are trained via simulation. Similarly, within the healthcare industry, simulation training has been used to reduce anesthesia mishaps.

Protocol simulation training provides actual practice in critical study skills, including GCPs, pharmacokinetics, informed consent, placebo arms and investigational product handling, among others.

An important feature of simulation-based training is the ability to identify and focus on weak areas, while also predicting staff and site performance before a study even begins. Simulation training provided by Pro-ficiency, for example, includes a dashboard that highlights how staff within a site perform during training, as well as performance across multiple sites. Sponsors can quickly identify where protocol procedures have been mastered, where significant knowledge gaps exist and where corrective guidance was needed during the training process.
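The per-site, per-task pass rates behind a heat map of this kind amount to a simple aggregation. The sketch below is generic and hypothetical (the record layout and task names are invented for illustration, not Pro-ficiency's actual data model):

```python
from collections import defaultdict

# Hypothetical training records: (site, trainee, task, passed)
records = [
    ("Site A", "coord1", "informed_consent", True),
    ("Site A", "coord1", "ip_handling", False),
    ("Site A", "coord2", "informed_consent", True),
    ("Site B", "coord3", "informed_consent", False),
    ("Site B", "coord3", "ip_handling", False),
]

def site_task_pass_rates(records):
    """Aggregate pass rates by (site, task) -- the raw numbers behind a heat map."""
    totals = defaultdict(lambda: [0, 0])  # (site, task) -> [passed, attempted]
    for site, _trainee, task, passed in records:
        cell = totals[(site, task)]
        cell[0] += int(passed)
        cell[1] += 1
    return {key: passed / attempted for key, (passed, attempted) in totals.items()}

rates = site_task_pass_rates(records)
# A 0.0 rate for Site B's informed_consent would render as a "weak" cell,
# cueing the sponsor to target corrective training there before start-up.
```

A real dashboard would of course color and rank these cells, but the underlying logic is no more than a grouped pass/attempt tally like this one.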

This lets sponsors quickly and easily identify poor-performing sites and individuals and narrow down exactly where weaknesses lie. Closely targeted additional training and support can then be provided before the first patient is enrolled in a trial, which helps to avoid errors and associated costs during the study.

This type of feedback can also quickly flag tasks or procedures that most people struggle with. That knowledge can be applied to improve the way the protocol is written, if necessary, further reducing costs by circling back to the first step: a clear protocol.

Equally important is the ability to identify “heroes” at individual sites, as well as “rock star” sites that perform exceptionally well.

 

Step 3: Decision support tools to aid long-term performance


The final step is closely linked to the training. It is not uncommon for training to occur well before sites begin enrolling patients. This lag between learning protocol procedures and having to apply them can lead to less reliable recall, which can cause errors and drive up costs.

Decision support tools can be developed alongside a simulation-based training program to provide site employees with references that they can use once a clinical trial is underway. For example, tools to aid research coordinators and investigators in patient screening could list inclusion and exclusion criteria; the availability of simple lists of this sort can help to reactivate the “muscle memory” generated with the simulations included in training.

Novel or complex procedures required for a given protocol could likewise be supported with checklists and/or illustrations. Decision support tools provided in an electronic format could even include brief animations that employees could reference to remind them of the correct way of completing the procedures. For instance, if a special sort of imaging is required for a study, a support tool could list key steps and identify ways in which the protocol’s imaging procedures differ from standard practice.

Decision support tools can be provided in hard copy form, such as checklists, flow charts or flash cards, or in electronic form that can be accessed with a tablet, phone or through a site’s computer system.

Errors and protocol deviations are among the top factors that add to a clinical trial’s time and costs. As with many problems, proactively identifying and addressing potential errors is more effective and cost-efficient than merely reacting after an error occurs. By focusing first on writing a clear and easily understandable protocol and second on providing effective training and related decision support tools to site staff, sponsors can help assure that they address potential errors up front and reduce the likelihood that deviations will increase the time and cost needed to complete their clinical trials.

 

To learn more regarding how to reduce clinical trial costs, visit our website: http://proficiency1.wpenginepowered.com/

Developing Protocols For Patient-Centricity

Patient-centricity has become an increasingly important focus for clinical researchers in recent years. Creating a research experience that addresses patient-expressed needs and wants is an important part of recruiting and retaining sufficient numbers of qualified patients.

And it’s not enough to add a couple of features broadly considered to be what patients want, such as use of telehealth or home visits to reduce the number of site visits required for a study. The only sure way for sponsors to ensure their clinical trials are truly patient-centric is to design that focus into the protocol from the start, using a protocol optimization method such as Pro-ficiency’s Pro-Active Protocol solution.

In essence, patient-centricity marks a shift from the traditional disease-focused approach of conducting clinical trials to a philosophy that puts the well-being of patients enrolled in clinical studies first and treats them as “informed collaborators whose participation is crucial for the success of the research.”

Patient centricity has been at the forefront of the collective industry mind for the last several years. For instance, in 2016, the Drug Information Association (DIA) reported that a survey of pharma and biotech companies, conducted in partnership with Tufts University, showed that 65% of those firms had invested in patient-centric initiatives as part of their drug development programs.

By 2019, a SCOPE conference seemed to indicate widespread dedication to patient-centric trials, with representatives from various sponsors, CROs and other companies discussing how to incorporate more patient-centric aspects into research. 
The FDA has also highlighted the importance of patient-centricity, developing a series of four patient-focused drug development guidance documents to address various ways of collecting and using patient experience data to support clinical research:

  • Patient-Focused Drug Development: Collecting Comprehensive and Representative Input, a final guidance issued in June 2020;
  • Patient-Focused Drug Development: Methods to Identify What Is Important to Patients, a final guidance issued in February 2022;
  • Patient-Focused Drug Development: Selecting, Developing or Modifying Fit-for-Purpose Clinical Outcomes Assessments, a draft guidance issued in June 2022; and
  • Patient-Focused Drug Development: Incorporating Clinical Outcome Assessments into Endpoints for Regulatory Decision-Making, a yet-to-be-issued draft guidance.

 

Why is patient-centricity important?


One of the driving forces behind the move to more patient-centric studies is concern over patient enrollment and retention. About 80% of clinical trials struggle with enrollment, and some studies estimate that up to 30% of patients drop out before a clinical trial concludes, according to a March 2022 Antidote blog.

“When product-centric trials neglect patient needs, dropout rates are high and the integrity of the data becomes compromised,” the MD Group agreed in a 2020 blog. Conversely, patient-centric approaches to clinical trial design can result in higher patient recruitment and retention rates.

Speaking at the 2019 SCOPE conference, Beth Zaharoff, senior director for patient-focused engagement and partnership at TESARO, noted that enhanced convenience for the patient and a focus on unmet needs expressed by the target patient population can help boost enrollment and ensure that patients remain in the trial for its entirety. Additionally, patient centricity can improve trial speed and efficiency by helping to ensure the protocol is simple to administer for the patient, she said.
What constitutes a patient-centric approach to research? Schedule flexibility and use of decentralized features are factors that are often touted as means of making research participation more attractive to patients. Other options are also popular among sponsors, such as these steps recommended by the MD Group in its 2020 blog:

  • Make information accessible;
  • Involve patient associations;
  • Ensure that patients feel valued;
  • Reduce inconvenience wherever possible; and
  • Empower patients through technology.

But the answer doesn’t lie in randomly selecting features with the label “patient-centric” and attaching them to all clinical trials. Speaking at the SCOPE conference, Melanie Goodman, Pfizer director of patient recruitment programs, emphasized that patient-centricity must permeate every aspect of a clinical trial, from the protocol to the site, to be effective.

In other words, patient-centricity must be built into the protocol from inception. And that means that protocols must be optimized with an eye toward real-life patient needs and preferences. Pro-ficiency’s proprietary optimization methodology, Pro-Active Protocol, launched last month, can help achieve this goal by applying simulation to visually represent the operational flow of a clinical study. The service aims to shed light on protocol inconsistencies and gaps so sponsors and researchers can adjust and revise before a study begins.

A critical part of the service is gaining patient insight—along with opinions from study teams and regulatory agencies—about how easy or difficult it would be for target patients to participate in a study.

The only way to make sure that the protocol is written in a way that is feasible for the target patient population is to get those patients involved in the early stages of study design. This can be done via workshops that involve researchers, patient advocates and the patients themselves, for instance.

Logistical issues that don’t even register with sponsors during protocol design can have a large impact on a study’s ability to recruit and retain participants. For example, patients with long-term or serious conditions that cause fatigue may lack energy to travel at certain times. And patients whose condition causes dexterity problems may struggle to operate some devices or open certain types of drug containers.

The Pro-Active Protocol solution can “help to visualize and animate the patient journey through the study,” Beth Harper, Pro-ficiency Chief Learning Officer, explained. “This helps to set expectations for how involved the study participant will be and to help potential patients understand their commitment prior to participation; they will see how long the study will be and how involved from a procedure and time perspective.”

Having this information available makes it easier for sponsors to get feedback from patients on how likely they would be to participate in a clinical trial, including potential barriers to participation and opportunities to further simplify the study design and schedule of events.

Pro-Active Protocol can serve as an invaluable resource, whether used as a stand-alone resource or integrated with Pro-ficiency’s existing simulation-based protocol modules to boost site and study team understanding of the protocol requirements upon start-up of the clinical trial.

 

Site feasibility also supported


But ensuring that the protocol is truly designed with the target patient group in mind is just one of two critical areas that loom large for protocol optimization efforts. The second, and equally important, key area is ensuring that the protocol is clear and understandable for the research sites that will be implementing it.

“The most patient-centric thing sponsors can do is to make the protocol easy, or easier, to implement,” Harper said. “The easier it is to implement, the less chance of errors (deviations), both on the part of the sites and the patients.”

Aidan Gannon, Advarra director of client services and innovation, agreed in a 2021 blog that there is a close relationship between patient-centricity and site-centricity.

“One could argue patient centricity is not possible without site centricity, as sites deliver the patient experience in clinical trials,” he wrote. “The best intentions of any sponsor or contract research organization (CRO) make little difference if sites fail to engage with participants.”

Sponsors should listen to site experiences and suggestions for improvements, as these often have very practical implications to how well a clinical trial is conducted, he suggested. Sponsors should also offer process improvements and other support to help address any challenges that sites report. 

These types of improvements can help sites run clinical trials more smoothly, efficiently and accurately. When not struggling with study-specific challenges, sites can turn more time, attention and resources to ensuring a patient-centric experience.

For example, Advarra noted one large pharma sponsor conducted a Phase 3 breast cancer study involving a short timeline and small patient population in 16 countries. The company provided custom training, engagement options and guided visits, among other things, which reduced screen failures by 50% and protocol deviations by 21%.

The better sites understand the protocol, the more confidence they will have in explaining it to patients, Harper said. In this way, clarity for sites ties back into patient needs, by providing sites with the intimate knowledge about how a protocol will really work so that their staff can communicate that information clearly to patients.

With this growing focus on improved patient-centricity, it is critical that sponsors provide protocols that address real-world patient needs, constraints and priorities. And they must make sure that the research sites they select to carry out their studies both understand and are able to implement the protocols in the most patient-focused way possible.

But many sponsors lack the tools in-house to develop the necessary insight to implement these types of changes to their protocol development process. This is where resources like Pro-Active Protocol come in. By pairing industry expertise with effective simulation techniques, this product can help sponsors ensure deep understanding of patient needs that can be incorporated into their protocols and communicated with crystal clarity to sites. The end result: well-written protocols that account for real-world patient experiences and are easy to implement correctly.

 

Click here to learn more about our Pro-Active Protocol solution: http://proficiency1.wpenginepowered.com/pro-active-protocol/

Simplifying Training Analysis with Pro-ficiency’s New Admin Dashboard

This week, Pro-ficiency launched a major upgrade to the platform’s Admin Dashboard. The goal? Faster, easier and more intuitive access to key analytics describing both individual and site-wide performance in study protocol training.

 

Simulation-based training, like that provided by Pro-ficiency, has a proven track record for ensuring that clinical research staff fully understand how to perform the most critical aspects of a protocol. But just as important as the training modules themselves are the analytics generated as staff at different sites complete the training.

The data and analytics provided are just as much a boon for clinical trial sponsors as the learning provided to the sites. And the new redesign of Pro-ficiency’s admin dashboard provides sponsors with one-click access to all the analytics they need to ensure that their trials go forward smoothly, with fewer problems or protocol deviations.

Sponsors no longer have to open each module to find the data they want to see; the new interface lets administrators make that determination and reach the necessary information directly from the dashboard.

For instance, images of commonly used displays such as the user performance metrics and the high-level heat map that indicates where stronger versus weaker performances occur are displayed prominently. A simple click on those images takes users directly to those analytics, giving them instant, intuitive access to critical data.

Sponsors who have been using Pro-ficiency’s training packages know what key displays like the performance report and heat map look like. Having these images displayed directly on the dashboard visually cues study leaders to click on the information that they most need after a single glance. 

This upgrade is the first of many system updates aimed at transforming how study training is conducted in clinical trials today and in the future. By driving greater understanding of a study for sites, and predictive risk management performance metrics for leaders, Pro-ficiency is setting a new standard for what sponsors should expect from their training dollars and efforts.

 

 

Even simpler analytics, such as tracking how many staff have completed requisite training, can be quickly and easily accessed. For instance, sponsors can easily see what percentage of people at a given site or across multiple sites have finished training, what percentage have begun the process and what percentage have yet to begin.
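As a rough illustration of this kind of completion tracking (the status labels are hypothetical, invented for the sketch rather than taken from the actual dashboard), the percentage breakdown is a straightforward tally:

```python
def completion_breakdown(statuses):
    """Given per-staff training statuses, return the percentage in each category."""
    n = len(statuses)
    counts = {"finished": 0, "in_progress": 0, "not_started": 0}
    for status in statuses:
        counts[status] += 1
    # Convert raw counts to percentages of all staff at the site(s)
    return {k: round(100 * v / n, 1) for k, v in counts.items()}

# Hypothetical roster for one site: two finished, one mid-training, one not started
site_statuses = ["finished", "finished", "in_progress", "not_started"]
breakdown = completion_breakdown(site_statuses)
# -> {'finished': 50.0, 'in_progress': 25.0, 'not_started': 25.0}
```

Running the same tally across multiple sites' rosters gives the cross-site view described above.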

Easier use of these analytics could lead to more frequent use to check in on training progress, including instances when a protocol amendment demands new training or retraining with a new module. The more readily sponsors access and evaluate the analytics provided via the dashboard, the more agile they will be in identifying and responding to shortfalls.

Data can also be exported with a single click, allowing easier sharing both among a sponsor’s own departments or with sites and CROs.

And that is an important point with the Pro-ficiency training package. Sponsors purchase the data as much as they do the learning modules for the sites. But once a study starts, sponsors become increasingly time-stretched, making it challenging to stay on top of training problems or weak areas. The upgraded interface makes it much more convenient and less time-consuming for sponsors to stay on top of evolving training demands.

The end result is an intuitive dashboard that makes it easier for administrators to locate and view the information they most need at any given moment. This respects the value of sponsors’ time and allows them to quickly access more needed information with substantially less effort.

To schedule a walkthrough of the new Admin Dashboard, simply reach out to us here with a few dates and times that work for you, and we’ll set something up!

Embrace Innovation to Dodge Rising Trial Costs

The cost of conducting a clinical trial has been rising in recent years and that trend shows no signs of abating anytime soon. And that means that the clinical research industry needs to overcome its traditional resistance to change and adopt innovations that can save both money and time to mitigate the impact of those rising costs.
But it’s not enough to merely adopt new technologies or other innovative practices and hope that costs go down. It is important to carefully select the innovations adopted based on four tests:

  • Does it reduce cost and/or time?
  • Does it cause creative destruction, or replacement of old processes with improved ones?
  • Does it require doing business differently, in a better, more efficient way?
  • Do users love it and actually use it?

The cost of everything is rising rapidly, and the cost of developing new pharmaceuticals is no exception. Increases in areas like food, gas and airfare play a role in the increased cost of pharma R&D, but clinical trials also face unique factors that add to their expense. According to 2021 estimates by Deloitte, the average cost of development has soared from $450 million in 2013 to $2.45 billion in 2020.

In response to pressures—including those due to increasing costs—many industries will seek out innovations that improve the efficiency of their processes. The clinical research industry, however, has traditionally been resistant to innovations out of fear of disturbing the status quo.

But the rising costs facing biopharma R&D mean that continuing to do everything the same way is not going to work for long. Clinical research sites and sponsors must find ways of reducing costs wherever possible in order for pharma R&D to continue at the same brisk pace currently seen.

 

Innovation and cost control


Innovation in key areas will be key to keeping costs under control in the future. Industries must evolve to survive; it’s neither wise nor safe to continue doing things the same way indefinitely. And the clinical research industry is capable of pivoting quickly to innovate when needed. For instance, COVID-driven adoption of fully or partially decentralized clinical trials proves the benefits of judicious innovation. 
In fact, the Tufts Center for the Study of Drug Development (CSDD) concluded earlier this year that decentralized clinical trials provide net financial benefits, with ROI of about $10 million for Phase 2 studies and $39 million for Phase 3 trials. CSDD analyzed real financial data from Medable-enabled clinical trials. Among the key findings were that decentralized research led to:

  • Shorter development cycle times;
  • Lower trial screen failure rates; and
  • Fewer protocol amendments.

Reductions in cycle times have the greatest impact on ROI, the CSDD study said. And decentralized or hybrid trials look likely to remain in place for the long term. These factors indicate that adoption of innovations enabling decentralized trials meets the four tests of a useful innovation.

Innovation most often takes the form of new technology. Use of artificial intelligence (AI) and machine learning (ML), for instance, is another area of innovation that shows promise in reducing the cost of clinical trials. An article in Pharmanews Intelligence earlier this year suggested that greater use of AI and ML could cut some costs associated with drug research by as much as $26 billion per year.

And technology allowing for remote or partially remote trials—such as wearables that automatically track key patient metrics and use of telehealth—is featured frequently in trial decentralization efforts. But industry adoption of new technology, as well as new methods of operation, remains somewhat slow, especially compared to other industries.

A 2018 CSDD study showed that although 80% of respondents who have invested in technology report time savings for site initiation through activation, most indicate that their tools could be improved.

 

Protocol deviations and the role of training


While many innovations in clinical trial operations aim to reduce costs by keeping cycle times shorter, one of the primary reasons clinical trials take longer—and cost more—than expected is simple error. Protocol deviations are common, consistently topping the list of BIMO inspection observations over the last several years, according to regular FDA reports.

And some of the most promising innovations also seem to come with the risk of more frequent errors and the associated costs for dealing with them. Risks include protocol deviations, queries, reconciliations, SUSARs and data rejections. Remediation of protocol deviations can make up 27% of costs.

Decentralization, for instance, can lead to larger and more complex trials. In early 2022, CSDD noted that protocols have been growing increasingly complex since 2009, with no end in sight to that trend. For instance, the CSDD report said that Phase II and III protocols as of 2020 generally had about 20 endpoints, with an average of 1.6 primary endpoints, up 27% since 2009. And the mean number of distinct procedures required for a typical protocol rose 44% during the same timeframe.

Decentralized clinical trials (DCTs), adaptive study designs and use of synthetic control arms are among the shifts in recent years that contribute to protocol complexity.

In addition, most clinical trials will use multiple vendors and technologies. This makes it ever-more-complex to communicate information about the study, as well as managing all the roles and rosters at both sites and vendors.

The solution to these problems could well lie with more innovative approaches to training. Traditionally, sponsors have provided protocol training via massive slide decks, often accompanied by an in-person or web-based lecture. But this is not the best approach for ensuring that clinical research staff perform protocol-mandated tasks and procedures correctly.

In fact, a 2019 Harvard study indicated that slide-based training and education is not effective in imparting knowledge in a way that alters performance. And the entire purpose of protocol training is supposed to be ensuring that staff perform optimally in carrying out tasks exactly as described in the protocol. 

Where slides and lectures fall down, however, simulation-based training can shine. Protocol simulation training provides actual practice in critical study skills, ranging from GCPs through informed consent, proper handling of the investigational product, patient screening and more. 
For example, the inclusion requirements in a protocol for the study of a major depressive disorder (MDD) treatment might specify that patients meet five specific criteria, such as:

  • The patient must have experienced a previous MDD episode.
  • That episode must have lasted more than 10 weeks.
  • The patient must currently be taking two antidepressant medications for a current MDD episode.
  • The patient must be experiencing treatment failure with those medications, defined as reduction in symptoms of less than 40%, according to the ATRQ scale.
  • The patient must have been in treatment for at least eight weeks.

Standard lecture-and-slide training might quiz research staff on these criteria with a multiple-choice question asking them to choose the correct criteria from among various options, most of which include at least one incorrect item. And the staff may very well pass this test immediately after sitting through the lecture or slide presentation.

But whether they can correctly apply the inclusion criteria in real-life situations is neither trained nor tested in this approach.

Conversely, in a simulation approach, learners might be presented with a variety of patient charts. Some of the patients meet all the inclusion criteria, while others meet some or none. Research staff can then act out the actual screening process, which trains them to correctly identify patients who do or do not meet the criteria while providing practice that more closely mirrors real-life screening.
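The judgment such a simulation drills can be expressed as a simple check of a chart against the five hypothetical MDD criteria listed above. The sketch below is purely illustrative; the field names are invented, and real screening involves clinical judgment that no checklist captures:

```python
def meets_inclusion_criteria(patient):
    """Check a (hypothetical) patient chart against the five example MDD criteria."""
    return (
        patient["prior_mdd_episode"]                 # had a previous MDD episode
        and patient["prior_episode_weeks"] > 10      # episode lasted > 10 weeks
        and patient["current_antidepressants"] == 2  # on two antidepressants now
        and patient["symptom_reduction_pct"] < 40    # ATRQ-defined treatment failure
        and patient["weeks_in_treatment"] >= 8       # at least eight weeks treated
    )

# A chart that satisfies every criterion
eligible_chart = {
    "prior_mdd_episode": True,
    "prior_episode_weeks": 12,
    "current_antidepressants": 2,
    "symptom_reduction_pct": 25,
    "weeks_in_treatment": 9,
}
# meets_inclusion_criteria(eligible_chart) -> True
```

In a simulation, learners effectively internalize this decision logic by working through charts that fail on exactly one criterion, which is where multiple-choice quizzes tend to leave gaps.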

Broad application of simulation training to healthcare providers in Ethiopia, in a CDC-supported program, showed significant improvement in performance compared to more traditional training methods. After simulation training, 97% of healthcare providers successfully executed a simulated patient encounter, compared to a previous pass rate of 69%.

And the ability to track performance during simulation training can help sponsors predict site performance and intervene to address weak areas before a trial even begins. For example, sponsors can track how accurately staff at different sites apply inclusion criteria, a critical part of enrollment.

In addition to reducing errors, simulation can reduce training time by 50%, further contributing to lower costs. 

In these ways, simulation-based training passes the first three tests of valuable innovation with ease. Additionally, sponsors that have applied this type of training consistently respond positively and continue to use simulation, meaning that it also passes the fourth test while emphasizing its potential for creative destruction. 

The effectiveness of simulation-based training has been borne out in other industries. Aviation crashes, for instance, are greatly reduced when pilots are trained via simulation. And in the healthcare arena, simulation training reversed an industry-threatening volume of anesthesia mishaps. 

While the clinical research industry traditionally has been slower than average to adopt innovations, that needs to change. Judicious adoption of innovations can help keep clinical trial costs under control. Of particular value is adoption of new training approaches, like simulation, that can prevent time-wasting and cost-increasing errors, as well as reducing the overall cost of training from the start of the process.
Visit our Pro-Active Protocol webpage to learn about our approach to protocol optimization: http://proficiency1.wpenginepowered.com/pro-active-protocol/

Decentralized Clinical Trials: Balancing Patient and Site Needs

Decentralized clinical trials (DCTs), including hybrid studies, have become the latest trend in the pursuit of more patient-centric trials. Spurred largely by the success of decentralized approaches to many trial activities during COVID restrictions, including e-consenting and replacing site visits with telehealth, sponsors and sites alike have been touting decentralization and flexible visit options as evidence of their efforts to make clinical research as patient-friendly as possible.

But the full value of DCT methods and technologies may be dependent upon specific patient characteristics and individual site capabilities. Rather than assuming that use of DCT methods is beneficial to all patients, for instance, sponsors and research sites might do better to look closely at their specific patient cohorts to determine what individuals in those groups truly need.

Recent research by the Tufts Center for the Study of Drug Development (CSDD) and Medable has shown that DCTs offer strong ROIs to clinical trial sponsors via shorter development times, reduced screen failure rates and fewer protocol amendments. For instance, the results indicated that a $3.126 million investment could see returns of $17.3 million to $77.8 million.
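As a back-of-envelope check on those figures, the reported range of returns works out to roughly 5.5 to 25 times the investment:

```python
# Reported CSDD/Medable figures: a $3.126M DCT investment versus
# returns of $17.3M to $77.8M.
investment = 3.126e6
returns = (17.3e6, 77.8e6)

for r in returns:
    print(f"${r / 1e6:.1f}M return ≈ {r / investment:.1f}x the investment")
```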

This alone could provide a strong incentive for sponsors to design trials that include at least some DCT components, such as remote visits, telehealth, mobile apps, wearable devices, and home and local assessments. Additionally, adding decentralized technologies to protocol designs could be seen as an obvious way to tick the “patient centricity” box that is viewed as critical to successful enrollment and retention.

Florence Healthcare said in a February 2022 blog that, as of 2021, 95% of research sites use at least one form of decentralized technology. Meanwhile, professional conferences continue to feature sessions about decentralized, hybrid and virtual trials.

And patient reception has been positive, with about 92% of patients indicating they want clinical trials to include some sort of remote feature using innovative technology.

Similarly, results from a survey published in the July 2022 JAMA Network Open indicated that more than 80% of cancer patients and survivors would be willing to use most remote options, such as oral medications delivered to the home, e-consenting and wearable technology.

 

Listen to patients to achieve patient-centricity


But it’s important to remember that one size does not fit all in terms of patients. In any clinical trial, every individual patient will be unique. Some may not want a home healthcare provider in their house at any time for any reason. A patient’s distance from the research site and comfort level with the technology used in fully or partially decentralized trials are other key factors in whether that patient prefers remote or on-site visits.

In many cases, sponsors may be adopting decentralized features to check a box for patient centricity or increased population diversity, both of which have been touted as benefits of DCTs and hybrid studies. Because this is often done based on assumption, without consideration of what specific patients say they really want, it’s unclear whether patient-centricity is truly being served.

A July 2022 ObvioHealth article, for instance, listed three reasons why patients may prefer virtual clinical trials. Chief among these was safety, especially with COVID still a lingering threat that many patients may seek to avoid. The second reason was accessibility, especially for patients who live at some distance from clinical sites or otherwise have difficulty getting to a site; this has widely been cited as a major factor driving perceived patient preferences for DCTs. The third reason was flexibility: with reduced or nonexistent site visits, clinical trial participation creates less disruption in patients’ daily lives.

These benefits are thought to boost both patient numbers and diversity among patient populations by helping clinical trial sponsors reach a broader swath of people.

But there is risk in assuming that all patients are going to have the same preferences or face the same pressures. One factor that should not be overlooked is the degree to which many patients value relationships. In fact, the relationship with a trusted provider or a coordinator who led them through a consenting procedure is often the reason why some patients enroll in a clinical trial at all. Shifting from that model to use of varied home health providers and novel technology could turn some patients off from clinical trial participation. 

The use of technologies can also raise concerns. Florence Healthcare noted in its blog that, in addition to losing a sense of connection with physicians, a top patient concern with DCTs revolves around technology learning curves.

To aid with technology use, some sites have added a new position: a patient technology navigator. While it seems a good idea to have a tech specialist available who is well-versed in both teaching and trouble-shooting the technology used in a clinical trial, it remains uncertain how valuable patients find such individuals. There is a tendency among some patients to prefer to have just one or two trusted contacts at a clinical trial for everything.

 

Sites left out of discussions?


And just as every patient is unique, the same can be said of every research site. Offering patients more choices in how they participate in a clinical trial is generally seen as a patient-centric move. And flexibility can be a plus for many patients. A degree of flexibility can lie within many sites’ capabilities. But it is also possible for a study to be too flexible in what it offers. Too many choices can lead to dissatisfaction, especially if there is variability among site capabilities and patient characteristics at different locations in a multi-site trial.

Sites can have significant variation in staffing and available technology, among other factors that can influence whether a DCT is a good option. Additionally, service providers and their preferences and capabilities must be considered.

As one example, some sites, particularly larger ones, may already have their own home health operations, while others would have to hire third-party providers. Relying on outside providers would reduce the opportunity for patients to build close, trusting relationships with coordinators, investigators, nurses and other key study staff.

And if particular technologies are to be used to provide patients some choice between at-home and on-site visits, every site participating in that study needs to have—and be proficient in using—those specific technologies and associated systems.

And that means that training in all of those technologies will be required for site employees, which can raise the question of where that training will come from. If a particular vendor provides the technology, that company may provide training, but additional education may be necessary to ensure that research staff understand how to use the technologies and associated systems as specifically needed for an individual protocol. 

That additional training could place additional burdens on frequently overworked staff. If a site must incorporate some decentralized features into a clinical trial, then the sponsor needs to have a plan for teaching both research staff and patients how that will happen. Staff need to be trained in the systems that will be used to communicate with patients and to gather and report data.

Patients, for their parts, must be trained in the use of trial-specific technology, such as e-diaries and wearable devices. While it’s often assumed that study coordinators or other research staff will provide this type of training for patients, that may not be practical. The staffers themselves may lack sufficient proficiency to teach a new user of a technology effectively.

And having individualized training for every piece of technology or new system deployed poses serious challenges. It can be difficult to validate whether staff and/or patients fully understand how to use the technology as needed for the clinical trial, for instance, especially if multiple providers are conducting the training.

A better option might be an all-in-one remote training solution that is designed specifically for conduct of a DCT or hybrid study. And a remote training approach with capacity to provide instant feedback to users can greatly streamline the training experience and thus reduce the time burden on site staff. 

Additionally, troubleshooting technology and data and communication systems for patients is another sure way to add to site workers’ burdens. Even at larger research centers where specific tech support workers may be hired, patients are more likely to call the study coordinator or another research staffer with whom they have built a trusting relationship.

In the spirit of being patient-centric, some of the conversations around full or partial decentralization of clinical trials may have left sites out of the discussion. Sponsors and others making decisions about study designs may lack full understanding about how much goes into preparing a given site to be ready for a DCT or hybrid study. The effort needed both in preparation and during the study can add to staff burden.

To avoid overburdening staff and causing burnout, it’s important to balance what is good for the patient—or at least what sounds good—with what is feasible at the site level. One way of doing this could be for sponsors to map out a day in the life of a site. This would require them to gain deeper knowledge about what the sites they hire do, which employees are involved, how much time critical and non-critical tasks take and how much time is spent on research- or patient-focused activities versus administrative tasks.

One thing that could be of great help to research sites is a deep dive into how the various systems employees must use interact with each other. Looking at what the staff at the site touches and what the patient touches could be a good guide to which systems would benefit from this type of scrutiny. Use of remote training that tracks learner performance and feeds that information back to sponsors can aid these efforts, showing where skill gaps may lie and helping to highlight which technologies used within a DCT will truly add value, rather than additional burdens, to a study. 

And most importantly, it would help sponsors see whether changes made in the name of patient centricity create more or less of a burden for those sites, as well as how sites may differ in their capabilities.

In short, sponsors should avoid assuming that DCT or hybrid trials are automatically going to meet all patient preferences. At the same time, study designs need to take into consideration the realistic capabilities of the specific sites that will be conducting the research and interacting with the patients.

There is simply too much complexity across too many possible permutations to be able to state with confidence that DCTs are better options for any patients, sites or types of clinical trials.

To learn more about a remote training approach ideal for both sites and patients, please visit our simulation webpage: http://proficiency1.wpenginepowered.com/simulation-training/

Selecting a Study-Specific Protocol Training Strategy

“The right tool for the right job” is a saying that applies in many situations across many industries. When it comes to clinical protocol training, research sites and sponsors need to determine what their ultimate goals are to correctly identify the best training tools for the job.

 

When it comes to training, those goals will revolve around the competency of the staff being trained and can be placed into three broad categories:

  • Ticking a box to meet minimum standards under applicable regulations;
  • Ensuring staff’s ability to regurgitate information from the training in the short term; and
  • Affecting staff’s long-term decision-making behavior.

 

In the first instance, a read-and-sign procedure, wherein employees are given some form of written instruction to review and a form to sign stating that they have read and understood it, might be the most appropriate training tool. Simple presentation of written information, such as an SOP or slide deck, may be sufficient in cases where the training covers well-known procedures for experienced staff; in such cases, the training may simply be mandated to meet regulatory requirements.

For the second, however, this would be insufficient; an exam following some form of training—which could include reading materials, attending a lecture with slides or participating in a webinar, among other options—would be a typical training tool applied. This is useful when it’s necessary to confirm a level of understanding of the training material beyond ticking off the “training” item on the regulatory checklist. The ability to pass a test after completing the training can serve as an indicator that the employee understands the material presented.

But test-taking doesn’t allow accurate prediction of performance, particularly in complex tasks like talking to patients or identifying inclusion/exclusion criteria from medical charts, tasks that play a key role in driving clinical trial processes.  And for training that can be shown to ensure better decision-making by researchers conducting the clinical trial, a training tool that can measure performance on key tasks while it is being applied might be a better choice. Simulation-based training can fill this bill.

There is a saying: “There’s no substitute for on-the-job training.” But this is not 100% accurate. Simulation-based training can, in fact, substitute for on-the-job training, and without risk to patients. It’s well-established that learning by doing is the most effective way of absorbing knowledge in a manner that will ensure appropriate changes in behavior that will yield stronger performance. Pilots, for example, spend many hours in flight simulators before they are allowed to handle planes with passengers on board.

And this simulated learning-by-doing doesn’t require an exam to prove that the information was absorbed. If the staff completes the simulated activities successfully, that proves competency. Performance during simulations can accurately predict job performance, as well as highlight any problem areas. For instance, if 70% of a cohort failed in a first attempt at a simulated procedure, the organization would know that was a problem area in the protocol that needed to be addressed.
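That kind of cohort-level signal is straightforward to derive from simulation results. A hypothetical sketch, with illustrative task names, made-up first-attempt outcomes and an assumed 30% failure threshold:

```python
# Hypothetical analytics sketch: flag simulated tasks whose first-attempt
# failure rate suggests a trouble spot in the protocol. Task names, outcome
# data and the 30% threshold are illustrative only.
first_attempt_results = {
    "informed_consent":  [True, True, False, True, True, True, True, True, True, True],
    "ip_handling":       [True, False, False, True, False, False, True, False, False, False],
    "patient_screening": [True, True, True, False, True, True, True, True, False, True],
}

def flag_problem_areas(results: dict, max_failure_rate: float = 0.30) -> list:
    """Return (task, failure_rate) pairs whose failure rate exceeds the threshold."""
    flagged = []
    for task, outcomes in results.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if failure_rate > max_failure_rate:
            flagged.append((task, failure_rate))
    return flagged

print(flag_problem_areas(first_attempt_results))  # → [('ip_handling', 0.7)]
```

Here the 70% first-attempt failure rate on the investigational-product task would stand out immediately, mirroring the cohort example above.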

So, organizations need to begin by identifying the level of competency they need and evaluating how crucial that competency is to the overall success of the clinical trial, including the decisions that must be made continuously throughout study conduct to sustain the necessary processes. 

Another crucial question concerns the skills or concepts that must be taught. Organizations need to consider whether these skills can be modeled or whether they are completely abstract. For instance, it would be impossible to create a simulation of something like company culture, whereas the specific steps of a medical procedure in a clinical trial could easily be simulated. With all of this information in place, it’s easy enough for an organization to identify and apply the appropriate tool to ensure the ideal level of training.

Reducing Study Team Burden With Shorter Training

Shorter training is an ever-present item near the top of nearly all clinical research site wish lists. The time and energy needed for training is among the many drains that researchers complain about, and the one that clinical trial sponsors have the greatest control over. 

The growing complexity of clinical trials and the accompanying increase in researcher responsibilities has yielded a series of pain points for research sites. For instance, in August 2020, CenterWatch reported that sites were feeling overwhelmed by growing administrative responsibilities, among which training is often included. 

The good news is that training is an area in which sponsors could work to limit the time burdens they place on sites. And they can do this simply by rethinking the traditional model of protocol training, looking instead at a more modern approach that embraces current learning theory, engages researchers and maintains a tight focus on the needs of each specific protocol.

When it comes to protocol training, the biggest downside of the traditional approach is that sponsors try to dump a massive amount of information—often in slide decks running to 300 slides—on researchers, rather than engaging them and clarifying what is most important for a particular study. With too much information to digest, researchers cannot sort the most critical points from the less important ones and may struggle to remember everything they need to. This ultimately can lead to protocol deviations that cause delays and increase the costs of a study.

Another major weakness of this approach is that it does not engage investigators and other research staff and it does not prioritize the information that is most important to successful conduct of the clinical trial.

The problem is that sponsors developing training often lack understanding of what information is critical, leading to failure to prioritize the information that will have the highest impact. This could include areas where deviations are more likely or where the consequences to the study or patient care would be the most serious in the event of a deviation.

So, sponsors need to consider whether the training they provide is focused enough on the specifics of the protocol. An approach that emphasizes protocol-specific skills and knowledge while identifying areas of poorer performance can ensure that those that need extra help get it, while not forcing more proficient staff to sit through unnecessary training. And that would mean that no researcher needs to spend excessive amounts of time on training.

The simulation format is really the solution to all of these problems. Simulation-based training like that offered by Pro-ficiency helps put researchers in hypothetical situations that mirror those they will face during the study itself. They are then able to practice making critical decisions that comply with the protocol. Any mistakes made are corrected with immediate feedback, allowing learners to improve their performance in a consequence-free environment.

And, most importantly to researchers undergoing the training, this approach allows them to progress at their own speed. If some individuals are very proficient in all of the protocol procedures, for instance, they may need to devote a relatively short amount of time—as little as 20 minutes, in some cases—to training. And even for those who need more time to fully master some of the procedures, the time needed for simulation training that is tightly protocol-focused is much less than the typical two to three hours required for lecture-based training.

And future developments in Pro-ficiency’s training will further enhance these capabilities. For instance, adding expertise in game theory can make the training even more engaging by allowing a “choose your own adventure” approach to training.

Pro-ficiency’s simulation-based training also provides analytics that can flag areas that multiple researchers may have difficulty with. In such cases, the sponsor can take action, such as providing additional targeted learning or focus on that issue during monitoring. Further, if particular individuals perform exceptionally well in some areas, those employees can be used as “site champions” to help other staff. 

The end result: a clearer understanding of the protocol, a greatly reduced chance of deviations and happy sites that don’t feel training is robbing them of valuable time.

3 Barriers Preventing Clinical Trial Innovation

The move to more decentralized trials in response to restrictions imposed by the COVID-19 pandemic illustrated that the clinical trial industry is well able to take new technology on board and adapt it to the conduct of studies. But outside that example, the industry has been and largely remains resistant to change, including the adoption of new technologies, even when those technologies offer clear time or cost benefits.

Although adoption of new technology and innovation has risen rapidly in the world at large over the last decade, organizations that lag in adopting new technology and new ways of doing things are more common than not in the clinical research industry. A recent survey by the Tufts Center for the Study of Drug Development (CSDD) indicated that innovations in the clinical trial arena take nearly six years to adopt. Planning and initiation of an innovation can take almost 14 months, with evaluation of viability and impact taking nearly 16 months. Another 16 months may be spent deciding whether to move forward with an innovation; it can then take 23 months to implement it.
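Summing the stage durations reported in the survey confirms the headline figure: roughly 69 months, or just under six years.

```python
# Stage durations from the CSDD survey, in months.
stages_months = {"plan/initiate": 14, "evaluate": 16, "decide": 16, "implement": 23}
total = sum(stages_months.values())
print(total, round(total / 12, 2))  # 69 5.75
```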

“The timeframe to go through each stage of the process varies by company size, but overall, respondents report the later stages of the process—deciding to adopt the change and implementing it—are the most difficult,” the report said.

The CSDD survey results, published in the center’s July/August 2022 Impact Report, indicated that while 67.8% of respondents rated their companies’ abilities to adopt innovations as “excellent” or “good,” 60% also report that their companies are slower to adopt those innovations compared to similar organizations.

Many other professions demonstrate a much higher degree of adaptability and willingness to adopt new technologies and new approaches to getting things done. For example, schools from elementary through college levels had to pivot quickly to remote learning during the pandemic, and most did so with aplomb.

And the clinical research industry likewise demonstrated during the pandemic that it is capable of making swift, agile changes to cope with unusual circumstances. With COVID driving efforts to socially distance and avoid people gathering indoors, research sites, sponsors and CROs quickly folded in use of various technologies—telehealth, passive data gathering via wearables, remote health visits and more—to allow a decentralized approach to clinical trials. And the approaches taken to address the need to keep patients out of research sites during the pandemic have been successfully applied to provide more flexibility to how patients experience clinical research.

 

Regulatory uncertainty worries researchers


But where does this resistance come from and what are its impacts? Three deeply ingrained institutional barriers appear to be at work.

The first of these is grounded in regulatory obligations. The clinical research industry is one of the most highly regulated in the world, and due to the myriad regulations that clinical researchers must follow, there is a very real fear that any changes—no matter how useful or well-intended—could lead to problems with the FDA.

The CSDD survey listed lack of regulatory clarity as a significant barrier to adoption of innovations. The regulatory and legal departments in research organizations may be hesitant to adopt new technology or approaches due to vagueness in current laws and regulations, for instance. 

At the same time, the FDA and other global regulators showed themselves flexible in taking on board the novel technologies and methods that enabled the pandemic-era shift to decentralization; the FDA even issued guidances explaining how these new approaches fit into the existing regulatory scheme. This experience could ultimately drive greater acceptance of changes in the technology used in clinical research.

A permeating attitude of “if it ain’t broke, don’t fix it” underlies this concern. Due to regulatory and risk aversion, the industry as a whole is unmotivated to make any changes that are not mandated. Even if a new technology or approach could make part of an operation cheaper, any uncertainty or risk or even lack of proven effectiveness can be enough to prevent a research site or other organization from even considering adoption of new technologies or other changes.

And this can be rather short-sighted. For example, non-compliance with the protocol is consistently the most common observation during FDA inspections. And whether a protocol deviation is noted during an inspection or not, it can still lead to delays and added costs, such as those associated with retraining erring staff—and sometimes all staff.

This is an acknowledged problem that occurs with relative frequency. In a paper published in April of this year, for instance, CSDD reported results of a working group study, which showed that Phase II and III protocols have a mean total of 75 and 119 protocol deviations, respectively. These figures potentially involve nearly a third of all patients enrolled in a clinical trial.

And they should indicate to research sites, sponsors and CROs that something is, in fact, broken in the current system, likely linked to the way in which research staff receive protocol training. That is, in itself, a strong indicator that the industry needs to do something different from what has been done in the past.

For example, in terms of improving researchers’ performance in protocol compliance, training is an obvious place to look. Decades of massive slide decks and lecture-based training have not yielded greater protocol compliance. Simulation-based training, which has proven more effective at teaching concrete skills in the medical, airline and other industries, has recently been gaining traction in the clinical research sphere.

 

Research staff burdens concern sites


A second key barrier to technology and innovation adoption lies in growing concerns about the workloads imposed on research staff. Many employees are staggering under existing workloads; with trials growing more complex and new approaches like decentralized and hybrid trials emerging, tasks are frequently being added to many research roles.

And technology burnout, in particular, is a common complaint among research sites, as well as CROs and sponsors. Many feel there is simply an overabundance of technology. This has made sites and sponsors reluctant to add yet another log-in and password to staff’s plates. Going back to the training example, sending a slide deck or link to a webinar via email means one less system for sites to log into, and it can appear, on the surface, to reduce staff burden.

But a change in this area could ultimately reduce site staff burden if approached properly. In the area of training, for instance, changes from the familiar delivery of slide decks, with or without an accompanying lecture, might be viewed skeptically due to fear that this would add more time to already-long workdays, without accompanying payment that might take the sting out of extra work.

But simulation-based training that generates immediate information about staff performance in lifelike scenarios could allow staff to fly through parts of the training in which they are already proficient and allow them to focus more closely on areas specific to a given protocol where they may require more guidance. This could include things like a procedure that is different from standard of care or a particular way of gathering data that may be unfamiliar.

And while it’s important to consider the impact of any change on researchers’ time, it’s equally critical to ensure that both individual employees and the clinical trial as a whole actually benefit from a chosen technology or method. Technology tools, such as simulation-based learning, are not a gimmick. In fact, they should be developed to support learners across the full range of proficiency and improve staff performance by ensuring that all employees fully understand the requirements of a protocol.

 

Uncertain returns on investment hinder change


And the third main barrier to needed changes in the clinical research industry is simply institutional uncertainty about the potential benefits and impact of adoption of new technology, new approaches and new training, among other changes. The CSDD survey also pointed to the challenge of return-on-investment assessments as a barrier to innovation in the industry.

Changes must both have some sort of defined and measurable reward and be fairly certain of not causing negative consequences. Changes in one aspect of clinical research can impact many other aspects. And that means changes can be costly from an investment and potential loss standpoint.

This ties back into the fear of something going wrong, such as a regulatory or legal consequence, or additional cost or time required. If something has been done the same way and there were no serious problems—e.g., there was no regulatory fallout, no patients were harmed, no one got fired—organizations may be loath to risk problems by taking a chance with a new technology or approach.

And the complex interplay among research sites, sponsors, CROs, IRBs and often multiple regulatory bodies means there are a lot of moving parts to evaluate. An effective risk-benefit analysis may, in some cases, be deemed to take too much time, money and staff resources to complete properly. In such cases, the risk of a change causing regulatory problems—despite recent FDA signals of flexibility in areas of new technology use—will often tip the vote in favor of the status quo.

A closely tied concern for researchers is change management; many sites lack efficient, established procedures for evaluating and implementing changes. But that means that, in addition to allowing greater time and cost efficiency, more patient-focused approaches to research and reduced protocol deviations, the effort towards incorporating appropriate new technology or operational approaches could also lead to improved change management processes. This, in turn, would make it safer and simpler for organizations to adopt new innovations in the future.

In a nutshell, it’s easier and less scary to keep doing things the same way. But many burgeoning technologies can be folded into the clinical trial process in ways that give back more than they take. Shifting to simulation-based training in place of slide decks, for instance, can ultimately reduce the amount of time spent on training activities while improving a long-standing weak area—protocol compliance.

Change can improve many situations, and when new technology or new approaches show clear rewards, they should be considered.

Pro-ficiency’s Commitment to Customer Service

Good customer service starts with a great product. Without that foundation—a product that is reliable and meets client needs precisely—it doesn’t matter how well and quickly customer service can address problems. In other words, a product with a bug count of zero and the capacity for customization to meet each client’s individual needs are the two blocks on which ideal customer service is built.

Reliability is, in fact, the primary customer expectation for any product. From the customer’s perspective, the most important aspect of any tool, including training programs, is that it is well-constructed and consistently works as intended, with minimal—if any—glitches. That means that the first goal for the design and dissemination of any product should be a bug count as near to zero as possible.

With reliability in place as the solid foundation, the second critical building block is the ability to customize the product, fine-tuning it to each client’s specific needs. If a training product is designed to allow easy changes and modifications in response to any specific protocol, customers are more likely to be happy with the end product and to consider excellent customer service to have been delivered.

Pro-ficiency’s simulation-based training product, for instance, can not only be customized to a given protocol, but can also take into account any potential downstream modifications that might be needed. The platform allows for such changes to be handled seamlessly and quickly distributed to investigators at all sites. This makes it easy for the customer to handle any additional training associated with protocol modifications without delays in clinical research operations.

For our new customers, this begins with a careful review of their protocol, including feedback on any areas that might be difficult to implement, such as internal inconsistencies. The experts at Pro-ficiency, with many protocol reviews under their belts, have developed a rigorous system of protocol review that helps design not only a targeted and effective training approach, but also helps to bulletproof each protocol as much as possible.

This review is important because it avoids “putting wheels on problems”—building training on top of weak or challenging areas of a protocol that risk generating deviations. This careful review adds value to the customer experience by identifying areas where improvements may speed the training process and help ensure successful patient enrollment.

While the ideal customer experience begins with a product so reliable that customers rarely need to call for assistance, in the rare instances where aid is needed, the time required to render it is critical. Responses to customer problems must be delivered on a timely basis, even for a product like Pro-ficiency’s that is used by investigators around the world. Monitoring the product defects that spur customer service calls allows the company to anticipate areas needing improvement and prevent future issues.

Tracking turnaround time for customer service responses on a monthly or other regular basis is also useful, as is tracking how often calls must be escalated to technical support for problems that require a software engineer. In this way, top-quality customer service can be assured from initial purchase throughout the training’s use life cycle.

Reducing Study Team Burden with Targeted Training

An approach to training that carefully targets site and individual weak areas, along with critical protocol procedures, could not only reduce deviations, but can play a key role in reducing overall burdens on research site personnel.

Booming administrative tasks, customized technology systems and redundant certification and training requirements are among the primary factors that clinical research staff say eat up time better spent with patients and add excessive—often uncompensated—burden to study teams. Some of these developments have been attributed to growing protocol complexity and trial sizes, problems that are likely here to stay. But others lie within sponsors’ capacity to change.

In August 2020, CenterWatch reported that sites were feeling overwhelmed by growing administrative responsibilities. CenterWatch pointed to a survey indicating that about 80% of sites found that administrative responsibilities—such as managing study documents and regulatory file preparation—had increased “significantly” or “somewhat” over the preceding two years. Increased diversity in individual sponsors’ requirements, increased site responsibilities and increased protocol complexity topped the list of items most burdensome to sites.

Any given line item on a research professional’s task list in a protocol could include multiple steps, such as meetings, patient visits, labs and procedures, Beth Harper, chief learning officer at Pro-ficiency, noted. For instance, a protocol might give patients two options—one on-site and one virtual—for some visits. That is convenient for patients, but it means the site has to develop two pathways, along with the requisite training for both site staff and patients.

The number of hours a site is putting into preparing for a study is the amount of time they are not working on that study, Jenna Rouse, chief experience officer at Pro-ficiency, said. Sponsors need to consider whether they would ask their own teams to do the same.

“The time taken to deal with these burdens is time that cannot be patient-facing or study-facing,” Scott Ballenger agreed. “I think it also weighs on the relationship with the sponsor and CRO, as well as contributing to site staff burnout.” 

This is not a new problem. In a 2016 white paper, WCG pointed to the use of multiple systems as a major source of overwork for clinical study teams. In the decade-plus preceding the white paper, biopharma companies began using individual clinical trial, regulatory, drug safety and learning management systems from a variety of vendors. These, in turn, were customized to be more client-centric, which made it difficult for vendors to integrate their own products even within a single sponsor, much less across multiple companies and studies.

Specialized technology can also pose a burden. Ballenger noted that many studies come with a unique suite of software applications, each with its own learning curve. While some may be intuitive to use, others are frustrating and time-consuming.

Not only are all these processes cumbersome and a drain on site professionals’ time, but they also increase the possibility of errors or protocol deviations, which can add to the time and cost needed to complete a clinical trial. In its January/February 2022 Impact Report, the Tufts Center for the Study of Drug Development (CSDD) reported that the number of protocol deviations and substantial amendments seen in clinical trials has increased in recent years.

“Deviations are a sign that something is not in accord with the protocol,” Rouse said. “If you train in a way that highlights areas where staff may not understand how their job is to be done in a specific trial, you can take action before deviations start costing sponsors money and research sites time for retraining.”

Targeted training holds answers

And training is one area in which sponsors can exercise a great deal of control over the time burdens they place on sites. Regulations require that sponsors provide appropriate training, so the safest position for pharma companies is to provide an immense amount of information on everything possible, Jeff Kingsley, CEO of Centricity Research, said. And when the information is both voluminous and irrelevant to a person’s job, staff are likely to tune it out.

GCP requirements, for instance, haven’t changed in decades. The content of the regulation remains the same, and has never included a mandated length of time for GCP certification to last, he said. But in order to check off all possible boxes, most sponsors mandate GCP certification—usually their own—for every individual trial.

This type of ‘spray and pray’ training—in which all staff get the same exhaustive training—doesn’t make sense and wastes clinical staff’s valuable time. Additionally, when research staff are inundated with masses of content irrelevant to their jobs, the information overload makes it more likely that they will forget content that is relevant.

But tailoring training to both the protocol and the people who need it will not only better assure a sponsor that each employee at a site is getting exactly the training they need, but also save research staff the time burden of sitting through training they don’t need.

And the right training approach—one that helps identify areas of weakness or poor performance before a trial begins—will ensure that those who need extra help get it, without forcing more proficient staff to sit through unnecessary training. This approach can also reduce protocol deviations, avoiding costly, time-consuming retraining after a study has begun.

Training should reflect the need to change or develop specific behaviors, Rouse explained. If a procedure or patient visit is to be done in a way that is different from the standard, staff will have to change familiar and ingrained behaviors. For example, when sites are trained in enrollment procedures for a given trial, they are expected to implement those procedures over time, sometimes with a considerable amount of lag time between receiving the training and enrolling patients. Training that lacks clear illustrations of important things like inclusion/exclusion criteria, visit timelines and schedules will be harder for staff to retain, Rouse said.

Sponsors need to consider whether the training they provide is focused tightly enough on the specifics of the protocol. While some companies recognize that it’s not necessary to include standard-of-care procedures, common procedures or well-known clinical practices, others take a cover-all-bases approach and include everything remotely relevant to the daily conduct of a clinical trial.

A compromise approach for sponsors that want to be able to say they have covered every contingency could be to provide a written manual or slide deck for staff to use as a reference, but make the training more targeted, Rouse suggested.

Trainer, resources also important

Another consideration is who provides the training. The task is often given to CRAs, whose skills may not include effective teaching; the result is commonly a mere re-reading of the protocol or a rote presentation of PowerPoint slides that don’t address knowledge gaps at the research site.

“Someone reading the protocol is not useful,” Kingsley said. “We’ve already done that. Go into the science behind it and discuss what isn’t written in the protocol.”

Visualizations, flow charts and decision trees can help staff understand the fine details of specific procedures more quickly than a half-day webinar or a 200-page manual can, Harper said, adding, “The goal of training should be to demystify the protocol and help site staff through the operational procedures.”

Kingsley added that sponsors should consider adult learning theory when developing training. For instance, training could have pieces aimed at people who learn best by reading, those who learn best visually and those who learn best by doing the activity. The 70-20-10 rule could also be useful: it holds that only 10% of adult learning comes from formal curriculum, 20% from mentorship and interaction with others, and the remaining 70% from hands-on experience accomplishing tasks and goals.

He also advocated use of online communities where research site staff can pose questions to and have discussions with others working in the same areas or on the same protocol.

And it’s important that sponsors not assume what sites need in terms of training. Sites should be selected because of their general clinical trial skills, so training should focus tightly on the protocol, Rouse said. One way this could be achieved is by getting input from sites on what training they need to ensure thorough understanding of the protocol.

This approach to training does require an upfront investment. But when enrollment lags or deviations occur due to training failure, sponsors will have to take the time, effort and cost to retrain staff anyway. And retraining further adds to the time burden on research staff.

Standardization sought, industry solutions few

Standardization of boilerplate training that applies to most clinical trials—ideally provided by an independent third party rather than a sponsor—would go a long way toward reducing training-related burdens on research staff, Kingsley said.

Training redundancy can act as a huge time sink for research sites. Some industry efforts have been made to reduce redundancies. TransCelerate, for instance, has aimed to reduce inefficiencies through efforts like a shared investigator platform that streamlines boilerplate activities such as general GCP training. The idea is that if researchers at a site are certified through a TransCelerate partner GCP course, that certification will be accepted by other partners.

“A lot of times, anything in the training that is not protocol-specific is going to lead to redundancy,” Rouse said. “The simplest thing is to see if there is a standard version of the training you want to deliver—such as EUCTR, informed consent, or GCP—and tap into that to relieve redundancy burdens on sites. An hour of a site’s time is worth saving; it is better for them to spend that time on finding the right trial participant than doing more routine certification.”

Kingsley also pointed to retraining related to protocol amendments as an unnecessary burden. Many protocol amendments are narrow in scope and irrelevant to much of the staff. An amendment that affects how labs operate, for instance, still leads to the entire study staff taking time to undergo retraining. It would be better to focus amendment-related training on only the staff affected by the change, he said.

“But redundancy is not necessary,” Rouse said. “Anytime a sponsor can choose a central GCP or other program as a standard, that will reduce the burden on sites.”

The ACRP also provides a GCP training program, including a simulation approach launched last year, with an eye toward a standardized approach that reduces the number of times sites have to complete GCP certification.

And WCG offers a survey designed to help sponsors determine how much of a burden their study would place on a site based on protocol and operational complexity. 

However, it’s unclear how much traction this type of project can gain, as many sponsors want to use their own courses.

But sponsors should begin to view research site staff as high-value customers as a first step to addressing these burdens, Ballenger suggested. That means examining all processes and requirements to find ways to reduce friction and support the site. For instance, in terms of training, a stack of PowerPoint slides is unlikely to be appealing to most sites; sponsors should look at ways to signal that they care about the site’s learning experience.

To learn more about a training approach that is tailored to the needs of users and sites, visit http://proficiency1.wpenginepowered.com/simulation-training