Behavior Should Drive Clinical Trial Training, Not Test Scores
Analytics that focus on measuring changes in research staff behavior during training can be a strong predictor of performance during an active clinical trial, leading to better protocol and GCP compliance. That can mean fewer errors that affect data, more effective and efficient patient enrollment and better regulatory compliance, among other benefits.
It’s well accepted that training is important to ensure that researchers conduct clinical trials in accordance with the research protocol. While it’s easy to hand training materials to research staff and check off the “training completed” box, it is harder to ensure that the training yields behavior changes that translate into protocol adherence.
Despite the understanding that training is a crucial part of study start-up activities, “failure to adhere to the protocol” remains a leading observation during FDA Bioresearch Monitoring (BIMO) inspections, according to the agency’s 2020 statistics. And beyond regulatory compliance, errors in such critical areas as patient enrollment, investigational product handling and the conduct of routine tests and patient visits can hinder the success of a trial.
There are several areas that warrant tracking, as Beth Harper, chief learning officer at Pro-ficiency, discussed in an October blog. For instance, things can often go wrong with patient enrollment if staff don’t fully absorb training on how to assess whether a patient meets enrollment criteria. Enrollment of patients who are not eligible can negatively affect data integrity, while missing qualified patients can hinder enrollment of a sufficient population.
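To make that concrete, here is a minimal sketch of what the criteria check behind such an enrollment decision might look like. The criteria themselves (an age range, a renal-function threshold, a prohibited medication) are invented for illustration and are not drawn from any real protocol.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    egfr: float            # renal function, mL/min/1.73 m^2
    on_excluded_med: bool  # taking a medication the protocol prohibits

def check_eligibility(p: Patient) -> tuple[bool, list[str]]:
    """Return eligibility plus the list of criteria the patient fails."""
    failures = []
    if not (18 <= p.age <= 75):   # hypothetical inclusion criterion
        failures.append("age outside 18-75")
    if p.egfr < 60:               # hypothetical exclusion: renal impairment
        failures.append("eGFR below 60")
    if p.on_excluded_med:
        failures.append("on a prohibited medication")
    return (not failures, failures)

# An 80-year-old candidate fails on age alone and should not be enrolled.
eligible, reasons = check_eligibility(Patient(age=80, egfr=72.0, on_excluded_med=False))
print(eligible, reasons)  # False ['age outside 18-75']
```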
Preparation and administration of study drugs is another critical area. Researchers must be able to take the right action if, for instance, a study drug arrives out of the required temperature range or appears contaminated upon inspection. Other challenges might include patient non-compliance with the dosing regimen or non-tolerance of the specified dose.
Other areas Harper highlighted include the handling of missed site visits, problems with equipment needed to perform study procedures, and adverse event (AE) management and reporting.
In addition, deeper analytics are possible, noted Catherine De Castro, chief solutions officer at Pro-ficiency. These could include separating critical from non-critical metrics and measuring the level of study impact that particular wrong decisions might have.
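Pro-ficiency has not published how its analytics work, but De Castro’s description suggests something like the following sketch, in which each tracked decision carries an assumed criticality flag and a made-up 1-5 impact weight. All names and numbers here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    scenario: str
    correct: bool
    critical: bool  # would the wrong choice jeopardize data integrity or safety?
    impact: int     # assumed 1-5 scale of study impact if answered wrong

def risk_score(decisions: list[Decision]) -> dict:
    """Separate critical from non-critical misses and weight them by impact."""
    critical_misses = [d for d in decisions if not d.correct and d.critical]
    minor_misses = [d for d in decisions if not d.correct and not d.critical]
    return {
        "critical_miss_count": len(critical_misses),
        "weighted_risk": sum(d.impact for d in critical_misses),
        "minor_miss_count": len(minor_misses),
    }

decisions = [
    Decision("drug arrived out of temperature range", correct=False, critical=True, impact=5),
    Decision("scheduling a routine visit", correct=False, critical=False, impact=1),
    Decision("eligibility assessment", correct=True, critical=True, impact=4),
]
print(risk_score(decisions))
# {'critical_miss_count': 1, 'weighted_risk': 5, 'minor_miss_count': 1}
```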
That means it’s important for research sites, as well as sponsors, to have confidence that training has rendered all staff capable of making the right decisions in line with both GCPs and protocol requirements when challenging situations present themselves.
But how can organizations accurately assess training effectiveness? The key lies in the ability to measure how the training affects learners’ decision-making abilities, rather than how well they regurgitate information via a test or other traditional approach.
Traditional training approaches, which often feature a simple pass/fail test of research staff’s basic knowledge post-training, generally fail to provide this critical assessment.
A simulation-based approach to training, on the other hand, can provide exactly this kind of feedback, helping research sites and sponsors evaluate whether staff who have undergone the training can reliably make the right decisions.
For instance, under Pro-ficiency’s simulation training method, rather than taking a test or exam to confirm knowledge, the participant is tested continuously by navigating various simulated scenarios. This adaptive learning environment helps develop the critical thinking and decision-making skills necessary to implement a protocol successfully.
This behavior tracking can generate analytics that identify where individual staff members or sites are stronger or weaker. These analytics can even flag areas of a protocol that could prove problematic for all participating research sites, allowing sites to take quick action to ensure that all research staff can conduct key tasks correctly.
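As a rough illustration of how such roll-up analytics might work, the sketch below pools hypothetical pass/fail results by scenario; a scenario that fails at every site points to the protocol itself rather than to any one site. The sample data and the flagging threshold are invented.

```python
from collections import defaultdict

# (site, scenario, learner_passed) tuples -- invented sample data.
results = [
    ("Site A", "IP temperature excursion", False),
    ("Site A", "eligibility assessment", True),
    ("Site B", "IP temperature excursion", False),
    ("Site B", "eligibility assessment", True),
]

def failure_rates(rows):
    """Failure rate per scenario, pooled across sites."""
    totals, fails = defaultdict(int), defaultdict(int)
    for _site, scenario, passed in rows:
        totals[scenario] += 1
        fails[scenario] += (not passed)
    return {s: fails[s] / totals[s] for s in totals}

# A scenario failed at every site suggests a protocol-level problem.
for scenario, rate in failure_rates(results).items():
    flag = "protocol-level concern" if rate >= 0.8 else "ok"
    print(f"{scenario}: {rate:.0%} fail -> {flag}")
```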
Four tiers of training measurement
There are four tiers that research sites and sponsors need to examine when evaluating the effectiveness of training, De Castro explained.
The first of these is the learners’ reaction to the training; this includes ensuring that the content engages them, which boosts the likelihood of the training having a meaningful impact on behaviors and decision-making over the course of a clinical trial. However, engaging well with training does not guarantee changes to future behavior.
The second tier is application of learning. In traditional training, learners’ ability to apply information is usually measured via some sort of test. But passing a test is also no guarantee that researchers will alter their behavior or decision-making in real life. Test-taking can measure an individual’s short-term recall of recently learned information, but it has little ability to predict whether, or how well, that knowledge will be applied in practice.
The third tier, which focuses on learners’ behavior, is the meat of the matter. At this level, learners’ behavior is tracked to determine whether the training improves decision-making enough to avoid deviations from procedures mandated by the protocol. Simulation-based training, such as that provided by Pro-ficiency, offers a way not only to focus on this tier of evaluation during training, but also to track its effectiveness.
“This is a hard thing to track,” De Castro said. “With simulations, you’re not just asking questions and giving answers. You are putting [learners] in specific scenarios, looking at the decisions they are making, and tracking that behavior to predict what they’ll do in real life.”
The tracking of behaviors provided by simulation-based training can help in this area by essentially modeling a decision tree. Each decision a learner makes in different scenarios is tracked, and the simulation responds with consequences, good or bad, that reflect real-life practice.
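One way to picture that decision tree is the sketch below, in which each simulated choice returns a consequence and, possibly, a follow-on scenario, and the learner’s recorded path is the behavioral data described above. The scenario content and structure are hypothetical, not Pro-ficiency’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str
    # Each choice maps to (consequence shown to the learner, next node or None).
    choices: dict = field(default_factory=dict)

# A two-step simulated scenario (all content hypothetical).
quarantine = Node("Drug quarantined. Whom do you notify?", {
    "sponsor": ("Correct: the deviation is reported promptly.", None),
    "no one": ("The deviation goes unreported and surfaces at audit.", None),
})
arrival = Node("Study drug arrives out of temperature range. What do you do?", {
    "quarantine it": ("Good: the product is isolated pending guidance.", quarantine),
    "dispense anyway": ("A patient receives possibly degraded drug.", None),
})

def run(node: Node, answers: list[str]) -> list[tuple[str, str]]:
    """Replay a learner's answers through the tree, recording each decision."""
    path = []
    for answer in answers:
        consequence, node = node.choices[answer]
        path.append((answer, consequence))
        if node is None:
            break
    return path

# The recorded path, not a single score, is what gets analyzed.
print(run(arrival, ["quarantine it", "sponsor"]))
```

The design choice worth noting is that the learner sees a consequence rather than a right/wrong grade, mirroring how deviations actually surface during a trial.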
The fourth tier, De Castro noted, focuses on the results of training. In the case of clinical trials, the key question to answer is whether the training reduces protocol deviations. Whereas the tracking of behaviors during training can predict research staff performance, data collected by sponsors during the course of a clinical trial is necessary to determine real-world results.
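Checking that fourth tier might, in the simplest case, mean correlating training-derived risk scores with deviations later observed on study. The sketch below does this with fabricated numbers purely to show the shape of the analysis; a real evaluation would require far more rigor.

```python
# Hypothetical per-site pairs: (training risk score, protocol deviations observed).
pairs = [(2, 1), (9, 6), (5, 3), (1, 0)]

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs, ys = zip(*pairs)
# A high value would suggest training behavior predicts real-world deviations.
print(f"correlation: {pearson(xs, ys):.2f}")
```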
As Chief Experience Officer at Pro-ficiency, Jenna helps clients, including ACRP, realize the full potential of their partnership with Pro-ficiency. She spent 16 years at the Association of Clinical Research Professionals (ACRP), serving in various roles including Business Development leader, and helped hundreds of companies integrate competency-based learning, hosted by Pro-ficiency, into their workforce initiatives. In that work, she saw firsthand her clients’ appreciation for the Pro-ficiency platform, simulations and customer service.