Meetings and Slides Are Ineffective for Training Site Staff and Could Increase Patient Risk

According to Edgar Dale and dozens of supporting studies, we retain only about 30% of verbal information.

Pro-ficiency is built on simulation training, the same technology employed by airline pilots, surgeons, astronauts, and the military. Imagine if airlines started training their flight crews with meetings and slides!

Both the Pro-ficiency online platform and the simulation-based training content housed there are fully validated and audit-ready in accordance with FDA 21 CFR Parts 820 and 11 and ICH GCP.

Pro-ficiency transforms your study training from an ineffective cost center into a powerful performance management system:

• Use the best training for your sites while reducing both costs and errors

• Play it safe with validated systems and content that meet 21 CFR Parts 820 and 11 and ICH GCP

• Protect patient safety by validating site training content

• Train 100% of site staff throughout the study


Predictive Analytics (real-time access)

The Pro-ficiency dashboard shows investigator performance in the training simulations across dozens of critical protocol and GCP metrics (the vertical columns). A blue checkmark indicates that a specific decision was made appropriately. If the learner makes a mistake, a brief video-based corrective action is launched and they get a yellow icon. If they make the same mistake again, they get a red icon. Lots of yellow icons indicate that, while knowledge gaps were identified, they were also easily corrected. Lots of red icons mean those investigators are either not trying or are having real trouble understanding the protocol.

Based on the performance of the two sites above, which one would you trust more with your protocol and expect better performance from?
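The icon escalation described above (blue checkmark, then yellow on a first mistake, then red on a repeat) can be sketched as a tiny state machine. This is a hypothetical illustration of those rules only; the class and field names are invented and this is not Pro-ficiency's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

BLUE, YELLOW, RED = "blue", "yellow", "red"

@dataclass
class LearnerRecord:
    # metric name -> icon currently shown on the dashboard
    icons: Dict[str, str] = field(default_factory=dict)

    def record_decision(self, metric: str, correct: bool) -> str:
        if correct:
            # A correct decision earns a blue checkmark, unless the
            # learner has already erred on this metric.
            self.icons.setdefault(metric, BLUE)
        elif metric in self.icons and self.icons[metric] != BLUE:
            # Repeating a mistake on the same metric escalates to red.
            self.icons[metric] = RED
        else:
            # First mistake: yellow icon (in the real product, a brief
            # video-based corrective action is also launched here).
            self.icons[metric] = YELLOW
        return self.icons[metric]
```

Summing the yellow and red icons per site is then all the dashboard needs to separate "gaps found and fixed" from "site is struggling."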


GCP and PV training

GCP and pharmacovigilance training is a requirement for all sites that participate in clinical trials. Non-compliance with ICH GCP E6(R2) results in inspection findings, cost overruns, data rejection, and potential study delays. As a result, sponsors often require that sites take their specific GCP and PV training, so investigators wind up taking the same training again and again. These modules are often delivered face-to-face at sites with slides, or through eLearning. Research shows that we retain only 20% to 30% of verbal information, whereas with simulation/experiential learning we retain 70% to 90%.

The Pro-ficiency GCP and PV training uses real-life scenarios in an online simulated environment. The better the learners know the material and the more they pay attention to the training, the faster they can get through it. Like in a choose-your-own-adventure novel, site staff make their way through realistic scenarios that cover the principles of GCP and PV in an entertaining way.

In fact, 76% of 1,701 site staff learners “Liked” or “Loved” the Pro-ficiency simulation-based GCP training, while 20% said it was “OK” and only 4% preferred eLearning or face-to-face training. We are doing something right when thousands of investigators tell us that they LOVE our GCP training!

This Pro-ficiency training is an improvement on traditional training (meetings, slides, videos):

83% agree

10% undecided

7% prefer traditional methods


Unlearning in Clinical Trials – The Cure to Repeating Our Mistakes

How many stories have you heard where mistakes made by clinical research sites caused problems in a study or, even worse, patient injury? The practice of medicine is inherently risky given the consequences of mistakes or misinformation. This is even more the case in research, where we layer on an extra level of complexity with the protocol. Often, we ask investigators to do things that are counterintuitive to their medical training, such as curbing bedside manner and not providing emotional support to a patient in a study if the resulting placebo effect could impact the endpoints.

Despite the difficulties and challenges investigators face, sponsors still provide only the most basic support in helping them understand the protocol and conduct a successful study. Study teams rely heavily on investigator meetings and slide decks to train site staff, and not even all of the site staff: often just the investigators and perhaps one study coordinator. These people are not professional trainers and they are very busy, so why do we expect them to successfully train the rest of their staff? I hate to break it to everyone, but face-to-face lectures are a 3,000-year-old training technique that has been proven time and again to be ineffective (Bajak, 2014). As the Chinese proverb goes:

What I hear I forget,

what I see I remember,

what I do I understand.

Check out the 2014 study titled “Lectures aren’t just boring, they’re ineffective, too” (Bajak, 2014). A recognized improvement over standard lecture-style training is simulation, where there is “strong evidence demonstrating improvement in learners’ knowledge, skills, and behaviors” (Cook, Hatala, & Brydges, 2011). Even across cultures, simulation training has been rated a valuable learning experience by learners and is linked to better academic performance (Williams, Abel, Khasawneh, Ross, & Levett-Jones, 2016). This means the message you are teaching doesn’t break down when you open those sites in Croatia or Ukraine.

I understand that with hundreds of millions of dollars at stake, it can be scary to use new technology to improve the clinical trials process, but frankly, modern training and human performance improvement techniques such as simulation are NOT new. If I were running a global phase 3 study, I would find it much scarier to use an approach that is proven not to work than to use technology held up as the gold standard by multi-billion-dollar industries such as airlines, the military, and NASA.

It comes down to unlearning. We need to unlearn the old, ineffective, ways of doing things in order to make space for new methods and approaches. Just because something is being done the way it has always been done does not mean that it is lower risk. If you jump off your roof 10 times, you may only sprain an ankle once, but is it really more risky to try a ladder? It’s like John Cage said, “I can’t understand why people are frightened of new ideas. I’m frightened of the old ones.”


Bajak, A. (2014). Lectures aren’t just boring, they’re ineffective, too, study finds.

Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306:978-988.

Williams B, Abel C, Khasawneh E, Ross L, Levett-Jones T. Simulation experiences of paramedic students: a cross-cultural examination. Adv Med Educ Pract. 2016;7:181-186.

Chatman, J. (2010). The Leadership Quarterly, 21(1), 104-113.

When Failure is Not an Option

Air travel is really safe. You have a greater chance of being struck by lightning or attacked by a bear while you read this article than of dying on a US commercial flight. That’s because the odds of dying in a US commercial plane crash are near zero* (approximately 50 people die from lightning each year in the US, and 17 are attacked by bears**). This has not always been the case. Air travel used to be dangerous, but the technology we use to train and support our pilots has nearly eliminated pilot error. It would be unthinkable not to train pilots through simulation these days.

In stark contrast, pharmaceutical companies invest hundreds of millions of dollars in new drug development to bring life-saving medications to market, and yet how do we train the people responsible for making these massive clinical trials happen? Are they using advanced technology such as simulation? Sadly, no. Many of these companies are using “old school” methods such as lectures and slides. Imagine if that were how we still trained our pilots on new aircraft. Personally, I would be taking a lot more trains.

Today I heard of another study that was unblinded; when the data were examined, it was determined that the site staff had been improperly trained and had made too many mistakes, and the study had to be repeated. This is not an uncommon occurrence. Several companies are now augmenting their training with virtual and physical simulation techniques for the benefit of site staff. This provides the additional advantage of being able to assess competency, predict errors before they happen, and measure improvements in skill, none of which can be effectively achieved using didactic training methods such as lectures and investigator meetings.

With clinical development success rates dropping to 10%, we should have a near-zero tolerance for human error derailing a study. One way to accomplish that is by using modern human performance management techniques such as simulation training.

David Hadden is Chief Game Changer at Pro-ficiency, a leading provider of online simulation training for clinical trial sites, CRAs, and study monitors. In previous lives, he brought to market virtual patient simulation (VPS) and ROI tools for human performance interventions. His previous company installed hundreds of medical simulation centers in Africa and trained over a million doctors in 190 countries through simulation. He collaborates with his Co-CEO, JoAnne Schaberick, in his latest endeavor, Pro-ficiency, which uses simulation to improve study quality while reducing cost and time.

* “Without minimizing the tragic loss of even a single life as a result of any airline crash, I would still argue that you are safer onboard a commercial airplane in the United States than you are at virtually any point in your entire life.”

** NOAA Data

Source Code in the 21st Century

Guess what this is. Spirograph? A really complicated Venn diagram? If you guessed either of those, you would be close. This is actually the “code” for a Pro-ficiency simulation. The dots, or “nodes”, each represent a scene in a choose-your-own-adventure-style simulation that teaches clinical trial professionals how to conduct a study according to the study protocol. This is what we get to work with every day. The user navigates the nodes via our front-end interface and makes decisions. Each node plays a short video (10-30 seconds) that unfolds the learning experience. The green dots are correct choices and the red dots are incorrect choices. Yellow nodes are patient encounters and blue ones are a didactic review of the protocol. Nodes can easily be added, moved, or deleted for changes and protocol amendments.

Here is another one:

When we designed this tool, we did it to make simulation authoring easy. We never imagined it would be so beautiful.
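For the curious, the branching node graph described above might be represented as something like the following sketch. The field names, node kinds, and helper methods here are illustrative assumptions, not Pro-ficiency's actual data format.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Node:
    node_id: str
    kind: str            # e.g. "correct", "incorrect", "patient", or "didactic"
    video: str           # the short (10-30 second) clip played at this scene
    choices: Dict[str, str] = field(default_factory=dict)  # choice label -> next node id

@dataclass
class Simulation:
    nodes: Dict[str, Node] = field(default_factory=dict)

    def add(self, node: Node) -> None:
        # Nodes can be added, moved, or deleted to absorb protocol amendments.
        self.nodes[node.node_id] = node

    def step(self, current_id: str, choice: str) -> Node:
        # The front end plays the current node's video, then follows
        # the learner's decision to the next scene.
        return self.nodes[self.nodes[current_id].choices[choice]]
```

Because each scene is just a node with labeled outgoing edges, a protocol amendment becomes a graph edit rather than a rewrite of the whole training module.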

Dave Hadden

Chief Game Changer, Co-CEO and Cofounder


Hidden Risk = Hidden Benefits

Last week I was on a climbing trip in Austria. At the most challenging part of the climb, I found myself 2000 meters above ground on an overhanging cliff (thanks to my girlfriend for taking the awesome pic above). While in this apparently precarious situation, I was entirely focused on my personal risk (as any sane person would be). However, my risk was very well identified and managed. I was tied in at multiple points and the moves I needed to make were well within my capabilities. I would have been in more danger painting on a 20-foot ladder.

Exposing and understanding risk is also 90% of the battle when it comes to risk management in clinical trials. Clinical research operations are driven by predicting, preventing, and managing the risk of hundred-million-dollar trials, and yet there is a big chunk of ill-defined risk in clin-ops that often results in huge costs and delays: human performance variance. It is here that enrollment suffers and protocol violations occur, and it is one of the reasons there are no economies of scale in trials: the more trials you conduct, the more expensive each one becomes (1). However, human performance variance is also the richest treasure trove of recoupable cost and time. So why does the industry keep using the same inefficient management systems to manage its largest expense, people?

With so much of the success of a study relying on human performance, we should be using better tools to manage and predict that risk element; tools that will not only increase the accuracy of the trial but also cut millions in costs and weeks or months of time, while massively reducing protocol violations. That tool is simulation, the state of the art in human performance management (think pilots, supertanker captains, and the military). Is it not time for the tools we use in clinical trials to be as innovative and high-performing as the products they test? We have to be careful not to squeeze so hard in our efforts to create cost efficiency that there is no room left for innovation and transformation. Let’s get off the ladder and onto the mountain!

Dave Hadden, Co-Founder and Chief Game Changer at ProPatient and ProCT.

(1) For companies that have launched more than three drugs, the median cost per new drug is $4.2 billion; for those that have launched more than four, it is $5.3 billion. Even if a company only develops one drug, the median spending is still a hefty $351 million. – Matthew Herper, Forbes, August 11, 2013

Research Abstracts to be Presented at the 10th Annual International Meeting on Simulation in Healthcare: Phoenix, Arizona

An Expert Systems-based Virtual Patient Simulation System for Assessing and Mentoring Clinician Decision Making: Acceptance, Reach and Outcomes

D. D. Hadden; TheraSim, Durham, NC.

INTRODUCTION: Traditional clinical training methods are expensive and nonstandardized, remove clinicians from the practice setting, and their impact is difficult to measure. We report on user performance with an interactive web-based simulation and data analysis program in which practitioners have managed hundreds of virtual patients with vast arrays of medical conditions.

METHODS: Using an interactive virtual medical records interface, clinicians receive electronic mentoring and testing (dual mode) by reviewing histories, ordering tests, making diagnoses among hundreds of choices, and choosing treatments from more than 1,000 medications and other therapies. The simulations can allow or hide in-session diagnostic and therapeutic information, which is produced by an expert-system-based artificial intelligence (AI) engine. The AI provides guideline- and evidence-based feedback on the appropriateness of choices. Finally, the simulation shows an explanation of reasonable choices for the case, a mini-review of the general topic, and the user’s errors, warnings, and deviations from guideline-, evidence-, and expert-consensus-driven recommendations. All choices are recorded for analysis. This paper summarizes 5-year results from 422 cases from 81 CME programs involving 41 medical conditions and appearing in a variety of internet and hospital venues.

RESULTS: Usage: 122,990 registered users representing 200 countries have attempted 402,508 sessions with a completion rate of 49%. Of the approximately 5 million page views, the average user viewed 71 pages (31 pages/session) and spent 18 minutes/case. The average score was 60 points (of 100), and 66% scored 80 points. Fewer errors occurred with testing, while there was an increased number of therapy-related errors made by users failing to diagnose appropriately. Using an analysis of a neurology program involving 1,946 users and 2,642 sessions, all clinical guidance was turned off for 100 sessions in each of 3 patient simulation cases. Success in making a difficult diagnosis increased from 12% to 36% with guidance operative, an incorrect diagnosis was avoided in 74% with guidance vs. 48% without, and appropriate treatment was more likely with guidance turned on: 74% vs. 44%, 67% vs. 52%, and 42% vs. 23%. User satisfaction remains positive for most users, including average scores of 4.2 of 5.0 using various questionnaires. In 5 HIV training simulation deployments during 2006–2007 in 3 African countries using WHO guidelines and involving 2,780 pre-/post-test simulations, 241 clinicians passed 71% of pre-tests. After clinical feedback was activated, scores increased by 35 points, resulting in a final pass rate of 93% (p < 0.001 vs. pre-test). Similar improvements were noted in 3 separate programs at 80 hospital sites in 2008 and 2009, the latter utilizing a competency-based model, which eliminated the need for formal post-testing.

DISCUSSION/CONCLUSIONS: Expert systems-based virtual patient simulation shows promise as a mechanism for assessing practitioner skill, detecting skill gaps and for electronic mentoring. These systems have the ability to extend the patient simulation process into chronic and infectious disease states, an area that has been primarily overlooked by mannequin-based simulators.