Screen Failures in Clinical Trials: Optimize the Unavoidable

Screen failures in clinical research are a necessary, unpredictable, and disruptive part of the process. How can something be all three at once?

Well, they are necessary in a sense, because the rigor of clinical research often demands a reasonably selective cohort of participants. While it certainly makes sense to cast as wide a net as possible to maximize enrollment, a participant population that is too heterogeneous makes it harder to determine whether an investigational medication works at all.

Screen failure ratios – the number of participants who fail screening relative to those who pass – are also hard to estimate because humans and healthcare are complicated. Research scientists may do their best to estimate, say, a 3:1 screen failure ratio based on plenty of good data, EMR analysis, and site feedback, and still miss the mark by a mile. Though this unpredictability has been difficult to manage, we can at least categorize the reasons for screen failures into two buckets: 1) medical history (prescribed interventions, comorbid diagnoses, etc.), and 2) protocol screening procedures at Visit 1 (safety labs, assessments, etc.).

The sad truth is that many, if not all, of the patients who screen fail due to medical history should never have walked in the door. In fact, Shea Overcash, Associate Director of Quality Assurance at Javara, reports, “Approximately one-third of all screen failures are due to something found in the medical history, and in some studies, this is even higher.” On a recent visit to Hightower Clinical Research, Brad Hightower relayed a surprising story to us about a man who walked into his clinic for Visit 1. This particular trial was enrolling patients who had undergone an amputation. Brad was curious to see a man with very life-like limbs and a normal gait walk through the door asking to be checked in for the visit. The man moved so naturally that Brad assumed he must have a high-tech prosthetic limb. Upon further conversation, it became clear that the man had all of his limbs intact, down to the pinky.

This man took half a day off to be there, and the clinic staff blocked off the same amount of time to conduct a lengthy visit. In the end, it was a total waste. How could this have happened? Or, more importantly, how could it have been prevented? The answer is obvious: had the staff been able to review the medical records before scheduling the visit, the mishap would have been avoided.

Scenarios like these happen all the time and are extremely disruptive to everyone involved. The patient wastes their time and ends up feeling as if they did something wrong. The research site loses time it could have spent conducting a revenue-generating visit, and the Sponsor's data collection is further delayed. But it gets worse. Many Sponsors pay for screen failures according to a predetermined ratio, e.g., 3:1. In other words, they will only cover the cost of 3 screen failures for every 1 participant randomized. When too many patients start screen failing for avoidable reasons and the Sponsor doesn’t adjust the payment model, what do you think happens at the site? Yep, they just stop enrolling.
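To make that math concrete, here is a minimal sketch in Python of how a capped screen-failure payment model plays out. Every number (per-visit cost, enrollment counts) and the helper function are hypothetical, used only to illustrate the 3:1 cap described above.

# Hypothetical illustration of a capped screen-failure payment model (assumed numbers).
def reimbursed_screen_failures(randomized: int, screen_failures: int, cap_ratio: int = 3) -> int:
    """Sponsor reimburses at most cap_ratio screen failures per randomized participant."""
    return min(screen_failures, cap_ratio * randomized)

cost_per_screening_visit = 500   # assumed site cost per screening visit, in dollars
randomized = 10                  # participants who passed screening and were randomized
screen_failures = 50             # total screen failures, many avoidable via record review
paid = reimbursed_screen_failures(randomized, screen_failures)  # 30 under a 3:1 cap
unpaid = screen_failures - paid                                 # 20 visits the site absorbs
print(f"Reimbursed screen failures: {paid}")
print(f"Unreimbursed screen failures: {unpaid}")
print(f"Unrecovered site cost: ${unpaid * cost_per_screening_visit:,}")

Under these assumed numbers, the site absorbs the cost of 20 screening visits – exactly the kind of pressure that pushes sites to stop enrolling.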

Check out ProofPilot’s proprietary integration with HumanAPI, which enables patients, sites, and sponsors to pre-screen patients in an evidence-based way and eliminate screen failures due to findings in the medical history.

 

Joseph Kim
