Real-world research has produced recent newsworthy findings on topics as diverse as COVID, mental health, cancer, and dementia. But much of the published real-world evidence (RWE), even among studies that end up in headlines, omits crucial information readers need in order to trust the results: evidence that the coded data forming the basis of the study validly reflect the actual health states of interest.
A recent study in JAMA reported a 5-fold increase in suicide-related emergency department (ED) visits among adolescents. But what the authors actually found was an increase in the use of a single ICD-10-CM code. And in addition to suicidal ideation, that code is used for low self-esteem, excessive crying, and other symptoms.
I’m not suggesting there isn’t a huge mental health crisis underway in this country, and I wouldn’t doubt that more young people are contemplating suicide. However, the authors did not provide, nor could I find in their references, any evidence of validation of the link between this ICD code and actual suicidal ideation. That is, of all the people with the code, how many truly had thoughts of suicide? And conversely, of all the people with suicidal thoughts, how many were diagnosed with this code?
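Those two questions have standard names in validation research: positive predictive value (PPV) and sensitivity. A minimal sketch of the arithmetic, using hypothetical chart-review counts (all numbers are invented for illustration, not taken from the JAMA study):

```python
# Hypothetical chart-review counts for one ICD code (illustrative only)
coded_and_true = 40   # patients with the code who truly had suicidal ideation
coded_not_true = 60   # patients with the code who did not
true_not_coded = 25   # patients with suicidal ideation who never received the code

# "Of all the people with the code, how many truly had thoughts of suicide?"
ppv = coded_and_true / (coded_and_true + coded_not_true)          # 0.40

# "Of all the people with suicidal thoughts, how many received this code?"
sensitivity = coded_and_true / (coded_and_true + true_not_coded)  # ~0.62
```

With counts like these, a study reporting "a rise in suicidal ideation" would really be reporting a rise in a code that is wrong more often than it is right.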
A classic study from more than a decade ago found that only about 1 in 20 RWE studies provided any evidence for the validity of the codes used, leading the authors to conclude, “People with a code frequently do not have the condition it represents.”
I ran my own simple version of this study: I examined the 10 most recent RWE studies available on PubMed that used ICD coding. Seven [1-7] didn’t mention the validity of their codes or algorithms. Of the three remaining, one [8] was primarily a report of the validation of a new algorithm (sensitivity of 58% and specificity of 85%), another [9] conducted a simple validation as part of the study (sensitivity 25–35%, specificity 69–90%), and the last used codes that had been validated in a prior study (sensitivity 90%, specificity 83%) [10].
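Figures like a 58% sensitivity and 85% specificity only tell you what the code means for an individual patient once you account for how rare the condition is in the database. A back-of-the-envelope sketch using Bayes’ rule (the 5% prevalence here is an assumption for illustration, not a figure from any of the studies above):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(condition | code) = sens*prev / (sens*prev + (1-spec)*(1-prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Validation figures like those reported for the new algorithm
# (sensitivity 58%, specificity 85%), at an assumed 5% prevalence:
print(round(ppv(0.58, 0.85, 0.05), 2))  # ~0.17
```

At 5% prevalence, fewer than one in five coded patients would truly have the condition; the same code applied to a rarer condition performs even worse. This is why reporting sensitivity and specificity alone, without context, still leaves readers unable to judge a study.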
This is an improvement from a decade ago—3 in 10 is more than 1 in 20—but the quality of the codes still varied considerably.
So the JAMA paper isn’t an outlier, but I’m certain things will be different in 10 years. The demand for validated patient-reported outcome (PRO) measures skyrocketed once there was a clear path to including them in drug labels. PROs appeared in few labels at the turn of the millennium, in about 20% of labels from 2005–2015, and in more than 25% in the last few years. Labeling claims based on RWE are on the early part of a similar curve.
Over the last 10 years, my colleagues at PHAR and I have conducted validation research that runs the gamut from simple (e.g., pull 10 medical records and confirm the codes for a single condition) to complex (e.g., multi-center studies at academic institutions aiming to validate algorithms for subtle clinical differences between patients [11-19]).
Some RWE researchers don’t want to be bothered doing this back-to-basics work, and it’s often incorrectly seen as unnecessary. But the validation train is getting ready to leave the station—get on board now or prepare to be left behind!
Dr. Michael Broder, a board-certified obstetrician and gynecologist, has 30 years’ experience in health economic and outcomes research. He received his research training in the Robert Wood Johnson Clinical Scholars Program at UCLA and RAND, attended medical school at Case Western Reserve University, and received his undergraduate degree from Harvard University.
In 2004, Dr. Broder founded PHAR, a clinically focused health economics and outcomes research consultancy. PHAR is a team of dedicated, highly trained researchers: individuals who are singularly focused on delivering high-quality health economics and outcomes research insights to the life science industry. PHAR has successfully conducted hundreds of studies, resulting in more than 800 publications across a wide variety of therapeutic areas, and maintains an expansive network of collaborators, including 8 of the top 10 US academic institutions as measured by NIH funding.
Unencumbered by corporate bureaucracy, PHAR can efficiently execute contracts and complete projects on time and on budget. PHAR prides itself on being reliable and responsive to clients’ changing needs and on welcoming the challenge of tackling problems others can’t.