Published in the April 2017 issue of Today’s Hospitalist
GIVEN HOW MANY quality measures are being used to gauge physicians’ performance, analysts hope that sharing those data with doctors will drive improvement.
But that doesn’t always happen, and experts are puzzled by the limited impact of quality-reporting programs that feed performance data back to doctors.
“We don’t really understand when performance data work or don’t work in changing clinician behavior,” says Sushant Govindan, MD, a critical care and research fellow at University of Michigan at Ann Arbor.
Part of that may be the sheer volume of measures, with groups like the National Quality Forum now endorsing more than 600. But there’s also this factor: Many doctors may not know how to interpret the performance data they’re given.
That may be particularly likely for risk-adjusted data, says Dr. Govindan, because physicians are often unclear about which factors the data have been adjusted for. And even when doctors do understand the feedback they receive, they aren’t sure what they’re supposed to do with it in their own practice.
“Physicians have told me, ‘We don’t know what the actionable feedback is in these figures. How do we take this information and move forward?’ ” Dr. Govindan points out. That’s why he and two colleagues decided to take a step back and design a survey to test how well clinicians understand quality data.
What did they find? Published in the January 2017 issue of the Journal of Hospital Medicine, their study concluded that clinicians’ comprehension of quality data “appears low and varies substantially.”
The survey presented several pieces of data about CLABSI rates in eight hypothetical hospitals, with 11 questions about those figures. As Dr. Govindan explains, the questions were designed to test three domains of data comprehension. The first is basic numeracy.
“That’s the ability to do numerical adjustments on non-adjusted data,” he says. “We asked, for instance, if the raw number of infections—the actual measured number of infections—was doubled, what would happen to the raw rate.” You don’t need to understand risk adjustment to answer that question correctly, he points out. “You just need to understand basic numerical concepts.”
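The arithmetic behind a basic-numeracy question can be made concrete. As a hedged sketch (the infection counts and line-day denominator below are invented for illustration, not figures from the survey), a raw CLABSI rate is conventionally reported per 1,000 central-line days:

```python
def raw_clabsi_rate(infections: int, line_days: int) -> float:
    """Raw (unadjusted) CLABSI rate per 1,000 central-line days."""
    return infections / line_days * 1000

# Doubling the raw number of infections doubles the raw rate --
# no risk-adjustment concepts are needed to see why.
base = raw_clabsi_rate(infections=6, line_days=4000)      # 1.5 per 1,000 line-days
doubled = raw_clabsi_rate(infections=12, line_days=4000)  # 3.0 per 1,000 line-days
assert doubled == 2 * base
```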
The next level tested was risk-adjustment numeracy, which again involves numerical computations but with risk-adjusted data. Those questions try to ascertain if respondents can identify risk-adjusted data and manipulate them. An example from the survey: If hospital B had its number of projected CLABSIs cut in half, what would its standardized infection ratio be?
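The ratio behind that question is the standardized infection ratio: observed infections divided by the number predicted by risk adjustment. A minimal sketch, with made-up counts for a hypothetical hospital B:

```python
def sir(observed: float, predicted: float) -> float:
    """Standardized infection ratio: observed / risk-adjusted predicted infections."""
    return observed / predicted

# Hypothetical hospital B: 8 observed CLABSIs against 10 predicted.
before = sir(observed=8, predicted=10)  # 0.8
# Cutting the predicted (projected) count in half doubles the ratio,
# even though the observed count is unchanged.
after = sir(observed=8, predicted=5)    # 1.6
assert after == 2 * before
```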
The third domain of questions is the most complicated: risk-adjustment interpretation. “This requires a deeper level of understanding about risk adjustment,” says Dr. Govindan. “It’s not just understanding how to manipulate data numerically, but also how second-level concepts translate into interpreting the data.”
An example: Suppose hospital A begins using a central line with an antibiotic coating that cuts its number of infections in half. What would that hospital’s number of projected infections be?
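The conceptual step that question tests can be sketched in a few lines (all counts here are invented). The projected count comes from risk adjustment — patient mix, line-days and the like — not from the observed count, so an intervention that halves observed infections leaves the projection unchanged; only the standardized infection ratio falls:

```python
# Hypothetical hospital A: risk adjustment projects 10 CLABSIs.
predicted = 10.0
# Antibiotic-coated lines cut observed infections from 12 to 6.
observed_before, observed_after = 12, 6

sir_before = observed_before / predicted  # 1.2
sir_after = observed_after / predicted    # 0.6

# The projection is a function of the hospital's risk factors, not its
# observed count, so the intervention does not change it.
assert predicted == 10.0
```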
Not surprisingly, the percentage of questions answered correctly varied by domain. Among the 72 clinicians who answered all 11 questions, physicians did better than nurses (68% vs. 57% correct).
On basic numeracy, 82% of answers were correct, but that figure fell to 70% for questions on risk-adjustment numeracy and much further, to 43%, for risk-adjustment interpretation. On one very practical interpretation question (“Which hospital is most effective at preventing CLABSI?”), 51% of respondents picked the right response, while 49% chose one other answer.
Those results don’t mean, says Dr. Govindan, that physicians suffer from a lack of training. “Doctors actually have extensive training in how to interpret various types of data,” he says. “But that training is a broad, 30,000-foot understanding.”
Several factors make interpreting quality data complicated. For one, risk adjustment isn’t standardized across measures, and data can be risk-adjusted for all sorts of variables. Also, while CLABSIs are tremendously costly, both financially and in terms of patient outcomes, many doctors may encounter few CLABSIs themselves. Or they may be presented with their own CLABSI-performance data very infrequently.
And while millions of dollars are spent developing performance measures, substantially less effort goes into figuring out how to best structure the feedback doctors receive.
“The biggest hypothesis I come away with is, ‘Message matters,’ ” says Dr. Govindan. “You can’t just deliver data without also paying attention to the message you’re trying to convey with them. If performance data are so complicated that someone can’t take actionable feedback, is that optimal?”
Does that mean that hospitals need directors of data interpretation, the way they now have directors of patient experience? That may not be so far-fetched, he says, given the overwhelming amount of performance data that doctors are expected to digest and act on.
He also points out that one of his advisors is a researcher who, for the last two decades, has helped provide feedback to ICUs in the U.K. One lesson learned is that decision-makers have questions about the data being fed back to them, and they need a resource that can explain them.
“Because that researcher made herself and others who understand the data available, that has helped the process of providing feedback,” Dr. Govindan says.
To improve clinicians’ data interpretation, Dr. Govindan plans to test different infographic formats that could be used to deliver performance data. He and his colleagues want to see which formats, if any, may promote better comprehension—and if better comprehension drives greater motivation to change behavior.
They also, he adds, want to redo their study, this time with a sample of experts. (See “A novel methodology.”)
“We’re asking almost the same questions to infection epidemiologists who are part of a research consortium,” he says. “Their answers will give us a sense of how complex these data are.”
Phyllis Maguire is Executive Editor of Today’s Hospitalist.
A novel methodology
TO FIND CLINICIANS willing to answer questions about quality data, Sushant Govindan, MD, a critical care and research fellow at University of Michigan at Ann Arbor, adopted a study methodology that’s being used in other fields: reaching out to the Twitter followers of his two study co-authors.
Both those colleagues—Vineet Chopra, MD, MSc, and Theodore Iwashyna, MD, PhD—are prominent researchers who each have well over 1,000 Twitter followers. Mining those followers, says Dr. Govindan, “was a convenient and rich sampling that gave us a good number of responses in a short time.” However, given their study’s sample size, the authors decided not to employ Twitter “analytics.”
“Within Twitter, there are a variety of advanced metrics that can enrich the data obtained,” he notes. While he and his colleagues did not use those metrics in their current study, they may deploy more surveys via Twitter and employ those analytics in the future.