Published in the May 2014 issue of Today’s Hospitalist
THOSE OF US WHO ARE PASSIONATE ABOUT QUALITY (and I hope that includes everyone in medicine) are constantly concerned about the gap between current practice and what is possible.
The lowest-hanging fruit in patient safety is avoiding unnecessary procedures: a PICC line or blood transfusion that isn’t warranted creates a risk of harm that didn’t have to exist. But many patients legitimately need invasive devices or transfusions, and those patients are often very ill and prone to hospital-acquired infections and complications.
What’s the acceptable rate of preventable harm? The rate you would accept if a family member were hospitalized. Most of us think the goal should be “none at all,” and ideally we want to hit zero preventable harm. But while it is our duty to get as close to that target as possible, it is unrealistic to claim that every hospital-acquired infection or complication is preventable.
Measures before evidence
We actually do not know why some patients develop complications. I have reviewed papers and reports from several hospitals that report “zero” PICC-line infections. I closely examine their methods and techniques to see if their bundles include any step we are missing. So far, I have yet to find any difference between what these exemplary hospitals are doing and what our very dedicated PICC nurses already do.
Yet we somehow end up with a rare infection. Do we have a different patient population? Or do our robust surveillance methods capture data other facilities may not be finding? Just because you observe zero harm doesn’t mean that zero harm has occurred.
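As a back-of-the-envelope illustration (the numbers below are hypothetical, not drawn from any hospital’s actual data), even a facility with a real, nonzero infection rate has a decent chance of recording zero infections in a given year; zero observed events only puts an upper bound on the true rate.

```python
# Illustrative sketch only: observing zero infections does not prove
# the infection rate is zero. All numbers here are hypothetical.

def prob_zero_events(true_rate: float, n: int) -> float:
    """Probability of seeing zero events in n independent trials
    when the true per-trial event rate is true_rate."""
    return (1 - true_rate) ** n

def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper confidence bound on the event rate
    after observing zero events in n trials (the 'rule of three')."""
    return 3 / n

# Suppose a hospital places 300 PICC lines in a year and the true
# infection rate is 0.5% per line.
print(prob_zero_events(0.005, 300))    # ~0.22: a zero-infection year is quite plausible
print(rule_of_three_upper_bound(300))  # 0.01: the data only rule out rates above ~1%
```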
A study published in the Oct. 9, 2013, Journal of the American Medical Association (JAMA) put the spotlight on surveillance bias when measuring safety. The study looked at venous thromboembolism (VTE) rates and asked whether an individual hospital’s VTE rate is a valid quality measure.
One would think the answer would already have been known because, at the time, VTE rates were being used as a quality metric. The AHRQ had implemented a postoperative VTE rate as a patient safety indicator, a measure incorporated into various quality improvement programs and public reporting initiatives.
But here’s the problem: that metric, and its link to better quality, had never been conclusively studied. Instead, a passion for quality and safety had veered into cult-like zeal that jumped the gun before the evidence was in.
The JAMA study looking at hospital VTE rates came up with some interesting findings. For one, hospitals that had higher rates of VTE prophylaxis actually had higher rates of VTE! Why? The answer was imaging, as the authors explained in their conclusion: “Increased hospital VTE event rates were associated with increasing hospital VTE imaging use rates. Surveillance bias limits the usefulness of the VTE quality measure for hospitals working to improve quality and patients seeking to identify a high-quality hospital.”
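The mechanism is easy to demonstrate. Here is a minimal simulation, with made-up numbers rather than the study’s actual data: two hospitals share the same underlying VTE rate, but the one that images more aggressively detects, and therefore reports, more events.

```python
# Illustrative simulation of surveillance bias (hypothetical numbers,
# not taken from the JAMA study): two hospitals with the SAME true
# VTE rate report different rates because one looks harder.
import random

random.seed(42)

def observed_vte_rate(n_patients: int, true_rate: float, imaging_rate: float) -> float:
    """A VTE is only counted if it occurs AND the patient is imaged."""
    detected = 0
    for _ in range(n_patients):
        has_vte = random.random() < true_rate
        imaged = random.random() < imaging_rate
        if has_vte and imaged:
            detected += 1
    return detected / n_patients

true_rate = 0.02  # identical underlying VTE rate at both hospitals
print(observed_vte_rate(10_000, true_rate, imaging_rate=0.3))  # ~0.006: looks "safer"
print(observed_vte_rate(10_000, true_rate, imaging_rate=0.8))  # ~0.016: looks "worse"
```

The hospital that looks harder appears worse on the scorecard even though it is preventing exactly as many clots, which is precisely the surveillance bias the authors describe.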
Documentation or safety culture?
The irony, not lost on many, is that while those in the “quality industry” constantly cite evidence-based medicine as their unwavering objective, they often pick measures without good evidence behind them. (The AHRQ has since stopped using VTE rates as a patient safety indicator.)
That leaves many people understandably concerned about the reliability and integrity of the data that hospitals report. I have at times been skeptical when I read of hospitals that have no health care-associated infections (HAIs) for several years running. I wonder not about their stellar safety culture, but about their surveillance methods and documentation.
Here’s what members of the Healthcare Infection Control Practices Advisory Committee wrote in the Nov. 5, 2013, issue of Annals of Internal Medicine: “Determining the presence of an HAI often relies on documentation of a provider’s clinical assessment, and the variability between individual clinician determinations and documentation of those assessments can be considerable.”
One hospital’s infectious disease specialists may document a central line-associated infection, while clinicians in another facility may simply pull the line without noting an infection in the chart. As a result, the two hospitals’ documentation and claims data will differ even though no true clinical difference exists between them. Claims-based measures are driven by documentation, or the lack of it, and by coding.
More transparency
Scorecards have their place, but our biggest focus must be actual clinical care. Did the person placing the central line follow every measure in the best-practice bundle to prevent infection? That is what counts. If an infection occurs despite every possible intervention to prevent it, then that infection was not preventable.
We also don’t want to get into a situation where we start misrepresenting risk to patients because of pressure from the quality industry. As the authors of a piece in the Sept. 1, 2009, Clinical Infectious Diseases wrote: “Practitioners need to discuss potentially harmful procedures with their patients and explain that the risk will not be zero even if all preventive action is taken.”
Because all these numbers are now publicly reported and tied to dollars, the entire health care system must become unbiased and transparent. The proliferation of different organizations in the quality industry generates even more complexity that can affect reporting reliability.
Some hospitals can afford the staffing it takes to comb through charts for documentation that invalidates any “misses” on core measures and other metrics. Measure definitions also change from year to year, and those shifting classifications further increase variability.
Even the best-intentioned organizations have so far failed to eliminate the subjectivity in data gathering. Without a third party to validate methods in every hospital every day, it is hard to verify safety and quality claims.
At the end of the day, our focus must be on improving patient care. Some may argue that payments should not be tied to outcomes, but that ship has already sailed. The first cycle of any change usually reveals flaws, and success often requires multiple cycles of change with improvements building on each other over time.
Here’s hoping the quality industry learns quickly from its mistakes and eliminates metrics that aren’t backed by evidence.
Gil Porat, MD, is chief medical officer for Penrose Hospital and St. Francis Medical Center in Colorado Springs, Colo. He’s also a practicing hospitalist. You can listen to Dr. Porat’s free “Hospital Medicine” podcast on iTunes.