After another meeting in which we reviewed “quality metrics,” I found myself thinking: “If only we could define ‘quality’ as easily as we can define obscenity, all of our jobs would be much less difficult.” Admittedly, this statement requires a bit of explanation.
I give you the late Supreme Court Justice Potter Stewart. In the case of Jacobellis v. Ohio (1964), which attempted to define obscenity, he famously said: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”
I think most of us know quality in the hospital when we see it. The opposite is even more true: We know when care is lacking, and we see plenty that is lacking. In an effort to improve hospital processes, we have developed a host of “perfect care” metrics because a subjective sense of good or bad is essentially useless when it comes to enacting culture change. To improve care, we need hard metrics. Simple concept, difficult deliverable.
Many hospitalists have become experts at documenting why an ACE inhibitor or ARB for a feeble heart was given, or why it was contraindicated, at discharge. And we have adopted a host of protocols that have decreased rates of VAP, line infections, UTIs and bedsores. No doubt this increased awareness has led directly to better care for patients.
But is it working? Not if a recent NEJM study is to be believed. To quote: “In a study of 10 North Carolina hospitals, we found that harms remain common, with little evidence of widespread improvement.” Dr. Wachter, as usual, touches deftly upon this subject here.
But why aren’t our metrics rooting out inefficiencies and bad practices in health care? The answers are, of course, multiple and complex, but I will hit upon my top three.
First, taking care of patients is not like flying airplanes, despite the endless analogies to the contrary. Airplanes are guided by simple rules of physics that are extremely amenable to simple checklists.
I’m sure that patients benefit from checklists and I don’t mean to diminish their importance, but patients are also subject to the vicissitudes of human physiology, frailty and emotion. All of those are much less predictable than turbulence, and the health care delivery system is infinitely more complicated than a 777. Ultimately, until we develop a system that can be run on computerized autopilot, many of our efforts will be fraught with failure.
Second, we spend an enormous amount of time chasing quality that can be easily defined but may be of questionable overall benefit. Back to our ACE inhibitor/ARB. Was it given? Yes. These binary data are entered into a report and, presto, you have your quality score. But what is not binary is difficult to compute and therefore often ignored, despite the fact that it may be many times more important.
For that same CHF patient, you could also ask: Was the care coordinated among all providers? Was the patient given meaningful education? Were there needless consults that resulted in more care but no appreciable benefit?
Did everyone wash their hands prior to entering the room? Were you able to assess the patient’s home situation and make a plan that would result in a safe discharge? And was end-of-life care discussed, if appropriate? After all, no amount of “quality” care is going to work if the patient is truly end stage. Again, these aren’t easily defined yes-or-no questions that work themselves nicely into a systems report.
Third, because these reportable scores have taken on such great importance, we may be at least unconsciously rigging the system more than improving the process. While I doubt that any hospital is being blatantly dishonest, this JAMA study does give one pause. To quote: “there is significant variation in the application of standard central line–associated BSI surveillance definitions across medical centers.” That variation, the study concludes, “may complicate interinstitutional comparisons of publicly reported central line–associated BSI rates.”
I know pay for performance is still in its infancy. But if it really does put big dollars at stake, don’t you think a little more see-no-evil-hear-no-evil could possibly exist? And even if some observer bias proves to be inevitable, I still don’t believe patients en masse are going to rush to the hospital next door if its HealthGrades score tanks but the valet parking is excellent.
Hey, I am a doctor, and my faith in most of these scores is such that I would be happy to have my own care delivered at a hospital that had great food and a hospitalist program I trusted, without the least regard for the most current “quality” metrics. You tell me you haven’t had a VAP in the last five years? Sure, OK, right, do you have steak on your menu and a hospitalist I can talk to?
At a minimum, I have just supported my hypothesis that health care quality would be better off if it were more like obscenity, at least in terms of being able to define it. If we could just see poor care and act upon it, we would be so much better off. But I am not naive enough to believe that this is possible, nor am I jaded enough to think we can’t do better, no matter how big the obstacles may be.