Where’s the evidence?

Published in the January 2013 issue of Today’s Hospitalist

INCREASINGLY, it seems that we are being asked to provide care that is not supported by medical evidence. Hospitals and physicians are torn between doing what is right for the patient and doing what they must to earn accreditations and preferred status from the “quality industry.” It can take many years for that industry to catch up to the reality that front-line providers are facing.

DVT prophylaxis and stroke patients?
An example: To become one of the Joint Commission’s designated primary stroke centers, a hospital must invest in meeting standards determined by core measures issued by the Centers for Medicare and Medicaid Services (CMS). Those measures place significant weight on initiating venous thromboembolism (VTE) prophylaxis by the end of hospital day No. 2.

In the CMS’ “Specifications Manual for National Hospital Inpatient Quality Measures, Version 3.3,” here is the rationale offered: “For acutely ill stroke patients who are confined to bed, thromboprophylaxis with low-molecular-weight heparin, low dose unfractionated heparin, or fondaparinux is recommended if there are no contraindications.”

I have seen hospitals that want the all-important primary stroke center designation hire staff to make sure that DVT prophylaxis happens. They also start penalizing doctors who don’t take the time to document the reason they “failed” to provide this measurable care. Using funds to pay for dedicated staff and prophylaxis instead of putting that money toward more nursing care and other improvements would be justified if it saved lives. But where is the evidence that such DVT prophylaxis is helpful for most stroke patients?

The American College of Physicians reviewed this topic in the Nov. 1, 2011, Annals of Internal Medicine, and concluded, “For patients with acute stroke, heparin increased the risk for major bleeding, with no effect on mortality, symptomatic DVT, or PE. Studies comparing LMWH with UFH did not show any differences in clinical outcomes.”

It turns out that other DVT prophylaxis measures now mandatory for doctors may even be causing harm. The same Annals review also proclaimed: “No improvements in clinical outcomes were seen with mechanical prophylaxis in patients with stroke, but more instances of lower-extremity skin damage occurred.”

“Safety” scores
The Leapfrog Group provides another example. An organization made up of big employers and other health care purchasers, it was recently in the headlines for releasing its first set of patient safety scores for more than 2,600 hospitals nationwide. Those scores included very low grades for poor hospital safety performance for prominent institutions such as the UCLA Medical Center and the Cleveland Clinic.

The American Hospital Association has pushed back against those scores, saying that they’re not accurate. But even if the Leapfrog Group is measuring data accurately, here’s the bigger question: What is it measuring?

It turns out that the scores are based on three-year averages for 26 different safety measures. One of those measures grades hospitals on their ICU staffing, with Leapfrog making the following claim: “Mortality rates are significantly lower in hospitals with ICUs managed exclusively by board-certified intensivists.”

But who made Leapfrog an authority on this issue? The largest study that I could find on this topic, which reviewed the care of more than 100,000 patients in 123 ICUs, came to exactly the opposite conclusion.

That study, which was published in the June 3, 2008, issue of Annals, found that critical care provided by intensivists was associated with more procedures and higher mortality compared to care delivered in the ICU by other doctors.

You might think that higher mortality among intensivists’ patients would be justified if those patients were sicker. But that was not the case. Instead, the authors wrote, “Analyses that adjusted for severity of illness and the tendency for sicker patients to be managed by critical care specialists still showed higher mortality among patients managed by the specialists.”

Opinion-based measures
It’s clear that we have somehow given organizations like the Leapfrog Group permission to be the gospel in issues of hospital quality based on faith, not proof.

Unfortunately, we’ve gotten into the habit of using (and measuring) many other metrics for which we simply don’t have good evidence that they either prevent bad outcomes or promote good ones. Here’s another example: preventing hospital falls.

An article in the July 2011 issue of the Journal of the American Academy of Orthopaedic Surgeons pointed out that the CMS, since 2008, has not paid to treat the complications of hospital-acquired conditions, including inpatient falls, “that could have been prevented by following evidence-based guidelines.” But the authors go on to write, “Our review of the literature revealed that the risk of fall is only slightly greater in the hospital environment than in the home and that there is no medical evidence that evidence-based guidelines are effective in fall prevention.”

This is what happens when opinion becomes stronger than evidence. As a hospitalist and administrator, I care a great deal about patient safety. I want my family to have a safe, healing experience in American hospitals.

Many of the measures we are using are indeed needed, and several are evidence-based. Mandating the use of maximum sterile barriers while placing central intravenous catheters, for instance, has been shown to prevent infections. We also know that using antibiotic prophylaxis in surgical patients prevents postoperative infections.

And we know that using at least two ways to identify patients before giving blood and medications or performing procedures helps decrease tragic errors. We have seen hospitals put resources toward such important initiatives with remarkable improvements in measured outcomes. But it is time to get rid of emotional convictions as a basis for certain measurements and report cards. If our hospitals and doctors put too much emphasis on the wrong things, we limit the resources we could be putting toward the right ones.

Gil Porat, MD, is chief medical officer for Penrose Hospital and St. Francis Medical Center in Colorado Springs, Colo. He’s also a practicing hospitalist with Colorado Springs Health Partners. You can listen to Dr. Porat’s free “Hospital Medicine” podcast on iTunes.