Published in the November 2005 issue of Today’s Hospitalist
A few years ago, the hospitalists at the University of California, San Francisco, were surprised to find that they ranked near the bottom on one of the Medicare performance measures for the care of pneumonia patients.
The data showed that only about 10 percent of inpatients with pneumonia who were treated by UCSF physicians received a pneumococcal vaccine, a number that was well below the national average. A nearby community hospital, by comparison, was giving pneumovax to nearly three times as many patients, while other hospitals around the country were inoculating more than half of their patients.
According to Robert M. Wachter, MD, professor of medicine, associate chair of the department of medicine and chief of the medical service at UCSF, the university’s poor performance on pneumovax flummoxed many of his physicians.
In a presentation at UCSF’s annual meeting on managing hospitalized patients in September, Dr. Wachter said that his physicians had never put much stock in the value of pneumovax for elderly patients because data showed the strategy didn’t have much of an impact. A hospitalist then at UCSF who specialized in research on community-acquired pneumonia, for example, had made a strong case that to save one life, physicians would have to administer nearly 50,000 doses of pneumovax.
“We’re card-carrying evidence-based medicine people,” Dr. Wachter explained, “so we concluded that this wasn’t a particularly important thing to focus on. But now data being reported on the Web were telling the world that when it comes to an important quality measure for one of our most common DRGs, we stink.”
The right thing?
Physicians and housestaff at UCSF were able to turn the situation around and improve their use of pneumovax. But to this day, Dr. Wachter said, the notion of basing clinical practice on publicly reported quality indicators makes some physicians wonder if they’re doing the right thing.
At a recent faculty meeting, for example, when the question of how UCSF could improve its pneumovax rates came up, some physicians asked why they should care about what seems like a relatively unimportant measure.
“Some members of the group asked why we care about this so much when there are 100 other things that are more important,” Dr. Wachter recounted. That’s when one of the physicians gave the rest of the group a message that was blunt but on the mark: Get over it.
“We came to the conclusion that, when it comes to pneumovax, all of this evidence-based medicine stuff has to go out the door, that this is something we have to do,” Dr. Wachter said. “That creates a tremendous amount of internal angst for all of us who like evidence-based medicine and want to think we’re practicing in the best style.”
The group had learned a hard lesson about the realities of performance-based medicine.
“I say that as long as we’re not harming people, we should just do it, and it’s hard to argue that pneumovax is harming people,” Dr. Wachter explained. “Does pneumovax really deserve the priority it’s getting? The answer is no, but it will receive significant attention because it is being publicly reported. We simply have to start getting over it.”
If that seems wrong, he added, consider the alternatives. “If your hospitalist group is getting resources from your institution in part because you’re thumping your chest and saying you improve quality and you improve safety and you have 10 percent performance on a publicly reported measure,” Dr. Wachter explained, “it’s not going to compute.”
The downsides of performance measures
While Dr. Wachter describes himself as a believer in quality improvement, he said he sees some obvious bumps in the road to improving quality via public reporting and pay for performance.
“Does anyone know what happens when patients get seven shots of pneumovax?” he asked. While the question elicited laughter from many of the hospitalists at the meeting, Dr. Wachter said the scenario is a distinct possibility, particularly at hospitals that have been burned by bad scores on measures.
“When patients come into our hospital and we’re not 100% sure that they didn’t get the pneumovax yesterday,” he explained, “we’re going to give it to them.”
“We’ve done such an effective job of talking about this with our housestaff that residents will periodically present the pneumovax status before talking about septic shock,” he quipped.
Dr. Wachter added that he foresees other problems with popular quality measures, such as indicators that track how long pneumonia patients wait before receiving antibiotics. Because hours often pass before the emergency department receives BNP levels and chest X-rays, a diagnosis of pneumonia is often delayed. That reality doesn’t mesh with performance measures that urge hospitals to give patients with pneumonia antibiotics within four hours.
“In an environment where you’re getting measured on time to antibiotics,” Dr. Wachter predicted, “you’re going to give antibiotics as soon as pneumonia crosses your mind. We’re going to see a huge amount of CHF treated with ceftriaxone and doxycycline.”
Dispelling myths about quality
While Dr. Wachter said he’s sure that quality measurement systems will improve, he noted that, for now at least, physicians face some tough choices.
“We’re at the very early stages of the science of quality measurement and quality reporting,” he said, “and the system has absolutely no capacity to deal with the patients we take care of, which are the people who have multiple diseases, for whom the guidelines about each of the individual disorders might be in serious conflict. The system is focused very narrowly on people who come in with one discrete illness.”
He cited a recent study in JAMA that examined the case of a hypothetical patient with multiple medical problems. When researchers examined the types of care the patient would need based on national guidelines, they found that following the guidelines would produce more than 20 different drug-drug, drug-disease, and drug-diet interactions.
Dr. Wachter also said that a certain amount of mythology needs to be dispelled so that policy-makers understand quality measurement’s true potential. A good example is the notion that improving quality will naturally reduce costs.
Take laparoscopic cholecystectomy as an example. While the new technology initially helped reduce per-case costs by about 25 percent (patients spent fewer days in the hospital, and morbidity and mortality were reduced), the volume of cases started to rise as the threshold for the procedure dropped. In the end, overall spending on laparoscopic cholecystectomy had risen almost 20 percent within five years.
“You can improve quality by putting implantable defibrillators in all of the patients who need them,” Dr. Wachter explained, “but it will be hugely expensive. Improving quality in some cases will markedly increase costs, and the system is still figuring out how to reconcile this.”
Despite these problems, he is optimistic about advances being made in measuring quality. He likened the issues with the pneumovax measurement to a “training wheel phenomenon” that will be resolved with more time and effort.
The bottom line, Dr. Wachter concluded, is that quality measurement is here to stay. “Right now, we’re measuring pneumovax because there’s nothing else we can measure that has scientific validity,” he said. “But give it time; we have to get better.”
Edward Doyle is Editor of Today’s Hospitalist.