Published in the December 2005 issue of Today’s Hospitalist
If your hospital is trying to boost its performance on the growing list of pay-for-performance measures, you’ll want to take note of two recent studies examining gains on quality indicators for three common conditions.
The articles, which appeared in the July 21 New England Journal of Medicine, looked at quality indicators for three conditions commonly treated by hospitalists: heart failure, acute myocardial infarction (AMI), and pneumonia. And while both studies found some improvement, exactly what that improvement means appears to be in the eye of the beholder.
Some public health officials view the data as proof that publicly reporting data is giving hospitals the incentive they need to improve patient care. A number of physicians involved in the research, however, complain that the pace of change is much too slow, in large part because the indicators being studied have been well accepted for years.
Substandard care?
For Ashish Jha, MD, MPH, lead author of the first study, the results are an indictment of sorts of the care being offered at many U.S. hospitals.
“When you see hospitals with very mediocre performance now, they have known for years that they have been providing inconsistent care and haven’t done anything about it,” says Dr. Jha, a hospitalist and researcher at the Harvard School of Public Health in Boston. “The point is that people are still getting substandard care, and that’s not acceptable.”
His study examined data on 10 quality measures for heart failure, AMI, and pneumonia collected by the Centers for Medicare and Medicaid Services (CMS).
At first blush, the results look relatively good, with mean scores in the 80s and 90s. While that leaves room for improvement, the good news is that overall scores have gone up since the last study of hospital performance, which analyzed data from 2000 and 2001. Look at an individual indicator like giving aspirin to AMI patients, for example, and you’ll see that the 2,000-plus hospitals that reported data had an average score of about 90 percent.
Dr. Jha says that when you look at the performance of individual hospitals, however, mean scores lose much of their meaning. Some hospitals, for example, gave aspirin to their MI patients only 50 percent to 70 percent of the time, a far cry from the 90 percent average.
Variation among measures
Researchers found similar variation in measures for AMI patients who received beta-blockers at admission and/or discharge. At some hospitals, patients received the drugs less than 50 percent of the time; at others, more than 90 percent of the time.
In heart failure, the percentage of patients assessed for left ventricular (LV) function ranged from less than 10 percent to more than 90 percent at 3,130 reporting hospitals. The prescribing of ACE inhibitors for demonstrated LV dysfunction was only slightly better, ranging from less than 40 percent to slightly over 90 percent.
Similarly, rates of pneumococcal vaccination in 3,079 hospitals were decidedly low, with most clustered in the 10 percent to 60 percent range. One piece of good news came in oxygenation assessment, which was performed in more than 80 percent of patients.
The same type of variation can be found by combing through data on the federal government’s Hospital Compare Web site (www.hospitalcompare.hhs.gov). A recent search of hospitals found significant variation on certain measures, such as smoking cessation counseling for MI patients.
Regional differences
Dr. Jha says that he was disturbed by the slow pace of progress demonstrated by the study because the measures are so well-accepted. “It’s not a knowledge gap,” he says. “And we’re not talking about things that are either complicated or that require a major financial investment.”
But Dr. Jha says that even more surprising, and in some ways more disturbing, was the amount of variation between hospitals and regions.
“That was the big surprise, the major gap between institutions and the incredible amount of regional variation,” says Dr. Jha, who frequently writes about hospital performance measurement. “Some hospitals in Boston performed very well, for example, while others did not. And between regions there were huge variations.”
Where were some of the poorest performers? Dr. Jha says that hospitals in the Orlando and Miami metropolitan areas and in parts of southern California (especially the “Inland Empire” area near San Bernardino) were at the bottom of the pack. The best performers, by contrast, tended to be in the Northeast and the Midwest. In addition, academic and not-for-profit institutions fared somewhat better than their for-profit counterparts.
Dr. Jha and his colleagues also found that a good score on one measure didn’t necessarily predict better care in all three areas. While hospitals that performed well in AMI care tended to also do well with heart failure, some of those same institutions posted mediocre scores and only modest gains in their care of pneumonia.
The right direction?
In the second article published in the same issue of the New England Journal, researchers analyzed data collected between 2002 and 2004 by the Joint Commission on Accreditation of Healthcare Organizations. That database included information from more than 3,000 hospitals on how well they followed 17 measures for AMI, heart failure and pneumonia.
The good news is that for 16 of those 17 measures, the hospitals in the study showed some improvement. The percentage of AMI patients who received aspirin at admission and discharge edged up steadily during the study period, as did the percentage of patients who received beta-blockers.
The most dramatic improvements came in heart failure, where the percentage of patients who received explicit discharge instructions on topics like self-care, medication use, and symptoms warranting medical attention increased by 26 percent over the two-year study period.
These improvements may not be earth-shattering, but the positive direction is encouraging to Scott Williams, PsyD, lead author of the study and director of JCAHO’s center for public policy research. “At least we have some evidence to support the idea that if we measure these things and provide hospitals with feedback,” he explains, “they will continue to improve their performance.”
While Dr. Williams is quick to acknowledge that the study found an astonishingly large number of low-performing hospitals, those facilities tended to make the biggest gains during the study period. That result surprised researchers, who expected that low-performing hospitals might argue with the measures, the methodology or the data in an attempt to rationalize their poor performance. Dr. Williams suspects that didn’t happen, perhaps because the initial poor performance data provided an impetus for improvement.
The importance of leadership
Taken together, these two studies, along with other emerging performance data, indicate that care is improving for three high-volume conditions, albeit slowly. The question is whether the data should be viewed as good news, or as a sign that there are serious problems with quality and variation in U.S. hospitals.
Kenneth Kizer, MD, MPH, president and CEO of the National Quality Forum in Washington, definitely falls into the latter camp. He points out that at the current rate, it will take until the next century for care to reach adequate levels.
“The current rate of progress is not stellar,” he says. “Even though things are getting better, we’re starting from a position that’s simply unacceptable.”
Dr. Kizer attributes the snail’s pace of change and what he sees as persistent mediocrity to two key factors: a lack of leadership within institutions and, until recently, at least, the fact that poor performers haven’t been held accountable in any meaningful way.
Hospitals in which leaders demand and support systems-based care and adherence to evidence-based standards, and hold providers accountable when care quality expectations aren’t met, are the ones making forward movement. “It’s not a facility-type issue and it’s not a knowledge or training issue. It’s a [hospital] leadership issue,” Dr. Kizer says.
Duke University cardiologist Matthew Roe, MD, who has authored several studies of performance measures, says he suspects that institutions that perform well focus on integrating treatment approaches among specialists and other treating clinicians such as hospitalists. “The problem is that [those hospitals] are a small fraction of the total,” he says.
Dr. Jha adds that it’s premature to say that overall performance is improving. “Even for the vast majority of hospitals,” he says, “we have a long way to go for several very important measures. That’s one reason to be circumspect about where we are.”
Quality gaps
A study that received somewhat less attention than the New England Journal articles came to a similar conclusion. An analysis of the ADHERE database, which focuses on heart failure patients, found that on four measures (left ventricular dysfunction assessment, use of ACE inhibitors, smoking-cessation counseling, and giving patients detailed discharge instructions), hospital scores were all over the map.
The study, which was published in the July 11 Archives of Internal Medicine, found that median conformity among the 223 reporting hospitals ranged from 24 percent to 72 percent. That number is particularly disheartening to the article’s lead author Gregg Fonarow, MD, who directs UCLA’s cardiomyopathy center.
“You would assume that if you’re being hospitalized in the U.S. for heart failure, these standard measures would be used,” he says. “There is no good reason why patients do not receive this care.”
Dr. Fonarow’s study, which also examined variation in care, came up with another interesting result: Quality “gaps” surfaced in all types of hospitals, whether large or small, teaching or non-teaching, rural or urban.
“We found no evidence that the type of hospital was a major determinant of whether patients received guidelines-recommended, evidence-based care,” he says. “We also saw high levels of care in both community and academic hospitals.”
The fallout of reporting data
While some in health care are discouraged about the state of quality improvement in U.S. hospitals, officials at the CMS believe that publicly posting hospital performance data in forums like the Hospital Compare Web site is leading to improvement, even if it is limited in scope.
“The changes haven’t been dramatic and I hope they will improve,” says Michael Rapp, MD, director of the agency’s quality measurement and health assessment group. “We’re seeing that just the reporting process in itself is prompting a lot of efforts toward quality improvement. What we’re hearing is that hospitals are paying very close attention to these [performance] data and are working to improve them, especially now that they’re publicly available for patients to see.”
While public reporting is certainly helping to convince hospitals to pay more attention to quality measures, so is the growth of pay-for-performance programs. When combined with public reporting, most experts say, financial incentives may be the prod that low-performing hospitals need to clean up their act.
JCAHO’s Dr. Williams points out that the data-collection period covered by his study ended just as public performance reporting began. “We don’t know how things will look in another two years,” he says, “but perhaps we will even see an increase in the rate of performance improvement” because of pay-for-performance programs.
The CMS’ next move in performance measurement and public reporting will be to add 30-day mortality performance measures for AMI and heart failure, a move that Dr. Rapp predicts will up the ante. “That’s an important step,” he says, “because it will be the first time we’re putting outcomes measures up for public reporting.”
Bonnie Darves is a freelance writer specializing in health care. She is based in Lake Oswego, Ore.
What hospitalists can do to improve quality in the hospital
Hospitalists who have the good fortune to be working in institutions whose performance on current quality measures is above the norm still have challenges ahead. The current measures are expected to expand significantly in the years ahead, which means there will be a lot of work to do.
For hospitalists, it also means that opportunities for improving quality abound, says Ashish Jha, MD, MPH, a hospitalist and researcher at the Harvard School of Public Health in Boston.
“Hospitalists are as central to this as anyone because they provide the bulk of inpatient care in many hospitals,” he explains, “and MI, heart failure and pneumonia are their bread and butter.” That’s why he urges hospitalists to not only focus on consistency in their own care but also assume a pivotal role in “ensuring that their hospitals put systems in place to deliver high-quality care by getting senior leadership to focus on this.”
Here are other recommendations to spur improvement in the evolving world of performance measurement:
- Champion change. “Hospitalists are in a great position to serve as powerful champions and leaders, to focus the efforts of QI on the types of measures” they can directly affect, says Scott Williams, PsyD, director of the center for public policy research at the Joint Commission.
- Leverage hospitalists’ QI capacity. “There is great potential for hospitalists in QI. They are part of the solution, quite simply because they know what needs to be done and they should be able to do a better job of making sure the right things get done,” says Kenneth Kizer, MD, MPH, president and CEO of the National Quality Forum.
- Join a national initiative. Gregg Fonarow, MD, director of UCLA’s cardiomyopathy center, says that the American Heart Association’s Get with the Guidelines program gives hospitalists and other physicians the most current information, details on tested tools, preprinted orders and a discharge checklist. “This is the most comprehensive program right now,” he says, “and it’s the one I would recommend that hospitalists use.”