
A reality check for pay-for-performance programs


Published in the July 2007 issue of Today’s Hospitalist.

While many physicians have been less than enthusiastic about the notion of pay for performance, a rash of recent studies is giving doctors new reasons to eye performance-based pay with skepticism.

In the last six months or so, several high-profile studies have focused on performance measures being used in two of the larger hospital initiatives: the pay-for-reporting program sponsored by the Centers for Medicare and Medicaid Services (CMS), and the Hospital Quality Incentive Demonstration program, a pay-for-performance partnership between the CMS and Premier Inc.

While there are differences between the two programs, both are laying the groundwork for a future in which performance measures may affect a big chunk of hospitals’ and physicians’ reimbursement. The reporting program currently rewards hospitals for merely reporting performance data; the Premier program pays a bonus to facilities that score well on a set of performance measures.

Over the last two years, Premier officials say, the program has helped save the lives of thousands of hospitalized patients. The new physician-led studies, however, have reached a different conclusion. Researchers say they found only modest improvements related to the pay-for-performance program, and they have called for a broader set of measures that are more closely tied to improved outcomes and mortality rates.

No one is urging physicians to not participate in pay-for-performance programs, which are expected to expand. But health policy experts and physicians alike worry that by focusing on incomplete quality measures, American medicine may not target the right quality improvement goals. And with performance measurement still in its infancy, researchers advise hospitalists to look beyond current measurement sets for evidence-based best practices that aren’t yet tied to financial incentives.

Differing views
In a statement released earlier this year, Premier reported that more than 260 hospitals in its pay-for-performance program raised overall quality by 11.8% for patients in five clinical areas over two years.

Premier also reported that the program helped save close to 1,300 patients from fatal heart attacks, according to an analysis of mortality rates at participating hospitals. The demonstration project uses 36 performance measures that focus primarily on processes of care, such as using beta-blockers at admission and discharge for heart attack patients.

In the CMS’ pay-for-reporting program, on the other hand, more than 3,500 hospitals nationwide voluntarily reported data on 21 different measures related to four clinical conditions. (Many measures are also part of the Premier measure set.) The idea is that hospitals reporting quality data will pay closer attention to how well they implement and perform on these endorsed measures.

According to CMS officials, adhering to measures is a way to integrate data-driven quality improvement into hospital practice. But researchers who have examined the data say that the measures themselves appear to be making little to no difference in mortality.

An example is a study in the Dec. 13, 2006, Journal of the American Medical Association (JAMA), which showed that hospital performance measures in the pay-for-reporting program predict only small differences in hospital risk-adjusted mortality rates.

Rachel M. Werner, MD, the study’s lead author and an assistant professor in the division of general internal medicine at the University of Pennsylvania in Philadelphia, says she was surprised by the results.

"Ideally, you would have measures that give a significant amount of information on mortality across hospitals," Dr. Werner says. But researchers found that when they ranked hospitals by risk-adjusted one-year mortality rates, the range went from an average of 0.40 among the highest mortality hospitals to 0.27 among the lowest, a difference of 0.13. The mortality difference that could be attributed to differences in performance measures between hospitals was a mere 0.01.

The right incentives?
A study published in the Feb. 1, 2007, New England Journal of Medicine (NEJM) took a different tack, comparing performance in the pay-for-reporting program to performance in hospitals that participate in both the reporting program and the pay-for-performance demonstration.

Researchers found that hospitals enrolled in both programs achieved only modest quality improvements over hospitals engaged only in the pay-for-reporting program. The gains were so slight, authors concluded, that "more research is needed to explore whether different incentives would stimulate more than the modest improvements found." (Another study, published in the June 6, 2007, JAMA, found even more limited improvement for acute myocardial infarction among hospitals participating in the Premier demonstration.)

Hospitalist Peter Lindenauer, MD, the lead author of the February study, says that the findings are another sign that American medicine needs more research to prove that current performance measures can reduce hospital mortality rates.

"A real cause of angst among those in the field of quality improvement is that the measures that have been implemented so far have yet to be linked to meaningful impacts on mortality," says Dr. Lindenauer, who is medical director of clinical and quality informatics with Baystate Health in Springfield, Mass.

Dr. Werner says that based on the result of her research and other studies, she would like to see broader quality measures developed before tying current performance measures to financial incentives.

While she gives the CMS credit for encouraging hospitals to publicly report quality data, she feels the pay-for-performance concept is not ready for prime time. "A lot more research is needed," says Dr. Werner, "before moving to make these things permanent."

Dr. Lindenauer agrees. "We need more evidence that demonstrates the benefits of P4P," he says, "not only in terms of adherence to process measurements but also in the effect of P4P on improving outcomes." Future analysis, he adds, should also take costs and unintended consequences into account. "I wouldn’t be comfortable saying that, based on the Premier demo alone, P4P is ready for national implementation."

A need for other measures
One window into the limitations of some measures may be found in a study in the Jan. 3, 2007, JAMA. Researchers found that three of the four current heart failure performance measures being used in both CMS performance initiatives have little relationship to patient mortality.

Lead author Gregg C. Fonarow, MD, a professor of cardiovascular medicine and director of the Ahmanson-UCLA Cardiomyopathy Center in Los Angeles, says the study found that only one of the heart failure performance measures, the use of ACE inhibitors or ARBs, was significantly associated with reduced post-discharge mortality or re-admission rates.

Part of the problem, says Dr. Fonarow, is that current heart failure measures don’t include some proven therapies. While the use of beta-blockers at discharge is strongly associated with reduced risk of mortality, for instance, the drugs are not currently part of the heart failure measure set in either CMS program.

Other medications shown to reduce mortality in patients with systolic heart failure include aldosterone inhibitors and hydralazine/nitrates. But neither class of drugs is included in heart failure performance measures, he says.

One big problem, he adds, is that the current development process for performance measures is too static. An expert panel reviews available evidence, then suggests measures that are peer-reviewed and released.

"There is no testing or re-assessment to see if the measures are valid or achieve their intended purpose in actual practice," says Dr. Fonarow. "The process needs to be more dynamic, where new data are used to refine existing measures and develop new ones."

Translating evidence into real-world outcomes
Even performance measures backed by strong evidence, Dr. Lindenauer points out, raise a perplexing question: If the CMS initially chose performance measures because the therapies they were based on were proven in clinical trials, why aren’t the measures reducing mortality rates?

"It may be that what works in the setting of a randomized trial with carefully selected patients is not as effective when implemented in the real world with patients who are more diverse," he says.

Dr. Lindenauer notes that there may be an even simpler explanation: Hospitals may be doing a good job following performance measures, but some of the apparent improvements in performance may be due to better documentation of contraindications.

"We may not be giving any more patients aspirin or beta-blockers," he explains. "We may be getting better at identifying patients who are not eligible for them."

At the same time, Dr. Lindenauer raises the question of unintended consequences related to performance measures. Because some hospitals may be focusing resources on a limited number of clinical areas, patients with other conditions, including stroke and chronic obstructive pulmonary disease, may have indirectly suffered poorer care.

According to Dr. Lindenauer’s co-author Sheila Roman, MD, MPH, some hospitals may be beefing up only certain clinical areas to increase their performance scores, rather than implementing a more systematic approach to quality improvement.

"Some hospitals may be diverting resources and just teaching to the test," Dr. Roman says. "They miss the point and don’t reap the potential they could."

Expanding performance programs
While the study she participated in found only modest improvements related to pay for performance, Dr. Roman, who is senior medical officer with the CMS’ quality measurement and health assessment group, says both the pay-for-reporting and the Premier programs have been a success. That’s because the programs have encouraged hospitals to collect data and improve processes of care.

That said, she acknowledges that little research has been conducted on how effective hospitals have been in improving outcomes. "The primary message [from these studies] is that we need measures on multiple aspects of inpatient care," Dr. Roman notes. "We have to have more measures, and more measure types that speak to improving care systems, and do it quickly."

Specific areas where measurement gaps exist, she adds, include outcomes-based measures, and measures related to both preventing complications and to coordinating care. And speed is important, she explains, because the CMS wants to expand its current programs and move ahead with others.

This year and next, for instance, the CMS will be adding risk-adjusted mortality data and patient satisfaction measures to its pay-for-reporting program. Other measures being considered include those for surgical-complication improvement.

And this summer, the CMS plans to propose to Congress what it calls a "value-based purchasing" program for hospitals. If approved, the plan would replace the hospital pay-for-reporting program in fiscal year 2009, and provide financial incentives based on performance on selected measures, not just on reporting.

Message for hospitalists
With studies showing steady improvement in the use of targeted process measures, Dr. Lindenauer encourages physicians to see the value of publicly reported quality measurements. But he also advises hospitalists to look outside the current suite of measures and use clinical practices that have been demonstrated to improve patient outcomes.

"We are still in the early stages of figuring out and implementing process measures that are tightly linked to outcomes," Dr. Lindenauer says. "It will take a generation to develop a robust suite of measures." L. Craig Miller, M.D., national medical director with The Camden Group, a Los Angeles-based consulting firm, has helped launch and manage hospitalist programs. He says that hospitalists understand that the measures fall short in significantly reducing risk-adjusted mortality rates.

At the same time, Dr. Miller notes, the environment around performance measures may be raising the tide of quality improvement. Many hospitals, he points out, are motivated by the fact that public reporting of quality data could cause patients to compare their hospital to competitors. (The pay-for-reporting data are publicly available at www.hospitalcompare.hhs.gov.)

"Even more local health care dollars will be put into technology to eliminate some of the human steps that create errors," he says. At the same time, more hospitals are now hiring quality directors and information technology staff to improve clinical performance. "I believe this is the right thing," says Dr. Miller. "It will make an impact in clinical outcomes."

And UCLA’s Dr. Fonarow believes the recent wave of studies, and the physicians questioning the effectiveness of measures, are good for the quality improvement movement.

"We have seen increased participation and interest in going beyond the core measures to enhance quality improvement," Dr. Fonarow says. "Hospitals are focusing on patient safety measures without financial incentives because there are individuals who are committed to improving outcomes."

Jay Greene is a freelance writer specializing in health care business issues. He is based in St. Paul, Minn.

A look at the largest performance programs

Two of the nation’s largest performance programs are sponsored by the Centers for Medicare and Medicaid Services (CMS), the nation’s largest health care payer. Here’s a look at the details.

● Sponsored by the CMS, the Hospital Quality Incentive Demonstration project with Premier Inc. is a pay-for-performance project that offers financial rewards to 268 participating hospitals if they are in the top 20% for five clinical areas: acute myocardial infarction, congestive heart failure, coronary artery bypass graft, pneumonia, and hip and knee replacement. Premier is a hospital alliance and purchasing group based in San Diego.

● CMS’ public quality-data reporting initiative is a voluntary pay-for-reporting program. Hospitals that do not report quality data are penalized by a 2% reduction in their annual Medicare payment update.

Currently, the CMS requires hospitals to report quality data on 21 performance measures that relate to care for heart attack, heart failure, pneumonia and surgical infection prevention. More than 3,500 facilities are now reporting those data, which are publicly available on the HHS’ Hospital Compare Web site (www.HospitalCompare.hhs.gov). The 21 measures describe recommended process of care.

● For the first time, hospitalists can participate in a public reporting program that targets individual physicians. The CMS program, known as the Physician Quality Reporting Initiative, will pay physicians a 1.5% bonus on their total Medicare charges for quality data reported between July 1 and Dec. 31, 2007.