
Big payoff for performance feedback

A QI project drives group-wide engagement

May 2021

WHEN THE HOSPITALISTS at Milwaukee’s Medical College of Wisconsin launched a quality improvement project in July 2018, they employed many strategies that will be familiar to hospitalists. There was, however, one big exception: Leaders made it clear that the project wouldn’t try to hold hospitalists responsible or fiscally liable for things they couldn’t control (think 30-day readmissions).

Hospitalists were given some responsibility—along with a small financial incentive—for factors they could control, like discharge orders placed by 10 a.m. and attendance at care coordination meetings. But the bigger goal of the program was to change the culture of the group and its 40 daytime hospitalists.

“We weren’t saying, ‘You did poorly on readmissions, it’s your fault and we’re going to penalize you,’ ” says hospitalist Ankur Segon, MD, one of the authors of a study published in March in BMJ Open Quality. “But if you’re consistently not performing well on a particular systems-based metric, we would ask you to reflect on how you might improve that. We asked them to really home in on the process metrics.”

“Being realistic about what hospitalists can accomplish helped reduce resistance to the project.”

~ Ankur Segon, MD
Medical College of Wisconsin

The strategy appears to have paid off. The program saw improvement not only on issues hospitalists could control, but also on bigger-picture systems issues. Performance went up on metrics for discharge orders placed by 10 a.m. and discharge summaries completed within 24 hours, while the group’s numbers for 30-day readmissions, length of stay and hospital-acquired infections went down.

To help its hospitalists boost their performance, the program created a performance feedback report: a “dashboard” of each hospitalist’s individual data plus a comparison to their peers. The project also used strategies like monthly readmission meetings to give high-performing hospitalists a chance to share anecdotal feedback with their colleagues.

Researchers noted that the project saw improvements in almost all target areas. Dr. Segon, who was section chief of hospital medicine during the project rollout, spoke to Today’s Hospitalist about the strategies the program used.

How did hospitalists initially receive the program?
Overall, it was received positively. That was particularly true for the fresh-out-of-residency hospitalists who were just getting started and looking for some kind of yardstick to tell them how they were doing. Some of them told me the reports were helpful because they knew where they stood. Others felt that, as physicians, this wasn’t something they prioritized as much as patient care or education. But as the project went on, we saw performance gaps between different types of hospitalists even out, which shows that we got through to many of them.

Why the up-front focus on not holding hospitalists accountable for systems-based metrics that many feel are beyond their control?
Like everybody else, hospitalists are sensitive to what they can and cannot do. So we wanted to be very explicit about the fact that a lot of these metrics like length of stay and readmissions are systems-based items. At the same time, we were saying that completing discharge summaries and getting them off to primary care physicians are things you can control. Being clear about what they can and cannot do and being realistic about what hospitalists can accomplish helped reduce resistance to the project.

Another thing that helped was that we took a comprehensive approach. We were not just talking about quality, we were also talking about operations. Do we have the right amount of expertise to support the group? What can we do to make their lives easier operationally? At the same time, we did a lot of work in faculty development and paid attention to the kinds of resources hospitalists had for individual development. I think that also helped minimize resistance.

You held meetings where hospitalists who hit their targets could share tips with the rest of the group. How did those meetings work?
Every quarter, we would celebrate individual successes. So we might say that Dr. Smith’s HCAHPS scores were the highest for the year. We were careful to say that these are systems-based things, so we’re not saying that this person did something spectacular that nobody else can do.

But maybe they’re doing something that really works, and maybe we can learn from them. You could see that both the person sharing tactics and the hospitalists listening would light up and really pay attention. Hospitalists would maybe think, “I never thought of that” or “Maybe I should start doing that.” That engagement and positivity were priceless.

In the reports, you would name the hospitalists who hit their quality metrics, but five to 10 hospitalists regularly would not. Were you concerned you were “unmasking” those people?
If you look at the AHRQ guidelines on QI projects, they say that relative social ranking is acceptable when you’re doing performance feedback. From a psychological perspective, there are ups and downs to unmasking people who are not doing quite as well, but there are also benefits. One is that you want to celebrate successes. Maybe there’s a junior faculty member taking handoffs from a more senior faculty member; they can learn something from that person. So if there is the possibility that people will be able to figure out the 10 hospitalists who are not doing well on a measure, I think that’s OK.

You saw improvement on individual measures, but do you feel like you saw evidence that your group culture changed?
We saw an improvement in engagement and participation overall from the group. That was something that department leadership and hospital leadership all commented on. People noticed it at the interdisciplinary meetings our hospitalists attend and also in other quality improvement programs when they’re with nursing on the floors. The level of engagement of the group improved across the board.

Can groups that don’t have the support of an academic medical center implement this kind of initiative?
I think they could. To start, we needed two weeks for our data analysts to create algorithms to pull data from the EHR. Once that’s done, the monthly generation of the various report components takes a couple of hours because the algorithms are already in place. An administrative assistant for the section then collates the packets and sends the reports out to the hospitalists, which takes about a day. So if you have some staff to spread the load around and you’ve got access to your data sources—and the will to drive it—this project is very doable.
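Neither the study nor this interview shows the reporting code itself, but for readers curious what a minimal monthly pull might look like, here is a rough Python sketch. Everything in it is an assumption: the CSV extract, its column names (hospitalist, discharge_order_time, summary_hours) and the file name are hypothetical stand-ins for whatever the group’s analysts actually built against their EHR.

# Hypothetical sketch only; not the Medical College of Wisconsin implementation.
# Assumes a monthly EHR extract (CSV) with one row per discharge and made-up
# columns: hospitalist, discharge_order_time (ISO timestamp), summary_hours.
import csv
from collections import defaultdict
from datetime import datetime

def load_discharges(path):
    """Read the monthly extract into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def monthly_report(rows):
    """Per-hospitalist rates for two process metrics named in the article:
    discharge orders placed before 10 a.m. and summaries done within 24 hours."""
    totals = defaultdict(lambda: {"n": 0, "by_10am": 0, "summary_24h": 0})
    for row in rows:
        stats = totals[row["hospitalist"]]
        stats["n"] += 1
        order_time = datetime.fromisoformat(row["discharge_order_time"])
        if order_time.hour < 10:               # order placed before 10 a.m.
            stats["by_10am"] += 1
        if float(row["summary_hours"]) <= 24:  # summary finished within 24 hours
            stats["summary_24h"] += 1
    return {
        name: {
            "discharge_order_by_10am": s["by_10am"] / s["n"],
            "summary_within_24h": s["summary_24h"] / s["n"],
        }
        for name, s in totals.items()
    }

if __name__ == "__main__":
    rows = load_discharges("discharges_2021_05.csv")  # hypothetical file name
    for name, m in sorted(monthly_report(rows).items()):
        print(f"{name}: 10 a.m. orders {m['discharge_order_by_10am']:.0%}, "
              f"summaries <24h {m['summary_within_24h']:.0%}")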

Edward Doyle is Editor of Today’s Hospitalist.

Published in the May/June 2021 issue of Today’s Hospitalist
