
Who owns what in patient metrics?

With team-based care, how do you attribute individual performance?

October 2018

IT’S NO SECRET that doctors face growing pressure over how they perform on measures like readmissions and patient satisfaction. That makes it increasingly important to find fair ways to attribute performance to individual providers—particularly if their compensation is based, in part, on how well they individually do.

“Attribution is a particular problem for inpatient doctors because we have a team-based care model,” points out Carrie A. Herzke, MD, MBA, clinical director of the hospitalist program at Baltimore’s Johns Hopkins Hospital. “If we’re going to be held responsible for value and quality metrics, hospitalists need to be recognized—or held accountable—for that part of a patient’s hospitalization that they own.”

Since 2010, the hospitalists at Johns Hopkins have tackled that problem. They've implemented a method of attributing patient-level metrics to rotating physicians, detailed in a study—with Dr. Herzke as lead author—published in the July issue of the Journal of Hospital Medicine.

“Hospitalists need to be recognized—or held accountable—for that part of a patient’s hospitalization that they own.”

~ Carrie A. Herzke, MD, MBA
Johns Hopkins Hospital

The hospital historically relied on administrative data to assign an entire hospitalization to the discharging attending. Instead, researchers working with IT colleagues devised a system that utilizes billing data and allows for more common-sense attribution of nine patient-level metrics to individual providers.

The program assigns attribution for those metrics in three separate categories. One is assigned to the admitting physician and includes appropriate VTE prophylaxis. Another, credited to the discharging doctor, includes percentage of discharges per day, readmissions (observed to expected), time to signing discharge summaries and percentage of patients discharged before 3 p.m.

The third bucket is for what the researchers call “provider day weighted.” That’s where all the physicians involved in a patient’s care “share” credit for any given metric based on the amount of time they individually spent caring for that patient. Shared measures in the research include length of stay (observed over expected), communication with the primary care physician, depth of coding and patient satisfaction.

Say three doctors treat one patient over four days. Dr. No. 1 on day No. 1 gets 100% of the credit (or is dinged) as the admitting physician for appropriate VTE prophylaxis, while Dr. No. 2 sees the patient for the next two days, coding two subsequent E/M services. Dr. No. 3 sees the patient the last day and gets credited with 100% of the discharge measures.

For the provider weighted day measures, Dr. No. 2 receives 50% of the credit, while Drs. No. 1 and 3 each get 25%. All the individual performance data are reported to the hospitalists quarterly and appear on each physician’s electronic dashboard.
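The provider-day-weighted split described above—each physician's share of credit proportional to the days he or she cared for the patient—can be sketched in a few lines of code. This is a minimal illustration of the arithmetic only; the function name and data shape are hypothetical and not drawn from the Hopkins system itself.

```python
from collections import Counter

def attribute_shared_metric(daily_providers):
    """Split credit for a provider-day-weighted metric.

    daily_providers: one provider ID per hospital day, indicating
    who cared for the patient that day. Returns each provider's
    fractional share of the credit for shared metrics such as
    length of stay or patient satisfaction.
    """
    days_per_provider = Counter(daily_providers)
    total_days = sum(days_per_provider.values())
    return {doc: n / total_days for doc, n in days_per_provider.items()}

# The article's worked example: three doctors over four days.
shares = attribute_shared_metric(["dr1", "dr2", "dr2", "dr3"])
# dr2 covered 2 of 4 days, so receives 50%; dr1 and dr3 each receive 25%.
```

Admitting- and discharging-physician metrics, by contrast, need no weighting: they are assigned 100% to the single doctor who performed that role.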

“All these data are tied to individual bonuses, and we have pretty aggressive targets,” Dr. Herzke says. “There’s wide variation in how much doctors make from this, in part due to how clinically active they are and in part how they do.” She spoke with Today’s Hospitalist.

Your study points out that “the computational requirements of our methodology are not trivial.” If that was true for Johns Hopkins with all its resources, how likely is it that smaller hospitals could set up something similar?
The initial set-up did take several months. Part of the challenge was locating where all the data we needed come from because we had multiple—and sometimes conflicting—data sources.

We now pay a portion of someone’s time in our analytic group to collect, tabulate and analyze these data. As a large system, we may have more of an ability to pay for that dedicated time. But I suspect that most hospitals do have access to provider-level data and that those data at smaller hospitals may actually be more centralized and easier to access than they were for us.

How did you pick the metrics you used?
We wanted to include metrics that hospitalists have at least some control over. But we also wanted metrics we knew were very important for the hospital so it would fund our ability to collect the data.

Take length of stay, for instance: Hospitalists appreciate that they have partial control over that in terms of how efficiently they deliver care and document patients’ expected length of stay. At the same time, this is a huge metric for the hospital and one it follows closely, so hospitalists doing a good job with length of stay should at least get some credit.

One challenge with metrics is that if you do poorly, you hear about it—but you don’t hear about it if you do well. We wanted to include metrics where hospitalists who do well would be able to benefit directly from their own efforts.

We also wanted to limit the risk of perverse incentives. Say you have a patient who’s been in the hospital for 120 days, which at an academic hospital unfortunately happens. If I as the discharging attending am going to be held responsible for that entire length of stay and I’m going off service tomorrow, it would be very tempting to hold that patient another day and let someone else take that hit.

I’m not sure you can fully eliminate the risk of perverse incentives with performance metrics, but we did consider that carefully in designing this methodology. For our doctors, this system of attribution feels a lot more fair.

Have you tweaked the measures since 2010?
We’ve changed how we weight different measures over the years. For instance, we initially had room to improve on appropriate VTE prophylaxis. But as our doctors have done very well on that measure, we haven’t needed to emphasize that as much.

Also, changes in IT systems have affected which metrics are readily available. And we changed the depth-of-coding metric to response to coding queries.

We also appreciate that there are metrics providers don’t have as much control over. We all agree, for instance, that the way in which we currently measure patient satisfaction is challenging.

So we don’t place as much weight on that because we know the data aren’t perfect. But our message is that it’s still an important measure, and we do want hospitalists to be attuned to patient satisfaction scores.

Phyllis Maguire is Executive Editor of Today’s Hospitalist.

Published in the October 2018 issue of Today’s Hospitalist
Ray Nowaczyk, DO
November 2018 3:35 pm

This may work in an academic center, but for community hospitals or programs that utilize daytime rounders to cover nights, this is not the way to apply metrics. As a dedicated nocturnist, I see that a large majority of the plan of treatment is begun with the initial admission, when most tests, labs, radiologic studies and other procedures are ordered. That process continues at the beginning of the treatment.