I RECENTLY ATTENDED one of our monthly hospitalist meetings, which featured a very professional PowerPoint presentation on “Quality and Performance.” The slides displayed a parade of acronyms signifying performance metrics: LOS, GLOS, MIPS, HAIs, ASM, IPS and PEx, along with mortality index.
The presentation emphasized length of stay as a metric for both our group and for individual physicians. We all know about the CMS’ DRG reimbursement system in which hospitals benefit financially from the shortest length of stay for any specific diagnosis—a payment method that rewards individual hospitalists who have the shortest average LOS. While other metrics go into evaluating performance, length of stay seems paramount in the world of DRGs.
Most of the people reviewing patient charts to collect these data are hospital personnel from billing and coding, risk management, infection prevention and pharmacy (for antibiotic stewardship), to name a few. Practically none are physicians.
The managers, both local and corporate, who evaluate these collected data and render decisions based on them have titles such as CMO, CFO, CEO and COO. They may also include directors of hospitalist programs and their corporate supervisors. Most of these individuals, including the physicians, have MBAs and tend to make decisions from the standpoint of business.
Apparently, financial efficiency is the key element in hospitalist performance. Virtually all these performance parameters have one thing in common: By meeting certain standards, the hospital is assured better reimbursement and avoids being penalized for complications such as HAIs.
The need for balance
Financial success may be relatively easy to assess. But I believe these data miss important quality performance issues. I’d argue that it may be unfair to use financial success as the main benchmark for professional excellence.
I’d also argue that only physicians can judge each other’s quality of care. As hospitalists, we do this every day by interacting and communicating with other physicians as well as viewing their work product in the EMR. We also choose primary care physicians or consultants for family members or for ourselves based on personal observations or recommendations from trusted colleagues. We don’t make those choices according to signifiers of a physician’s financial success, such as the make and model of his or her car, but on character and intellectual and clinical ability.
When it comes to professional performance, we must balance financial responsibility with the quality of outcomes in diagnosing and treating individual patients. Only then can we accurately judge a hospitalist’s cost efficiency.
Here’s my proposal: Hospitalists as a group should take it upon themselves to improve care quality by periodically reviewing each other’s charts. I base this idea on my habit of routinely reviewing the hospital records of all my patients when my weekly shift begins.
In doing so, I’ve discovered missed diagnoses, including unrecognized severe hypothyroidism, a blood culture isolate misread as a contaminant, and the failure to document and follow up on a radiologic finding suspicious for breast cancer. While I made sure those problems were corrected, I doubt that nonmedical personnel reviewing our charts for LOS, MIPS or patient satisfaction would detect them.
I know it’s easy to overlook or not recognize important findings due to the hectic nature of our practice. We’re all facing increased patient volumes, constant interruptions and pressure to discharge.
And the EMR may actually exacerbate these problems. In the rush to finish a shift on time, we tend to cut and paste information from the previous hospitalist’s H&P or progress note, short-circuiting critical thinking and setting us up for anchoring errors.
How it would work
I propose naming this metric the peer-to-peer retrospective review, although I’m not sure the acronym PPRR will catch on.
Once every other month, hospitalists would review three or four charts from different colleagues, a process that should take about an hour. Besides misdiagnoses and cut-and-paste shortcuts, we could check, for example, whether colleagues are de-escalating antibiotics after a few days in patients with community-acquired pneumonia.
We would have to respect anonymity and confidentiality and make sure the reviews are done collegially. I’m sure some people would complain about the time required, but it’s hard to argue against a measure designed to improve quality.
Nonphysician reviewers could collect data on one metric that, to some extent, denotes quality of care: 30-day readmission rates for a discharging hospitalist. Often, there may be a very good explanation for a readmission, such as poor patient compliance. But a hospitalist with a low LOS who also has a high readmission rate might signal a performance problem that needs attention.
By periodically and anonymously reviewing each other’s charts, we would not only detect unidentified medical problems, but reduce liability issues. Using the Hawthorne effect—which holds that people will modify their behavior because they are being observed—we can find the right balance of financial and quality outcomes for the benefit of our patients and our hospitals.
Stephen L. Green, MD, is a locum hospitalist who maintains a telemedicine infectious diseases consulting practice. He previously practiced for more than 30 years as a primary care internist and infectious diseases specialist. Dr. Green can be reached at firstname.lastname@example.org.
Published in the January/February 2023 issue of Today’s Hospitalist