
How ready are APPs for hospital medicine practice?

A new tool can assess their clinical strengths and areas to improve

May 2022

WITH HOSPITALIST PROGRAMS everywhere giving advanced practice providers (APPs) more clinical autonomy, groups are struggling to find ways to assess APPs’ clinical skills. What additional training (and in what areas) may some APPs need to practice hospital medicine?

One big challenge is the broad range of experience and training that APPs bring to hospitalist groups. With new graduates, training is variable and often doesn’t include any focus on hospital medicine. A second challenge: There’s no widely accepted method to measure their skills in specific practice domains. While some programs have relied on checklists, few if any of those have been validated by research.

The hospital medicine division at Baltimore’s Johns Hopkins Bayview Medical Center hopes to change that with a new instrument to assess and nurture APP performance. In a write-up in the January 2022 Journal of Hospital Medicine, Johns Hopkins Bayview researchers report that the instrument they devised is relatively easy to use: supervising clinicians can rate an APP in under 11 minutes. Moreover, their data show that it identifies both strengths and areas that need work.

Competencies and milestones
The study points out that while more than 80% of hospitalist groups employ APPs, no standard method has yet existed to assess their practice readiness.

“We have to use this tool longitudinally.”


Amteshwar Singh, MD, MEHP
Johns Hopkins Bayview Medical Center

That’s due in large part, says lead author Amteshwar Singh, MD, MEHP, director of education in Johns Hopkins Bayview’s hospital medicine division, to the enormous heterogeneity in how hospitalist groups deploy APPs. That may be changing, in part because of covid. “Over the last few years,” Dr. Singh explains, “there has been an uptake of APPs, and their numbers are growing. The talent they bring to a health care system is being recognized more and more.”

The tool he and his colleagues created is known as the Cardin Hospitalist Advanced Practice Provider-Readiness Assessment, or CHAPP-RA. Supervising clinicians use it to rate APPs on 17 items, each covering a competency such as history-taking, the physical exam, medication reconciliation or patient interviewing.

To rate an APP, physicians use a nine-point scale that runs from novice to expert on each of the 17 measures. Dr. Singh says each point represents a particular milestone of proficiency within that measure. The scale mimics the ACGME milestone format that physicians already use when assessing residents, a familiarity the researchers hoped would make the tool easier for teaching physicians to use. Robust research, he adds, supports the use of the format.

For the study, researchers had 30 physicians use the tool to assess 11 APPs. The raters evaluated APPs after working three consecutive shifts supervising them.

Ease of use
To determine if the tool worked, researchers also asked those physicians to agree or disagree with two global statements for each APP they rated: “APP is ready to practice independently” and “I would feel comfortable having this APP care for my loved ones.” When the authors compared supervising physicians’ tool ratings to their answers to these two statements, they found consistency. That told them that CHAPP-RA was getting it right.

The authors found other good news: Physicians rating APPs spent an average of 10.5 minutes on the tool to review a single APP. While physicians didn’t always complete all 17 category ratings for each APP, they consistently scored them on critical sections like assessment/plan of care, documentation/written communication, time management/reliability, and collaboration with a multidisciplinary team.

Two items on the competency list were not as widely rated: identification and management of the acutely ill and history-taking. It turns out that APPs in the study didn’t always have the chance to admit a new patient and take a history or manage a decompensating patient while being assessed.

Not for one-time use
Dr. Singh points out that the hospital medicine group at Johns Hopkins Bayview has for years relied on a fellowship program that has APPs rotating through different parts of the hospital, shadowing more experienced NPs and PAs and then working alongside physicians. New hires typically work in the ED for admissions, the wards for rounding and progression of care, a chemical dependency unit, and the pulmonary service.

“When we hire APPs, they go through this training to get a flavor of the sites they work well in,” Dr. Singh says. “Eventually, they progress to taking on more senior responsibilities.” While the APP fellowship is designed to last 12 months, program timing is flexible. “If we see an APP making a lot of progress in a shorter time, we can speed up the process,” he says. For those who need more time to gain a certain degree of autonomy, “we can provide more learning experiences during the fellowship.”

When using CHAPP-RA, Dr. Singh says that kind of flexibility is just as important. “The tool should provide feedback both to the rater and the person being assessed to support their education and drive the learning process,” he explains. “We absolutely cannot use this as a single snapshot in time.”

He and his program oppose the idea of using the instrument as a one-and-done measure. “We have to use this tool longitudinally,” Dr. Singh says. “We need to continually reassess APPs and identify action items based on tool feedback. We don’t want to use a time-based assessment; we want a criterion-referenced assessment instead.”

Data should be “messy”
Not surprisingly, the tool found big differences in performance and readiness between new APPs and those with one year of experience or more. According to Dr. Singh, that result probably indicates that the tool is most helpful in assessing novice APPs. He suggests that hospitalist programs use it to individualize onboarding for APPs right out of training. By assessing their skills, groups can then determine how much supervision APPs need and create individualized goals.

He hopes groups using the tool don’t stop there. “See how each APP subgroup”—novice, mid-career and expert—”progresses over time in each domain,” he says. “Look at differences in six months or a year.”

Because the academic center tested the tool in late summer 2020, at the height of the pandemic, its use in the program since then has been sporadic, with APPs using it mainly for feedback. The medical center is still working toward the goal of developing the tool for formal assessment.

The challenge remains to come up with a thoughtful strategy that helps APPs improve their clinical skills, rather than just landing on a quick score.

“These are not numerical ratings,” Dr. Singh points out, “and the data should remain complex and messy. The results should prompt a discussion not just between the physician and the APP being assessed, but among stakeholders of an organization. We need to ask how we can use this tool to help this workforce optimize their practice.”

Edward Doyle is Editor of Today’s Hospitalist.

Published in the May/June 2022 issue of Today’s Hospitalist

2 Comments
Peter Braverman (Facebook), July 2022

My housestaff team was just talking about this last week: How to evaluate ANY “provider”/doctor. Our Sub-I said it oughta be as objective & reliable as Major League Baseball statistics: batting average, RBIs, errors, ERA, etc. He had interesting suggestions: # of great plays or catches (correct diagnoses? well-managed cases) divided by # of errors, or misses, or iatrogenic mishaps. Or # of “saves” divided by # of losses. Yes, very difficult to do/measure, but not impossible. The alternative is subjective “medical Yelp” reviews, like Healthgrades: “She was such a nice doctor!”

Denise Gafanhao (Facebook), June 2022

It would be interesting to see how the physicians score on the tool they developed to rate the APPs. In my experience, there is great variation in clinical expertise with hospital physicians as well.