
How do you translate evidence into practice?

Groups test different mechanisms to speed up standardization

December 2015

Published in the December 2015 issue of Today’s Hospitalist

WHAT DOES IT TAKE to get physicians to change how they practice and embrace evidence-based medicine in the process? While everyone from public health experts to payers has been asking that question for decades, the search for an answer is taking on more urgency, and the pressure is trickling down to hospitalists.

There is a growing emphasis on how to standardize medical practice as hospitals struggle to adapt to a world that increasingly pays for value, not volume. To ensure success, hospitals are searching for ways to standardize everything from core-measure performance and order sets to patient-safety practices.

As they delve into standardization, hospitals are realizing just how localized the practice of medicine can be. As one hospitalist quality expert lamented when asked about the infrastructure hospitals have in place to drive evidence-based practice: “I think administrators believe we have a mechanism, but none truly exists.”

“Conversation changes the game, and a pop-up isn’t conversation.” 

Hyung Cho, MD
Mount Sinai Hospital

David Grace, MD, the senior medical officer of hospital medicine at Schumacher Group, a large national physician staffing company, puts it simply: “A standard, consistent level of evidence-based medicine is something other industries have figured out, but it’s something that we in health care struggle with.”

Hospital administrators are putting pressure on physicians in general, and hospitalists in particular, to make changes. “Hospitals and hospital systems are becoming more sophisticated in their analysis of physicians than they were historically,” Dr. Grace explains. “They are asking us not just if we have order sets, but they want to see those sets. It’s not good enough any more to say we can deliver results. They want to know how we make that happen.”

The reality is that translating evidence into practice is painfully slow, with experts estimating that it takes nearly two decades to move original research into routine practice. Speeding that timeline up takes money, labor and an extensive infrastructure.

The big-system approach
When a large health care system like Banner Health decides to standardize practice, it has an advantage in terms of the resources it can muster. The system has taken on the challenge of standardizing practice by creating nearly 20 physician-led clinical consensus groups in specific disciplines. Over the last five years, those groups have tackled initiatives like handoffs and PPI use by defining expected clinical practice, designing interventions, such as order sets, new workflows and decision-support tools, to reliably support that practice, and then implementing them across the health system.

Cheryl O’Malley, MD, is co-leader of Banner’s hospital medicine clinical consensus group, which this year is taking on the evidence-based use of paracentesis to evaluate ascites for possible spontaneous bacterial peritonitis. She says the topic came partly from Banner’s hepatologists, who were looking at unnecessary treatment delays and length of stay.

The clinical consensus group plans to design new order sets to “make it easier to perform paracentesis at the right time and then initiate appropriate treatment if peritonitis is identified,” says Dr. O’Malley, who is also internal medicine program director at University of Arizona College of Medicine-Phoenix.

Other recent group projects include changing chest pain order sets to reflect new guidelines on the number of troponins needed to rule out a myocardial infarction. “We realized that the order sets created at the very beginning of our EMR were out of date,” she notes. Data are pending, but the changes for chest pain could have a significant impact on observation length of stay and costs.

While Banner’s clinical consensus groups can effectively standardize care, there is a downside. The groups rely on a rigorous process of reaching out to many other areas of the health system for project design and implementation. That can make the process frustratingly slow.

“In facilities that are part of a big system, physicians can identify a problem where they would like to make changes, but they feel there is a big barrier to getting those changes made,” Dr. O’Malley explains.

At the same time, Banner’s deliberative approach usually helps get significant buy-in from clinical staff. “This structured process takes longer,” Dr. O’Malley says, “but it ultimately leads to more discussion and more robust project development. Your efforts will be more sustainable in the end and you can leverage the resources in the system.”

And because the consensus groups are sanctioned by the health system’s top administrators, the process “is a way to develop best practices and have the work that’s done at one facility be reviewed by others and, ultimately, have an impact on the entire health system.”

Banner Health has invested time and resources in developing a process for tracking and reporting outcomes from each implemented clinical practice. As Dr. O’Malley points out, this helps ensure timely adjustment when processes don’t go as planned or expected outcomes aren’t achieved. “It will also be a way,” she says, “to better quantify the value gained by investing in this process.”

Focusing on high-value care
Hospitals and health systems are finding other innovative ways to standardize practice. Christopher Moriates, MD, a hospitalist at the University of California, San Francisco (UCSF), says a good place to start is by looking at the lists that the Choosing Wisely initiative has compiled of practices physicians should avoid.

“Look and see what’s relevant to you,” says Dr. Moriates. When UCSF’s hospitalists reviewed the list from the Society of Hospital Medicine, he says, “We weren’t doing all five things poorly. There were things like Foley catheters where we were already doing fairly well.”

To promote evidence-based practice, UCSF’s hospitalists rely on a mechanism known as a high-value care (HVC) program that includes hospitalists, residents, pharmacists and administrators. The program seeks to first identify and then eliminate waste that bumps up costs but doesn’t improve patient outcomes.

One project aimed to reduce proton pump inhibitor use. Another, which was written up in the Sept. 23, 2013, JAMA Internal Medicine, worked to transition inpatients from nebulizers to metered-dose inhalers, cutting nebulizer use by more than 50%. That project, Dr. Moriates points out, was suggested by an administrator who noted how much money the division was spending on nebulizers.

While the evidence behind the initiatives must be good, messaging is also key. “To get physicians to care, you have to make it about the patients in front of them,” says Dr. Moriates. “If we talk about national guidelines or health care costs, it’s hard to capture people’s hearts and minds and get them to go along. But if you say, ‘Most of our patients leave the hospital and use inhalers, so we need to provide them better education on using them,’ it becomes about the patient.”

Because one of the goals of the high-value care program is to save money, administrators provide some resources to help projects gain traction.

“We need physicians to generate the ideas and the will to change, but we also need the institution to support system changes,” points out Dr. Moriates, who is the program’s physician leader. UCSF funds that leader with 20% dedicated time and provides funding for a program administrator.

Scoring an “easy win”
The hospitalists at Baltimore’s Johns Hopkins Hospital take a similar approach to standardizing care by using a high-value care committee. Amit Pahwa, MD, a med-peds hospitalist who leads that committee, says that convincing people to conform with evidence and change long-ingrained practices is more about getting them to accept that theirs is “a culture of overuse” than about teaching them facts or even changing EMR order sets.

To start, his committee compiled a list of between 40 and 50 practices they thought were of low value. The group then winnowed that list down to two to show “proof of concept” and notch an “early win.”

One initiative focused on telemetry, while the other, which tackled unnecessary serum folate orders, addressed a relatively low-ticket item. But “it was important because it gave us a team,” says Dr. Pahwa. “When people saw we were doing something cool, they wanted to be part of it.”

But as the group learned, eliminating hospital processes that are deemed wasteful may produce pushback. Dr. Pahwa points out that radiology and pathology lab techs may be out of a job if physicians order fewer tests and images. “We have to be really careful about this,” he says. “I don’t know how much these efforts will affect radiology in the end, but there’s a fear.”

In addition, eliminating unnecessary orders is not always a matter of convincing doctors to just say “no.” There are plenty of “unnecessary” tests and imaging embedded in order sets, so not ordering may be harder than ordering. For instance, Dr. Pahwa says, “urine analysis is embedded in the fever work-up order set, so we lose some ability to remind people during CPOE not to test for UTI unless patients are exhibiting symptoms.”

Order sets, clinical pathways, CPOE
That experience at Hopkins illustrates an important lesson: While order sets are an essential part of any evidence-based infrastructure, they come with their own problems. It takes time and expertise to create such sets, and they have to be continually updated to reflect changing evidence.

Brian W. Kendall, MD, a former hospitalist medical director who is chief quality and safety officer for the Regional Medical Center (RMC) of Orangeburg and Calhoun Counties, S.C., says that his hospital, which isn’t an academic center, doesn’t have the resources to produce “multiple, evidence-based homegrown order sets tailored to our individual culture.”

Instead, his hospital buys order sets from a national vendor. Although those sets are good, he says, it has been a challenge to translate the old paper-based order sets into the EMR.

“Like many places, our IT resources are limited. If physicians have to click through four or five additional boxes to accomplish the same task that required one check mark on a paper order set, it is a barrier to adoption,” Dr. Kendall points out. His hospital currently has between 15 and 20 medical order sets, “and the level of adoption for each order set varies widely. If it’s not friendly to workflow, doctors won’t use it.”

Misunderstandings about order sets are another problem. Dr. Kendall found that hospitalists weren’t using a special order set designed for heart failure patients, choosing instead to use a generic admission order set for all their patients.

“They didn’t know that when they use the CHF order set, a host of orders get fired in the background alerting appropriate disciplines, like case management, that this is a heart failure patient they wouldn’t otherwise know was in the hospital,” he points out. “Hospitalists thought the care plan would be the same, whether the individual clinical orders originated from a generic or a diagnosis-specific order set.”

To clear up that misunderstanding, Dr. Kendall relied on educating the hospitalists. He also directly communicated about the heart failure order set with admitting physicians.

Face-to-face communication
Dr. Kendall’s experience shows that when it comes to promoting evidence-based practice, nothing may work as well as face-to-face communication.

At New York City’s Mount Sinai Hospital, providers tweaked their EMR so that a little green dot now shows up alongside the name of every patient with a Foley catheter, alerting doctors and nurses that the catheter is in place. While the dot by itself doesn’t effect change, the hospital paired that strategy with human interaction. The medical director of each floor uses the dot as a prompt to ask the patient’s provider during daily multidisciplinary rounds about the catheter and whether it’s really needed.

“Conversation changes the game, and a pop-up isn’t conversation,” says Hyung Cho, MD, director of quality and patient safety in the hospital medicine division who chairs Mount Sinai’s HVC committee. Dr. Cho and his committee led a project called “Lose the Tube” that encourages physicians to remove urinary catheters as soon as possible to prevent catheter-associated UTIs. “Face-to-face works best.”

A University of Chicago evidence-to-practice project promoting better sleep in the hospital revealed a similar lesson. A computer “nudge” to stop regularly waking patients up at night to take vitals has to be combined with face-to-face conversations and education explaining why and “encouraging people to make the right choice,” says Vineet Arora, MD, a University of Chicago hospitalist.

The “nudge” consists of a change to the EHR so the default for labs and vitals is not always in the middle of the night, Dr. Arora explains. “Now, clinicians have to think about whether they want to wake a patient up for those.”

As for the education and face-to-face conversations, those occur at shift rounds and routine conferences. “Education plus the nudge helps more than the nudge alone,” she points out. Plus, “we have added the topic to the standard work of the nursing huddle. If nurses think a patient doesn’t need labs or vitals, they can recommend that to the doctors.”
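The nudge Dr. Arora describes is a change to order defaults. As a purely hypothetical sketch (the function and field names here are illustrative, not any real EHR's API), the idea is to replace a silent middle-of-the-night default with no default at all, so the clinician has to make an active choice:

```python
# Hypothetical sketch of a default-time "nudge" for routine lab orders.
# Before the change, a routine lab quietly defaulted to a 4 a.m. draw;
# after, the time field starts empty and an explicit choice is required.

def build_lab_order(test_name, draw_time=None):
    """Return a lab order; draw_time must now be chosen explicitly."""
    if draw_time is None:
        # No silent 4 a.m. default: make the clinician decide.
        raise ValueError(f"Choose a draw time for {test_name}; "
                         "routine labs no longer default to overnight.")
    return {"test": test_name, "draw_time": draw_time}

# A clinician who wants a daytime draw now states that intent explicitly.
order = build_lab_order("CBC", draw_time="07:00")
```

The point of the design is not to block overnight orders, only to remove the path of least resistance that produced them automatically.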

Dashboards and scorecards
When hospitalists at Geisinger Medical Center in Danville, Pa., want to promote evidence-based practice, they start by providing the evidence-based rationale for change. They then leverage their group’s electronic dashboard to help measure and drive the change they’re looking for.

To reduce unnecessary red-blood-cell transfusions, for example, the group first made the case for why fewer transfusions were in patients’ best interests. They then shared group-level, transfusion-ordering data.

“This allowed our providers to graphically see the progress we were making,” explains Benjamin A. Hohmuth, MD, director of the hospital medicine department who spearheaded the transfusion reduction program. “It made the improvement very tangible.”

The group also ran provider-level reports to identify outliers who had not yet changed their practice. “We found a couple of people who needed a little more convincing,” says Dr. Hohmuth. For those physicians, one-on-one meetings proved to be very effective.
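The provider-level reports Dr. Hohmuth describes amount to comparing each clinician's rate against the group. A minimal sketch, with entirely made-up data and an arbitrary outlier margin (the article does not describe Geisinger's actual methodology), might look like this:

```python
# Illustrative sketch of a provider-level transfusion report: compute
# each provider's transfusion rate and flag anyone more than a chosen
# margin above the group mean. All data and thresholds are invented.

def transfusion_report(cases, margin=0.10):
    """cases: list of (provider, transfused: bool) encounters.
    Returns (per-provider rates, sorted list of outlier providers)."""
    totals, transfused = {}, {}
    for provider, gave_blood in cases:
        totals[provider] = totals.get(provider, 0) + 1
        transfused[provider] = transfused.get(provider, 0) + int(gave_blood)
    rates = {p: transfused[p] / totals[p] for p in totals}
    group_mean = sum(rates.values()) / len(rates)
    outliers = sorted(p for p, r in rates.items() if r > group_mean + margin)
    return rates, outliers

cases = [("A", True), ("A", False), ("A", False), ("A", False),   # 25%
         ("B", True), ("B", True), ("B", True), ("B", False),     # 75%
         ("C", False), ("C", False), ("C", True), ("C", False)]   # 25%
rates, outliers = transfusion_report(cases)  # outliers: ["B"]
```

In practice such rates would be risk-adjusted; the sketch only shows the reporting shape that makes one-on-one conversations possible.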

“The initiatives that have been most successful in our group have not relied on incentives,” he adds. “Education along with performance feedback are often much more powerful than using financial carrots to drive change.”

The next step was to hardwire changes across service lines and hospitals within the Geisinger system. That included making it harder, but not impossible, in the CPOE system to order multiple units at one time for non-hemorrhaging patients. Program leaders also required an indication to be selected for each transfusion, encouraging a restrictive approach. As a result, says Dr. Hohmuth, “the health system saved more than $1 million in avoided direct costs in the first year after the program was put in place.”
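The logic of that kind of CPOE rule can be sketched in a few lines. The indication list and messages below are hypothetical, chosen only to illustrate the pattern the article describes: a required indication plus friction, not a hard stop, on multi-unit orders:

```python
# Hypothetical sketch of a CPOE validation rule for transfusion orders:
# every order needs an indication, and multi-unit orders for patients
# who aren't hemorrhaging get flagged back to the ordering clinician.

RESTRICTIVE_INDICATIONS = {"hgb<7", "symptomatic_anemia", "hemorrhage"}

def validate_transfusion_order(units, indication):
    """Return a list of problems with the order; an empty list passes."""
    problems = []
    if indication not in RESTRICTIVE_INDICATIONS:
        problems.append("select an evidence-based indication")
    if units > 1 and indication != "hemorrhage":
        # Harder, but not impossible: the clinician sees this and decides.
        problems.append("order one unit at a time and reassess")
    return problems

problems = validate_transfusion_order(2, "hgb<7")  # flags the multi-unit order
```

The design choice worth noting is that the rule returns warnings rather than refusing the order outright, matching the "harder but not impossible" approach.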

Dr. Cho from Mount Sinai similarly uses dashboards to drive changes in performance. “We have faculty meetings every month, and we have each of our hospitalists’ names so everyone can see everyone’s scores” on things like length of stay, the discharge-before-noon rate and patient satisfaction.

“Doctors are extremely competitive,” Dr. Cho adds. “If you are the outlier, you are going to try to do better because everyone knows about it. That is the power of transparency.”

Dealing with outliers who don’t or won’t change their practice is a common dilemma. “Unfortunately,” says Schumacher’s Dr. Grace, “there are providers who simply don’t care. For them, it’s about doing a good enough job so they don’t get into trouble. We have to recognize that we have a lot of those providers out there, not just individuals but also systems.”

Instead of trying to reach those clinicians, he advocates ignoring them for now and focusing instead on those who do care. According to Dr. Grace, an essential part of building an evidence-based infrastructure is “identifying, cultivating and rewarding champions.”

Deborah Gesensway is a freelance writer who follows U.S. health care from Toronto.

Standardizing across systems

AS HOSPITALISTS IN LARGE ORGANIZATIONS try to standardize evidence-based practice, they are encountering a major obstacle: transporting a single project to multiple sites.

Spreading standardized, evidence-based medicine from one floor, unit or hospital to the rest of a system is not as simple as tweaking the one system or EMR that everyone is using (if everyone is using the same EMR). That’s because even in integrated systems, both software and staffing vary greatly from site to site.

Take the “Lose the Tube” project at New York’s Mount Sinai Hospital, which encourages physicians to take out catheters to prevent catheter-associated UTIs. The project relies on daily multidisciplinary rounds, as well as on unit medical directors or nurse managers “who review a sheet every day and ask if that Foley is still necessary,” explains Andrew Dunn, MD, MPH, chief of the hospital medicine division.

The problem is that while all of the seven hospitals in the Mount Sinai system have multidisciplinary rounds, the rounds vary across sites in terms of their structure, including the presence of staff nurses and attendings and whether rounds occur at the bedside or in a conference room. (Mount Sinai merged with another hospital system two years ago.)

“It sounds simple,” Dr. Dunn says, “but it isn’t. The details matter. Everything is local: staffing models, census, protocols. If you are going to work toward common goals, you have to focus on these things first.” Dr. Dunn is in the process of figuring out how to customize the project to each site.

At Baltimore’s Johns Hopkins, meanwhile, hospitalists are working to build a basic infrastructure so the different hospitals in that system can eventually share knowledge and evidence-based practices. But they’re learning how complex that job is.

The first step, explains Henry Michtalik, MD, MPH, MHS, a hospitalist and assistant professor, is defining which metrics to track. For instance, individual hospitals even within one health system don’t use the same definition of something as basic as length of stay.

Some measure “absolute length of stay,” others track “observed over expected” length of stay, and still others adjust it in different ways. (Dr. Michtalik and his colleagues presented an abstract on their efforts to standardize those definitions at this year’s Society of Hospital Medicine annual meeting.)

“A first step is to develop consistent performance metrics, so you can compare apples to apples,” he says. “You have to do a lot of groundwork to harmonize that.” He has also been working on an “inter-hospital collaborative program” to allow different hospitals in the system to share innovations across sites. This includes a common electronic dashboard and monthly “clinical community” meetings of hospitalists and support staff across all sites.

“There are a lot of commonalities and protocols, and great work already being done at individual sites that hasn’t been disseminated across the system,” Dr. Michtalik explains. By deciding how items will be measured and then developing infrastructure, he adds, “you will have group learning and align incentives while developing local solutions.”

And that is the basic conundrum of evidence-based practice: Every site is different. David Grace, MD, senior medical officer of hospital medicine at Schumacher Group, a large national physician staffing company, puts it this way: “We need to standardize processes across all our programs to drive positive outcomes, but we need to leave enough flexibility so programs can be adopted at the facility level.”

For one Schumacher project in the works, called the “Emergency Medicine to Hospital Medicine Playbook,” clinicians have listed the top 20 diagnoses that patients are admitted for as well as the “three to 10 things, like lab and radiology tests” that, according to evidence, should be done before the ED calls the hospitalists to admit a patient. But “we encourage our medical directors to look at their local facility and adjust that list based on local factors,” Dr. Grace says.

“We need at least to provide the standardized framework for evidence-based care,” he notes, “then defer to our medical directors to fine-tune it to meet the needs of their environment.”

“Quality is so local,” adds Dr. Dunn. “It’s local from one medicine floor to another and one hospital to another, so taking a project that has worked well in one place may not work well somewhere else. That’s the challenge.”