When QI research brings good news, can you believe the results?

Published in the March 2005 issue of Today’s Hospitalist

For researchers interested in quality improvement, a study published in 2001 confirmed the instincts of many: Using critical pathways to help manage surgical patients reduced length of stay, giving hospitals one more tool to improve patient care.

Problems surfaced later, however, when researchers compared those length-of-stay trends with trends at other hospitals in the area. Hospitals that did not use critical pathways, it turned out, realized even bigger reductions in length of stay for similar patients. While critical pathways had seemed like a no-brainer, they had little to no impact on length of stay.

While the above example, which is a true story, may sound unusual, it reflects a serious issue in the burgeoning field of quality improvement. According to Kaveh G. Shojania, MD, a hospitalist and assistant professor of medicine at the University of Ottawa in Ontario, Canada, the story illustrates how deeply entrenched assumptions, coupled with a flawed trial design, can derail the best-intentioned quality improvement study.

In the January/February issue of Health Affairs, Dr. Shojania dissects these and other problems that plague the growing field of quality improvement research. Along with co-author Jeremy M. Grimshaw, MD, he explains that too many quality improvement trials don’t use the same rigorous methods as clinical studies. As a result, their conclusions are often weak.

While this is certainly a problem for researchers, Dr. Shojania says it can affect hospitalists, who often join the effort to improve patient care. For one, so little is known about what works in the quality improvement arena that physicians implementing their own initiatives need as much information as possible to guide their efforts.

But perhaps more importantly, he adds, by reporting inflated and unfounded results, these quality improvement studies are setting the expectations of both physicians and administrators unrealistically high.

Dr. Shojania is no stranger to the quality improvement movement. Working with Robert M. Wachter, MD, one of the founders of the hospitalist movement, Dr. Shojania co-authored “Internal Bleeding: The Truth Behind America’s Terrifying Epidemic of Medical Mistakes,” which was released last year. He is also deputy editor of Web M&M, a Web-based patient safety journal published by the Agency for Healthcare Research and Quality.

Dr. Shojania talked to Today’s Hospitalist about the challenges faced by quality improvement researchers. He also discussed some specific strategies to avoid problems in quality improvement initiatives.

You use the above story about research into critical pathways as a prime example of how quality improvement can be derailed. What went wrong in that case?

This study, which came out of Brigham and Women’s Hospital in Boston, was very elegant and more sophisticated than most quality improvement research. It was a before-and-after study that examined several critical pathways used with surgical patients. One of the most important findings was a quite noticeable reduction in length of stay.

Unlike many other quality improvement studies, these researchers went one step further and re-examined their results a year later to make sure the effect was not fleeting. They found that the effect was even greater.

They then looked at the outcomes for several other hospitals in the Boston area that hadn’t done anything special to reduce length of stay in similar patients. They found that in some cases, those other hospitals had achieved an even larger reduction in length of stay.

The study took place at the peak of managed care in the Boston area, and there was so much economic pressure to get people out of the hospital that the critical pathway turned out to be a fifth wheel. Critical pathways weren’t really doing anything, but if you studied them in a superficial way, it would seem like they had produced dramatic effects.

How common are these types of problems in quality improvement studies?

I’m always surprised by the lack of rigorous methods used in many of these studies. Too often, people have already done the intervention and they are trying to justify what they’ve done. They go back and review a few charts, which produces all kinds of opportunities for bias in the assessments of what’s happened, either consciously or subconsciously.

I’m not saying that everyone needs a randomized trial, just that they have to be rigorous. There are all kinds of opportunities to be misled.

What specifically can researchers interested in quality improvement do to make sure they aren’t misled?

First, simple before-and-after studies are notoriously unreliable. If you don’t have access to a control group, which may often be the case for hospitalists, the next best thing is to gather data from more than one time period before and after the intervention.

You want to look at more than just two data points: three years in a row, or two or three months in a row, before and after the intervention. Then if you see a clear change in a pattern, that’s much more convincing than one measurement before and one measurement after.

For hospitalists, gathering multiple measurements should be more feasible than finding a control group, which simply may not be possible if you’re not on staff somewhere else.
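To make that advice concrete, here is a minimal sketch in Python of the kind of multiple-time-point comparison Dr. Shojania describes, using a simple segmented (interrupted time series) regression. The monthly length-of-stay figures and the six-month windows are invented purely for illustration.

```python
# A minimal sketch of the multiple-measurement approach: instead of one
# "before" number and one "after" number, fit a segmented regression to
# several time points on each side of the intervention. All data are
# made up for illustration.
import numpy as np

# Monthly mean length of stay (days): 6 months before, 6 months after.
los = np.array([6.1, 6.0, 5.8, 5.7, 5.5, 5.4,   # pre-intervention
                5.2, 5.1, 4.9, 4.8, 4.6, 4.5])  # post-intervention
t = np.arange(len(los))          # time index, months 0..11
post = (t >= 6).astype(float)    # 1 once the intervention starts
t_post = post * (t - 6)          # months elapsed since the intervention

# Design matrix: intercept, pre-existing trend, level change, slope change.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
coef, *_ = np.linalg.lstsq(X, los, rcond=None)
intercept, trend, level_change, slope_change = coef

print(f"pre-existing trend:    {trend:+.2f} days/month")
print(f"level change at start: {level_change:+.2f} days")
print(f"slope change after:    {slope_change:+.2f} days/month")
# If length of stay was already falling before the intervention (a
# clearly negative trend) and neither change term is meaningful, the
# intervention may be a "fifth wheel," as in the Brigham example.
```

The pre-existing trend term is exactly the trap in the critical pathways story: if length of stay was already dropping before the pathway was introduced, a single before-and-after comparison credits the pathway with the whole decline.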

You say that quality researchers need to apply the same rigorous methods used in clinical research. Isn’t that difficult because of the differences in what’s being studied?

It’s true that there are lots of variables in routine care that we can’t control. In biomedical research, we often have the opportunity to create a very artificial environment in which to test a new drug. Then if it works well, we can decide to translate the results into the real world.

Quality improvement research can’t take place in an artificial world. We’re trying to see if we can fix problems in the real world, so we don’t have the opportunity to make sure that all patients or doctors are behaving in a regimented manner and adhering to protocols. So it’s very easy to lose the effectiveness of an intervention because it gets diluted amidst all these other variables that we can’t control. This lack of control is why more rigorous studies are so important.

In the Health Affairs article, you explain that assumptions often steer researchers in the wrong direction. Why is this a particular problem in the quality improvement arena?

Frequently, a single explanation intuitively seems like it must be right, so everything proceeds from that. But when you actually do the research, you realize the real problem was something different. At that point, you’ve designed an initiative to remedy a nonproblem, and you’ve left the real problem untouched.

Handwashing is a good example. Some people may wonder why doctors and nurses don’t do a better job of washing their hands. Some might say that we need to talk to doctors to explain the importance of handwashing. That’s ridiculous, because we know we don’t need to tell physicians that, but it’s one example of how you would make no progress if you focused on that approach.

Here’s another example of how focusing on a superficial assumption may not get you anywhere. You might think that the only reason physicians are not washing their hands is that it’s inconvenient, so you might put a sink outside the operating room. That might work to some degree, but you might also find that handwashing simply takes up a lot of physicians’ time, and that when they have too many patients to see, it falls by the wayside.

My point is that even with something as simple as getting physicians to wash their hands before and after seeing patients, it turns out that there are many explanations for their poor performance. Each one can seem plausible as the sole explanation, but if you address only one of them, you’re going to fail.

You make the point that in quality improvement, there’s no magic bullet.

One thing I’ve learned in the last couple of years is that there are no monolithic answers. If you’re working in a hospital, certain quality improvement strategies may work a lot better than they would in the outpatient setting. Solutions might even vary by disease and by patient population.

The problem is that there has been so little support behind quality improvement that we tend to look for magic bullets. We want the single answer that works everywhere, and it turns out that’s probably not the case.

A simple reminder system may work in some settings but not in others. I think that some of the conflicts we see in the literature reflect the fact that we were looking for simple, monolithic answers.

How can researchers keep everyone’s expectations realistic?

I hate to give a specific number, but in clinical research, if you can improve something by 20 percent, you’re doing a good job. If you can make something happen 20 percent more or less often, depending on your goal, you’re usually doing quite well. I feel that’s going to be about as much as you can hope for in many quality improvement studies.

The problem is that when a lot of physicians and managers first turn their attention to this area, they often think that if only they can get a new computer system, 100 percent of physicians and nurses will finally do things the right way, that everyone is going to wash their hands. That’s very unrealistic.

You can’t think that you’re going to see a 50 percent or 100 percent increase in compliance or improvement in patient outcomes. It doesn’t usually happen. And if you do observe it, you have to be concerned that there’s something wrong with the way you measured it or the way you’re analyzing the data.

When setting goals, should researchers focus on traditional outcomes like length of stay and mortality, or are outcomes like changing a process valuable?

The jargon we use to describe this issue is process of care vs. outcomes. Handwashing is a process, while the outcome is what happens to the patient.

For some things in quality improvement, focusing on the process is perfectly acceptable, especially if the process is something we consider intrinsically good, like obtaining informed consent from a patient. Handwashing is considered to be an intrinsically good thing, and the connection to improved outcomes has been shown.

Unfortunately, there are a number of areas where we’re not so sure how changing a process will affect outcomes. In patient safety research, for example, more and more studies are reporting an improvement in teamwork or the culture of safety. You have to be careful, because we don’t really know what that translates into. When the results of an action aren’t particularly well-known, you have to do more work to justify some sort of connection.

Do you have other advice for hospitalists getting involved in quality improvement research?

Some quality improvement interventions measure only outcomes, and not the way the intervention was supposed to work. A nicely done randomized trial recently published in the American Journal of Medicine had nurse case managers phone patients and manage their care as part of a disease management study. While the study showed the intervention didn’t work, buried in the paper was the fact that in 70 percent of the cases, the nurses couldn’t get hold of the patients.

While the study was negative, this detail tells you that disease management might still be a good thing, but you probably don’t want to try it using a phone-based system. It sounds silly, but it happens all the time: you do the intervention and it doesn’t work, but the study is such a black box, at least as it was reported in the paper, that you don’t really know why it didn’t work.

I would caution people about the outcomes they measure. If the process is really tied to the outcome, it’s fine to measure the process. But it’s also useful to take some kind of measurement to make sure the intervention is taking place the way it was designed.
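In the same spirit, here is a minimal sketch, with entirely made-up patient records and hypothetical field names, of tracking intervention fidelity alongside the outcome so a negative result is interpretable.

```python
# A minimal sketch of measuring whether the intervention actually
# happened as designed, not just whether the outcome improved. Each
# record is one patient in a hypothetical phone-based disease
# management program; all data are invented for illustration.
patients = [
    {"reached_by_nurse": True,  "readmitted": False},
    {"reached_by_nurse": False, "readmitted": True},
    {"reached_by_nurse": False, "readmitted": True},
    {"reached_by_nurse": True,  "readmitted": False},
    {"reached_by_nurse": False, "readmitted": False},
]

# Outcome measure: readmission rate for the whole intervention group.
readmit_rate = sum(p["readmitted"] for p in patients) / len(patients)

# Process measure: how often the intervention was actually delivered.
reach_rate = sum(p["reached_by_nurse"] for p in patients) / len(patients)

print(f"readmission rate:        {readmit_rate:.0%}")
print(f"patients actually reached: {reach_rate:.0%}")
# If the reach rate is low (70 percent unreachable, in the trial
# described above), a negative result says little about disease
# management itself and a lot about delivering it by phone.
```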