Today's Hospitalist
 
The diagnosis wasn’t right, but was it really a mistake?
One researcher says that labeling every wrong diagnosis as an error is setting physicians everywhere up for failure
by Deborah Gesensway



Published in the August 2005 issue of Today's Hospitalist

One of the nation’s authorities on diagnostic decision-making and quality of care is making waves by accusing some of his associates in the patient safety movement of spending too much time trying “to improve the un-improvable.”

The result of advocating a “false standard of quality,” says Robert McNutt, MD, an oncologist, professor and associate chair of the department of medicine at Rush University Medical Center in Chicago and Rush’s associate director of medical informatics and patient safety research, is that patient safety advocates are distracting the profession from “real innovation.”

Dr. McNutt, who focuses much of his work on “diagnostic mistakes,” contends that in many cases where patients have been said to have been harmed by “missed or delayed diagnoses,” physicians have in fact done nothing wrong. Far from being preventable errors, he says, many wrong diagnoses are nothing more than adverse events. Doctors, he says, should not be held to standards—in this case, regarding diagnostic processes—that either do not exist or are not based on evidence.

In a commentary published in the May 2005 issue of Emergency Medicine (available online at the Agency for Healthcare Research and Quality’s Web site), Dr. McNutt and his colleagues laid out their thinking. His group bases its conclusions on a study, part of an AHRQ grant, that evaluates cases from the archives of AHRQ WebM&M.

He talked to Today’s Hospitalist recently about his group’s efforts to separate errors from unpreventable adverse events. He also discussed why he is largely alone in his effort to research the intersection of diagnostic decision-making and patient safety.

What is the problem that you saw needed researching?

I’ve always been interested in the diagnostic process and how we can improve it. Patients suffer adverse events during the diagnostic process for two main reasons: overzealous diagnostic searches when there is very little benefit, and underzealous searches when there could be benefit. We clearly see problems on both sides.

We began our framework for thinking about the issue of diagnostic mistakes with these two ends of the spectrum of medical errors in terms of diagnosis. We next asked if we could come up with a taxonomy to help the medical world improve its ability to make diagnoses and to establish criteria for what is a mistake, what is improvable, and what is not.

What did you do to research your hunch?

We collected morbidity and mortality cases from our institution and tried to dissect them before we knew the outcome of the case. This is important: We evaluated cases without knowing the outcome of care.

We then tried to develop models to produce some sort of evidence that there was a core problem with the diagnostic process that, if changed, would have affected the outcome. We used evidence-based medicine and system theories to define a mistake.

We also went to the AHRQ’s WebM&M and looked at cases posted there. We often ended up saying, “These guys are out to lunch. This is not an error.” We came to very different conclusions virtually 100% of the time.

Why did you disagree so often with a conclusion that a missed or delayed diagnosis that caused patient harm was an error?

First, we evaluated every case without hindsight. Cases always look different after an adverse event has occurred. Hindsight bias is a major problem with classification of diagnosis mistakes. A lot of people have written about this, but we think the effect of hindsight bias is underestimated. All the criteria for defining error and mistakes in medicine are hindsight-biased.

There is an inherent baseline risk that exists in a lot of medical conditions. Defining what is inevitable—and what is part of the baseline risk—is very difficult. If I already know the outcome of a case, I look backwards with an eye to finding something wrong, but the problem may have been part of the inevitable ambiguity of medical care.

Is there an example that stands out in your mind?

There was one case of a man who died of a pulmonary embolism hours after surgery. Some experts said that because the doctors should have known the diagnosis and acted to treat, a mistake was made. The man was irritable and he was short of breath after surgery, and an autopsy found a pulmonary embolus.

When we presented this case to a group of physicians and described the outcome, everyone thought it was a mistake. But when we presented the case without mentioning the outcome, the consensus was that the patient was having an alcoholic withdrawal that caused his symptoms. No one mentioned pulmonary embolus, in part because it is uncommon so early after a surgical procedure. It’s an example of how hypotheses can change under hindsight bias.
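Dr. McNutt’s hindsight-bias argument can also be framed in terms of pretest probability: a diagnosis that is rare before the outcome is known remains unlikely even given a suggestive finding, yet it looks “obvious” once the outcome is in hand. A minimal sketch with Bayes’ theorem makes the arithmetic concrete; all of the probabilities below are hypothetical, illustrative numbers, not clinical data.

```python
def posterior(pretest, sensitivity, specificity):
    """Post-test probability of disease given a positive finding (Bayes' theorem)."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A nonspecific finding (say, postoperative dyspnea) with modest accuracy.
# Before the outcome is known, the diagnosis is rare: the posterior stays low.
low_prior = posterior(pretest=0.02, sensitivity=0.80, specificity=0.70)

# In hindsight the diagnosis is already strongly suspected, so the same
# finding now appears nearly conclusive.
high_prior = posterior(pretest=0.50, sensitivity=0.80, specificity=0.70)

print(round(low_prior, 3), round(high_prior, 3))
```

The same finding, with the same test characteristics, moves the probability only modestly when the prior is low but dramatically when the prior is high—which is why a reviewer who already knows the autopsy result sees a clear “signal” that was not visible prospectively.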

We see this all the time with radiology exams. I can manipulate radiologists to get them to call almost anything. They are suggestible, in part because their exams are not “gold standards.” By this I mean that the interpretation of radiology exams is subjective, and the criteria within radiology are often indistinct, especially for the diagnosis of many conditions.

I had a patient the other day undergo a CT scan of the abdomen. I was suspecting that he had a mass in one part of his body because his potassium level was very low. When the CT scan came back normal, I looked at it and said, “There’s the mass.” My resident said, “You are not a radiologist.”

So we walked downstairs into a radiologist’s room and put the CT scan up. I said, “This guy has chronic abdominal pain, but I’m not sure what’s going on. Do you see anything?” The radiologist looked it over and said, “No, it looks normal to me.” I went to another radiologist and said, “I have a guy here with a low potassium level. He is not taking diuretics. I think he has an adrenal mass. What do you see?” He said, “Here it is: 5.5 cm.”

There is a very gray area between signal and noise for many diagnoses, especially when signal detection is influenced by the presentation of clinical information. After the fact—in hindsight or with suggestive information—you can see the signal, but it wouldn’t have been seen before. No criteria for mistakes should include hindsight-biased review, or we will not be able to learn what may or may not be preventable before the adverse event occurs.

Because there are often many competing diagnoses, and physicians can’t possibly test for them all at the same time, you would conclude that something that might look like a mistake may not really be an error?

Yes, and this second reason for overcalling mistakes is also far underestimated as a cause of misclassification of error. Patients present with symptoms that may be caused by multiple diseases, and these diseases have varying risks and benefits for early diagnosis. In fact, treating one disease may actually worsen another.

These competing illnesses require us to make careful judgments. Because there aren’t many evidence-based, gold-standard processes for how to weed out one diagnosis vs. another, however, we are being held to false standards of diagnostic expertise when more than one serious disease could be present.

For instance, what is the appropriate process of care for someone with a potentially very difficult-to-diagnose dissecting aortic aneurysm? The best test for diagnosing a dissecting aortic aneurysm is not a clinical exam, but a CT scan of the chest. But even when there is a gold standard for diagnosing a condition, there are competing tradeoffs, and choosing one instead of another is not necessarily a mistake.

Should all patients who come into the emergency room with chest pain get a CT scan of the chest to make sure they do not have a dissecting aortic aneurysm when that delay causes you to miss appropriate therapy in a more common illness—acute myocardial infarction? One of the analyses from the AHRQ’s WebM&M called such a case a preventable diagnostic error. That kind of standard is unattainable and it will bury us if we try to fix unfixable diagnostic processes.

You have written that some wrong diagnoses are incorrectly called mistakes because cause and effect are often not simple in medicine. “Redundancy and codependency abound,” you write. Can you explain that?

This is another problem with “signal to noise.” Chest pain, for example, occurs in not one disease, but in lots of diseases. However, variation in the presentations of chest pain creates overlap between competing illnesses. During the various stages of a disease, there are various strengths of signal to noise.

For example, I can make the diagnosis of pulmonary embolism in one person very easily if the signal is clear, but if the signal is not so clear and other clinical diagnoses may cause the complaints, I may miss a diagnosis of pulmonary embolism and the patient could die. There can be very different presentations and very different clinical situations where the overlap of cause and effect is greater on one end than it is on the other end.

It doesn’t take much of a diagnostician to make a diagnosis of myocardial infarction when all the ducks are in a row. But it takes a very astute—and sometimes lucky—clinician to make a diagnosis of MI that is just beginning. We fail to appreciate that diseases are not static things; they are moving targets that sometimes present with mild and sometimes present with severe symptoms. Diagnostic accuracy will change for different presentations of the same diseases.

Why is defining missed or delayed diagnoses as mistakes such a problem?

I am concerned that if we pick the wrong targets for improvement in diagnostic thinking, we will waste valuable time. If we can’t come up with a better measurement system for preventable diagnostic mistakes, the whole safety movement will come back to haunt us. It will be so focused on the non-preventable that we won’t be able to get at the issues of how to keep the system from crashing.

I’m also very concerned that we potentially are going to get worse—not better—if we don’t get smarter about our criteria, our standards for what we are going to call errors, and how we are going to communicate these issues with patients. How are we going to reset the public’s expectations of practitioners in a more realistic way, so that we can develop the health care system more rationally?

What would you like the patient safety improvement movement to focus on instead?

First, we need to focus on getting rid of hindsight bias. We need to make sure that we evaluate the ambiguity of information and consider all competing diagnoses. We also need to consider the strength of the signal to noise of clinical complaints when trying to improve those diagnostic processes.

We need to make sure that the things we know work are being done before we look for things to improve. We need to take care of the infections that are easy to treat. We need to make sure we consider drug toxicity and stop drugs that may be harmful. We need to make sure that patients get their flu vaccine and Pneumovax, and aspirin and beta-blockers when indicated.

At the other end of the spectrum, if you try to make a diagnosis for those things that you can’t do anything about, then everything that occurs during the diagnostic process will be defined as an error. If I cause an adverse event trying to biopsy somebody’s lung for lung cancer when the patient’s coronary artery disease is so bad that she has only a short time to live, I am consuming resources and extracting time from things that could matter.

In between those two extremes—making diagnoses that matter and ignoring those that don’t—we are going to have to let science do its job of trying to improve the signal-to-noise detection. We need to let the world off the hook until we have a better sense of the ambiguity in diagnostic error detection.

Deborah Gesensway is a freelance writer who reports on U.S. health care from Toronto, Canada.
