Published in the June 2007 issue of Today’s Hospitalist.
The emergency department pages you about a patient complaining of chest pain, leaving you to decide: Do you admit the patient, send him home or call cardiology for a consult?
As hospitalists cement their reputation as the go-to physicians in the hospital, they’re being asked to take on more and more responsibility. But for some, making the call on a potential MI can be nerve-racking.
You don’t want to unnecessarily admit patients, a trend that costs U.S. health care about $3 billion a year. But you also don’t want to miss an MI, a major error that occurs in as many as 3% of patients who present with chest pain.
While following acute coronary syndrome (ACS) guidelines from groups like the American College of Cardiology is a safe bet, a new study examining this dilemma gives hospitalists another option.
Researchers found that a simple risk stratification tool that you probably learned in residency can go a long way toward taking the guesswork out of diagnosing these patients, potentially reducing unnecessary admissions for chest pain. And while the study’s lead author, a hospitalist, acknowledges that the risk prediction tool isn’t a panacea, she notes that such tools can reduce costs and streamline your decision-making.
Who should you admit?
The study, which was published in the March 2007 Southern Medical Journal, provides an interesting snapshot of how hospitalists at a small community hospital fared when caring for patients sent to them with chest pain.
A total of 260 patients with chest pain were admitted to the hospitalist service from the emergency department. Only 24 of those individuals, just over 9%, went on to receive an ACS diagnosis.
Researchers then retrospectively applied the Diamond and Forrester risk stratification tool, which uses age and symptoms to estimate a patient’s risk of coronary artery disease, to those patients. The goal was to see whether using the tool could have changed the management of chest pain in these patients.
According to the risk prediction tool, 28.3% of the patients were defined as high risk, 65% as intermediate risk and 6.6% as low risk. About 68% of the ACS cases came from the group later defined as high risk, while no cases of ACS were found in the low-risk patients. None of the low-risk patients had a positive stress test or received cardiac catheterization.
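The article doesn’t spell out the tool’s mechanics, but the Diamond and Forrester method is essentially a lookup: the patient’s age and type of chest pain (and, in the commonly cited version, sex) map to a pretest probability of coronary artery disease, which then falls into a low, intermediate or high band. The sketch below is a rough illustration of that idea in Python; the probability values and the cutoffs are placeholders for illustration, not the published table or the thresholds used in the study.

```python
# Illustrative sketch only: a Diamond and Forrester-style pretest
# probability lookup. The probability values below are rough
# placeholders, not the published table; consult the original
# Diamond and Forrester data before any clinical use.

from enum import Enum

class ChestPain(Enum):
    NONANGINAL = "nonanginal chest pain"
    ATYPICAL = "atypical angina"
    TYPICAL = "typical angina"

# Placeholder pretest probabilities of CAD (percent), indexed by
# (sex, chest pain type) and then by age decade. Values are illustrative.
PRETEST_PROBABILITY = {
    ("male", ChestPain.NONANGINAL):   {30: 5,  40: 14, 50: 22, 60: 28},
    ("male", ChestPain.ATYPICAL):     {30: 22, 40: 46, 50: 59, 60: 67},
    ("male", ChestPain.TYPICAL):      {30: 70, 40: 87, 50: 92, 60: 94},
    ("female", ChestPain.NONANGINAL): {30: 1,  40: 3,  50: 8,  60: 19},
    ("female", ChestPain.ATYPICAL):   {30: 4,  40: 13, 50: 32, 60: 54},
    ("female", ChestPain.TYPICAL):    {30: 26, 40: 55, 50: 79, 60: 91},
}

def risk_category(age: int, sex: str, pain: ChestPain) -> str:
    """Map a patient to a low, intermediate or high risk of CAD.

    The cutoffs here (<10% low, >90% high) mirror commonly used
    thresholds; the study may have drawn its bands differently.
    """
    decade = min(max((age // 10) * 10, 30), 60)
    probability = PRETEST_PROBABILITY[(sex, pain)][decade]
    if probability < 10:
        return "low"
    if probability > 90:
        return "high"
    return "intermediate"

# Example: a 45-year-old woman with atypical angina
print(risk_category(45, "female", ChestPain.ATYPICAL))  # intermediate
```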
Beril Cakir, MD, the lead author and hospitalist who conducted the study at Carolinas Medical Center-University hospital in Charlotte, N.C., says that the various levels of risk are important because they can guide patient management and testing. The study concluded, for example, that 6% of admissions, those for patients who turned out to be low risk, could have been avoided.
Eliminating unnecessary tests
The decision to admit wasn’t the only factor that could have been influenced through use of the prediction tool. The study also notes that patients who had a high risk of ACS were good candidates for more aggressive work-ups, and that many could probably have skipped stress testing and gone straight to cardiac catheterization.
Of the 260 patients admitted to the hospitalist service with chest pain, 175 underwent stress testing, 34 underwent cardiac catheterization, 20 had occlusive CAD and 14 received percutaneous coronary intervention.
Among high-risk patients, about 70% underwent stress testing. About 60% of those had negative results and were sent home.
Of the intermediate-risk patients, 72% underwent a stress test, as did 53% of low-risk patients. According to Dr. Cakir, who currently works at Gaston Memorial Hospital in Gastonia, N.C., those numbers suggest that the hospitalist group’s approach to stress testing was perhaps too liberal.
"The data showed how unselective we were in doing stress tests," she says. "We should have been doing the stress test on the moderate-risk patients, while the high-risk patients deserve cardiac catheterization. Their probability would still be too high to justify the exclusion of coronary artery disease, despite a negative stress test."
Navigating gray areas
The study also found that the risk stratification tool can affect the bottom line by potentially reducing the costs of caring for chest pain patients. But Dr. Cakir acknowledges that when it comes to decisions about both admitting and testing these patients, the decision-making process for hospitalists is not always so black and white.
While the study notes that patients who face a low risk of ACS can be safely sent home, Dr. Cakir says that physicians will continue to admit more low-risk patients than necessary. No one, after all, wants to miss an MI, so doctors will play it safe.
And as to deciding whether to admit and test a patient with chest pain, she adds, other factors come into play. The patient may have little or no access to diagnostic testing in the outpatient setting, for example, or the patient may specifically request stress testing. In such instances, she notes that hospitalists may want to set the recommendations of a risk prediction tool aside.
Even so, Dr. Cakir says, such a tool can provide some help. "The prediction rule will help you sort patients into groups, to determine who are low risk," she explains. "You can just rule them out by enzymes and EKGs quickly, and discharge them the next day."
Groups need consensus
While risk stratification tools may not be foolproof, they can give physicians peace of mind that they’re making the right decision when admitting a patient with chest pain or sending him home.
And, Dr. Cakir says, these tools give doctors something to fall back on in case they make the wrong call. With misdiagnosed MIs accounting for the country’s highest-paid malpractice judgments, that’s a real concern.
Dr. Cakir says that she decided to use the Diamond and Forrester tool in her study because it is relatively well known among internists. "It’s not necessarily the ideal or best method," she notes, "but it is the easiest because it doesn’t require much in the way of data."
Perhaps just as importantly, she adds, everyone in a hospitalist group should be on the same page when it comes to choosing one prediction tool to use. For a tool to be really effective, everyone has to be speaking the same language.
"The important thing is that as a group," she says, "you should have a consensus on what to use."
A dialogue with primary care
But how hard is it to bring hospitalists on board with the risk prediction approach, given how many different algorithms and tools they’re supposed to be using?
Dr. Cakir acknowledges that it may be harder to convince more experienced hospitalists to use these tools. "When you’re fresh out of residency, you may still use one," she says, "but down the road as you get more experienced, you may just go with your common sense."
The problem with that approach, she adds, is that it’s often inaccurate. If the hospitalists in the study had been using the prediction rule, Dr. Cakir points out, the moderate-risk people would have had more stress tests, while the low-risk group would have received almost none. "Common sense is always important, but I think we need some proof to support our conclusions."
Finally, she notes that using a risk stratification tool can help hospitalists maintain a dialogue with primary care physicians. That’s particularly true if you’re using a tool like the Diamond and Forrester risk method, which so many internists learned to use during residency.
"These patients need follow up," Dr. Cakir says. "This kind of tool can help you communicate with the primary care physician, justifying your decision on why you did or did not do further testing."
Edward Doyle is Editor of Today’s Hospitalist.