Clinical Decision Support: Planting a Decision Tree in Radiology
After a six-month period of collecting baseline data, the CMS Medicare Imaging Demonstration began on April 1, 2012. The imaging industry is watching this test closely. If the two-year demonstration shows that a computerized decision-support system can guide referring physicians to make appropriate orders for advanced imaging tests—and, at the same time, curtail inappropriate utilization—then a move to impose prior-authorization requirements for advanced imaging exams for Medicare patients might be headed off at the pass.
Even if the Medicare Imaging Demonstration fails to squelch a call from the Obama administration and its allies to use radiology benefit management companies (RBMs) to screen advanced imaging orders for Medicare recipients, the demonstration project represents a watershed event. There has been a push (inside and outside CMS) since 2005 to reduce costs for CT, MRI, and other advanced imaging exams. Computerized radiology decision support—a sort of soft blockade in the path of the ordering physician—is seen as a way of accomplishing this.
In fact, the RBMs themselves are embracing computer-aided radiology decision support and are building it into their prior-authorization methods. They are already experimenting with hybrids of electronic decision support and human intervention that are certain to become familiar to ordering physicians.
As Curtis Langlotz, MD, PhD, notes, there is really nothing unusual about decision support finally being invoked for imaging orders. “Decision support has been around for decades,” Langlotz says, “and it has been shown to be effective in modifying physician behavior. This represents radiology taking advantage of decision-support technology that has been widely used in other disciplines.”
In hospital laboratory and pharmacy ordering, decision support is routine, and cardiology now is implementing decision support at a rapid pace. Langlotz says that decision support also guides physicians in dealing with allergic conditions when prescribing drugs or ordering screenings. “I think these are tools that help radiology practices and other health-care institutions manage imaging in a rational way, and I think they will be used increasingly for the accountable care that we deliver,” Langlotz adds.
Langlotz, who is vice chair for informatics in the radiology department of the Hospital of the University of Pennsylvania (Philadelphia), is a participant in the Medicare Imaging Demonstration, as part of a group that includes three other academic hospitals in the Northeast. CMS has selected four other groups to participate in the Medicare Imaging Demonstration. The agency calls the participants conveners because each one convenes referring physicians to join the demonstration.
Langlotz emphasizes that the groundwork being laid with radiology decision support through the Medicare Imaging Demonstration is most likely to find its payoff as health systems transition to accountable-care organization (ACO) models. “We were interested in decision support because we see it as a good way to get patients the right test, at the right time, as we move toward accountable care,” Langlotz says. “We are using the Medicare Imaging Demonstration as a way to get some early experience for ourselves.”
Getting early experience with decision support is a recurring theme among the Medicare Imaging Demonstration conveners. Even though it is focused only on fee-for-service Medicare outpatients, the Medicare Imaging Demonstration is generating excitement because it is testing radiology decision support across a wide range of provider groups that have no common financial incentive.
If decision support works for the Medicare Imaging Demonstration, then it ought to work anywhere. If the Medicare Imaging Demonstration’s results show that decision support guides ordering physicians toward appropriate imaging exams and away from inappropriate exams, then Langlotz and others who believe that decision support can replace RBM prior authorization might feel vindicated.
Physician and Patient Friendly
A recent study¹ looked at combined computerized provider order entry (CPOE) and decision support use in both outpatient and inpatient settings. The study found that CPOE with embedded decision support achieved a high level of use from ordering physicians, with 95% acceptance at the end of 10 years. It also found that the need for preauthorization decreased with the integration of preauthorization and decision-support databases.
Medicare Imaging Demonstration participants are now finding the same kind of ordering-physician acceptance. Referrers are embracing the computerized decision-support tools, even though using them takes time and thought. Of course, the jury is still out as to whether the Medicare Imaging Demonstration will determine that there is a lessened need, or no need, for preauthorization itself.
Gary J. Wendt, MD, MBA, is enterprise director of medical imaging and vice chair of informatics at the University of Wisconsin–Madison School of Medicine and Public Health. The school is one of the Medicare Imaging Demonstration conveners. On November 3, 2011, in Washington, DC, he presented “Decision Support: Implementation Experiences” at the first ACR® Imaging Informatics Summit. He says that he was surprised, during the Medicare Imaging Demonstration’s data-collection phase, by how willingly ordering physicians accepted decision support.
“The point-of-order feedback was really the opposite of what I thought it was going to be,” Wendt says. “I thought we were going to catch a lot of flak for it, and in reality, we haven’t. People want to know why we aren’t getting more of it, and why we aren’t getting it sooner.”
Wendt says that the great advantage of decision support, from the referring physician’s perspective, is that it is displayed at the moment of ordering the imaging exam—when the patient is still likely to be in the physician’s office. According to Wendt, when referrers get a prior-authorization denial (perhaps days or weeks after ordering a test), they have to contact the patient and explain that the original exam has been cancelled.
“They have to go back to the patient and say, ‘I was sort of wrong; I have to do something different,’” Wendt explains. “They prefer the feedback right at the point of ordering because the patient is with them. Not only can they use decision support to justify choices in their own minds, but if they have a patient who is being pretty difficult and pushing for an order, they can bring that order up and show the patient that it is inappropriate.”
In 2012, Wendt makes the same point. “The referrers can enter orders and get immediate feedback. They know that either the test’s approved or there’s an alternative test, and they can present that to the patient immediately,” Wendt says. “That’s the real benefit of decision support.”
Wendt also says that decision support is superior to the use of prior-authorization screening. “Preauthorization is human intervention, and humans don’t act in a rule-based way. With decision support, the same rules are applied across the board,” Wendt says. “I think you, as a patient, would find that more pleasant and palatable than a call, three weeks later, that tells you that you don’t need that head CT, according to the RBM.”
Matching Scenarios With Exams
Christopher L. Sistrom, MD, MPH, PhD, is CIO and associate chair of the radiology department at the University of Florida College of Medicine. The school is not a participant in the Medicare Imaging Demonstration, but Sistrom has been working for years to complete a matrix that matches clinical scenarios with imaging exams. This allows decision support to be applied using appropriateness guidelines developed by the ACR.
One requirement of the Medicare Imaging Demonstration is that decision-support tools used by conveners must be based on medical-society guidelines. For radiology, that means ACR guidelines, although one of the 11 exams being studied by the Medicare Imaging Demonstration is a nuclear-medicine test that also uses guidelines from the American College of Cardiology.
On November 3, 2011, at the ACR Imaging Informatics Summit, Sistrom presented “Decision Support: ACR Appropriateness Criteria Migration.” He says that matching clinical scenarios with specific imaging tests and establishing an appropriateness ranking for each test are complex, difficult tasks. “If a test is inappropriate, in most cases, we shouldn’t do it, but if it’s equivocal, then it’s hard to tell,” Sistrom explains. “Unfortunately, a lot of the stuff that we do is equivocal.”
Choosing the best test for a given clinical scenario often leaves a lot of gray area, Sistrom suggests. He says that the ACR began to work on appropriateness rankings for specific imaging tests for given clinical scenarios or symptoms in the early 1990s, when the Clinton administration was promoting health-care reform.
A 100-member panel of ACR physicians was formed to vote on the appropriateness of each imaging procedure for a defined clinical scenario, he says. Imaging procedures were given rankings from 1 to 9, with the bottom three scores being inappropriate, the middle three being equivocal, and the top three being appropriate. Color coding was also attached to the three designations, so that a score of 1, 2, or 3 would show as red for inappropriate; a score of 4, 5, or 6 would show as yellow for equivocal; and a score of 7, 8, or 9 would show as green for appropriate.
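The three score bands and their colors amount to a simple lookup. A minimal sketch in Python (the function name and structure are illustrative, not part of any ACR software):

```python
def appropriateness_band(score: int) -> tuple[str, str]:
    """Map a 1-9 ACR appropriateness score to its category and display color."""
    if not 1 <= score <= 9:
        raise ValueError("ACR appropriateness scores run from 1 to 9")
    if score <= 3:
        return ("inappropriate", "red")   # scores 1-3
    if score <= 6:
        return ("equivocal", "yellow")    # scores 4-6
    return ("appropriate", "green")       # scores 7-9

print(appropriateness_band(2))  # -> ('inappropriate', 'red')
print(appropriateness_band(5))  # -> ('equivocal', 'yellow')
print(appropriateness_band(8))  # -> ('appropriate', 'green')
```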
Sistrom devised a matrix—with rows for clinical scenarios and columns for imaging tests—to guide decision support for referring physicians. In part, the purpose of this project was to make the ACR’s appropriateness criteria computer ready, but the broader intent was to develop a shared conceptual framework to organize, tabulate, and display imaging utilization, variation, and appropriateness across populations, settings, and regions.
“The appropriateness scores are right at the junction of the clinical scenario and the imaging procedure,” he says. “Some people call those rules. Each rule has a score and a review. The table of 700-plus correlations takes an imaging procedure and matches it with a clinical scenario and gives it a score. You can go to the matrix, see CT of the head, and look at tags for clinical scenarios that have scores for CT of the head.”
Sistrom says that ACR panelists have continually reviewed and updated the appropriateness scoring. “For the ACR,” he says, “there are 19 broad categories, and 10 of those are diagnostic and fully relevant for imaging order entry. I think the appropriateness criteria cover about 75% of the most common exams.”
The Guts of the System
The decision-support systems used in the Medicare Imaging Demonstration can pull scores for each clinical scenario and relevant imaging test and rank the tests as inappropriate, equivocal, or appropriate, Sistrom says.
Sistrom recently enlarged on his presentation; he has now spent three or four months refining the appropriateness criteria and clinical scenarios matched in the Medicare Imaging Demonstration. In the early days, Sistrom says, the ACR devoted millions of dollars in free labor to develop appropriateness criteria as an investment in radiology’s future (a grant has covered his own work).
Currently, the ACR is in the early stages of executing a comprehensive distribution mechanism for the ACR appropriateness criteria that will deliver content using a variety of electronic methods, Sistrom says. “Prominent among these will be a set of Web services that can be customized and integrated into common electronic medical record (EMR) vendors’ products,” he explains. “The vision is for clinicians to order studies and receive decision-support feedback from within a specific patient’s record—within the EMR client.”
Leveraging Web services will enable the ACR to deliver identical, consistent, and up-to-date content simultaneously to all users, in addition to obtaining feedback for the purpose of continuously updating and improving the criteria, Sistrom explains. “It’s in a database, but it’s rudimentary,” he notes. “I changed body areas and standardized exam types to make it so you can use an order entry that gives appropriateness scores. For the Medicare Imaging Demonstration, I added a mapper that takes the institution charge master and maps it to our standards, and it can fire back a rule into a specific place. It’s been a lot of work, but we’re getting to the point where we can deliver decision support as a service.”
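The charge-master mapper Sistrom describes is essentially a translation layer: each institution’s local billing codes are mapped onto standardized exam types so that the shared rules can fire against any site’s orders. A hedged sketch, with invented codes and names:

```python
# Invented local charge-master codes mapped to standardized exam types;
# a real institution would load this mapping from its own billing system.
CHARGE_MASTER_MAP = {
    "RAD-70450": "CT head without contrast",
    "RAD-72148": "MRI lumbar spine without contrast",
}

def standardize_order(local_code: str) -> str:
    """Translate an institution-specific charge code into the standard exam
    name used by the shared appropriateness rules."""
    try:
        return CHARGE_MASTER_MAP[local_code]
    except KeyError:
        raise KeyError(f"no standard mapping for local code {local_code!r}")

print(standardize_order("RAD-70450"))  # -> CT head without contrast
```

Centralizing the rules behind a Web service, as Sistrom describes, means only this mapping layer varies by site; the rules themselves stay identical and current for every user.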
The key, with decision support, is that the same appropriateness rules have to be delivered and applied to all users. The rules have to be updated constantly as well. “The hope is that the ACR will get more panelists, modify the rules, and make it worth doing. There’s a lot of work to be done,” Sistrom says.
“It is rocket science,” he adds. “People know what has to be done, but getting there takes a lot of careful and precise work. It’s getting the teams together that’s difficult, though not impossible. We have the physical technology. The EMRs have order interfaces; it’s the hard work of putting the systems in place (and making sure they work) that’s ahead of us.”
That’s a key element of the Medicare Imaging Demonstration, too: How well will computerized decision support perform in the clinical setting? Sistrom says that the true test might arrive in five years or so, as health care transitions to a capitated ACO model.
He explains, “If the ACOs are capitated, imaging becomes a cost center instead of a revenue center, so then you really want to make sure you’re doing a test that you need—and you will have to be careful with underutilization. How decision support will perform in that scenario is a very good question.”
An element of the Medicare Imaging Demonstration study that might determine how decision support is used in the future is whether the project demonstrates that decision support changes physicians’ ordering patterns for imaging. Safwan Halabi, MD, is director of imaging informatics and a pediatric radiologist at the Henry Ford Hospital (Detroit, Michigan). The Henry Ford Health System (HFHS) is one of the five Medicare Imaging Demonstration conveners.
In “Decision Support: Implementation Experience,” which he presented on November 3, 2011, at the ACR Imaging Informatics Summit, Halabi describes the Medicare Imaging Demonstration as bringing life to the appropriateness criteria at last.
He notes that the six-month baseline data-collection period was meant to establish physician ordering patterns against which the 18 months of actual decision-support use could be measured. “At least, we will be getting feedback now—we can now say to ordering physicians, ‘These are the ordering habits of your peers, and these are yours,’” Halabi notes.
Halabi says that HFHS applied to be a Medicare Imaging Demonstration convener after insurers petitioned HFHS to use a commercial decision-support product that had come on the market. “Then CMS said, ‘We’ll pay you to do what you were going to do anyway,’” Halabi says. There was a caveat: HFHS was required to use decision support that was based strictly on medical-society appropriateness criteria. “In midstream, we had to switch to a different decision-support system,” he explains.
Elaborating on his presentation, Halabi reports that HFHS is now using the same decision support designed for the Medicare Imaging Demonstration throughout the system. Of an estimated 600,000 advanced imaging studies completed per year by HFHS radiologists, about 10% are for Medicare recipients, he says. The decision-support results for those patients will go to CMS for analysis as part of the Medicare Imaging Demonstration.
For the other studies, ordering physicians will be required to go through the same decision-support process, but the non-Medicare results won’t be sent to CMS. This will be information that HFHS (and, perhaps, its payors) can use independently, although for now, payors won’t be given the appropriateness scores, Halabi says.
There is a key difference for the non-Medicare advanced imaging orders, Halabi adds. CMS does not want order denials (hard stops) to occur in the Medicare Imaging Demonstration. The agency only wants to track changes in ordering patterns as a result of the guidance of decision support. Physicians ordering for Medicare patients are able to order red-tagged exams, even though they are labeled inappropriate. They can do so, without explanation, just by selecting the exam (despite its red status).
For non-Medicare patients at HFHS, when physicians order a red- or yellow-tagged exam they must enter an explanation before the order proceeds, Halabi says. Exams ordered for privately insured patients also must go through any prior authorization already in place, he adds.
HFHS hopes, however, that the use of decision support will lessen the need for prior authorization. “The whole premise of the Medicare Imaging Demonstration was to end the need for preauthorization,” he says. “Our goal is to have these data so that we can go to our payors and prove to them that decision-support systems do work.”
Whether They Will Work
Michael J. Pentecost, MD, is associate CMO for National Imaging Associates (NIA), a major RBM. “We see health plans as being very interested in this and expecting it to be a product of the future,” Pentecost says, “but it’s also on the radar that decision-support systems don’t have proven reliability, durability, and reproducibility. They need to be shown to be effective—and that has not been shown, so far.”
NIA is committing resources to investigating decision support, however. It applied to CMS and was chosen as a Medicare Imaging Demonstration convener. It is testing decision support with its own groups of referring physicians as they order advanced imaging exams for Medicare patients.
It might seem counterintuitive that an RBM would participate in the development of a product that would diminish the demand for its major service of prior authorization, but NIA has adopted the stance of joining them to keep from being beaten by them, Pentecost explains.
“Our view was these decisions would become more and more electronic, increasing in frequency and use,” Pentecost says. “We think a decision-support system is a logical conclusion. All the RBMs are migrating toward decision-support systems. We’re just going to migrate a little faster.”
Pentecost also notes that NIA was interested in the Medicare Imaging Demonstration because it had a cardiac imaging component with its nuclear-medicine test. NIA has made cardiology testing prior authorization a big part of its business. Pentecost says, “The nuclear-medicine component was a big part of our proposal to CMS because we do so much cardiac work.”
NIA is assessing the degree to which electronic decision support might reduce the utilization of advanced imaging tests—unless the tests are called for clearly. “I think there is widespread agreement that there is overutilization of imaging,” Pentecost says. “The market will demand more discriminating, accurate, and effective tools to weed that out of the system. The deduction is that this demand will lead to a reduction in overutilization, but overutilization has been quite uneven, depending upon people’s practices.”
He makes no predictions about how the Medicare Imaging Demonstration will affect RBMs (or NIA itself). “Decision-support systems are going to be an increasing part of utilization management, and we want to delve into this and inform ourselves, rather than have others tell us,” he says. “We want to be students of the issue and watch it all unfold with our own eyes. Whether it will work and health plans will be able to rely on it is unclear.”
Pentecost also says that NIA’s participation in the Medicare Imaging Demonstration does not mean that NIA is backing away from a lobbying effort (conducted with other RBMs) to get CMS to use prior authorization for advanced imaging exams for Medicare patients. “Our position is that in Medicare, they will benefit with an RBM,” Pentecost says. He also notes that decision-support systems and their use are moving targets that can reshape themselves even as the Medicare Imaging Demonstration proceeds over the next 18 months and then enters an analytical phase.
“This is a system that is being created,” he says. “Scientific progress factors into this.” Those who strongly believe in computerized decision support see improved patient care—through the scientific use of decision support—as one of the major benefits of implementing such systems. Whether decision-support systems will be so widely adopted that the eventual results of the Medicare Imaging Demonstration are irrelevant is a question that no one can answer.
Getting the Gamers
Halabi says that decision-support use not only will result in fewer inappropriate exams, but will build on itself, over time, so that feedback will result in the prioritizing of exams for given clinical scenarios. Decision support will aid in the collection of positive/negative clinical data to determine which imaging exams are most effective, he adds. A green score without confirmed positive results later in the patient’s care might mean that the use of the exam should be changed.
Halabi says that outcomes data can also be used to see which referring physicians might be gaming the system because they have financial or personal motives for ordering certain tests. “If they really want that exam, they’re going to say the right things,” he says. “By documenting that in decision support (through outcomes), we will see those people who know what to say getting green scores. That’s the real premise behind this demonstration project.”
Halabi hopes that patients will warm to decision support when they come to understand that getting the proper imaging test is in their interest, particularly if it means decreased radiation exposure—and if they save on expensive imaging that is not needed. On the other hand, they could resent decision support if it prevents them from getting tests that they want, he acknowledges. “It could go either way, and the jury’s still out; I hope people trust the science,” Halabi says.
Work to Be Done
There is still hard work (and lots of analysis) to be done to perfect decision-support tools for radiology. For one thing, no one knows how decision support will fare legally if it fails a patient: the technology is too new for the courts to have confronted it. Sistrom suggests that decision support might, at some point, become the standard of care, but that has yet to be argued in court, he says.
“If you get a red score for back pain, and you don’t do the test—and then, it turns out that there is disease—what do you do?” he asks. “My hope is that you could say that the physician and the patient made the gamble; it was a long shot, but they lost. My hope is, in a comprehensive system, the patient and the physician would be held harmless. It will be an interesting question. It’s going to come up: Somebody will have a red result, and the physician will not do the test; the patient will come back with problems, and the physician’s defense counsel will say that this is the standard of care.”
Wendt sees another (perhaps equally urgent) matter to be addressed: He knows of nothing that has been built into decision-support systems to track cancelled orders, but such tracking is critical to developing outcomes data.
“Any computer system, in general, focuses on tracking what occurs, not what doesn’t occur,” he says. “With decision support, what doesn’t occur is as important as what does occur. Recommending not doing something is just as significant as doing something.”
Some have argued that decision support should also have the ability to track radiation exposure, over time, for each patient, but Wendt says that this would first require the development of a national dose registry (because too many patients have untallied exams outside the health-care entity that is deploying decision support). “When you’re talking about dose monitoring and modifying care, it’s irrelevant if you don’t do it globally,” Wendt says.
Still, this could happen. Decision-support systems might not be global, but that doesn’t mean that they aren’t headed that way. The calculation that decision support can be done on a widespread and uniform basis is one of its attractions, particularly when there is so much focus on the cost of health care.
“I am an advocate of decision support and of using the data to evolve the system and make it better,” Halabi says. “There is going to be a point, as a nation, where health care drives us into a hole, unless we have appropriate expectations about the utility of exams. Decision support will play a huge role. Think of imaging as a medication. It has side effects. How can we curb unnecessary imaging and its costs?”
George Wiley is a contributing writer for Radiology Business Journal.
1. Ip IK, Schneider LI, Hanson R, et al. Adoption and meaningful use of computerized order entry with an integrated clinical decision support system for radiology: ten-year analysis in an urban teaching hospital. J Am Coll Radiol. 2012;9(2):129-136.