How Your Appropriateness Criteria Sausage is Made

For my final session of the day, I sat in on “ACR Appropriateness Criteria: Are You Trying to Tell Me What to Do?” I’m not sure what I was expecting from this session – a little controversy, mayhap – but what I learned was fascinating. After the jump, a look at how your appropriateness criteria sausage gets made; but first, another RSNA trivia question:

How many sets of clinical practice guidelines are currently available in the National Guideline Clearinghouse? (Think four digits . . .)

There are fully 2,700 sets of guidelines in the NGC, according to Earl Steinberg, MD, a VP at WellPoint and vice chair of the IOM’s committee on standards for developing trustworthy clinical practice guidelines – in other words, the committee on guidelines for guidelines. As Jeffrey Weilberg, MD, of MGH, noted when discussing the hospital’s experiences with computer-assisted decision support, “There’s a million guidelines out there, and doctors don’t like them.”

Everyone can agree that something has to change to quell the growth of imaging driven by self-referral, defensive medicine, and other less-than-ideal motivations. Most stakeholders’ method of choice is computer-assisted decision support, which has been shown to be as effective as utilization management, according to the ACR’s Bibb Allen, Jr. (More on that in the most recent RBJ; take a gander.) But how can the most effective criteria for decision support be developed?

This turned out to be a subject of some contention, making for a frisson of excitement in what was otherwise a sleepy room. Michael Bettman, MD, was on hand to share how the ACR develops its appropriateness criteria. It’s a byzantine process, as you may have guessed. Each clinical indication is assigned a Topic Author, who then acts as steward for that particular subject: reviewing and selecting the best available literature, building an evidence table, developing a narrative for the guideline, and creating clinical scenarios. His or her final product is then reviewed by the panel chair and voted on by panel members before becoming part of the ACR’s guidelines.

Sounds groovy, right? You wouldn’t hear any objections from me. But Steinberg, checking in from the committee on guidelines for guidelines, noted that some of the ACR’s processes might be problematic. First, ye olde votin’ panel should, according to the ACR, be composed primarily of radiologists, while the IOM thinks it should be far more “balanced,” as they put it. Expertise versus balance? That’s a question for more experienced minds than my own.

Steinberg also questioned the ACR’s method of appointing a single Topic Author to review and synthesize the available literature. “There are very few people who have the clinical and methodological expertise to be able to perform these types of reviews,” he said. “Whether this type of subjective approach would withstand scrutiny is another question.” There are also some conflicts between the ACR’s guidelines and those of other organizations, and Steinberg concluded by recommending that the ACR take a leadership role in identifying the reasons for those discrepancies.