The time has come for the profession of radiology to embrace a label it has been furiously trying to beat back for years: commoditization. The battle against those who wield it as a pejorative is not just futile, it’s also counterproductive.
Step back and consider the label at a distance sufficiently removed from the agendas and aims that drive some to dismiss things they don’t like by labeling them commodities. Commoditization in real-world applications is, in fact, a very good thing for the consumer. The term connotes not only quality but also availability and cost savings.
Consider the matter from the perspective of people who produce true, honest-to-goodness commodities. The federal government regulates corn, beef and wheat as commodities. What these three have in common is the requirement for producers to meet standards of quality that are strict, objective and non-negotiable.
In the case of corn, the simplest of the three, the USDA places demands on such invisible-to-the-consumer characteristics as moisture content, which cannot exceed 14%, and insect damage, from which at least 95% of kernels must be free. Wheat must meet similarly exacting criteria. Beef has so many grades, from prime to choice to commercial to canner, that one practically needs an advanced degree to navigate the nuances of each.
So it is that I hereby wholeheartedly embrace the term commoditization as applied to radiology. That goes, with special emphasis, for its use around the branch of radiology near and dear to my heart: teleradiology.
I hope all who love our profession will join me.
Toward that end, I offer several popular myths underlying, and misinforming, the catcalls about commoditization (as well as its frequent co-pilot in the intended-insult cockpit, "depersonalization"). If you agree that each of these myths is easily debunked by the facts, you are well on your way not only to embracing the commoditization of radiology, but even to smiling when you discuss it.
MYTH: Radiology quality can be defined in many different ways, many of which are difficult to quantify.
FACT: Quality in radiology comes down to one thing and one thing only: interpretive accuracy. Did the radiologist make the finding or miss it? This standard may be so plain and obvious that it’s easily overwhelmed by important but lesser concerns. Did you address the clinical question that was asked? Did you use clear and concise terminology? Did you orient your report for the expected target audience?
No one is saying these concerns are inconsequential. However, interpretive accuracy is the sine qua non of the radiologist’s responsibilities. Slip on that, and what difference does it make when you very clearly communicate “no evidence of appendicitis” alongside an image of an abdominal tumor?
MYTH: Most radiology practices have adequate quality-assurance (QA) oversight.
FACT: Most practices perform QA in name only. They tend to overread a minimal percentage of their volume. They tend to do it on modalities not actually prone to misinterpretation. And their overreads are rarely provided by an objective third party, one who has neither a personal relationship with the radiologist being overread nor, worse yet, a financial interest in that radiologist's practice.
For comparison’s sake, and to show what’s possible with dedicated QA, consider vRad’s QA data from the past three years. Our error rate bounced between 0.3% and 0.35% for combined major and minor errors. That’s far lower than the error rates reported in the most frequently cited studies, from Wilson Wong, MD, (1.09%) to D.J. Soffa, MD, (3.48%) to Leonard Berlin, MD, (various analyses; see accompanying Q&A article in this issue of Medical Imaging Review).
How did we do it? Over the course of those three years, our practice read approximately 13 million studies. Roughly 70% of those were overread by objective, paying clients who are highly motivated to find errors in our work. Meanwhile, we produce graphs every quarter so that every radiologist in our practice sees exactly where he or she ranks on quality. We then place the lowest-ranked performers on an improvement plan that leverages our physicians’ best practices.
If more radiology practices followed this model of what I’m proud to call “commodity-level QA,” albeit adjusted to their own scale, more would welcome the charge of commoditization as warmly as I do.