Cracking the Code for Improving Quality

As the entire health care continuum comes under increased scrutiny in terms of both cost and effectiveness, radiologists and administrators alike have fresh cause to contemplate quality in the delivery of imaging services. What are the drivers of quality in radiology, and how can it be quantified, benchmarked, and ensured? As a complex radiology practice reading for a multitude of hospital clients, Franklin & Seidelmann Subspecialty Radiology, Beachwood, Ohio, has refined its approach to ensuring high-quality radiology delivery across an array of subspecialties.
“There’s an underlying assumption that as long as you have a radiologist who’s gone to medical school and is board certified, he or she can read anything. If you have the education, that’s great, but without the experience necessary to create expertise, you’re not as effective.”
—Clay Larsen, senior vice president of marketing, Franklin & Seidelmann
Larsen cites four requirements for ensuring high diagnostic quality: a broad team of properly trained, specialized radiologists; the technology and business processes necessary to route the right study to the right specialist, every time; sufficient study volumes to develop and maintain expertise across a team of radiologists; and an effective quality-assurance (QA) process and feedback loop to ensure continuous quality levels.

Building the Team

The core of Franklin & Seidelmann’s business model revolves around subspecialty radiology, and its hiring process is designed to identify radiologists with high levels of expertise in specific areas of imaging. Radiologists are hired based on board certification, any certificates of added qualifications, and how many studies they have read in their areas of specialization (a minimum of 10,000 is required). The practice also administers a test to its applicants in which they read 30 cases in their subspecialties; applicants who fail the test—around 50%, according to Larsen—are not hired. After-hours emergency coverage is treated as a specialization in and of itself, and its test is timed to ensure both efficiency and quality.

The educational process deepens as newly hired radiologists begin reading studies exclusively in their areas of expertise. “Because of our unique business model, with the volumes we’ve been able to achieve, we’ve been able to contribute to original research,” Larsen says. “Our subspecialists can see things they might never have been able to at an academic medical center, and things that have never been documented before in the literature. It’s a whole new model that can go beyond academic radiology.”

Honing the Infrastructure

A large practice reading for multiple sites faces a unique challenge in routing its studies to the appropriate staff.
“If you are having to service a number of different customers, all with different PACS and RIS implementations and some with paper inputs, you need to be able to accommodate all the different input devices,” Larsen says. “You also need a body of professional-services resources to walk customers through these implementations to be sure their workflows enable you to get the right studies at the right time.” Franklin & Seidelmann uses a proprietary IT system called AccuRad, coupled with a support team to accommodate its clients. “DICOM and HL7 are great for linking us with clients, but they’re far from plug and play,” Larsen notes. “It’s not just about technology. It’s about having a group of professionals behind the scenes, 24/7, to make sure that the input is properly done, and that the study gets to the right radiologist with the right specialty, licensure, credentials, and privileges. It’s a large, multifactorial issue, and it’s far from being an automated process that can be solved using off-the-shelf IT systems.”

Maintaining Volume

“Having large study volumes is essential to developing expertise,” Larsen says. “Some of our physicians give the example that if you haven’t done 10,000 MRIs of the wrist, then you don’t really know the wrist. Once you’ve solved the thorny problem of routing the right studies to the right radiologist, you have a scalable system.” Larsen explains that AccuRad provides Franklin & Seidelmann’s radiologists with unique, individualized worklists, removing the temptation seen in many groups for radiologists to cherry-pick the most profitable cases. “Private-practice groups tend to be very aware of RVU issues,” he notes. “In our model, specific studies are carved out and identified for a particular radiologist to read. We have no shared worklists.” With sufficient volume, this level of subspecialization can be extended far beyond what a local radiology practice might be capable of supporting.
“Even with megagroups, because they’re distributed across so many different sites, at any given site, they still often behave like a local practice,” Larsen notes. “They might break out MRI and have some differentiation between musculoskeletal and neurology studies, but it’s really more like a local practice, as opposed to a high level of specialization.” At Franklin & Seidelmann, subspecialties include musculoskeletal, neurological, pediatric, body, and women’s imaging; in the future, the group plans to split musculoskeletal studies further into large joints, small joints, large bones, and similar groupings. This results in what Larsen calls mutually complementary incentives to improve both quality and productivity. “The more we grow, the more refined our areas of subspecialization become,” Larsen says. “Our staff gets finer and finer expertise, and not only does that improve quality, but it also makes them more efficient.”

QA and Feedback

The fourth and final piece of the quality puzzle, according to Larsen, is a communication system that supports continuous QA and effective, useful feedback. “If you’re a radiology group with strong incumbency, then there’s not much incentive to improve communication or implement a strong peer-review QA process,” he says. “It takes time, and time is money.” Franklin & Seidelmann handles its QA through a blinded process wherein 2% of each radiologist’s studies are read again by a different staff member within the company, often by a radiologist whom the original reader does not even know. Studies are selected randomly on a monthly basis, and the process is made more efficient by Franklin & Seidelmann’s own technologists’ involvement in pulling the studies and tracking them through the QA process.
“Instead of radiologists spending an onerous amount of time documenting any discrepancies they might see, they turn it over to our professional-services group, which can summarize the statistics and report them to our physician advisory board,” Larsen explains. The QA process operates according to Joint Commission standards; level 3 and 4 discrepancies result in a letter being sent to the patient. “All these quality requirements are mutually essential ingredients,” Larsen says. “If one is left out, you can’t improve the effective delivered quality. Outcomes are one of the most difficult things to measure, which is why the quality metrics people have come up with focus primarily on ways to reduce utilization, improve facility infrastructure, and improve patient safety. The diagnostic-accuracy piece has, to date, been a small portion of the indicators because measuring outcomes requires large retrospective studies.”

Cat Vasko is editor of ImagingBiz.com and associate editor of Radiology Business Journal.