Innovative Approaches to Harnessing the Big Data Behind Radiology


As accountable-care organizations (ACOs) take root around the country, radiology, as predicted by many, is proving to be a troublesome link in the care chain. Jordan Halter, vice president of solutions for Virtual Radiologic (vRad), says, “Radiology risks being seen as a cost center, to be managed, in the ACO model. Radiology must fundamentally and permanently alter itself to survive in the new fee-for-value health-care world. It’s no longer good enough to be available and affable; groups need to be accountable, affordable, and aligned with their hospitals, as Alan Matsumoto, MD, and the ACR® Council Steering Committee pointed out earlier this year. Radiology needs to be seen as a strategic partner with a seat at the leadership table, not as a cost center.”

In consideration of this need, vRad, a national radiology group with more than 450 radiologists (performing 7 million exams per year), developed a new strategy that it calls Radiology Group 2.0, or RG2℠ for short. The 2 in RG2 represents the two components of the company’s vision of the radiology group of the future.

They are, Halter explains, “the teleradiology cloud and ground radiologists—interventional or diagnostic—or the grid. A radiology group must be able to move studies seamlessly, laterally and vertically, between the two components, if it is to get the right study to the right radiologist, for the right reason, and at the right cost. That type of efficiency might sound simple, but it requires harnessing the intelligence and insight inherent in a practice’s big data and making them actionable.”
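
Halter's description of moving studies between the teleradiology cloud and on-the-ground radiologists can be pictured as a dispatch rule over study attributes. The sketch below is a hypothetical Python illustration; the tier names, criteria, and function are assumptions for discussion, not vRad's actual orchestration logic.

```python
# Purely illustrative routing sketch; the criteria and names below are
# assumptions, not vRad's actual cloud/grid orchestration logic.

def route_study(is_interventional: bool, subspecialty: str,
                on_site_subspecialties: set[str]) -> str:
    """Decide whether a study goes to an on-site ('grid') radiologist or the teleradiology cloud."""
    if is_interventional:
        # Interventional procedures require a radiologist physically at the facility.
        return "grid"
    if subspecialty in on_site_subspecialties:
        # A local radiologist with the matching subspecialty is on duty.
        return "grid"
    # Otherwise, route the study to the distributed teleradiology cloud.
    return "cloud"

# Example: a neuroradiology read at a site whose on-call radiologist covers musculoskeletal work.
print(route_study(False, "neuroradiology", {"musculoskeletal"}))  # -> "cloud"
```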

Those levels of integration and insight are only possible with normalized inputs—and a robust analytics back end to track and modify a group or hospital’s operating plan, Halter says. He explains, “The challenge we all have is that big data is typically disparate and not normalized. Radiology groups and hospitals usually work with multiple systems, resulting in local variability in nomenclature standards: One hospital’s lower-extremity radiograph is another’s foot and ankle radiograph. Trying to normalize data between hospitals, or even within the same group, is a manual, tedious, and (often) seemingly impossible exercise. We can’t get big insight from big data if we can’t aggregate the data for a single and consistent view across facilities, practices, and health systems.”
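
The nomenclature problem Halter cites (the same exam labeled differently at different sites) is the crux of normalization. The following is a minimal sketch, in Python, of mapping facility-specific descriptions to a single normalized label; the dictionary contents, codes, and function names are illustrative assumptions, not vRad's actual mapping.

```python
# Minimal normalization sketch; every mapping entry and code below is
# an illustrative assumption, not vRad's actual data model.

LOCAL_TO_NORMALIZED = {
    ("hospital_a", "XR LOWER EXTREMITY"): "XR_FOOT_ANKLE",
    ("hospital_b", "FOOT AND ANKLE RADIOGRAPH"): "XR_FOOT_ANKLE",
    ("hospital_c", "XR ANKLE 3 VIEWS"): "XR_FOOT_ANKLE",
}

def normalize_study(facility: str, local_description: str) -> str:
    """Map a facility-specific exam description to one normalized code."""
    key = (facility, local_description.strip().upper())
    # Unmapped descriptions are flagged for review rather than guessed at.
    return LOCAL_TO_NORMALIZED.get(key, "UNMAPPED_REVIEW_REQUIRED")

# Two sites describing the same exam now aggregate under a single code.
print(normalize_study("hospital_a", "XR Lower Extremity"))        # -> "XR_FOOT_ANKLE"
print(normalize_study("hospital_b", "Foot and Ankle Radiograph"))  # -> "XR_FOOT_ANKLE"
```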

Slicing and Dicing

In the absence of a robust standard for tracking study attributes, vRad created its own: a proprietary, patent-pending method called the vCoder that assigns 23 unique attributes (for example, functional modality, CMS procedure description, and technical and professional RVU level) to every study to orchestrate study movement, along with back-end measurement and analytics.

“One of the 23 attributes is a human-readable imaging-study code that we refer to as radiology’s vehicle-identification number (VIN),” Halter explains. “As with the VIN on a car’s dashboard, this vCode tells us everything we need to know about a study to move it to the right radiologist, and it is fundamental to our big-data–normalization solution.”
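
To make the idea of coded study attributes concrete, here is a minimal sketch of what such a record could look like. The field names, example values, and code format are assumptions for illustration only; the actual vCoder schema is proprietary and patent-pending.

```python
from dataclasses import dataclass

# Illustrative sketch only: these fields and the code format are assumptions,
# not the proprietary vCoder schema or vCode format.
@dataclass
class StudyAttributes:
    functional_modality: str        # e.g., "CT"
    body_region: str                # e.g., "HEAD"
    cms_procedure_description: str  # e.g., "CT head/brain w/o contrast"
    technical_rvu: float
    professional_rvu: float

    def human_readable_code(self) -> str:
        """Build a compact, human-readable study code, analogous in spirit to a VIN."""
        return (f"{self.functional_modality}-{self.body_region}"
                f"-T{self.technical_rvu:.2f}-P{self.professional_rvu:.2f}")

study = StudyAttributes("CT", "HEAD", "CT head/brain w/o contrast", 3.25, 1.00)
print(study.human_readable_code())  # -> "CT-HEAD-T3.25-P1.00"
```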

vRad reads an average of more than 19,000 studies a day and, as a result, has a large database of more than 22 million imaging studies, which is growing by 600,000 studies every month, Halter says. “Like every other radiology group and hospital, we were unable to extract information from our big data for the insight we needed to make better decisions for the health of our patients and our practice,” he notes. “Keeping data consistent within a hospital can be a challenge, and normalizing data across two hospitals is even harder; we read for thousands of facilities. The vCoder solved this impossible problem for us.”

Case in point: vRad recently released what it calls its Radiology Patient Care (RPC℠) indices—findings-based benchmarking measurements that provide hospitals, radiology groups, and health systems with objective comparisons of their use of imaging with national averages and with the use levels of relevant peer groups. Halter explains, “Because of the scale of our practice, vRad’s database is a projection of the national market. We quickly realized that the specific benefits we were seeing from our use of normalized data and analytics were benefits we could, and should, bring to radiology in general.”
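
As a rough illustration of how a findings-based utilization benchmark can be expressed, the sketch below computes a facility's CT-per-ED-visit rate as an index against a national average. The calculation, rates, and names are assumptions, not the actual RPC methodology.

```python
# Illustrative benchmark sketch; the formula and example numbers are assumptions,
# not the actual RPC index methodology.

def utilization_index(facility_ct_exams: int, facility_ed_visits: int,
                      national_rate: float) -> float:
    """Return the facility's CT-per-ED-visit rate divided by the national rate (1.0 = average)."""
    facility_rate = facility_ct_exams / facility_ed_visits
    return facility_rate / national_rate

# Example: 1,800 ED CT exams over 10,000 ED visits against an assumed national
# rate of 0.15 CT exams per ED visit.
index = utilization_index(1_800, 10_000, national_rate=0.15)
print(f"Utilization index: {index:.2f}")  # -> 1.20, i.e., 20% above the national average
```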

Halter adds that the first sets of national and peer indices focus on the use of CT in the emergency department. These datasets were selected, he