A relative newcomer among the ACR® data registries is the General Radiology Improvement Database (GRID) program. Part of the ACR’s National Radiology Data Registry, GRID focuses on evidence-based health outcomes and process data; the program was launched as a pilot in 2008 and began official operation in 2009. Today, the GRID program has 23 active facilities contributing information in accordance with two levels of membership: green (reporting primarily on process measures) and gold (reporting on both process measures and outcomes).
Mythreyi Bhargavan Chatfield, PhD, director of data registries for the ACR, says, “We’re still in the process of growing. We offer two levels of participation because we understand that outcome measures can be more difficult to track and report, and we didn’t want facilities not participating because they were unable to get outcome data.”
Massachusetts General Hospital (MGH) in Boston has been a gold-level contributor to the database since its pilot stage, and it was one of the first facilities to join, according to Max Gomez, director of quality management and education for the organization’s imaging department. “Our chief of radiology has always seen the value in understanding how we’re doing, compared with the rest of the imaging departments in the country,” he notes. “The benefit of these registries is understanding where you are and what the range is, and from there, determining what your next step should be.”
The ACR launched the GRID program to enable radiologists to make objective, evidence-based decisions about their practices and performance based on data from similar facilities or facilities within the same region. Reports are delivered to participating facilities on a semiannual basis, and they distinguish between the data received from hospital-based and freestanding practices. “We present benchmarks that compare each facility with a facility of the same kind—for instance, comparing an academic site with all participating academic sites,” Chatfield explains.
Measurements that are reported include rates of attended and unattended falls, deaths, code-blue episodes, nosocomial infections, and wrong exams; patient waiting times, times from order to exam, and report-turnaround times; reacquisition rates; rates of nondiagnostic liver and lung biopsies; CT extravasation; and nonconcordant stereotactic breast biopsies. “Not all facilities have all the data, and different facilities report on different measures,” Chatfield notes. “About two-thirds of our facilities are currently reporting at the green level.”
Gomez says that MGH participates in GRID in the hope of eventually being able to improve its processes and outcomes, based on the data reported. “There are four or five academic institutions currently participating,” he says, “and those institutions are a bit avant-garde—they recognize the importance of doing this, even though the benefits may not manifest themselves immediately. We haven’t actually started using the data to improve, but that’s the next evolution—to use it to drive how we’re going to change.”
He adds that having comparative data available is helpful in explaining the radiology department’s performance to hospital administration. “In health care, we’re asked to measure ourselves on so many things because we haven’t been doing a very good job in some areas,” he notes. “Your institution is always going to ask why your wait times are as long as they are, why your outcomes are the way they are. It’s always a good idea to know where you sit and what the range is.”
Building the Foundation
Gomez issues a clarion call for other facilities to begin participating in the program. “It’s tough to convince other people to do this,” he notes. “In the climate we’re in, where resources are limited and budgets are tight, it’s difficult to convince an institution to carve out someone’s time for this, but many people need to join in order to make the data more useful for everyone.”
Chatfield notes that a GRID committee oversees the activity of the database and evaluates the measurements being reported. “This is a registry that is always responsive to its participants’ needs,” she says. “We’re constantly gathering feedback and making modifications, if necessary.” For example, she says, the requirement for reporting turnaround times was adjusted when facilities shared that they didn’t tend to track mean turnaround times, but rather, how many exams