PACS Nirvana: University Radiology's Reporting-driven Workflow

Ever since digital imaging liberated radiologists from the site of image acquisition, radiology practices have labored to patch together distributed reading solutions that would efficiently meet the needs of multiple clients, balance workflow, and enable subspecialization. Not all solutions have been elegant, and many are downright ugly, requiring multiple monitors, keyboards, virtual private networks, and dictation systems for each remote reader.

On the other end of the spectrum is the digital cockpit set up by University Radiology (URad) for its radiologists, described by Alberto Goldszal, PhD, MBA, CIO, University Radiology, East Brunswick, New Jersey, during a session on June 5 at the Society of Imaging Informatics in Medicine meeting in Charlotte, North Carolina, and in a subsequent interview.

The cockpit uses one monitor, one keyboard, and one dictation system for each reader covering six unaffiliated hospitals and eight imaging centers. Add to that the ability to pull prior studies from all sites, regardless of where the new exam was performed.

“If hospital A, B, or C has an unread case, we really don’t care if it is hospital A, B, or C; all we care about is that there is an unread case out there that needs to be dictated. Philosophically, it’s really keeping an eye on the prize. The product of radiology is the report. Our focus is on that report—on the capture of the order as well as the creation of the results.” —Alberto Goldszal, PhD, MBA

What enabled University Radiology to achieve PACS nirvana? Goldszal’s solution is based on standards (namely HL7 and DICOM); a PACS and dictation system with built-in intelligence, allowing probabilistic matching of studies from unaffiliated institutions; and, surprisingly, a little help from HIPAA.

URad is an 83-radiologist practice reading for 14 sites heavily concentrated in the central corridor of New Jersey, along Interstate 95. Most of the six hospitals that the practice covers are academic centers for which URad provides everything from the department chair to the attending physicians for the residents, in addition to reading 600,000 hospital-based procedures per year. Add to that another 300,000 procedures done at the group’s eight imaging centers.

Aggregation

URad’s clients are independent, unaffiliated locations that are geographically distributed, primarily in New Jersey, but also throughout the tristate area of New York, New Jersey, and Pennsylvania. Since it created an in-house night-coverage service with 10 radiologists reading from Illinois, Washington, California, Germany, and Israel, URad’s members are also geographically distributed (see Figure 1).

URad needed a solution that could consolidate the reading of all studies on one platform that wouldn’t require redundant hardware. “When you are reading for two hospitals, you really have two options. There’s no in-between,” Goldszal explains. “One, historically, is you teach the radiologists how to work with the information systems of each hospital, and if that radiologist is working remotely, it may mean that you have to put hardware and software from each hospital into the radiologist’s home. [If] the radiologist is covering three hospitals, you may end up with three PACS, three dictation systems, perhaps six computers, six keyboards, three microphones, and so forth. That is the historical model.”

He continues, “Of course, as we increase the number of sites, it becomes increasingly difficult to manage, and learn well, all of these applications, not to mention the difficulty in installing and supporting all of these remote applications at the radiologist’s home or reading center. Why not put in one set of applications and bring the data to it, so the radiologist only needs to learn a single set of applications? That is what we refer to as a single cockpit.”

Goldszal says that bringing the data to the reader was not difficult, given the evolution of DICOM and HL7 standards. “We built the whole thing, and there is nothing proprietary in what we do,” he says. “Everything that’s here was bought off the shelf, from vendors in the market, and put together using standards.”

Mining Overlap: Access to Prior Studies

What added a layer of complexity to the project was the URad requirement of access to all relevant prior studies from the sites that it covers. URad radiologists knew that the patient histories that would come from aggregating the data from all covered sites would improve diagnostic confidence, accuracy, and quality. A system that could reach out and fetch prior studies among all covered entities, regardless of where a study was generated, also would deliver a significant service and quality upgrade to all client hospitals and payors.

“People seek care in different institutions, often within a few miles of each other, whether because of insurance coverage, access to specialists, or proximity,” Goldszal explains. “Patients leave a little bit of information at each of these institutions.” Access to relevant prior studies increased confidence in 90% of cases, influenced diagnosis in 56%, and changed the diagnosis in 18%, according to two seminal papers¹,² cited by Goldszal.

There also are economic benefits: “Because unaffiliated organizations don’t have access to each other’s images, they often generate duplicative, and sometimes unnecessary, exams,” he says. URad only recently discovered the extent of overlap within its reading geography, when it conducted a study covering patient procedures from February through May 2009 and mapped the results (see Figure 1).

Within a 10-mile radius of the business center, Goldszal found that patients crossed institutional borders in up to 20% of cases. As the radius increased, the overlap decreased. “One in five patients that we have seen has data at another organization,” he emphasizes. “This is not just a patient with image data at another location; these are relevant image data. If you just measure overlap in where patients go, the number would be much higher.”

Figure 1. URad's geographically distributed interpretation and results reporting solution spans the globe.

Goldszal says that there are only two methods of identifying relevant prior studies across organizations: deterministic matching, which requires a number unique to the patient (a Social Security number, which can’t be relied on, or a universal medical-record number of the kind used in Europe), and probabilistic matching. “It would be nice if we had that,” Goldszal says. “Perhaps that is a good use of the stimulus money: to create a national patient identifier or a statewide number.”

The Linchpin

Instead of a universal medical number unique to every patient in the United States, patients get a medical-record number from every health care organization that they visit, rendering those numbers useless to a practice attempting to aggregate information from unaffiliated organizations into one digital folder. In the absence of a deterministic number, the only alternative is to use probability to assess whether John Doe in hospital A is the same as John Doe in hospital B.

“Without numbers to make that match, you have to use name, age, gender, date of birth, address, other attributes that are present either on the image itself (in the DICOM header) or on the order,” Goldszal explains. “You use those attributes and you use some logic, and depending on how well tuned your probabilistic matching is, you achieve a match with a 75% chance or even 100% certainty. That, in itself, is starting to become very well explored, and people have developed these probabilistic matching algorithms with several different purposes.”

Goldszal alludes to algorithms used by the US Census Bureau to prevent counting anyone twice, but he also contributed to the field of probabilistic matching when working (with colleagues at the University of Pennsylvania) at Thomas Jefferson University on a National Institutes of Health study³ known as the Philadelphia Health Information Exchange. Rather than deploying Goldszal’s own matching algorithm, URad relies on the probabilistic matching already embedded in its PACS (Synapse from FUJIFILM, Stamford, Connecticut) as a feature called CommonView.

“With the Fuji PACS, probabilistic matching is embedded out of the box, with a few tweaks,” he says. “You need to give the appropriate weights and so forth. For instance, we like to consider it a perfect match if the last name is fully matched, along with the first name, gender, date of birth, main address, and other things. With the Fuji PACS, you can say, ‘I want the first five letters of the last name, the date of birth, and the gender, and I’ll give that a 75% matching probability.’ They actually drive you through a wizard that advises you on how you should judge that match—how good it is, depending on the attributes you pick. Most PACS, if not all, cannot accommodate the probabilistic match.” The more rigorous the probabilistic patient-matching algorithm, the fewer false-positive matches, so Goldszal recommends erring on the side of caution when pulling prior studies from unaffiliated organizations.
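The tunable weights and threshold Goldszal describes can be sketched in a few lines of Python. This is a minimal illustration, not the Synapse CommonView implementation: the attribute set, the weights, and the 0.75 threshold are assumptions chosen to mirror his example.

```python
# A sketch of weighted probabilistic patient matching. Weights and the
# 0.75 threshold are illustrative assumptions, not vendor defaults.

def match_score(a: dict, b: dict) -> float:
    """Score how likely two demographic records describe the same patient."""
    weights = {
        "last_name_5": 0.30,   # first five letters of the last name
        "first_name": 0.20,
        "dob": 0.30,
        "gender": 0.10,
        "address": 0.10,
    }
    score = 0.0
    if a["last_name"][:5].lower() == b["last_name"][:5].lower():
        score += weights["last_name_5"]
    if a["first_name"].lower() == b["first_name"].lower():
        score += weights["first_name"]
    if a["dob"] == b["dob"]:
        score += weights["dob"]
    if a["gender"] == b["gender"]:
        score += weights["gender"]
    if a["address"].lower() == b["address"].lower():
        score += weights["address"]
    return score

def is_match(a: dict, b: dict, threshold: float = 0.75) -> bool:
    # Err on the side of caution: only pull priors above the threshold.
    return match_score(a, b) >= threshold
```

Raising the threshold trades missed priors for fewer false-positive matches, which is the cautious tuning Goldszal recommends when pulling studies from unaffiliated organizations.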

The Workflow

The architecture (Figure 2) of the URad solution works like this: The hospital places an order for a CT exam of the head and sends the order to URad via HL7. The order carries the patient’s name, date of birth, and other identifying information. The URad workflow engine receives the order and passes it to the dictation system, which puts it on the worklist, alerting the radiologist that a patient is scheduled for head CT. At the same time, the workflow engine passes that information to the Synapse PACS, which makes a DICOM query to the hospital PACS asking whether it holds a history for John Doe and any relevant prior studies (such as head CT or neck MRI) and, if so, to deliver them.
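The order leg of that workflow can be sketched as follows. This is a toy routing function, not URad's workflow engine: the segment/field positions follow HL7 v2 conventions but are simplified, and the relevant-priors mapping is an invented example.

```python
# Toy sketch of the order leg: extract demographics and the procedure from
# a minimal HL7 order, post a worklist entry for the dictation system, and
# build a query spec for relevant priors at the hospital PACS.
# Not a real HL7 parser; field positions are simplified.

RELEVANT_PRIORS = {"CT HEAD": ["CT HEAD", "MR NECK"]}  # illustrative mapping

def route_order(hl7_message: str) -> dict:
    segments = {line.split("|")[0]: line.split("|")
                for line in hl7_message.strip().splitlines()}
    pid, obr = segments["PID"], segments["OBR"]
    patient = {
        "mrn": pid[3],   # PID-3: medical-record number
        "name": pid[5],  # PID-5: patient name
        "dob": pid[7],   # PID-7: date of birth
    }
    procedure = obr[4]   # OBR-4: universal service ID
    return {
        "worklist_entry": {"patient": patient["name"],
                           "procedure": procedure,
                           "status": "SCHEDULED"},
        "prior_query": {"patient": patient,
                        "procedures": RELEVANT_PRIORS.get(procedure, [])},
    }
```

Feeding it an order for a head CT yields one worklist entry for the dictation system and one prior-study query (head CT and neck MRI) for the PACS to issue against the hospital archive.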

Figure 2. University Radiology uses reporting-driven workflow to cover client sites.

When the exam is completed, the workflow engine notifies the PACS and the dictation system that the study is done. The PACS goes and gets the new study, and once the new and the old studies are available, the PACS sends everything to the radiologist.

The radiologist opens the images (which opens the dictation system automatically) and dictates a report; it is sent back to the dictation system, which sends it to the workflow engine, which sends it all the way back to the hospital RIS. To perform regional imaging aggregation (Figure 3), the PACS also sends DICOM queries to other unaffiliated sites that the practice covers. Goldszal says that the ability to handle datasets coming from multiple organizations with overlapping medical records is another uncommon attribute of the Synapse PACS.
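The regional-aggregation step can be sketched as a loop over covered sites: query each unaffiliated archive, keep only studies whose demographics pass the probabilistic match, and merge the hits into one folder for the reader. The in-memory site archives below are stand-ins for real DICOM query/retrieve traffic, and all names are hypothetical.

```python
# Sketch of regional aggregation across unaffiliated sites. Each site's
# archive is modeled as a list of studies; a real system would issue DICOM
# queries instead. Studies are tagged with their source site so records
# from different organizations never collide.

def aggregate_priors(patient: dict, site_archives: dict, is_match) -> list:
    folder = []
    for site, studies in site_archives.items():
        for study in studies:
            if is_match(patient, study["demographics"]):
                folder.append({"site": site, **study})
    return folder
```

Passing in the probabilistic matcher as `is_match` keeps the aggregation logic independent of how strictly matches are scored.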

Figure 3. Regional imaging aggregation among University Radiology client sites.

“A big issue that we have in the industry today is that most of the RIS/PACS are developed for hospitals,” Goldszal says. “Within a hospital, you have a single medical-record number and that’s it. Here, we have multiple medical-record numbers, and sometimes they overlap. Fuji recognizes that there is overlap and gives a treatment for that. There is no filing of the wrong record in the wrong patient jacket. Recognizing that is very difficult for a plain-vanilla PACS.”

What Goldszal calls “collisions” occur when a PACS accepts data from other institutions but identifies each patient only by the medical-record number followed by the accession number. “The accession numbers, the medical-record numbers, are assigned by the modalities or the RIS, not by the PACS,” Goldszal reminds us. “The PACS is just the recipient, but the PACS has to have enough intelligence to discern that data come from site A versus site B. When you are making a worklist, the highest index should be the site—then you have medical-record numbers and then the accession numbers. The key here is if you put medical record 123 on site A and 123 on site B, that’s OK, but 123 and 123 under the same site: That’s a collision.”
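The indexing rule Goldszal describes (site first, then medical-record number, then accession number) amounts to a composite key. The sketch below is an illustrative data structure, not the Synapse implementation; the class and method names are hypothetical.

```python
# Sketch of collision-safe study indexing: key every study by
# (site, medical-record number, accession number). The same MRN and
# accession pair may legitimately appear at two different sites, but a
# repeat within one site is a collision and is rejected.

class StudyIndex:
    def __init__(self):
        self._studies = {}

    def add(self, site: str, mrn: str, accession: str, study: dict) -> None:
        key = (site, mrn, accession)
        if key in self._studies:
            raise ValueError(f"collision: accession {accession} already "
                             f"filed for MRN {mrn} at site {site}")
        self._studies[key] = study

    def get(self, site: str, mrn: str, accession: str) -> dict:
        return self._studies[(site, mrn, accession)]
```

With the site as the leading index, medical record 123 at site A and 123 at site B file cleanly side by side; only a duplicate within one site raises an error, which is exactly the "wrong record in the wrong patient jacket" failure a plain-vanilla PACS risks.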

Goldszal thanks HIPAA for giving URad permission to aggregate patient data across unaffiliated organizations. “Prior to HIPAA, it wasn’t clear how to go about getting these data,” he says. “Since HIPAA, the data belong to the patient—that’s clear. They also belong to the organization for treatment and care, but they do belong to the patient. With patient consent, you can knock on the doors of all these organizations, and my patient, which is also your patient, has given consent to go after your data. Then you overcome the political barriers of competing organizations. All of a sudden, it is the patient care that is dictating to that organization to open their doors and relinquish the data so you can provide better patient care.”

Ultimately, the greatest gain for the practice is the single platform for dictation. Goldszal says, “People say, ‘Are you RIS driven or PACS driven?’ We are neither; we are reporting driven. Why? It’s because we are keeping an eye out for anything that is unread. If hospital A, B, or C has an unread case, we really don’t care if it is hospital A, B, or C; all we care about is that there is an unread case out there that needs to be dictated. Philosophically, it’s really keeping an eye on the prize. The product of radiology is the report. Our focus is on that report—on the capture of the order as well as the creation of the results.”