In the fall of 2005, 58-member Wake Radiology found itself feeling vulnerable. Having achieved the holy grail of the electronic practice by banishing paper and film, the practice understood that it had much to lose if it lost the network over which it leveraged subspecialized reads across three area hospitals and 11 standalone imaging centers in North Carolina’s Triangle area: Raleigh, Durham, Chapel Hill, and Research Triangle Park.
Even the telephones were running over the network via voice-over IP (VoIP). So Wake elected to tighten up its IT practices by building a Tier II+ data center.
“People need to understand the cost of down time, of not having a system. An organization like ours runs on electrons. It’s like blood.”
—Ronald B. Mitchell, MSc, CIO, Wake Radiology
“Any interruption in effect takes the business down, and not every practice is that way,” Mitchell continued. “Some are still passing paper and some are still printing film. Each individual practice needs to decide for itself the cost of systems and network failure, and really go through different scenarios and try and figure out for them what the cost is, and then decide what they want to spend to avoid that cost.”
There were no precipitating events that convinced the practice to make its move. In fact, the equipment and existing network were fairly reliable. Nor did Mitchell provide the partners with a hard and fast dollar figure for a network failure. “A lot of the risk was assessed in soft dollars; relationships with patients and referring physicians mean a lot to us, and patient care is paramount,” he explained. “All of those would be affected if there was an extensive network outage.”
In considering that downtime could be extensive and the resulting cost quite high, Wake Radiology gave the go-ahead to build a data center where a parking lot formerly lay. They broke ground in the spring of 2006, and in January 2007 began moving into the new 1,140-sq-ft data center housed in an addition to the practice’s Raleigh-based administration building. Mitchell estimates the center has enough storage space to meet the practice’s needs for 15 years, and the practice is actively seeking tenants to generate revenue through its IT arm.
What Is a Tier II+ Data Center?
Mitchell, who joined the practice in October 2005, believes the previous data center did not even meet the criteria of a Tier I data center. “The original location was two converted offices, and they did not have a raised floor and the air conditioning was unreliable,” Mitchell explained. “I did put a monitor in the room and if my alarm went off in the middle of the night—and it did—I had about half an hour to get down there and do something about it before the machines shut down due to overheating. I was pretty happy to not have to do that anymore.”
Wake hired Chicago-based consulting firm Forsythe to provide guidance in building the center. A Tier II data center has a single path for power and cooling distribution, with redundant components, and is considered slightly less susceptible to disruptions from planned and unplanned activity than a Tier I. Other features include a raised floor, an uninterruptible power supply (UPS), and engine generators. Maintenance of the single path and other parts of the site infrastructure requires a processing shutdown, yielding 99.741% availability. The classification system comprises four tiers, and a white paper from The Uptime Institute, Santa Fe, NM, provides further details on IT industry standard classifications for tier performance in data centers.
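To put availability percentages like 99.741% in concrete terms, the arithmetic below converts each tier’s availability into expected downtime per year; the tier percentages are the Uptime Institute’s classic classification figures, and the function name is ours for illustration (the result is straight arithmetic, not the Institute’s published downtime tables, which round slightly differently):

```python
# Convert an availability percentage into expected annual downtime.
# Tier percentages follow the Uptime Institute's classic classification
# (Tier II = 99.741%, as cited in the article).

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a site at this availability is expected to be down."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("Tier I", 99.671), ("Tier II", 99.741),
                  ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {annual_downtime_hours(pct):.1f} hours/year")
```

By this arithmetic, a Tier II site’s 99.741% availability still permits roughly 22.7 hours of downtime a year, which helps explain why a filmless, paperless practice would push toward Tier III-style redundancy.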
The data center Wake built is actually closer to a Tier III, as it features dual power and cooling paths. “The first thing to remember is that those classifications really represent gradations,” Mitchell explained. “We really are more than a Tier II in that we do have redundancy for most of our system, in power, network and air conditioning. And in the fire alarm system we have two different VESDA (very early smoke detection apparatus) systems. So it’s extensive redundancy, and it has proven to be very useful and has kept the data center up. We have not had downtime…and we have had failures, but they are failures we can predict and failures for which redundant components pick up and continue. So it’s working for us.”
In North Carolina, frequent thunderstorms make power failures common, but the UPS systems bridge the gap until the generator takes over. Mitchell also had