Before you criticize an agency for being disorganized, take a look at your computer desktop.
Dozens of data files are probably there, with urgent Word documents scattered higgledy-piggledy among dusty Excel spreadsheets, long-forgotten PDFs and duplicate browser launch icons. Add them up and you might find about 80 megabytes of data and program code that have no business sharing the same space.
Now, replace those megabytes with terabytes and you begin to understand the headaches that many agencies face. To their credit, some federal users realize that not all data is created equal and are taking steps to store it properly.
Take, for example, the National Library of Medicine and the National Naval Medical Center. In separate projects, at sites across the street from each other in Bethesda, Md., the two facilities are taking a three-tiered approach to data storage and access. The first tier includes mission-critical data; the second, important but less urgent data; and the third, data that is less crucial or accessed infrequently.
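That classification logic can be sketched in a few lines of Python. The tier rules and the 90-day freshness threshold below are illustrative assumptions, not either facility's actual policy:

```python
from datetime import datetime, timedelta

# Illustrative three-tier storage policy. The threshold and rules are
# assumptions for the sake of the sketch, not the agencies' real criteria.
RECENT = timedelta(days=90)

def assign_tier(mission_critical: bool, last_accessed: datetime,
                now: datetime) -> int:
    """Return the storage tier (1, 2 or 3) for a piece of data."""
    if mission_critical:
        return 1  # tier 1: mission-critical, fastest storage
    if now - last_accessed < RECENT:
        return 2  # tier 2: important, recently used
    return 3      # tier 3: less crucial or infrequently accessed

now = datetime(2006, 1, 1)
print(assign_tier(True, now, now))                         # 1
print(assign_tier(False, now - timedelta(days=10), now))   # 2
print(assign_tier(False, now - timedelta(days=400), now))  # 3
```

The payoff of a rule like this is that the most expensive, fastest storage is reserved for the small slice of data that actually needs it.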
As Karen D. Schwartz reports here, IT staff at the National Institutes of Health's NLM knew the library had to ditch its direct-attached storage approach for tiered storage. DAS is great for small, tight-knit environments, but not for the NLM, which serves up vast databases of health-care data via the Internet. The new system is more scalable and ultimately less costly.
Meanwhile, IT staff members at the Navy's premier hospital facility are putting the final touches on a storage area network that uses blade servers and redundant arrays of independent disks. Eventually, the SAN will also include optical drives.
Just as at NLM, the medical center is using fast, though relatively expensive, fiber-optic lines to connect first-tier storage devices, and less expensive, slower copper lines to link the second- and third-tier devices. It's a smart move. After all, there's no point in having lightning-fast access to data that users seldom need.
Another smart move by the medical center is its use of diskless server blades. In the past four years, the storage market has seen growth in the use of blade servers. The medical center, as Kevin Ferguson reports here, is in the vanguard, deploying a SAN system that includes blades.
Why go diskless? Three reasons: reliability, the ability to virtualize processing power, and organized data storage.
First, consider maintenance and downtime. The diskless blades have fewer moving parts and throw off less heat, making them more reliable. When they do fail, or if they simply need to be removed for routine maintenance, system administrators can swap them out on the fly; other blades pick up the processing responsibility. Because data remains on the SAN devices, there is no interruption to file access.
Second, consider virtualization. Each blade is identical to the next, so their processing power can be reallocated automatically during heavy workload cycles. For Navy Lt. Cmdr. Michael Montoya, CIO at the medical center, it means his staff can work more efficiently. "In the past, when we had one server go offline, it affected numerous customers," Montoya tells FedTech. "We had to scramble to get that server back online. Now that we're managing applications on the SAN with virtualization, if one CPU goes down, another one just picks up. So, now we have minimal downtime, and that is a great improvement as well."
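A toy model makes the failover idea concrete. The blade and application names here are hypothetical, and real virtualization layers are far more sophisticated, but the principle is the same: when one blade drops out, its workload is redistributed across the identical survivors while the data stays put on the SAN:

```python
# Toy failover model: redistribute a failed blade's applications across
# the remaining identical blades. Blade and app names are hypothetical.
def redistribute(assignments: dict, failed: str) -> dict:
    survivors = [b for b in assignments if b != failed]
    if not survivors:
        raise RuntimeError("no blades left to absorb the workload")
    new = {b: list(apps) for b, apps in assignments.items() if b != failed}
    for i, app in enumerate(assignments[failed]):
        # Round-robin the orphaned applications onto surviving blades.
        new[survivors[i % len(survivors)]].append(app)
    return new

cluster = {"blade1": ["pharmacy"], "blade2": ["radiology"], "blade3": []}
print(redistribute(cluster, "blade2"))
# radiology lands on a surviving blade; data on the SAN is untouched
```

Because the blades themselves hold no data, nothing has to be copied or recovered when one fails; only the processing assignment changes.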
Third, to the main point, blades used as part of a SAN provide organized storage. Because the data is housed in a central repository, there's no guesswork as to where it is stored.
Now, take a peek at your desktop. Maybe it's time to follow the Navy hospital's and NLM's lead.
Editor in Chief