Vince May gets paid to worry.
As a regional storage manager for the Veterans Affairs Department, it’s May’s job to protect the data that’s the lifeblood of several dozen healthcare facilities in VA Region 2. That means he worries about the measures that his team takes to protect against failed hard drives, lost patient records and corrupted files — anything that might cause data to seemingly disappear.
But new storage technology deployed across the region, which spans the middle of the country, means May can worry a bit less.
VA Region 2 recently invested in new servers, networked storage hardware and data backup software to better protect data and increase uptime. Solid data backup and fast data recovery are critical to the delivery of services and patient safety for the department’s Veterans Health Administration.
Before, each VA healthcare facility managed its own backups. About half backed up to disk and the others used tape. Some facilities used antiquated or out-of-maintenance servers and tape libraries, which sometimes resulted in backup failures, May says.
In 2009, the region’s leaders decided to consolidate and standardize data backup and put May in charge. He has implemented a disk-to-disk backup storage solution throughout the region’s facilities, taking advantage of replication and deduplication. The result? Backup and recovery processes are more reliable and efficient.
“We were looking to eliminate tape backups. The amount we were spending on tapes and the IT resources that it takes to manage the tapes is so much more than disk,” May says. “With standardization, we are a more efficient, well-oiled machine. We can centrally monitor and manage the backups.”
Many agencies are retooling their backup methods to bolster continuity of operations and disaster recovery and to gain stability in their data centers. In recent years, an increasing number of IT organizations have embraced disk-based data protection, relegating tape to archival storage or eliminating it entirely, analysts say.
In fact, according to a 2010 survey by the Enterprise Strategy Group, 62 percent of organizations currently back up to disk-based storage and then to tape, 18 percent back up to disk only and 20 percent back up directly to tape.
Although tape is durable and using it can conserve power, disk is faster, particularly if an organization’s users need to recover accidentally deleted files, analysts say.
“Tape is no longer the first line of defense. Everything has moved to faster and faster technologies and having data replicated or backed up to disk,” says Jonathan Eunice, founder and principal IT adviser for tech industry analyst Illuminata.
Moving forward, some federal IT leaders say they are even considering turning to the cloud to house their backups. Here’s a look at how several agencies are implementing backup strategies that can future-proof their storage pools by providing data access stability today.
Standardizing on Disk — VA Region 2
VA’s data backup initiative is part of an overall consolidation and centralization effort across the department.
In 2009, the chief technology officer for Region 2 decided to standardize on CommVault Simpana, a suite of applications that includes data backup and recovery, deduplication, data replication and long-term archiving capabilities. The IT team installed CommVault at Region 2’s main data center in Chicago first. And this past year, VA installed high-end servers with expansive internal disk storage at each of the region’s facilities.
Since Region 2 of Veterans Affairs began its data backup standardization effort two years ago, the amount of data that it must back up has grown from 811 gigabytes in 87 systems to 218.9 terabytes in 987 systems.
Region 2 encompasses five Veterans Integrated Service Networks. Each VISN health system network includes multiple VA medical centers, outpatient clinics and facilities that provide primary and specialized care. The IT team deployed the high-end backup servers in the data centers of each of the five regional VISNs, as well as the 40-plus VA medical centers and numerous other facilities.
Each facility backs up its data locally to the backup servers on a nightly basis, and they replicate the data to their respective VISN data centers “on a heartbeat,” May says. That way, if a local facility’s backup server crashes or is somehow incapacitated, its VISN data center has a recovery version ready to go.
Throughout the process, CommVault deduplicates the data, replacing redundant copies of the same file with references to a single stored copy that users can still access. The technology allows VA to save on storage space.
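The general idea behind file-level deduplication can be sketched in a few lines. This is an illustrative toy, not CommVault's actual implementation: each file is identified by a content hash, identical files are stored once, and duplicate names simply point at the existing copy.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store illustrating deduplication."""

    def __init__(self):
        self.blocks = {}  # content digest -> file bytes (stored once)
        self.index = {}   # file name -> digest (many names, one copy)

    def backup(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data  # first copy: store the bytes
        self.index[name] = digest       # duplicate: just a reference

    def restore(self, name):
        return self.blocks[self.index[name]]

store = DedupStore()
store.backup("patient_a.rec", b"same contents")
store.backup("patient_a_copy.rec", b"same contents")  # duplicate file
print(len(store.blocks))  # 1 -- one stored copy serves both names
```

Production systems typically deduplicate at the block or sub-file level rather than whole files, which saves even more space when large files differ only slightly.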
Each backup server is set up as a CommVault “media agent” so it can communicate with the main server in Chicago, May says. “The brains are in the server in Chicago, and the servers with the media agents are like the horsepower that writes these backups to disk.”
Region 2 IT administrators can remotely schedule, manage and monitor the backup and recovery process for every facility from a web-based graphical user interface. They began installing the backup servers at each facility in August 2010 and are completing the project this month.
To further protect data and improve continuity of operations, Region 2 upgraded its storage infrastructure last July with NetApp networked storage hardware at its five VISN data centers and at 15 local facilities.
Before, the 20 facilities used different flavors of network-attached storage and storage area networks. The region is now standardized on NetApp’s storage systems, which it uses primarily for file server data, virtualized application environments and Microsoft SQL Server databases.
NetApp supports deduplication, replication and snapshot technology. The Region 2 IT team set replication to occur hourly and snapshots, which are point-in-time copies of data, to occur nightly. For disaster recovery purposes, the data at local facilities replicates to a regional VISN data center.
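The schedule described above, hourly replication plus a nightly snapshot, can be sketched as a simple scheduling calculation. The timings and function names here are hypothetical illustrations, not NetApp configuration syntax.

```python
from datetime import datetime, timedelta

def next_runs(now, snapshot_hour=1):
    """Return the next replication and snapshot times after `now`.

    Replication runs at the top of every hour; snapshots run once
    nightly at `snapshot_hour` (here assumed to be 01:00).
    """
    replication = (now.replace(minute=0, second=0, microsecond=0)
                   + timedelta(hours=1))
    snapshot = now.replace(hour=snapshot_hour, minute=0, second=0,
                           microsecond=0)
    if snapshot <= now:
        snapshot += timedelta(days=1)  # tonight's run is tomorrow's date
    return replication, snapshot

rep, snap = next_runs(datetime(2011, 6, 1, 14, 30))
print(rep)   # 2011-06-01 15:00:00
print(snap)  # 2011-06-02 01:00:00
```

The key operational difference is that replication keeps a current copy at the remote VISN site for disaster recovery, while the nightly snapshots preserve point-in-time versions that can roll back accidental deletions or corruption.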
The NetApp devices provide primary storage, while the CommVault software and backup servers back up the data. Combined, the technologies dramatically improve data protection and uptime, May says.
“With this bundle, you have a real bird’s-eye view of the whole backup infrastructure,” he says.
VA’s central IT organization recently purchased new EMC disk storage for all its sites, and the Region 2 IT team is developing a strategy for installing the EMC storage hardware in its local facilities. But Region 2 doesn’t want two flavors of storage locally.
One plan being considered is to install the EMC storage at the local facilities and move the NetApp storage to the regional VISN data centers for high-availability clustering. That way, “there won’t be downtime with the different file servers and virtual machines,” May says.
Considering Cloud — Goddard
NASA’s Goddard Space Flight Center in Greenbelt, Md., is home to the nation’s largest group of scientists, engineers and technologists studying the Earth, solar system and universe. They generate a lot of data.
For example, NASA’s Solar Dynamics Observatory, a spacecraft launched last February to study the sun, collects about 1.5 terabytes of data each day.
Multiple data centers house and process Goddard’s core scientific data, and for extra protection, the center backs up the data to tape that it then stores offsite.
“The lion’s share of what we do is supporting research and scientific discovery,” says CIO Adrian R. Gardner. “Having a continuous data record is essential for long-term trending.”
For example, the space agency’s Earth Observing System is made up of 11 satellites that provide information about the planet’s surface, biosphere, atmosphere and oceans to help scientists understand weather and climate change.
The Earth Observing System Data and Information System (EOSDIS) houses the data, making it available to scientists and the public at large. EOSDIS systems are distributed at 12 data centers, called NASA Distributed Active Archive Centers.
Each DAAC hosts specific types of data based on the center’s specialties. The Goddard DAAC, for example, houses global precipitation and atmospheric composition data, while the National Snow and Ice Data Center at the University of Colorado in Boulder, Colo., handles snow, ice and climate data. The center backs up data locally and remotely on disk.
The Goddard IT team recently began a review of its COOP and disaster recovery plans to identify areas for improvement. During the next 18 months, Gardner says, he will investigate whether to back up Goddard’s data to a private cloud.
This is not uncharted territory for the space agency: In 2009, NASA’s Ames Research Center launched a private cloud on its Mountain View, Calif., campus. The Nebula project has been so successful that NASA has deployed a cloud infrastructure at Goddard.
“NASA is moving toward cloud containers, and we will see how the cloud model can apply to data backup, continuity of operations and disaster recovery,” he says.
Tiering — DLA
The Defense Logistics Agency, the supply organization for all of the Defense Department, uses different data backup technologies and techniques depending on the criticality of the data, says Thomas Michelli, the agency’s executive director of enterprise solutions.
Many of DLA’s business applications reside in Defense Information Systems Agency Enterprise Computing Centers (DECCs), and the most critical applications are backed up using disk-to-disk-to-tape storage.
For example, DLA replicates its enterprise resource planning application (the Enterprise Business System) and its logistics system (the Distribution Standard System) to another DECC in near real time, Michelli says. Less important data, such as information from file and print servers, is stored straight to tape.
“It depends on the business case and how critical the applications and data are to the business,” he says. “If the business can do without it for a day or more, we do it right to tape.”
DISA applies a tiered approach, with important or frequently accessed data housed in the first tier using high-performance disks. A second tier, relying on slower, lower-cost disks, houses infrequently accessed data. And the third tier stows archival data directly on offsite disk or tape.
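The placement logic behind that three-tier scheme can be sketched as a simple rule. The thresholds and tier names below are hypothetical, chosen only to illustrate the pattern the article describes (fast disk, slower low-cost disk, offsite archive).

```python
def assign_tier(days_since_access, critical):
    """Illustrative tier assignment by criticality and access recency."""
    if critical or days_since_access <= 30:
        return "tier1-fast-disk"   # important or frequently accessed
    if days_since_access <= 365:
        return "tier2-slow-disk"   # infrequently accessed
    return "tier3-archive"         # archival: offsite disk or tape

print(assign_tier(2, critical=False))    # tier1-fast-disk
print(assign_tier(90, critical=False))   # tier2-slow-disk
print(assign_tier(900, critical=False))  # tier3-archive
```

In practice, tiering products automate this kind of policy, migrating data between tiers as its access pattern changes rather than requiring administrators to classify each dataset by hand.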
DLA’s IT department regularly fine-tunes its backup and continuity strategy to take advantage of new technologies, Michelli says.
The agency still manages some applications and data in house, such as file and print servers and collaboration software. But in the coming years, it expects to offload most of this to the cloud.
DISA built a private cloud, the Rapid Access Computing Environment, in 2008. Today, Defense agencies such as DLA can provision as many virtual machines and as much storage as they need.
“We will move those applications and data to the cloud, and for disaster recovery, we will have the appropriate backups through disk-to-disk storage,” he says.
Ultimately, data backup is one of the most critical components of disaster recovery.
The U.S. Holocaust Memorial Museum expects its digital archives to grow to 5 petabytes over the next five years.
That’s why IT administrators spend so much time improving backup strategies, says Joseph Kraus, former CIO of the Government Accountability Office who now holds the same title at the U.S. Holocaust Memorial Museum.
The reasoning is obvious, Kraus says: “Data backup is the key element for disaster recovery. You need data before you can recover from a disaster.”
VA’s May agrees. Like other agencies that have invested in new storage infrastructure and backup and recovery software, May is confident that he and his IT team are prepared should they need to perform a major disaster recovery operation.
“Obviously, our biggest concern is ensuring we never lose patient information and other important data,” May says. “Back when Hurricane Katrina hit, we experienced some major loss in data, and it’s those types of scenarios that we are trying to avoid ever happening again.”