Navy Lt. Cmdr. Michael Montoya faced a nasty surprise in March 2005 when he assumed duty as CIO of the National Naval Medical Center in Bethesda, Md. Forty of the medical center's 100 servers were on their last legs and needed to be replaced.
Turns out, that was the easy part. Six months later, in September, a presidential commission recommended that the famed medical center lead development of a unified military health system as part of a $1.5 billion base consolidation effort. Under the plan, the Bethesda facility will subsume nearby Walter Reed Army Medical Center in Washington by 2011. That same month, the naval medical center began carrying out a Defense Department directive to move paper patient records to a new digital database.
Montoya and his staff immediately knew they would need to do more than simply refresh servers. They decided to create an entirely new networking environment.
The National Naval Medical Center chose a 60-terabyte, high-availability storage area network that includes two 4-gigabyte storage processors running on both two- and four-way server blades. Connecting it all will be miles of fiber-optic and copper wire.
"We needed a better way to institute a different architecture, a better way of doing business in today's environment," Montoya says. "With the reduction of money to operate activities, the reduction of personnel, base realignment and closures, and all the related issues in the military, we had to find an easier way to manage our technology. The most appealing one at the moment is the SAN technology."
A SAN has several advantages. First, it lets the center migrate to a single platform with a single administration interface. Second, the medical center can consolidate its server farm as it deploys new blade technology. And third, it provides failover and virtualization capabilities.
Combined, elements of the new architecture solve problems of the past and prepare the medical center for the future. Here's how:
By migrating to a single platform, Montoya and his staff can more quickly deploy new applications, make repairs and reduce downtime.
"We have over 3,800 machines here on the campus that are single points of failure," says Mark Goodge, the center's chief technology officer. "So, if they lose their data, we have unhappy customers, and that's a bad reflection on IT."
Instead, data copied to local drives simultaneously duplicates to a central repository on the SAN. "Now, if they say, 'Hey my computer blew up,' we can just say, 'OK, find another computer and log on, and we'll map your information back to you,' " Goodge says. Users' roaming profiles, including their e-mail and data files, will be accessible anywhere on the center's network, which links 13 branch medical clinics scattered from New Jersey to West Virginia.
That means Navy doctors, nurses and technicians can work more efficiently, patients can get better care, and IT workers don't have to scramble to fix a disk drive containing the only copy of a user's files. "Therefore, we have a cost reduction because they don't have a stoppage," Goodge adds.
By replicating data between the two sites, the center guards against losing data to a site-specific problem, such as a natural disaster.
Amid the server consolidation and blade deployment, a second, smaller data center is being carved out elsewhere on the Bethesda base. If network administrators lose connectivity with one, it automatically fails over to the second. Each center will have one six-blade frame, one high-availability storage system and one disk storage device.
What makes the blade servers unique is that they are diskless and each blade is interchangeable. That means they can be automatically repurposed during a failure, immediately assuming the identity of any failed blade. Meanwhile, disk resources remain accessible via the SAN, letting the data centers pool processing power instead of assigning individual servers to specific applications.
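The repurposing idea described above can be sketched in a few lines. This is an illustrative model only, with hypothetical names; in the real deployment the "identity" is a SAN boot profile that a spare blade picks up, not a Python object.

```python
# Hypothetical sketch of diskless-blade failover: each server identity (a SAN
# boot profile) is decoupled from hardware, so any spare blade can assume the
# identity of a failed one while its disks stay reachable on the SAN.
from dataclasses import dataclass, field

@dataclass
class BladePool:
    # Maps a SAN boot profile (server identity) to the blade slot running it;
    # None means the profile is currently unhosted.
    assignments: dict = field(default_factory=dict)
    spares: list = field(default_factory=list)

    def fail(self, slot):
        """On hardware failure, repurpose a spare blade to take over
        the failed blade's identity; return the replacement slot."""
        for profile, current in self.assignments.items():
            if current == slot:
                replacement = self.spares.pop(0) if self.spares else None
                self.assignments[profile] = replacement
                return replacement
        return None  # no hosted profile was on that slot

pool = BladePool(
    assignments={"mail-profile": "blade-1", "records-profile": "blade-2"},
    spares=["blade-6"],
)
print(pool.fail("blade-1"))  # → blade-6
```

The key design point is that nothing application-specific lives on the blade itself, which is what lets the pool treat hardware as interchangeable capacity.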
"Hardware is always the limiting factor," says John Cantrell, a field account executive with CDW Government. "Utilizing built-in virtualization tools helps to transform the hardware paradigm because it increases operational efficiency by leveraging the functionality in the software."
As IT staff reductions level off, users and applications are growing, Montoya says. "We're becoming dependent on more and more clinical systems, business systems and financial systems, but we don't want to increase the technology staff," he says. "So, we're trying to be more efficient with the use of those people and trying to reduce what we're paying for technology."
Some savings will come from deploying and maintaining applications across the SAN from a single location. "We have over 100 servers in our data center, and with all those servers came the administration and management of each individual server, under the old paradigm," he says.
Other savings will come from a streamlined system of patch management and system troubleshooting. The storage software lets systems administrators take a snapshot of production data, store it and retrieve it if future problems occur, for example, while deploying patches. Moreover, users can test new applications with current data, quickly debug the apps and then deploy them faster and more economically.
"What used to take a whole day and a service call to the vendors that manufacture the servers will now only take 15 or 20 minutes because we're doing point-in-time configurations," Goodge says.
Administering software patches to comply with DOD and State Department policies was worse. "Sometimes those patches would wreak havoc on the server infrastructure," he says. With the new system in place, Goodge and his staff can take a snapshot before the patch is applied. "If something happens, we can roll it back and see what was causing the problem." Better still, using virtualization technology that's embedded on the blades, the center can first test patches in a controlled environment before deploying them across the network, reducing downtime.
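The snapshot-then-patch workflow Goodge describes can be modeled simply. Real SAN snapshots are block-level copies taken by the storage software; this hedged sketch just captures the control flow of snapshot, patch, check, and roll back.

```python
# Illustrative model of point-in-time snapshots for safe patching: copy the
# production state before a risky change, and restore it if the patch
# "wreaks havoc." Names and structure here are hypothetical.
import copy

class SnapshotStore:
    def __init__(self, state):
        self.state = state       # live "production" state
        self._snapshots = []     # point-in-time copies

    def snapshot(self):
        """Record a point-in-time copy; return its id."""
        self._snapshots.append(copy.deepcopy(self.state))
        return len(self._snapshots) - 1

    def rollback(self, snap_id):
        """Restore production state from an earlier snapshot."""
        self.state = copy.deepcopy(self._snapshots[snap_id])

server = SnapshotStore({"version": "1.0", "healthy": True})
snap = server.snapshot()                              # before patching
server.state.update(version="1.1", healthy=False)     # patch misbehaves
if not server.state["healthy"]:
    server.rollback(snap)                             # undo the patch
print(server.state["version"])  # → 1.0
```

Testing the patch against a snapshot in an isolated virtual machine first, as the article notes, means a bad patch never touches production at all.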
Perhaps most significantly, savings are built into future growth. "The whole idea behind BRAC is to get DOD smaller, get leaner," Montoya says. "We'll have the technology where we can just snap on additional blades, additional disk devices and pull additional fiber to add capacity as we need it."
Adds Goodge: "Putting in a solution like this will help us grow into — not grow out of — technology. When we merge with Walter Reed, I don't want to tear down walls after we just put them up. You really need to look down the road five to seven years to see if you're going to meet those requirements."
The center began the SAN project in June 2005, though it was clear before then that the servers needed replacing. As servers started to fail, Montoya approached the medical center's board of directors and described the serious IT issues the center faced and then offered options. The SAN approach was among them.
For the next few months, the SAN plan was, in military parlance, socialized to elicit feedback. By September, the comptroller, the board of directors and the governing Information Management Planning Committee had approved the plan, and by December, the IT department started to receive the first products.
The new technology is being deployed in phases. First, user files were consolidated onto disk. This accounted for about 40 percent of the medical center's aging servers. The remaining servers, identified by Montoya as critical infrastructure components, will be rolled into the high-availability blade frame once he and his staff have completed testing and evaluation.
As of late June, the medical center had completed about 75 percent of the rollout, including deployment of the blades and about 20 terabytes of disk storage. Only "parts and pieces" and further training of the IT staff remain, Montoya says. Because the SAN is designed to be transparent to end users, no training is required for them.
Goodge expects to complete the project by September.
Under the new paradigm, the center has a three-tier storage architecture. The blades in the data centers are connected to the high-availability storage system via fiber connectors, enabling all mission-critical applications to be available immediately.
Nonessential applications that are used daily are connected via SCSI and copper wire. "It's a little less expensive but still high availability," Montoya says.
Applications and data that are even less urgent are relegated to the less expensive disk storage. "Not everyone needs to have Boardwalk- and Park Place-type storage," Goodge says.
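The three-tier placement policy above amounts to a simple mapping from an application's urgency to a class of storage. The tier names and criticality labels in this sketch are hypothetical, but the policy mirrors the article's description.

```python
# Hedged sketch of the center's three-tier storage policy: mission-critical
# apps on fibre-attached high-availability storage, daily nonessential apps
# on SCSI/copper, and everything else on low-cost disk.
TIERS = {
    "fibre-ha": "fiber-connected high-availability storage (most expensive)",
    "scsi":     "SCSI/copper-attached storage (less expensive, still HA)",
    "disk":     "low-cost disk storage (least urgent data)",
}

def place(criticality):
    """Map an application's criticality rating to a storage tier."""
    return {
        "mission-critical": "fibre-ha",
        "daily": "scsi",
    }.get(criticality, "disk")  # default: cheapest tier

print(place("mission-critical"))  # → fibre-ha
print(place("archive"))           # → disk
```

As Goodge puts it, not every application needs "Boardwalk- and Park Place-type storage"; the default case deliberately falls through to the cheapest tier.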
The SAN equipment, Montoya says, is proving a perfect fit for a large, geographically dispersed enterprise in flux.
The biggest challenge was in choosing the vendors, both Montoya and Goodge say, noting that the medical center is not using outside contractors to help with the implementation. The center typically puts all vendors through a vetting process, but it ruled out some vendors from the get-go because their products did not work with those of other vendors as promised.
"Where we have had to make adjustments is with interoperability," Montoya says. "The vendors say one thing, but when you look at the interoperability matrix — when you finally get one out of a vendor — it says something else."
In the end, adds Goodge, "That's a big issue — pulling back that onion peel and trying to see what attaches to what."