As the federal IT landscape has evolved with the emergence of the cloud and an explosion of data (and the need to store it), IT infrastructures have become the very lifeblood of agencies. Their ability to effectively store, manage and protect information makes it possible for agencies to carry out their missions. But corralling all this information is putting a strain on data centers, which means agencies must deal with the constant threat of IT failure.
Any shutdown of an agency’s data center can be devastating. One recent study estimated that a data center failure can cost more than $500,000. That’s per incident. And most data centers aren’t built to handle today’s volume of information.
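To put that figure in perspective, a quick back-of-envelope calculation shows how fast per-incident costs compound. The $500,000 figure comes from the study cited above; the incident rate here is a hypothetical assumption for illustration only.

```python
# Back-of-envelope annual downtime cost estimate.
# COST_PER_INCIDENT reflects the study cited in the article;
# incidents_per_year is a hypothetical assumption, not a measured rate.

COST_PER_INCIDENT = 500_000   # dollars per failure, per the cited study
incidents_per_year = 2        # assumed failure rate for illustration

annual_cost = COST_PER_INCIDENT * incidents_per_year
print(f"Estimated annual downtime cost: ${annual_cost:,}")
```

Even at a modest two incidents a year, the assumed exposure reaches seven figures, which is the kind of number that makes a disciplined upgrade plan pay for itself.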
That’s bad news for agencies looking to expand. In evolution, size is no guarantee of survival. What’s important is to keep the data center from going the way of the T-Rex.
Tune Up, Don’t Shut Down
Agencies face many budgetary and mission-related challenges as they seek to establish new IT infrastructures. The trick is to upgrade the current data center without becoming another downtime statistic. Making matters worse, most legacy data centers were built before anyone had heard of virtualization or the cloud. They don’t possess the power or cooling capacity to handle the extra burden. That makes the evolution no easy task. I liken it to performing a tune-up while the car is already barreling down the autobahn at 150 miles per hour.
To fully prepare for IT evolution, organizations must have a well-thought-out plan backed by a cross-functional team. The plan must take into account a variety of factors, such as storage, power, cooling and density, while recognizing that a change to one of these factors dramatically impacts all the rest. It’s critical that agencies establish a roadmap for this process. The aim is to expand a data center to handle next-generation requirements without disrupting the mission.
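The interdependence described above can be made concrete with a simplified capacity check: increasing rack density raises the IT power draw, which in turn raises the cooling load. The sketch below is illustrative only; all budgets, rack counts, and the cooling-overhead ratio are hypothetical assumptions, not engineering guidance.

```python
# Hedged sketch: how a change in one planning factor (rack density)
# ripples into power and cooling requirements. All numbers are
# illustrative assumptions.

def capacity_check(racks, kw_per_rack, power_budget_kw, cooling_budget_kw,
                   cooling_overhead=0.5):
    """Return (fits, it_load_kw, total_load_kw) for a proposed layout.

    cooling_overhead is the assumed ratio of cooling load to IT load.
    """
    it_load = racks * kw_per_rack              # raw IT power draw (kW)
    cooling_load = it_load * cooling_overhead  # heat-rejection estimate (kW)
    total = it_load + cooling_load
    fits = (it_load <= power_budget_kw) and (cooling_load <= cooling_budget_kw)
    return fits, it_load, total

# Doubling density from 5 kW to 10 kW per rack blows both budgets:
fits, it_load, total = capacity_check(racks=40, kw_per_rack=10,
                                      power_budget_kw=350,
                                      cooling_budget_kw=180)
print(fits, it_load, total)
```

The point of even a toy model like this is that density, power, and cooling cannot be planned in isolation: the same density change fails two budgets at once, which is exactly why the roadmap has to treat these factors as a system.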
The reward for getting this right is huge. A properly planned transition can help a legacy data center evolve into a modern, high-density system that will be maintainable and reliable for years. And agencies can realize a significant return on their investment in reduced power and cooling costs, reduced downtime and maintenance, and increased performance.
Getting the transition wrong will have major consequences. An inadequate project plan can yield multiple “restarts,” which burn through budgets and create costly delays, usually accompanied by some downtime.
By the Book
Data center operators should seek a partner with a proven track record in designing data centers. This partner can help the agency assemble a cross-disciplinary team of engineers, architects, contractors and equipment vendors capable of creating a smart design that includes a best-practices approach to meeting requirements for increased capacity and density. A good partner can also “fine-tune while driving” to meet the goal of zero downtime along the way.
The process generally begins with a step-by-step playbook, from which no deviation is allowed. It takes into account multiple scenarios, including actions to be taken during system bypass. Scenarios are practiced repeatedly, and backup procedures are put in place. A feasibility study is an important step, taking into account everything from site selection and equipment integration to construction and commissioning.
Getting Data Center Evolution Right
The data center is evolving, and there’s no turning back. To meet mission demands as citizens expect ever more services, agencies must take a closer look at infrastructure reconstruction and redesign. But don’t expect it to be easy. From storage to power to cooling, a proper plan must take it all into account. The right plan is backed by a comprehensive, step-by-step roadmap and a script that gives all parties a voice.