The technology that lets an agency use all of its data must work throughout the data's lifecycle: keeping the data safe and usable when it is collected, enriching it for proper use and, when the data is shared with other agencies, helping those agencies predict outcomes.
Storing all of that data in the cloud, in whatever environment an agency chooses, is a first step. But government holds so much data that fully centralizing or standardizing it is no longer a realistic option.
Reference Architecture Creates a Foundation for Data
Some agencies turn to a reference architecture, a common foundation on which agency components can build their own IT structures while keeping the ability to share data without compromising their individual needs. This is how the 17 agencies of the intelligence community handle their data.
Others choose to manage data where it lives, either through virtualization, which runs more than one operating system at a time to expand a system's capability, or through containers, a more streamlined technology that helps software run consistently across different environments.
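Containers illustrate the point: The same packaged application behaves the same way on a laptop, in an agency data center or in a commercial cloud. As a minimal sketch, assuming a local Docker daemon and the docker Python SDK are available (the image and command below are illustrative placeholders, not anything an agency actually runs):

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Run a short-lived workload in an isolated container; the same image
# would behave identically in any environment with a container runtime.
output = client.containers.run(
    "alpine:3.19",                       # illustrative base image
    ["echo", "hello from a container"],  # illustrative command
    remove=True,                         # clean up the container on exit
)

print(output.decode())
```

Because the image carries its own dependencies, its behavior stays consistent whether it runs on-premises or in the cloud.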
Still others are turning to enterprise data management tools that let them define transparent, fast ways to handle data, allowing an agency to see its data from a new viewpoint.
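Under the hood, much of what such tools do amounts to joining records from separate systems into one logical view. A minimal sketch, assuming pandas and two hypothetical exports (the file names and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical sources: a CSV export from one system of record and a
# JSON feed from another. Neither dataset has to move permanently.
grants = pd.read_csv("grants_system_export.csv")  # grant_id, recipient, amount
payments = pd.read_json("payments_feed.json")     # grant_id, paid_date, paid_amount

# Join on a shared key to produce a single combined view.
unified = grants.merge(payments, on="grant_id", how="left")

# The combined view answers questions neither source answers alone,
# e.g., which grants have been awarded but not yet paid.
unpaid = unified[unified["paid_amount"].isna()]
print(unpaid[["grant_id", "recipient", "amount"]])
```

The point is the viewpoint, not the library: one query surface over data that still lives in separate systems.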
In the government arena, this is critically important: Doctors can better spot patterns in disease outbreaks, cybersecurity professionals can better detect looming attacks, and financial experts can catch suspicious spending patterns.
With the volume of data pouring into the federal government every day, and the White House's new emphasis on data strategy, agencies must craft a method to organize, quantify, verify, explain and share their data as conveniently as possible. The technology is available; agencies just have to choose the approach that best fits them.