A Tool for Explaining Risk
Here's a heretical thought to some: FISMA is actually an ingenious statute. Why? Because if you read it very, very closely, embedded in it from the beginning is the concept that agencies should manage risk with continuous monitoring.
The statute recognizes that agencies will outsource IT services and use commercial software with known security problems, and it directs them to manage that risk by monitoring vulnerabilities throughout a system's life cycle, which is the very definition of continuous monitoring.
That's right, continuous monitoring evolved directly from the original Federal Information Security Management Act. The reporting model that was initially used to ensure compliance with FISMA delayed recognition of that fact, but the latest version of the Risk Management Framework from the National Institute of Standards and Technology is clearing things up.
At the National Archives and Records Administration, we started developing our continuous monitoring model by first selecting a set of security controls that map to reporting tools that we know can provide real-time information about the technical state of those controls.
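To make that first step concrete, here is a minimal sketch in Python of what such a control-to-tool map could look like. The feed names are hypothetical, and the control identifiers are standard NIST SP 800-53 examples used only for illustration; NARA's actual control selection and tooling aren't described here.

```python
# Minimal sketch of a control-to-tool mapping (hypothetical feed names;
# the control IDs are standard NIST SP 800-53 identifiers used for illustration).
CONTROL_TOOL_MAP = {
    "RA-5 Vulnerability Scanning": "vulnerability_scanner_feed",
    "CM-6 Configuration Settings": "configuration_checker_feed",
    "SI-2 Flaw Remediation": "patch_status_feed",
}

def tools_for_controls(selected_controls):
    """Return the reporting feeds that cover the selected controls and
    flag any control that has no real-time data source behind it."""
    covered, uncovered = {}, []
    for control in selected_controls:
        feed = CONTROL_TOOL_MAP.get(control)
        if feed:
            covered[control] = feed
        else:
            uncovered.append(control)
    return covered, uncovered
```

The point of the mapping is simply that every control chosen for monitoring has a data source behind it; anything that lands in the uncovered list can't be watched continuously and has to be assessed some other way.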
Now, we are developing ways to extract details from the tool output that we can use to create clear and concise summaries of what that information means. I compare these summaries to cartoons because they have to be visually clear and unambiguous, so that systems owners can easily consume this information and grasp the risks that they are managing — the risks that they bought through the use of these systems.
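As one way to picture that extraction step, here is a hedged sketch of reducing raw tool findings to the kind of one-line, per-system summary a systems owner can absorb at a glance. The field names, severity scale and thresholds are assumptions for illustration, not NARA's actual report format.

```python
# Illustrative sketch: boil raw tool findings down to a one-line risk summary
# per system. Field names, severity levels and thresholds are assumptions.
from collections import Counter

def summarize_findings(system_name, findings):
    """findings: list of dicts such as {"severity": "high", "control": "RA-5"}."""
    counts = Counter(f["severity"] for f in findings)
    high = counts.get("high", 0)
    if high > 10:
        rating = "RED"      # owner should act or formally accept the risk
    elif high > 0:
        rating = "YELLOW"   # risk worth weighing against mission needs
    else:
        rating = "GREEN"    # no high-severity findings in the latest feed
    return (f"{system_name}: {rating} ({high} high, "
            f"{counts.get('medium', 0)} medium, {counts.get('low', 0)} low)")

# Example: the kind of summary a systems owner can read in one glance.
print(summarize_findings("RecordsApp", [
    {"severity": "high", "control": "RA-5"},
    {"severity": "medium", "control": "CM-6"},
]))
```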
True Value
For me, that information exchange signals the true value that continuous monitoring offers agencies. If you can take what you know from your reporting tools and present it to business owners in such a way that they can understand the risks they have assumed and can balance those risks against their agencies' mission needs and budgets, you are implementing the FISMA mandate.
What it comes down to is figuring out how to use the continuous monitoring information as a communication tool. With these details in hand, program managers can justify their systems: They can say, "I am buying a lot of risk, but I have to." Or, they can present strong arguments for system enhancements and the resources needed to mitigate risk.
The biggest challenge is finding that simple cartoon that communicates effectively where risk actually lives in a system.
But that's a job that's well worth our effort. By communicating this information to systems owners, an agency's IT security team offers them a way to take a realistic look at their systems relative to the agency's budget priorities. And we're providing the information to the person in the agency who needs it most.
Our Obligation
One of my colleagues famously said in a briefing to our staff, "As a security manager, you must sleep at night. You have got to make this somebody else's problem. That's the key to risk management." Now, admittedly, that's kind of a crude way to put it. But we do have the obligation to help systems owners understand these risks.
They can then accept a risk, spend available funds to make changes that reduce it, or, if necessary, report it up the ladder to someone else within the agency. At NARA, our systems owners are embracing this approach because they are generally happy to learn about the risks for which they are accountable. If you present this information in a way that helps systems owners judge it fairly, they will be much more comfortable.
I didn't invent this idea. Everybody in government IT is working toward continuous monitoring and figuring out how best to share that information within their own environments. But I point to the State Department and its iPost system as a model. The department is unique, with consulates and embassies around the globe, and very unlike NARA, but the tool it created works well.
It shows that continuous monitoring can help agencies manage IT security in an extremely dynamic environment. And that's critical. Because the threat we have today is not the threat we will have tomorrow, nor the threat we'll need to be focused on the day after tomorrow.