Many security managers place a high value on human analysts, believing they are the heart and soul of a successful security operation. Ask CISOs what makes their departments successful, and the answer will likely praise analysts and their ability to understand the agency’s mission.
As the sophistication and volume of cyberthreats continue to grow, cyber-defense becomes even more difficult for prime targets like federal agencies. To properly respond, government technology leaders turn more frequently to automated security technologies that can respond in times of crisis.
The increased use of automation creates a different environment for security managers, where human analysts seemingly lose their value. Agencies need automation to fight cyberattacks, but many wonder what role analysts will play moving forward. The answer lies somewhere in the middle: Agencies will require analysts to operate as the brains behind the technology’s might.
CISOs may want more security analysts on their teams, but that isn’t an option for most agencies. Budget constraints and lack of available talent make analysts a valuable, albeit costly, resource. As a result, security leaders seek efficiencies through automation.
There is a misconception that automating processes replaces the job of the human analyst. Automation frequently is human-delivered and human-mediated. Analysts identify opportunities for automation and play a crucial role in developing the policies that machines are programmed to automate.
Cyberleaders need two things when it comes to automation. First, they need qualified analysts with the capability to recognize problems and exercise their own intuition and judgment to find solutions. Second, leaders need agile and flexible solutions to ensure analysts can augment or expand their use to adapt to new threats.
Those who resist automation technology typically cite fears about loss of control, lack of transparency and added complexity. In practice, however, automation assists in two areas: scale and time. It offloads repetitive jobs to machines that can perform the work quickly. Such automation offers greater efficiency, but it still requires humans who can provide real-world context.
Federal agencies rely heavily on authentication, as employees typically work across the country or throughout the world. Hackers try to take advantage of this. For instance, if an employee logs in to the network at 3 p.m. from New York, then again 30 minutes later from London, an analyst could flag it.
Of course, a machine could also catch that same anomaly and trigger the appropriate alert, freeing the analyst to focus on other things.
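The New York-to-London scenario is an example of what is often called an "impossible travel" check: if the speed implied by two consecutive logins exceeds what even commercial flight allows, the pair is flagged. A minimal sketch of that logic, with hypothetical login records and a roughly 900 km/h speed ceiling chosen for illustration, might look like this:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Flag two logins whose implied travel speed exceeds a flight's."""
    dist = haversine_km(login_a["lat"], login_a["lon"],
                        login_b["lat"], login_b["lon"])
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600.0
    if hours == 0:
        return dist > 0  # simultaneous logins from two places
    return dist / hours > max_speed_kmh

# The example from the text: New York at 3 p.m., London 30 minutes later.
ny = {"lat": 40.71, "lon": -74.01, "time": datetime(2024, 1, 8, 15, 0)}
ldn = {"lat": 51.51, "lon": -0.13, "time": datetime(2024, 1, 8, 15, 30)}
print(impossible_travel(ny, ldn))  # roughly 5,500 km in 30 minutes -> True
```

Once a rule like this is encoded, the machine fires the alert automatically; the analyst's role shifts to tuning the thresholds and deciding what happens next.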
Alternatively, there might be times when a machine initiates an alert of anomalous behavior and automatically restricts access as a protection. When it snows in Washington, D.C., the majority of an agency’s workforce may log in to work remotely. The abnormally large number of remote logins may trip an automated security mechanism designed to keep unauthorized users out. A machine doesn’t adapt its policies to the context that it’s snowing; it needs the insight an analyst provides to ensure legitimate remote users keep access.
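The snow-day case can be sketched as a volume trigger with an analyst-supplied context override. Everything here is hypothetical for illustration: the threshold multiplier, the baseline count and the "expected_surge" flag an analyst would set when a weather event is declared.

```python
def evaluate_remote_logins(remote_logins, baseline,
                           threshold=3.0, analyst_context=None):
    """Decide how to handle a remote-login volume spike.

    remote_logins   -- current count of remote logins
    baseline        -- typical count for this time of day
    threshold       -- multiplier over baseline that counts as a spike
    analyst_context -- flag an analyst sets, e.g. "expected_surge"
                       when a snow day is declared (hypothetical)
    """
    spike = remote_logins > baseline * threshold
    if not spike:
        return "allow"
    if analyst_context == "expected_surge":
        # The analyst supplied context the machine lacks: the spike is
        # legitimate telework, so log it but don't lock users out.
        return "allow_and_log"
    return "restrict_and_alert"

print(evaluate_remote_logins(1200, 300))                                    # restrict_and_alert
print(evaluate_remote_logins(1200, 300, analyst_context="expected_surge"))  # allow_and_log
```

The design point is that the machine's rule stays simple and fast, while the judgment call of whether a surge is benign lives with the human who can see the weather report.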
Technology solutions fill the cyberbattlefield, but ultimately automation is a human-versus-human issue: Humans initiate and direct every threat actor and network defense mechanism. Federal leaders must realize analysts and automation technology do not oppose each other. Humans can identify, create and supervise automation to achieve maximum effectiveness, while automation can free humans from the mundane to focus on more pressing issues.