Can the Federal Government Move to a World Where Software Is Secure by Default?
Zero-day vulnerabilities are the stuff of cybersecurity defenders’ nightmares: the flaws haven’t been discovered by defenders, are not publicly known and cannot be patched. A malicious actor who got hold of such exploits could wreak havoc (some believe the National Security Agency’s stockpiling of such exploits led to the WannaCry ransomware attack).
But what if software were made secure from the outset? What if zero-day exploits could be found in weeks, days or even seconds, instead of years?
That is the goal of a new wave of research, spurred in part by the Defense Advanced Research Projects Agency: to shift the culture of government software development away from one in which software is highly vulnerable and toward one in which it is made secure from the outset, thanks to new algorithms and software methods.
Zero-Day Exploits Pose Great Danger
Earlier this year, RAND released a study based on rare access to a data set of more than 200 such vulnerabilities, almost 40 percent of which were unknown to the public.
“Until now the big question — whether governments or anyone should publicly disclose or keep quiet about the vulnerabilities — has been difficult to answer because so little is known about how long zero-day vulnerabilities remain undetected or what percentage of them are eventually found by others,” RAND notes in a statement.
According to the study, it takes an average of nearly seven years between the initial private discovery of a zero-day vulnerability and public disclosure. “That long timeline plus low collision rates — the likelihood of two people finding the same vulnerability (approximately 5.7 percent per year) — means the level of protection afforded by disclosing a vulnerability may be modest and that keeping quiet about — or ‘stockpiling’ — vulnerabilities may be a reasonable option for those entities looking to both defend their own systems and potentially exploit vulnerabilities in others,” RAND says.
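As a rough illustration (our own back-of-the-envelope arithmetic, not a figure from the RAND study), if rediscovery is assumed to be independent from year to year at that 5.7 percent rate, the chance that someone else stumbles on the same flaw during a seven-year lifetime works out to roughly one in three:

```python
# Rough illustration (not from the RAND study itself): assume each year there is
# an independent 5.7 percent chance that another party finds the same zero-day.
annual_collision_rate = 0.057
average_lifetime_years = 7

p_rediscovered = 1 - (1 - annual_collision_rate) ** average_lifetime_years
print(f"Chance of independent rediscovery within {average_lifetime_years} years: {p_rediscovered:.0%}")
# prints roughly 34%
```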
Lillian Ablon, lead author of the study and an information scientist with RAND, says that typical “white hat” hacker-researchers have more incentive to notify software vendors of a zero-day vulnerability as soon as they discover it, while others, such as system security/penetration testing firms and “gray hat” entities, have incentive to stockpile them.
For governments, Ablon says, if an adversary also knows about the vulnerability, then publicly disclosing the flaw would help strengthen the government’s own defense “by compelling the affected vendor to implement a patch and protect against the adversary using the vulnerability against them.”
However, publicly disclosing a vulnerability that isn't known by an adversary gives the adversary the advantage, she says, “because the adversary could then protect against any attack using that vulnerability, while still keeping in reserve an inventory of vulnerabilities of which only it is aware. In that case, stockpiling would be the best option.”
DARPA Pushes for Security by Design
DARPA, along with academics and security practitioners, wants to move government agencies away from this cat-and-mouse game on zero-day exploits and simply make software more secure from the outset.
That is partly why DARPA hosted the Cyber Grand Challenge in August 2016. The event, which featured finalists DARPA had whittled down from earlier competitions, allowed seven teams to deploy software that, in DARPA’s words, “automatically identified software flaws, and scanned a purpose-built, air-gapped network to identify affected hosts.” For nearly 12 hours, the finalists “were scored based on how capably their systems protected hosts, scanned the network for vulnerabilities and maintained the correct function of software.”
As Nextgov recounts in a special report, if researchers can make software secure by design rather than by finding vulnerabilities years later, that could lead to “a world where the barrier for entry to hackers is significantly higher, where companies spend less money on constant cyber monitoring and defense and where companies and consumers can count on their information being much more secure.”
One way to do this, according to the Nextgov report, is to use so-called formal methods, in which automated tools are used “to whittle away imprecise components of software code that might be exploited to the point a researcher can mathematically prove the software is invulnerable to certain classes of vulnerabilities.”
That approach is “valuable in certain specialized industries, such as aerospace, where software is both highly critical and performs discrete tasks,” Lee Pike, research lead for the cyberfirm Galois and onetime formal methods researcher for NASA, told Nextgov. The formal method often “becomes too expensive or complicated, though, when it comes to the more complex code that underlies most consumer and government systems.”
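To give a flavor of what “mathematically proving” a property means in practice, here is a minimal sketch using the Python bindings of the open-source Z3 solver, one of many tools used in formal verification. It is only an illustration of the idea, not a depiction of how Galois or NASA actually apply formal methods:

```python
# Minimal flavor of the "prove, don't test" idea behind formal methods,
# using the Z3 SMT solver (pip install z3-solver). Real formal-methods
# tooling for production code is far more involved than this.
from z3 import And, If, Implies, Int, prove

raw = Int("raw")
n = Int("n")  # length of the buffer being indexed

# clamp(raw, n): the kind of guard a developer might write to force an index in bounds
clamped = If(raw < 0, 0, If(raw >= n, n - 1, raw))

# Claim: for any buffer with at least one element, the clamped index is in bounds,
# for every possible integer input rather than just the inputs a test suite tries.
claim = Implies(n > 0, And(clamped >= 0, clamped < n))

prove(claim)  # prints "proved" if no counterexample exists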
Other tools, like those used in the Cyber Grand Challenge, automatically scan and patch software for known vulnerabilities and use “fuzzing” tools, which “run a series of random commands against different chunks of the software to see if any of them turn up unexpected or unwanted results,” Nextgov notes.
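A toy sketch of that fuzzing idea, with a hypothetical parse_record function standing in for the component under test, might look like this (real fuzzers such as those fielded in the challenge are coverage-guided and far more sophisticated than pure random input):

```python
# Toy fuzzer: throw random inputs at a parser and watch for unexpected crashes.
# parse_record() is a hypothetical stand-in for the software under test.
import os
import random

def parse_record(data: bytes) -> None:
    """Stand-in target: a pretend parser with a lurking bug on empty input."""
    length = data[0]              # crashes (IndexError) when data is empty
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated record")

random.seed(1)
for i in range(10_000):
    fuzz_input = os.urandom(random.randint(0, 64))
    try:
        parse_record(fuzz_input)
    except ValueError:
        pass                      # expected, well-handled error
    except Exception as crash:    # unexpected result: a finding worth triaging
        print(f"input #{i} ({fuzz_input[:8].hex()}...) triggered {crash!r}")
        break
```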
The winning team, ForAllSecure, deployed a system called Mayhem, which uses a process called “symbolic execution” or “white-box testing.” The system “creates a model of the software the program is testing. Mayhem then repeatedly runs the program, following slightly different paths each time and automatically patching when things go wrong,” according to Nextgov.
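The sketch below gestures at that symbolic-execution idea using the Z3 solver again: the input is treated as a symbol, the branch conditions on the path to a crash are collected, and the solver is asked for a concrete input that satisfies them. It is only an illustration of the underlying technique, not ForAllSecure’s Mayhem engine:

```python
# Toy flavor of symbolic execution: model the input as a symbol, encode the
# branch conditions on the path to the "bad" state, and ask the solver for a
# concrete input that reaches it.
from z3 import And, Int, Solver, sat

def buggy(x: int) -> None:
    if x > 100:
        if x * 3 == 342:
            raise RuntimeError("crash")  # the state we want the solver to reach

x = Int("x")
path_to_crash = And(x > 100, x * 3 == 342)  # branch conditions along the crashing path

solver = Solver()
solver.add(path_to_crash)
if solver.check() == sat:
    crashing_input = solver.model()[x].as_long()
    print("solver found a crashing input:", crashing_input)  # 114
    try:
        buggy(crashing_input)
    except RuntimeError as err:
        print("confirmed crash:", err)
```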
Now, the Mayhem technology is being used by government agencies across the defense, civilian and intelligence realms, ForAllSecure’s Chief Operating Officer Tiffany To told Nextgov. Lockheed Martin and software company Guardtime also provide the Mayhem system to agencies as part of a larger contract that includes Pike’s formal methods firm Galois, the report notes.
“Both government and industry have spent tens of billions of dollars on all these detection tools, and they’re still being hacked into,” To told Nextgov. “You can build layer after layer of detection, and stuff is still getting through, so why not make the asset you’re trying to deploy stronger?”
It’s unlikely that such tools will be used on a wide scale in government agencies over the next few years. The tools may protect some critical systems, such as those used in aviation, but they are currently expensive and require specialized training to use effectively.
“Agencies are generally undermanned and undertrained, and many are not using the tools they have to their full capability,” former federal CISO Gregory Touhill told Nextgov. “Fixing that would be a better return on investment at this point, rather than introducing something that adds complexity.”