Predictive AI Offers a Structural Advantage to Defenders
Predictive AI has largely worked in favor of defenders. These systems discriminate, sorting data into categories, and they are well suited to helping organizations manage the large volumes of data they generate as they operate. This discriminative capability offers enterprises a structural advantage, although attackers can also use predictive AI, for example, to learn how networks that share characteristics with a new target were successfully attacked.
For cyber defenders, predictive AI enables anomaly detection, automated alert triage and execution of response playbooks that help security teams manage growing workloads and respond more consistently to threats. Predictive AI underpins much of the speed and scale of modern cybersecurity. It has turned one of the field's greatest challenges — the size and complexity of enterprise networks and the flood of data they produce — from a liability into an asset by instrumenting key locations and mining the resulting data.
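As a minimal sketch of the kind of anomaly detection predictive AI enables, the snippet below flags hours whose event volume deviates sharply from a baseline. The counts, threshold and function name are illustrative assumptions; production systems use far richer features and models:

```python
from statistics import mean, stdev

def flag_anomalies(counts, z_threshold=2.0):
    """Flag hours whose event count deviates strongly from the baseline.

    counts: list of hourly event counts (illustrative data).
    Returns the indices of hours whose z-score exceeds the threshold.
    """
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > z_threshold]

# Illustrative baseline login traffic with one suspicious spike
hourly_logins = [52, 48, 50, 47, 51, 49, 400, 53]
print(flag_anomalies(hourly_logins))  # → [6]
```

Real deployments would learn baselines per user and per asset rather than a single global mean, but the principle — statistically separating normal from abnormal at scale — is the same.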
Government Sees the Dual-Use Reality of Generative AI
GenAI introduces a more complex dynamic. Its core strength, content and knowledge creation, makes it inherently dual-use. Tools that help defenders generate incident response plans and automate security operations can also be used by attackers to craft convincing phishing campaigns, impersonate executives and write malicious software.
Although the builders of GenAI models try to create guardrails against misuse, the emphasis on making the rationale for GenAI output transparent and knowable cuts both ways: those with malicious intentions can structure their queries, or ask follow-up questions, to map the parameters of the internal guardrails and evade them.
While attention has recently focused on the potential of commercial GenAI models to accelerate the velocity and sophistication of malicious cyber campaigns, the most significant benefit of GenAI to malicious cyber actors in practice has been in creating content for social engineering. AI-generated text, voice and video have dramatically increased the credibility, volume and targeting precision of attacks, eroding long-standing trust signals.
AI Has a Role in Vulnerability Management
Less attention has been paid to understanding the implications of GenAI for vulnerability discovery and exploitation. VulnCheck has reported that roughly 1% of newly published common vulnerabilities and exposures (CVEs) in a given year are publicly reported as exploited in the wild, consistent with its historical trend data. One reason for this is that most cybercriminals are not programmers.
However, “vibe coding” helps less technically competent cybercriminals write functional software using the same GenAI tools they already exploit for social engineering. GenAI also makes it far easier for malicious actors to identify vulnerabilities that have been disclosed but for which no patch has been created and to generate exploit code targeting these vulnerabilities.
This could dramatically expand the number of vulnerabilities that are exploited — and because of the nondeterministic nature of GenAI (where asking the same question a second time often yields a different answer), the variety of these exploits is likely to expand as well. The combination of increased volume and variety of attacks is likely to overwhelm organizations that are not in turn embracing AI tools to fuel their cybersecurity.
Compounding the problem, many organizations are slow to take action to manage IT vulnerabilities. The term “N-day vulnerability” describes vulnerabilities for which a patch has been created by the manufacturer but which a customer has failed to apply for N days, leaving the organization at risk. In some cases, the number for “N” can be years. In many cases, users fail to apply a patch even when they are repeatedly notified by the vendor. Taken collectively, N-day vulnerabilities likely create more exposure to cyber risk than zero-day vulnerabilities.
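The N-day exposure described above is straightforward to quantify. The sketch below, using hypothetical host names and dates, computes how long each asset stayed exposed after a patch became available:

```python
from datetime import date

def n_day_exposure(patch_released, patched_on, today):
    """Days a host stayed exposed after a patch became available.

    If the patch was never applied (patched_on is None), exposure
    runs through today. All dates below are illustrative.
    """
    end = patched_on if patched_on is not None else today
    return max((end - patch_released).days, 0)

today = date(2025, 6, 1)
fleet = {
    "web-01": (date(2025, 3, 10), date(2025, 3, 17)),  # patched after 7 days
    "db-02":  (date(2024, 1, 5),  None),               # still unpatched
}
for host, (released, applied) in fleet.items():
    print(host, n_day_exposure(released, applied, today))
```

Tracking this single number per asset makes the scale of the N-day problem visible: a host like the hypothetical `db-02` above has been exposed for well over a year.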
What We Can Do To Strengthen Collective Resilience
Both public and private sectors have a critical role to play in shaping incentives that encourage responsible cyber hygiene and vulnerability management.
Market-driven incentives have a role in driving progress, in areas including but not limited to AI:
- Reducing N-day vulnerability risk through targeted “carrots and sticks,” such as public awareness efforts, requirements for critical infrastructure networks and emerging cyber insurance models that scale payouts based on how long known vulnerabilities remain unpatched
- Re-examining transparency expectations for GenAI models, balancing the need for explainability with the risk that excessive transparency makes guardrails easier to identify and bypass, and supporting research that advances both transparency and misuse resistance
- Using AI for vulnerability management and cybersecurity. AI is driving innovation in cybersecurity that can help counteract its benefit to attackers. GenAI can also help organizations better manage vulnerabilities in their IT assets, tackle the N-day problem and potentially save time and money.
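The last point can be made concrete: even a simple scoring heuristic, of the kind an AI-assisted vulnerability management tool might apply at scale, can rank open N-day vulnerabilities for remediation. The weights, CVE identifiers and numbers below are all hypothetical:

```python
def prioritize(vulns):
    """Rank open vulnerabilities for remediation.

    Each entry: (cve_id, cvss_score, days_unpatched, internet_facing).
    The weighting is an illustrative heuristic, not an established standard.
    """
    def risk(entry):
        _, cvss, days, exposed = entry
        # Severity, compounded by how long the patch has sat unapplied,
        # doubled for internet-facing assets.
        return cvss * (1 + days / 30) * (2 if exposed else 1)

    return sorted(vulns, key=risk, reverse=True)

open_vulns = [
    ("CVE-A", 9.8, 3, False),    # hypothetical identifiers and scores
    ("CVE-B", 7.5, 90, True),
    ("CVE-C", 5.0, 200, False),
]
print([cve for cve, *_ in prioritize(open_vulns)])
# → ['CVE-B', 'CVE-C', 'CVE-A']
```

Note how the ranking surfaces the long-unpatched, internet-facing flaw ahead of the newest, highest-severity one; that is precisely the N-day exposure that raw CVSS scores alone understate.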
For cybersecurity and IT leaders, demonstrating due diligence and adherence to best practices can meaningfully reduce risk and make an organization a less attractive target. Beyond internal controls and security practices, organizations must also manage the risk they inherit from partners and technology providers.
Practical steps include:
- Choosing partners that demonstrate real security maturity, including secure-by-design development practices, transparency in vulnerability management and willingness to provide tangible evidence of progress
- Maximizing the cybersecurity value of data across interoperable ecosystems, enabling both predictive AI and generative AI to operate on richer, more contextualized information rather than isolated signals
- Adopting generative AI in a controlled, deliberate fashion, starting from clear use cases and involving key stakeholders from across the enterprise
Perfect security may be unattainable, but meaningful resilience remains achievable. In an AI-accelerated threat landscape, success will depend less on chasing every new capability and more on aligning people, processes and technology so that innovation strengthens, rather than undermines, collective defense.
