There have already been moves to express federal leadership on AI, including the AI Action Plan; an executive order on advancing U.S. leadership in AI infrastructure; an EO on removing barriers to American leadership in AI; and another EO on accelerating federal permitting for AI data centers.
Some agencies have acted as well. The Federal Trade Commission, for instance, has previously announced a crackdown on deceptive AI claims and schemes.
Against this backdrop, the call for a national framework “is an attempt to prevent the mistakes made with privacy,” says Drew Bagley, vice president and counsel for privacy and cyber policy at CrowdStrike. The federal government didn’t take the lead there, and that resulted in a heterogeneous regulatory landscape. With an AI framework, the administration is looking to prevent that from happening again.
Is There an Emerging Framework for Data and Security?
The administration still needs to formulate the framework, and then Congress needs to act on it. But experts say there are some likely areas of focus, particularly around data management and security.
Overall, the framework likely will be “designed in a manner that intends to incentivize innovation, and incentivize further this notion that AI is a tool in the toolkit,” Bagley says. To that end, it will almost surely articulate guidelines around “what is going on with the data and how data that’s leaving your enterprise is being protected against threats.”
In terms of cybersecurity, “we need to protect the confidentiality, integrity and availability of that data,” he says. The framework will likely look to address that need.
Much remains to be seen, however. “The administration understands that a lot of that heavy lifting actually has to be done by Congress, because Congress are the folks who have to pass the laws,” Hagemann says. “The administration can give guidance, the administration can have policy priorities and a preference for the direction that this thing takes, but ultimately, a lot of the details have to be meted out by Congress.”
How Can Federal IT Leaders Prepare for AI Compliance Requirements?
In the meantime, there are steps agencies can take today to set themselves up for compliance, even before any requirements are spelled out.
When it comes to risk management, for example, “we already have a pretty successful framework for how you manage risk in this space, and it’s the NIST AI Risk Management Framework,” Hagemann says.
That framework offers “a pretty good metric and rubric for assessing whether or not you as an organization are doing your best due diligence, ensuring you’re abiding by prevailing best practices,” he says. Agencies that align with the National Institute of Standards and Technology framework can continue to do so, and those that aren’t there yet can get themselves up to speed.
In addition, agencies can ramp up (or double down on) their data management efforts. “First and foremost, really, is to view this through the lens of a data governance problem. You don’t have to wait for a framework to do that,” Bagley says.
“Any sort of AI framework will naturally have something to say about data governance, and those principles are pretty much tried, true and tested,” he says. “Maybe the speed and scale changes with AI, but the principles are the same.”
Privacy in particular will likely be a focus of any emerging framework. Many agencies are attending to this already. By maintaining such a focus, they will be “at least in general alignment with everything that’s going to be outlined in that framework,” Hagemann says.
What’s Shaping a National AI Policy Outlook?
So far, the administration has been consistent in its messaging around AI: Agencies need to put the new tools to use, and they need to innovate with AI safely and effectively.
Going forward, Hagemann says, potential policy changes around procurement could make that easier. “There’s a lot of commercial off-the-shelf models and systems that will probably serve a lot of purposes within the federal government perfectly fine,” he says.
That being the case, “maybe there are opportunities to adjust some of the procurement rules to allow quicker adoption of technologies that are off-the-shelf ready, as opposed to putting the agencies in a position where they feel as though they need to develop the tool themselves,” he says.
With an eye toward the federal government itself as a user of AI, Bagley sees room for policies that encourage adoption. “If something is repeatable and can be automated, then that’s something where there’s probably going to be a default expectation that AI is used,” he says. Such a policy would align with the administration’s overall encouragement of agency AI use as a path to greater efficiency.
We’ll also likely see emerging policy around cybersecurity as it relates to AI. “One of the key reasons for that is that adversaries are moving at machine speed,” Bagley says. “In our CrowdStrike 2026 Global Threat Report, we found that the amount of time it takes on average for an adversary to move to another system, once the first system has been infiltrated, is now down to 27 minutes.”
As a result, the need for AI-powered cyber defenses “is going to be really important at the policy level,” he says. Agencies can be proactive in exploring solutions now to safeguard their data and systems and to fulfill what will likely be a policy structure that encourages AI-enabled defenses.
