
Aug 28 2025
Security

AI-Powered Physical Security Can Benefit Federal Agencies

The technology is taking pressure off security personnel with its ability to more accurately discern threats.

Artificial intelligence is taking pressure off security personnel and reducing human error by enhancing threat detection, optimizing surveillance systems, analyzing video and mitigating risks.

As technology experts extol the benefits of generative and agentic AI in boosting government productivity and cybersecurity efforts, its physical security benefits should not be overlooked.

How AI Is Enhancing Government Physical Security

By its very nature, AI lends itself well to supporting government needs related to physical security.

“There is an unbelievable shift in capabilities because AI is never-sleep, always-on,” says William Plante, a member of the American Society for Industrial Security’s Emerging Technology Community. “From video technology and access control to intrusion detection and process automation, AI is one of these things where you’ll never, ever be able to go back once you’ve done it.”

In place of inefficient and possibly error-prone human efforts, agencies will look to AI for process automation approaches to threat identification, categorization and preliminary response. This, in turn, can speed responses in critical security incidents.

“Historically in physical security, you had cameras all over the place, and you’d have a security guard or security personnel watching those cameras,” says Elyson De La Cruz, senior member of IEEE and adjunct professor at University of the Cumberlands. “So, you have the stresses of just being human as part of the task.”

“But we have automated and orchestrated some of these things already,” he adds. “And the best part about it is, computers don’t get tired.”
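As a rough illustration of the orchestration De La Cruz describes, the sketch below shows how a detection from an upstream analytics engine might be categorized and routed to a preliminary response. The Detection record, labels and threshold are hypothetical placeholders, not a reference to any specific product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical detection record produced by an upstream video-analytics engine.
@dataclass
class Detection:
    camera_id: str
    label: str         # e.g., "person", "vehicle", "unattended_bag"
    confidence: float  # 0.0 - 1.0
    timestamp: datetime

# Labels a security team might choose to treat as actionable (illustrative only).
ACTIONABLE = {"unattended_bag", "perimeter_breach", "weapon"}
ALERT_THRESHOLD = 0.85

def triage(detection: Detection) -> str:
    """Categorize a detection and return a preliminary response action."""
    if detection.label in ACTIONABLE and detection.confidence >= ALERT_THRESHOLD:
        return "alert_operator"    # push to the monitoring console immediately
    if detection.label in ACTIONABLE:
        return "queue_for_review"  # lower confidence: hold for human verification
    return "log_only"              # routine traffic: keep for audit, no action

# Example: a high-confidence perimeter event is escalated rather than logged.
event = Detection("cam-17", "perimeter_breach", 0.93, datetime.now(timezone.utc))
print(triage(event))  # -> "alert_operator"
```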

 


 

Use Cases: From Face Recognition to Predictive Threat Detection

Several key use cases demonstrate the potential value of AI in physical security.

  • Face recognition promises to support consistent access control at government facilities and to streamline processes at border crossings, for example. “You might not have to pull your passport if you’re crossing boundaries, let’s say, from here to Europe and vice versa,” De La Cruz says. “That’s because of the things that we’ve already gathered on our systems and have authorized within our border coordination with our partners.”
  • Continuous monitoring and automated response can elevate protections in agencies’ physical spaces. “In surveillance operations, AI systems continuously monitor multiple video feeds — detecting and classifying objects, people and behaviors in real time,” according to security expert Ryan Schonfeld, writing for the Security Industry Association. Automated alerts then clue operators in on threats.
  • Predictive threat detection becomes increasingly possible with AI. “We can see patterns around the environment, and then we can tie in correlations across a different number of patterns,” De La Cruz says. AI can spot not just aberrations from the norm but also long-term threats. “Imagine an actual system that could recall not only years but potentially decades of data. Now you could have a threat-detection matrix that actually tells you about those threats.” A brief sketch of this idea follows the list.
  • Computer vision refers to AI’s ability to make sense of visual imagery. “These technologies excel at identifying objects, tracking movement and recognizing patterns,” writes security executive Jason Veiock in an SIA blog post. Paired with generative AI’s situational understanding, these technologies can predict potential threats, simulate scenarios and automate routine tasks.
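A first pass at the kind of pattern analysis De La Cruz describes for predictive threat detection could be as simple as training a generic anomaly detector on historical access-control activity, as in the sketch below. It uses scikit-learn’s IsolationForest; the features and numbers are illustrative assumptions, not drawn from any agency’s data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative feature vectors built from access-control logs:
# [hour_of_day, badge_swipes_last_hour, failed_attempts, doors_touched]
rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.normal(13, 3, 500),   # daytime access
    rng.poisson(3, 500),      # a few swipes per hour
    rng.poisson(0.2, 500),    # rare failed attempts
    rng.poisson(2, 500),      # a couple of doors
])

# Train on historical "normal" behavior; flag departures from that baseline.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# A 3 a.m. burst of failed attempts across many doors looks nothing like the baseline.
suspicious = np.array([[3, 25, 12, 9]])
print(model.predict(suspicious))  # expected output: [-1], i.e., anomalous
```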

Choosing Interoperable, AI-Powered Surveillance Tools

To make effective use of AI enhancements, agencies must look for interoperability in their surveillance tools — capabilities that can merge seamlessly into their existing physical security solution sets.

“There are definitely open standards for connecting your devices together,” De La Cruz says.

SIA promotes interoperability through its Open Supervised Device Protocol, while the O-RAN Alliance and Telecom Infra Project are advancing open interface specifications for AI-enabled network components. It’s important for agencies to align their efforts with these emerging standards.

“We need to make sure that we don’t get vendor lock-in — that we can interconnect and integrate these particular systems and that we can connect physical security systems to large data sets that an organization has,” De La Cruz says.
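One way to preserve that flexibility, sketched below under the assumption of two hypothetical vendor payload formats, is to normalize every device’s events into a single vendor-neutral record before they reach an agency’s analytics or data lake. The field names are illustrative, not a published standard.

```python
from dataclasses import dataclass
from typing import Any

# A vendor-neutral event record (field names are illustrative placeholders).
@dataclass
class SecurityEvent:
    source: str       # e.g., "access_control", "video"
    device_id: str
    event_type: str   # e.g., "door_forced", "person_detected"
    timestamp: str    # ISO 8601

def from_vendor_a(payload: dict[str, Any]) -> SecurityEvent:
    """Map one hypothetical vendor's access-control payload to the shared schema."""
    return SecurityEvent("access_control", payload["readerId"],
                         payload["eventCode"], payload["ts"])

def from_vendor_b(payload: dict[str, Any]) -> SecurityEvent:
    """Map another hypothetical vendor's video-analytics payload to the shared schema."""
    return SecurityEvent("video", payload["camera"],
                         payload["detection"]["class"], payload["time_utc"])

# Downstream analytics only ever see SecurityEvent, so either vendor can be
# replaced without rewriting the integrations that consume its events.
events = [
    from_vendor_a({"readerId": "door-04", "eventCode": "door_forced",
                   "ts": "2025-08-28T14:02:11Z"}),
    from_vendor_b({"camera": "cam-17", "detection": {"class": "person_detected"},
                   "time_utc": "2025-08-28T14:02:13Z"}),
]
print(events[0].event_type, events[1].event_type)
```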

It’s also important for agencies to understand how their AI-driven security solutions were developed and what type of data was used to train the underlying models.

“A lot of early developers used actual data from access control and video surveillance systems to train their AI, whereas many products now will rush to market,” Plante says. “They want to catch up to the early creators, and they’ll use a lot of synthetic data.”

Synthetic data runs the risk of unintended bias, depending on how it was created, so product evaluation and selection should include a deep dive into how the software was built and how its baseline model was trained.
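Part of that deep dive can be as simple as comparing the system’s false-positive rate across slices of a held-out, real-world test set, as in the sketch below. The groups and records are hypothetical placeholders; the point is the comparison, not the numbers.

```python
from collections import defaultdict

# Held-out evaluation records: (group, model_flagged, actually_a_threat).
# The group labels are hypothetical; in practice an agency would slice by
# whatever attributes matter for its deployment (site, time of day, etc.).
records = [
    ("site_daytime",   True,  False), ("site_daytime",   False, False),
    ("site_daytime",   False, False), ("site_nighttime", True,  False),
    ("site_nighttime", True,  False), ("site_nighttime", False, False),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, flagged, is_threat in records:
    if not is_threat:                 # only non-threats can produce false positives
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

# A large gap between groups is a signal to dig into the training data.
for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate {rate:.0%}")
```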


The Future of Physical Security: Smarter, Safer, More Scalable

AI-enabled physical security promises to be safer and smarter, and thus pervasive across agencies.

“It will probably end up feeling just as ubiquitous as smartphones,” De La Cruz says. “It’s just going to be part of our daily life.”

Modern cameras often go unnoticed by people entering a facility, and soon they’ll be AI-informed.

“Being able to discern credible threats — real threats versus potential ones that need to be verified — it’s just going to be much more discreet and seamless,” Plante says. “Security is going to be much better off.”
