
Mar 01 2021
Security

Artificial Intelligence and Extended Reality May Pose Security Risks, Expert Warns

Theresa Payton, a cybersecurity expert and former White House CIO, says organizations need to be on guard against threats from emerging technologies.

Just as federal agencies are getting more adept at deploying technologies such as augmented reality and artificial intelligence to enhance their mission effectiveness, they need to also start worrying about how those tools could compromise their cybersecurity.

That’s according to cybersecurity expert Theresa Payton, who detailed her IT security predictions for 2021 and 2022 during a recent webinar sponsored by CDW and Intel.

Payton, the CEO of Fortalice Solutions and a former White House CIO, noted that the cybersecurity landscape has been shaken.

“Everyone is in reimagining mode,” she said. “I know each of your organizations are. I know for those of you that are in government, whether it’s state, whether it’s local. The businesses in your area are also in reimagining. And personally, each one of us is in reimagining mode. But guess who else is? Cybercriminals. They’re in reimagining mode too.”

Payton said organizations need to guard against AI systems being tampered with during the time the system is being developed and trained. She also predicted that extended reality systems, which include virtual and augmented reality tools, will be attacked by malicious actors.

Protect AI Systems from Manipulation

Payton predicted that “AI poisoning” will be something to be concerned about in 2021. As Towards Data Science notes, a “poisoning attack happens when the adversary is able to inject bad data into your model’s training pool, and hence get it to learn something it shouldn’t.”
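To make the concept concrete, here is a minimal, hypothetical sketch (not drawn from Payton's remarks) of a label-flipping poisoning attack: an adversary who can slip mislabeled records into the training pool degrades the model without ever touching the deployed system.

```python
# Hypothetical illustration of "AI poisoning" via label flipping:
# bad data injected into the training pool makes the model learn the wrong thing.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Adversary flips the labels on 20 percent of the training pool before training
rng = np.random.default_rng(0)
poisoned_idx = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```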

In solidly built AI models, Payton noted, “your [AI] coach should be self-learning and contextually aware and almost become a black box to the engineer” once it gets up and running.

“My prediction is that, as we’re implementing more AI, hackers will hack in and change that algorithm undetected, so that the AI will do things not initially in the design,” she said. “AI is going to be cybercriminals’ weapon of choice, to help them crack into more accounts, networks and data stores.”

Organizations should not abandon AI and need to leverage AI tools in many cases to combat cyberattacks, Payton said. AI is also needed, she said, for “resiliency and reliability in your operations and to be able to scale your operations.”

Payton recommended that organizations make sure that all of their AI componentry “has a champion challenger test where you can actually run sample decisions outside of AI, compare it to the decision that the AI came up with and have it reviewed to make sure you don’t have issues going on inside the black box.”
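In simplified form, a champion-challenger check like the one Payton describes could be a routine that replays a sample of cases through both the production model and a human-reviewable baseline, then flags disagreements for review. The function and parameter names below are illustrative assumptions, not a specific product or Payton's own implementation.

```python
# Hypothetical champion-challenger review: compare the black-box AI's decisions
# against a simple, reviewable baseline on a sample of cases and flag mismatches.
def champion_challenger_review(samples, production_model, baseline_rule, tolerance=0.05):
    """Return flagged cases, the disagreement rate, and whether human review is needed."""
    flagged = []
    for case in samples:
        ai_decision = production_model(case)      # decision from the deployed AI (champion)
        baseline_decision = baseline_rule(case)   # decision from the reviewable challenger
        if ai_decision != baseline_decision:
            flagged.append((case, ai_decision, baseline_decision))
    disagreement_rate = len(flagged) / len(samples)
    needs_review = disagreement_rate > tolerance
    return flagged, disagreement_rate, needs_review
```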


Watch Out for Hacks on Extended Reality Platforms

When it comes to extended reality, Payton said she believes it is “going to pick up in adoption in 2021, meaning it’s going to have its first public hack in 2022.”

Payton said she is bullish on extended reality (XR) technology, which she said is “going to give your organization the power to really revolutionize how your operations work in any pandemic, or a natural disaster or a man-made disaster.”

However, because XR platforms thrive on collecting users’ emotional reactions, they are potentially valuable data troves for malicious actors.

“When you interact with the technology, what makes you breathe in? What makes you hold your breath? What makes you happy? What makes you sad?” she said. “Because that’s all going to be correlated with a reaction to you to personalize experiences for you, it’s going to be a treasure trove to be hacked.”

Payton recommended agencies have a playbook to protect such data and prevent such attacks.
