Oct 10 2024
Data Analytics

Artificial Intelligence May Augment Diplomatic Data Security

The State Department eyes emerging technology for efficient and effective cyberdefense.

The State Department’s law enforcement arm, the Bureau of Diplomatic Security, defends the department and the foreign affairs community by applying cybersecurity, technology security and law enforcement expertise to help advance U.S. foreign policy and safeguard national security interests. Specifically, the bureau’s Directorate of Cyber and Technology Security is responsible for ensuring tactical cyberdefense for all U.S. diplomats and data held by the department, spanning more than 260 locations in 170 countries around the world. From a cybersecurity standpoint, the State Department is the No. 1 target for nation-state cyber hackers — including the Big Four of Russia, China, North Korea and Iran — because we develop and act upon U.S. foreign policy.

Our adversaries are knocking at our door around the clock, meaning our directorate stays busy. To better manage our influx of data, we are keenly interested in recent developments in artificial intelligence. We must capitalize on AI opportunities to improve productivity throughout our department and to strengthen our cybersecurity posture.

The State Department would like to use AI to help our analysts do their jobs. Instead of requiring an analyst to write a ticket and push information to different bureaus by hand, automation can move that information along quickly and effectively, allowing the analyst to focus more closely on the issue at hand. An analyst may sift through a mountain of data, only to obtain a sliver of actionable intelligence. But AI opens the door to reviewing that data and uncovering information more expeditiously.
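
To make the idea concrete, here is a minimal sketch of the kind of alert automation described above. The alert fields, bureau names and routing rules are hypothetical, not the department's actual workflow; the point is that routing happens without an analyst writing and forwarding the ticket by hand.

```python
"""Minimal sketch of automated alert routing. Fields, team names and
rules are illustrative assumptions, not the department's real process."""

from dataclasses import dataclass


@dataclass
class Alert:
    source: str    # e.g., "endpoint", "email gateway"
    category: str  # e.g., "phishing", "malware", "policy"
    post: str      # overseas post that generated the alert
    summary: str


# Hypothetical mapping from alert category to the office that should act on it.
ROUTING_RULES = {
    "phishing": "Email Security Team",
    "malware": "Incident Response Team",
    "policy": "Information Security Office",
}


def route(alert: Alert) -> dict:
    """Create a ticket-like record and pick a destination automatically."""
    destination = ROUTING_RULES.get(alert.category, "SOC Triage Queue")
    return {
        "destination": destination,
        "priority": "high" if alert.category == "malware" else "normal",
        "ticket": f"[{alert.post}] {alert.category}: {alert.summary}",
    }


if __name__ == "__main__":
    alert = Alert("email gateway", "phishing", "Embassy Example",
                  "Credential-harvesting lure reported by staff")
    print(route(alert))
```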

AI Disrupts Agencies Lacking the Proper Infrastructure

It is important that we take care not to let our imagination run away with us. AI will not give us capabilities that we did not previously possess. What this technology will do is allow us to complete the things we have already mastered in a more efficient manner. As with any emerging technology, if an organization’s foundation is not fundamentally strong, AI will disrupt operations, not improve them. And so, AI presents us with a great opportunity to ensure that our foundation is strong and to check that we have done the painstaking work required to empower our goals and aspirations.

To use AI properly, we must understand what we want from a generative AI model. We feed the model with data, and every human who works with the model adds to that data. If you have 10 or 20 people working with an AI model, you face a formidable human engineering problem. We must understand what we want the model to do, but we also must understand how to guide the behavior of the people who interact with it. And we must ensure that we have the right controls in place so the model accesses the right data to perform its functions.
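
A minimal sketch of what such a control might look like is below, assuming a hypothetical three-level sensitivity scheme: documents are filtered against the requesting user's authorization before they ever reach the model's prompt.

```python
"""Minimal sketch of gating the data a generative model can see, keyed to
the requesting user's authorization. The labels and levels are hypothetical,
not the department's actual classification scheme."""

# Ordered sensitivity levels (illustrative only).
LEVELS = {"public": 0, "internal": 1, "restricted": 2}


def allowed_context(documents, user_level):
    """Return only the documents this user is cleared to expose to the model."""
    ceiling = LEVELS[user_level]
    return [d for d in documents if LEVELS[d["label"]] <= ceiling]


def build_prompt(question, documents, user_level):
    """Assemble a prompt from permitted sources only; everything else is withheld."""
    context = allowed_context(documents, user_level)
    sources = "\n".join(f"- {d['text']}" for d in context)
    return f"Answer using only the sources below.\n{sources}\n\nQuestion: {question}"


if __name__ == "__main__":
    docs = [
        {"label": "public", "text": "Travel advisory updated for Country X."},
        {"label": "restricted", "text": "Draft cable on bilateral negotiations."},
    ]
    print(build_prompt("What changed this week?", docs, user_level="internal"))
```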

For the State Department, AI offers a lot of promise in trend analysis of threat intelligence. For example, we’ve seen white papers written by foreign college students about how to crack encryption or how to leverage certain pieces of operating systems for nefarious ends. A threat of that nature will not typically show up in an intelligence brief, but we want to know when that kind of research emerges in academia. We want to monitor for behavior we have not seen previously and develop defenses against it.
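
One simple way to frame that kind of trend analysis is to watch for terms whose frequency spikes in a recent window of open-source research compared with a baseline period. The sketch below assumes a hypothetical watch list and threshold; it is an illustration of the technique, not the bureau's tooling.

```python
"""Minimal sketch of trend analysis over research abstracts: flag watch
terms mentioned far more often recently than in a baseline window."""

from collections import Counter
import re

# Hypothetical watch list of technical terms of interest.
WATCH_TERMS = {"encryption", "kernel", "firmware", "bypass"}


def term_counts(docs):
    """Count watch-term mentions across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(t for t in re.findall(r"[a-z]+", doc.lower()) if t in WATCH_TERMS)
    return counts


def emerging_terms(baseline_docs, recent_docs, ratio=3.0):
    """Return terms whose per-document rate jumped by at least `ratio`."""
    base, recent = term_counts(baseline_docs), term_counts(recent_docs)
    base_n, recent_n = max(len(baseline_docs), 1), max(len(recent_docs), 1)
    flagged = []
    for term in WATCH_TERMS:
        base_rate = (base[term] + 1) / base_n      # +1 smoothing avoids divide-by-zero
        recent_rate = recent[term] / recent_n
        if recent_rate / base_rate >= ratio:
            flagged.append(term)
    return flagged
```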

DISCOVER: There are many ways to ensure interoperability between zero-trust tools.

Lowering the Technical Barriers to Entry for Cyber Analysts

We also would like to lower the technical barrier to entry for cyber practitioners. In our security operations centers, we have a lot of talented people who may not have a high level of technical expertise. Yet they are valuable minds when it comes to developing cyberdefenses. They may be able to use AI models to explore pathways to defend against specific attacks.

Our bureau once witnessed an engagement with an advanced adversary, and an analyst realized that a specific behavior occurred at the same time every week. That one piece of information was vital to the trend analysis in that threat situation. We could use that information in an AI trend analysis to quickly identify other instances that fit the pattern.
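
The weekly pattern that analyst spotted is the kind of signal simple code can surface at scale. The sketch below buckets event timestamps by weekday and hour and flags a bucket that holds a disproportionate share of events; the thresholds are illustrative assumptions.

```python
"""Minimal sketch of spotting a recurring weekly pattern in adversary
activity. Thresholds are illustrative, not operational values."""

from collections import Counter
from datetime import datetime


def weekly_pattern(timestamps, min_share=0.5, min_events=4):
    """timestamps: datetimes for one observed behavior.
    Returns the (weekday, hour) bucket if one recurs strongly, else None."""
    events = list(timestamps)
    buckets = Counter((ts.weekday(), ts.hour) for ts in events)
    if not buckets:
        return None
    (weekday, hour), count = buckets.most_common(1)[0]
    if count >= min_events and count / len(events) >= min_share:
        return weekday, hour
    return None


if __name__ == "__main__":
    # Four Tuesdays at 03:00 plus one outlier: the Tuesday bucket is flagged.
    sightings = [datetime(2024, 10, d, 3, 0) for d in (1, 8, 15, 22)]
    sightings.append(datetime(2024, 10, 5, 14, 0))
    print(weekly_pattern(sightings))  # (1, 3) -> Tuesday, 03:00
```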

As foreign service officers, we change jobs every two to three years. We will post a political or economic officer to one country, and they will serve there for two years. The officer will make many useful contacts and collect good information. Then, after two years, the officer leaves. There may be a two-month gap before another officer arrives to take his or her place. Our Center for Analytics is looking at ways to use machine learning and AI to let new officers aggregate all of their predecessors’ work in context so they can quickly get up to speed on where the United States stands in that bilateral relationship. It’s a simple use case but a tremendous one, where we can do something faster and more effectively.
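
As a rough illustration of that use case, the sketch below lets an incoming officer query an archive of a predecessor's reporting using TF-IDF retrieval. The approach, documents and query are assumptions for illustration, not what the Center for Analytics has actually built.

```python
"""Minimal sketch of querying a predecessor's reporting archive.
TF-IDF retrieval stands in for whatever system is ultimately used."""

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def top_matches(archive, query, k=3):
    """Rank archived notes and cables by similarity to the officer's question."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(archive)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, archive), reverse=True)
    return [text for _, text in ranked[:k]]


if __name__ == "__main__":
    archive = [
        "Meeting notes: trade ministry contact on tariff negotiations.",
        "Cable: civil society groups monitoring upcoming elections.",
        "Notes: port authority contact regarding infrastructure loans.",
    ]
    print(top_matches(archive, "Where do the tariff talks stand?", k=1))
```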

RELATED: ModelOps helps agencies innovate artificial intelligence.

Bring Everyone to the Table

We must hold AI to the same rigorous standards that we hold all applications. We must monitor the data accessed by an AI model, and we must monitor which people access the AI model that has access to that data. Administrators must map out an ecosystem to understand what normal behavior looks like so they can understand anomalous behavior.
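
A minimal sketch of that monitoring idea follows: log which data labels each user's model queries draw on, build a baseline of normal behavior, and flag departures from it. The log fields and rules are illustrative assumptions.

```python
"""Minimal sketch of auditing model access: learn each user's normal data
footprint, then flag anything outside it for an analyst to review."""

from collections import defaultdict


class AccessMonitor:
    def __init__(self):
        # user -> data labels observed during the baselining period
        self.baseline = defaultdict(set)

    def learn(self, user, labels):
        """Record normal behavior while establishing the baseline."""
        self.baseline[user].update(labels)

    def check(self, user, labels):
        """Return labels this user has never touched before; a non-empty
        result marks the access as anomalous."""
        return set(labels) - self.baseline[user]


if __name__ == "__main__":
    monitor = AccessMonitor()
    monitor.learn("analyst_a", {"public", "internal"})
    print(monitor.check("analyst_a", {"internal", "restricted"}))  # {'restricted'}
```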

We also must be consistent. Large language models act like big probability engines: they seek to determine the most probable thing to happen next. That leads to inconsistencies, so there must always be a human to verify the results. You can ask a large language model the same question and get a slightly different answer every time. Thus, humans must provide consistency in how we look at a problem and how our thinking guides our answers. For the model’s output to be used consistently, we must build processes in which a human is still making the decisions.
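
One way to build that process is to treat every model response as a draft that a named reviewer must approve before anyone acts on it. The sketch below stubs out the model call; no real LLM API or departmental workflow is assumed.

```python
"""Minimal sketch of a human-in-the-loop gate: model output stays a draft
until a human reviewer approves it."""

from dataclasses import dataclass


@dataclass
class Draft:
    question: str
    model_answer: str
    approved: bool = False
    reviewer: str = ""


def ask_model(question: str) -> str:
    # Placeholder for a real model call; answers may vary from run to run.
    return f"Draft answer to: {question}"


def review(draft: Draft, reviewer: str, accept: bool) -> Draft:
    """Only a human decision makes the draft actionable."""
    draft.approved = accept
    draft.reviewer = reviewer
    return draft


if __name__ == "__main__":
    question = "Summarize this week's intrusion attempts."
    draft = Draft(question, ask_model(question))
    final = review(draft, reviewer="duty_analyst", accept=True)
    print(final.approved, final.reviewer)
```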

The advancement and application of AI present a great opportunity to bring more people to the table who may not have been there before and let them share their ideas. AI, as with any emerging technology, allows us to go back to basics and ensure our fundamentals and foundation are strong. AI has captured everyone’s imagination. It has given us a welcome chance to engage with segments of our own organization that we may not have engaged with before and hear new ideas. Even if that doesn’t always translate to a technical solution, it can translate to a stronger foundation within our organization.

UP NEXT: Hyperconverged infrastructure supports AI for faster data analysis.
