ICYMI: Private Sector Stakeholders Provide Insights on Future of AI in the Homeland Security Mission
May 24, 2024
WASHINGTON, D.C. — This week, the House Committee on Homeland Security, led by Chairman Mark E. Green, MD (R-TN), held a hearing to examine how artificial intelligence (AI) is enhancing the homeland security mission, and the cybersecurity implications for AI development, deployment, and use. Witnesses provided testimony on the beneficial uses of AI across industry sectors, opportunities to increase security for certain AI applications amid increasing threats from adversaries, and ways AI innovation can bolster the cyber workforce, which continues to face retention and recruiting challenges.
Witness testimony was provided by Troy Demmer, co-founder and chief product officer at Gecko Robotics; Michael Sikorski, chief technology officer and vice president of engineering for Unit 42 at Palo Alto Networks; Ajay Amlani, president and head of the Americas at iProov; and Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology.
Chairman Green asked witnesses how to assure the public that they can trust the evolving uses of AI:
“Are there requirements that we can put in the system that would give people a sense of security? Kill switches? Always having a person in the system? What are some things that we can do to give the public a sense of security?”
Laperruque answered:
“Human review and control is one of several factors that’s critical for something like this. I think you need strong principles to ensure responsible use—from creation, to what data you’re putting into systems, and what systems you’re using it for, and what data you’re taking out and how you’re using it. As you said for human review, one of those steps on the outside is there should be human cooperation for AI results. It shouldn’t just be AI making its own decisions, and we have to know how reliable AI is in certain circumstances. Sometimes it can provide small bits of insight, sometimes it can be very reliable, sometimes it gives a degree and you have to treat it with a bit of skepticism, but also maybe it provides a bit of value. But along the lines of human review, not just human review, but trained staff [is needed].”
Chairman Green also asked Sikorski about the potential for adversaries to use ‘data poisoning’ to infiltrate and impact the AI used in operational technology systems, to which Sikorski answered:
“I think it goes back to the ‘secure AI by design’ as we’re building out these applications––how are we securing that information, how are we securing the models themselves that make the decisions as well as the training data that goes [into them]. There is a lot of research and a lot of thought about what attackers could do if they can manipulate that data, which would then in turn not even necessitate an attack against the technology itself––it’s an attack against the underlying data which it’s trained with.”
Subcommittee on Counterterrorism, Law Enforcement, and Intelligence Chairman August Pfluger (R-TX) asked Sikorski how our adversaries may be using AI to increase threats:
“Talk to me about how you see adversaries increasing the scope or scale, and actually the threat, using AI.”
Sikorski answered:
“One of the things Unit 42 does is threat intelligence. So we are monitoring the dark web, we are seeing what they’re talking about in their forums, what they’re selling to each other, the access to networks, but also talking about how to leverage this technology to create really efficient attacks. The big focus, so far, has been on social engineering attacks, which means things like phishing…and also manipulate you to get multi-factor authentication.”
“Where we start to see them poking around is using AI to be able to do better reconnaissance on the networks they’re attacking so they know what holes are in networks across the world. And then also they are starting to wade into how they can develop malware efficiently and variations of it so that it’s not the same attack you see over and over again. Which goes back to the point of: how do you fight against that? Which is why you need to develop technologies that are really efficient at using AI to see those variations.”
Subcommittee on Cybersecurity and Infrastructure Protection Chairman Andrew Garbarino (R-NY) asked what Congress can do to bolster the cyber workforce:
“Over half a million cyber job openings in the U.S. That’s what keeps me up at night—that we don’t have the workforce to defend against these cyberattacks. AI can only bring us so far, we need that human element, so [does Congress] have a role, and what is it?”
Sikorski answered:
“I think you absolutely do. There is an ability to create these types of programs where it makes it really easy for people to apply and get into—the point made earlier about hey, it’s hard to get into these schools that have these programs available. I think we often think that ‘Oh it needs to be a very specific cyber program that they’re doing.’ Some of it is they can learn those types of skills on the job when they get in and it’s more about building that broad base of technical capability in our workforce.”
“I do think there’s a lot of government agencies out there, like CISA, that have information out there that people can learn and train up. I think there are a lot of virtual education things going on that are very powerful.”
Subcommittee on Emergency Management and Technology Chairman Anthony D’Esposito (R-NY) asked witnesses about the impact of drones on law enforcement following last week’s hearing:
“It seems that there may be a space for AI in these drones so generally speaking, if any of you could answer, is AI already being used in drones either by those in law enforcement, the government, or privately?”
Demmer answered:
“Being in a related field with wall-climbing robots primarily, I can say that AI is useful for using these smart tools properly. Everything from localizing data, ensuring that the data point is synced to the location on a real-world asset, to processing millions of data points or in this case visual images. We heard a little bit earlier [about] drones being used as well to secure the border, so there are definitely applications here for that.”
Subcommittee Chairman D’Esposito then asked the panel:
“[The NYPD] currently use[s] Chinese technology in their drones, and they’re working to eliminate them from the fleet because of the issues and concerns that we have…Those Chinese drones are still in our atmosphere, still being utilized by first responders and law enforcement agencies, so how can AI help us mitigate the threats that they pose?”
Amlani replied:
“There is also a significant amount of work being done by the Defense Innovation Unit and other agencies on mitigation of drones and counter-drone work. So AI used for counter-drone work is also a way to mitigate this.”
Representative Laurel Lee (R-FL) asked for elaboration on the concept of ‘Secure by Design,’ to which Sikorski answered:
“You think about what are you building, how are you building it, what are you pulling in from the supply chain into that build, how are you protecting the application as it’s running, how are you protecting the models, the data, everything as it’s actually flowing out to customers or otherwise. And I think that’s where a really big focus on building things in a secure way is really important.”
###