ICYMI: Microsoft, Securin, and More Cyber Leaders Testify on Using AI in the US Cybersecurity Mission

June 18, 2025

WASHINGTON, D.C. — Last week, Rep. Andrew Garbarino (R-NY), chairman of the Subcommittee on Cybersecurity and Infrastructure Protection, held a hearing to discuss the secure implementation of artificial intelligence (AI) to strengthen U.S. cybersecurity.

Witness testimony was provided by Kiran Chinnagangannagari, co-founder and chief product & technology officer at Securin Inc.; Steve Faehl, U.S. government security leader at Microsoft; Gareth Maclachlan, chief product officer at Trellix; and Jonathan Dambrot, chief executive officer at Cranium AI Inc.

Witnesses highlighted how the public and private sectors can leverage AI to stay one step ahead of cyber threats and stressed the importance of ensuring AI is designed securely. 

In his opening statement, Chairman Garbarino explained that while AI could be utilized to enhance our nation’s cybersecurity defenses, it could also provide new tools to cybercriminals and our adversaries:  

“While AI bolsters our productivity and security, our adversaries also hope to use the technology for their own gain. Our nation’s adversaries increasingly weaponize AI to scale and more quickly develop attacks against American citizens, businesses, and government entities. Additionally, phishing attacks have increased nearly 1,200% since the rise of generative AI in late 2022.”

In his opening statement, Faehl discussed the importance of the federal government utilizing AI securely:

“We’re at an inflection point, and adding more human attention is just not scalable. There is no way to tackle this urgent national security issue without organizations, including the federal government, immediately embracing AI. There are three primary ways at Microsoft that we think about security and AI together. One of them is security with AI, then security of AI, and finally security from AI. Security with AI is grounded in using large language models that provide the opportunity to supplement human effort and attention with computational power. We track the implications of fueling this technology closely as we support customers with our Security Copilot product. As a result, we’ve seen a 34% decrease in mistakes, a 17% decrease in breaches, and a 30% faster time to incident resolution using this technology, which is a huge leap forward in capability, and we’re still only in the early innings of generative AI.”

In his opening statement, Maclachlan discussed the dangers of failing to use AI to strengthen our cybersecurity:

“If we do not use Gen AI to actually secure environments, we will be lost against the attackers themselves. And the reason being is security operations, historically, has been based upon hire a few experts, get them to find the most important things and go look at them in detail. Doesn’t work when Gen AI allows the bad guys to hide within the shadows, do things at scale, personalize every single attack so you can’t spot for patterns. But adding Gen AI to the security operation side works. Gen AI is never going to get bored, it’s never going to get distracted, it doesn’t care about looking at the same things over and over again every time it gets a little bit more information.”

Chairman Garbarino asked how the public and private sectors can adapt “Secure by Design” principles for AI: 

“Given how AI has evolved rapidly in just a few years from machine learning to generative AI to Agentic AI, it appears we have an opportunity to adopt a similar philosophy and deploy secure AI tools. How can we adopt ‘Secure by Design’ principles for AI?”

Chinnagangannagari answered: 

“I would say that software secure by design policies as framework is not new. It was introduced back in 2005 by NIST [National Institute of Standards and Technology] and unfortunately those ‘Secure by Design’ standards have not been adopted or not been practiced. And with AI, it’s actually exacerbating that problem that we are seeing with vibe coding, lot of students, lot of kids, lot of folks that do not have any software experience or writing code and it’s introducing more and more software vulnerabilities and weaknesses. We actually did a research analysis, and we saw most of the models that are out there can be jailbroken if you have the patience and will to get to it. And we also have analyzed about 15,000 MCP servers that are out there. One other thing that we notice is that CWE-20, which is input validation, which is very old vulnerability and weakness, it’s been exploited even today…So there needs to be a big push from the Congress and legislation to mandate and ‘Secure by Design’ in the products and offerings that these vendors are providing out there. And I saw the EO on Friday that actually pushes that software by design policies forward.”
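Chinnagangannagari's reference to CWE-20 (Improper Input Validation) names one of the oldest and most frequently exploited weakness classes. As a minimal illustrative sketch, not anything presented at the hearing, the Python below contrasts an unvalidated file lookup with an allowlist-validated one; all names (BASE_DIR, SAFE_NAME, the read_report_* functions) are hypothetical:

```python
# Illustrative sketch only, not code shown at the hearing. CWE-20 (Improper
# Input Validation) in one of its most common forms: a caller-supplied file
# name used without checks, enabling path traversal.
import re
from pathlib import Path

BASE_DIR = Path("/var/app/reports")                  # hypothetical data dir
SAFE_NAME = re.compile(r"[A-Za-z0-9_-]{1,64}\.txt")  # allowlist, not denylist

def read_report_unsafe(name: str) -> str:
    # CWE-20: "name" is trusted as-is, so an input like "../../etc/passwd"
    # escapes BASE_DIR.
    return (BASE_DIR / name).read_text()

def read_report_safe(name: str) -> str:
    # Validate the input's form first, then confirm the resolved path still
    # lives under BASE_DIR before touching the filesystem.
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"rejected input: {name!r}")
    path = (BASE_DIR / name).resolve()
    if not path.is_relative_to(BASE_DIR.resolve()):
        raise ValueError("path escapes base directory")
    return path.read_text()
```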

Chairman Garbarino asked witnesses about how the Trump administration’s cybersecurity executive order, which makes datasets publicly available for cyber defense research, could help improve their products: 

“Last Friday, the Trump administration released its first executive order on cybersecurity. It addressed problematic elements from previous EOs, including the approach to AI security proposed in the final Biden administration cyber EO. The Trump administration’s EO focuses on vulnerability identification and management, including making data sets publicly available for cyber defense research… How would access to data sets for cyber defense improve your product offerings?”

Chinnagangannagari answered: 

“Now, as we are using these models, we—just like any other organization that’s using these models, it’s a black box. We do not know what these models were trained on, and we do not know how the data is biased. So that actually poses a significant question whenever we use these models for any purposes in any sector or any use cases. So having access to that type of data at least will help us understand what the model was trained on and if there is any bias that is introduced. In fact, we do propose a—just like FDA, when you go to the store, when you buy food or when you buy a drug, there is a label and ingredients on it. You know exactly what you’re consuming. So you know what the impact of it is. We need a similar one for AI, understanding the entire model bill of materials, including the data. That is a missing piece, in my opinion, and there’s something that, you know, Securin has been working [on]. Very happy and glad to provide that information to you, Chairman.”
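The FDA-label analogy maps naturally to a machine-readable "AI bill of materials." The sketch below is purely hypothetical: the field names are illustrative and do not represent an actual Securin schema or any regulatory standard.

```python
# Hypothetical sketch of an "AI bill of materials" record, in the spirit of
# the FDA-label analogy from the testimony. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class AIBillOfMaterials:
    model_name: str
    model_version: str
    base_model: str                    # upstream model this was built from
    training_datasets: list[str]       # provenance of training data
    data_cutoff: str                   # how current the training data is
    bias_evaluations: list[str]        # bias audits run, with outcomes
    license: str

label = AIBillOfMaterials(
    model_name="threat-triage-assistant",
    model_version="1.2.0",
    base_model="example-llm-7b",
    training_datasets=["public-cve-corpus", "vendor-advisories-2024"],
    data_cutoff="2024-12",
    bias_evaluations=["toxicity-suite-v3: pass"],
    license="internal-use-only",
)
print(label)
```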

Faehl answered:

“The availability of data, I think, is especially important when it comes to benchmarking and assessment. So having common data available to agencies such that they can test AI solutions on that common data set to analyze for results. Many agencies would like to do assessment of AI solutions but are hesitant to make their own data available to do so. And so as a result, you can actually have broader testing, have A/B comparisons, benchmarks, common benchmarks. We’ve seen that with data that NIST has provided, where it fuels innovation because you can now start to benchmark and compare solutions. And any solution that involves AI is going to involve a lot of testing and validation, and so therefore providing data in that capacity is extremely valuable.”
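To make the A/B-comparison point concrete, here is a minimal sketch, under stated assumptions rather than any NIST procedure, of scoring two candidate detectors against one shared, fixed benchmark so the comparison is apples to apples. The benchmark rows and both "solutions" are toy stand-ins:

```python
# Minimal sketch (assumptions, not an actual agency benchmark): evaluating two
# hypothetical phishing detectors on one common, fixed data set.
from typing import Callable

# Shared benchmark: (input text, expected label) pairs every vendor is scored on.
BENCHMARK = [
    ("urgent: reset your password at hxxp://example.bad", "phish"),
    ("quarterly report attached for review", "benign"),
    ("your invoice #9921 is overdue, click here", "phish"),
]

def accuracy(solution: Callable[[str], str]) -> float:
    # Fraction of benchmark items the solution labels correctly.
    hits = sum(1 for text, label in BENCHMARK if solution(text) == label)
    return hits / len(BENCHMARK)

def solution_a(text: str) -> str:  # hypothetical heuristic A
    return "phish" if "click" in text or "password" in text else "benign"

def solution_b(text: str) -> str:  # hypothetical heuristic B
    return "phish" if "http" in text or "hxxp" in text else "benign"

print(f"A: {accuracy(solution_a):.0%}  B: {accuracy(solution_b):.0%}")
```

Because both detectors run against the identical data set, the scores are directly comparable, which is exactly the property common public data gives agencies that cannot share their own.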

###