OpenAI, Anthropic brief lawmakers on cyber risks from new AI models

OpenAI and Anthropic held separate closed-door sessions last week with House Homeland Security Committee staff to discuss their latest models and the cybersecurity dangers they pose, according to sources familiar with the meetings. The briefings marked one of the first formal engagements between major AI developers and lawmakers over threats posed by systems capable of finding and exploiting critical vulnerabilities.

Anthropic has delayed public release of its Mythos Preview model specifically because of its ability to rapidly identify and leverage serious security flaws. OpenAI, meanwhile, has adopted a staged approach to rolling out its GPT-5.4-Cyber model. Both companies are working with federal agencies to provide access to the systems.

The Thursday sessions covered recent model capabilities and what they mean for critical infrastructure, along with concerns about Chinese efforts to steal and replicate American AI systems. An OpenAI spokesman confirmed the company held multiple briefings with Senate and House committees last week, plus a separate session with the White House. Anthropic said it regularly briefs congressional staff on model capabilities and national security implications.

House Homeland Security Chair Andrew Garbarino said in a statement that industry-government partnerships are critical to keeping pace with emerging threats. "Productive partnerships between industry and government are essential to help us stay ahead of the evolving threat landscape, ensure the government is prepared to securely harness AI for its defensive capabilities, and support and protect American AI development as adversaries like China seek to gain an advantage by any means," Garbarino said.

Garbarino has been hosting private roundtables with tech and AI executives and coordinating with Rep. Jay Obernolte of California, who introduced legislation this week outlining a federal AI framework. The committee has held several public hearings on generative AI's impact on national security and state-sponsored cyberattacks.

A separate briefing on jailbroken AI models, which have been stripped of safety controls, conveyed urgent warnings to committee members. Rep. August Pfluger of Texas, who attended that session, said he was alarmed at how quickly the systems could be manipulated to carry out harmful tasks. "What I just saw in there, with just a short amount of time typing in questions, is very scary," Pfluger told reporters.

Rep. Andy Ogles of Tennessee, also present at that briefing, warned that most of these tools are readily available and easily accessible, increasing the risk that they end up in the wrong hands. "It's rather frightening, and it underscores the fact that AI is advancing so rapidly and Congress is light years behind," Ogles said.

James Rodriguez, the article's author, added: "These briefings show how nervous lawmakers should be about AI capabilities outpacing their understanding of what's actually possible."