European Commission Opens Regulatory Talks With Anthropic Over Restricted Cybersecurity AI Model

The European Commission confirmed April 17 that it is engaged in ongoing discussions with Anthropic, the San Francisco-based AI developer, covering the company's range of AI models, including cybersecurity-specific systems not currently available to the bloc's 27 member states. Anthropic has committed to respect the EU's General Purpose AI (GPAI) Code of Practice, Commission spokesman Thomas Regnier told reporters in Brussels. Regnier stated that under the Code's framework, "there is an obligation to assess and mitigate risks that could come from a service that may or may not be offered in Europe." The talks center on Claude Mythos, Anthropic's advanced cybersecurity model, which the company itself has declined to release to the general public.

Anthropic publicly introduced Mythos on April 7 and withheld it from general availability on national-security grounds. A pre-publication document described the model as posing "unprecedented cybersecurity risks." Mythos represents a significant advance in AI's ability to automate complex cyberattacks, specifically the identification of vulnerabilities and the generation of exploits. An assessment by the UK's AI Security Institute (AISI), which received early access, found the model succeeded in expert-level hacking tasks 73 percent of the time. Prior to April 2025, no AI model could complete those tasks at all. Rather than a public rollout, Anthropic released Mythos to a limited group as part of an initiative called Project Glasswing, billing it as more effective than competing AI systems at detecting software vulnerabilities.

The Commission's engagement with Anthropic proceeds under the GPAI Code of Practice, which the European AI Office published in final form on July 10, 2025. The Code is a voluntary tool, prepared by independent experts through a multi-stakeholder process, designed to help industry comply with the AI Act's obligations for providers of general-purpose AI models. Its Safety and Security chapter applies only to the small number of providers of the most advanced models, specifically those subject to the obligations for GPAI models with systemic risk under Article 55 of the AI Act. The Commission's formal enforcement powers remain prospective: they enter into application on Aug. 2, 2026, after which the Commission can enforce the obligations on providers of GPAI models and impose fines. Noncompliance may draw penalties of up to €15 million or 3 percent of a company's global annual turnover, whichever is higher. Regnier signaled the August 2026 date during the April 17 briefing, noting that "certain rules will kick in" by then, and underscored that the Commission is currently receiving information from Anthropic and will proceed from there. [POLITICO]
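The penalty ceiling described above is a "whichever is higher" formula. A minimal sketch of that calculation, assuming only the two figures stated in this article (€15 million or 3 percent of global annual turnover); the function name is illustrative, not from any official source:

```python
def max_gpai_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Return the higher of EUR 15 million or 3% of worldwide annual turnover.

    Illustrative only: mirrors the penalty ceiling as described in this
    article, not an authoritative reading of the AI Act.
    """
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# A firm with EUR 2 billion in turnover: 3% (EUR 60M) exceeds the EUR 15M floor.
print(max_gpai_penalty_eur(2_000_000_000))  # -> 60000000.0

# A firm with EUR 100 million in turnover: 3% (EUR 3M) is below the floor,
# so the EUR 15M fixed amount applies.
print(max_gpai_penalty_eur(100_000_000))  # -> 15000000.0
```

The fixed €15 million floor means the 3 percent prong only binds for companies with global turnover above €500 million.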

The talks unfold against a backdrop of mounting concern across Western governments and financial institutions. The heads of America's largest banks reportedly met this month with Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent to weigh the security implications of the as-yet unreleased model. The European Central Bank plans to engage the chief risk officers of eurozone banks to evaluate potential threats linked to Mythos. Anthropic CEO Dario Amodei separately met with senior officials from the Trump administration "for a productive discussion on how Anthropic and the U.S. government can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The EU's regulatory posture is further complicated by a reported access gap: at the April 17 briefing, reporters pressed Regnier on why the Commission had not yet obtained direct access to Mythos, noting that approximately 40 unnamed organizations in the UK have already received it. Regnier acknowledged that no governmental or intergovernmental body has access to the model at this stage. [POLITICO]

The timeline places pressure on the Commission's AI Office to build capacity before enforcement begins. Analysts have noted that the AI Office appears to lack the internal capacity to scan trends at the frontier of AI development: its small staff, however capable, works under constant deadline pressure, leaving no one specifically tasked with monitoring frontier AI developments and translating them into regulatory action. Within six days of Anthropic's announcement, the UK's AISI published results of comprehensive cyber-capability testing of Mythos, illustrating the gap between the UK's evaluation pace and the EU's. The Commission's decision to engage Anthropic through a voluntary compliance channel rather than an adversarial enforcement posture signals a strategic choice, but the Aug. 2, 2026 enforcement trigger means that window is narrowing. Whether the Commission can obtain adequate model access and technical expertise to complete a meaningful risk assessment before that date remains the central unresolved question.

References:
[1] Reuters via Investing.com. (2026, April 17). Anthropic talks to EU, including on its cyber security models, Commission says. https://www.investing.com/news/economy-news/anthropic-talks-to-eu-including-on-its-cyber-security-models-commission-says-4619976

[2] Mobile World Live. (2026, April 20). Anthropic talks AI models with European Commission. https://www.mobileworldlive.com/regulation/anthropic-talks-ai-models-with-european-commission/

[3] Courthouse News Service / AFP. (2026, April 17). EU in talks with Anthropic over risks of AI model Mythos. https://www.courthousenews.com/eu-in-talks-with-anthropic-over-risks-of-ai-model-mythos/

[4] European Commission Audiovisual Service. (2026, April 17). EC Midday press briefing of 17/04/2026. https://audiovisual.ec.europa.eu/en/media/video/I-287781

[5] Scientific American. (2026, April). What is Mythos and why are experts worried about Anthropic's AI model. https://www.scientificamerican.com/article/what-is-mythos-and-why-are-experts-worried-about-anthropics-ai-model/

[6] Fortune. (2026, March 26). Exclusive: Anthropic 'Mythos' AI model representing 'step change' in capabilities. https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/

[7] TechPolicy.Press. (2026, April 24). How the EU and UK can learn from Anthropic's Mythos. https://www.techpolicy.press/how-the-eu-and-uk-can-learn-from-anthropics-mythos/

[8] CBS News. (2026, April 23). Anthropic investigating possible breach of its Mythos AI model. https://www.cbsnews.com/news/anthropic-investigates-mythos-ai-breach/

[9] Reel Financial. (2026, April). ECB engages eurozone banks on cybersecurity threats from Anthropic's AI model. https://www.reelfinancial.com/archives/95369

[10] European Commission. (n.d.). The General-Purpose AI Code of Practice. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai

[11] European Commission. (n.d.). Guidelines for providers of general-purpose AI models. https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers

[12] Jones Day. (2025, August 6). EU AI Act: European Commission publishes General-Purpose AI Code of Practice. https://www.jonesday.com/en/insights/2025/08/eu-ai-act-european-commission-publishes-generalpurpose-ai-code-of-practice
