Finance ministers, central bankers, and financial industry leaders are raising urgent concerns about a new artificial intelligence model developed by San Francisco-based company Anthropic, after the system demonstrated an unprecedented ability to identify vulnerabilities in major operating systems, financial infrastructure, and web browsers.
The model, called Claude Mythos, has prompted crisis meetings among global financial officials and was discussed extensively at the International Monetary Fund meeting in Washington DC this week. Canadian Finance Minister François-Philippe Champagne stated the issue had commanded serious attention from finance ministers across the board. “The difference is that the Strait of Hormuz, we know where it is and we know how large it is. The issue that we’re facing with Anthropic is that it’s the unknown, unknown,” Champagne said. “This is requiring a lot of attention so that we have safeguards, and we have processes in place to make sure that we ensure the resiliency of our financial systems.”
Mythos is one of Anthropic’s latest models, developed as part of its broader Claude AI system, which competes with OpenAI’s ChatGPT and Google’s Gemini. Developers responsible for testing AI models on so-called misaligned tasks (those that run counter to human values and goals) described Mythos as “strikingly capable at computer security tasks.” Citing concerns that the model could surface old software bugs or enable exploitation of system vulnerabilities, Anthropic has chosen not to release it publicly. Instead, the company has made Mythos available to technology companies including Amazon Web Services, CrowdStrike, Microsoft, and Nvidia through an initiative called Project Glasswing, which Anthropic describes as “an effort to secure the world’s most critical software.” On Thursday, Anthropic released a new version of its existing Claude Opus model, saying it would allow Mythos’s cyber capabilities to be tested in less powerful systems.

The UK’s AI Security Institute, the only independent body to have published a report on the model’s capabilities, said Mythos was a powerful tool able to find security holes in undefended environments but suggested it was not dramatically more capable than its predecessor, Claude Opus 4. “Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed,” the report’s authors said. Some cybersecurity experts have questioned whether the level of alarm is justified given that the model has not yet been tested by the broader industry.
Senior banking figures are taking the threat seriously regardless. Barclays chief executive CS Venkatakrishnan told the BBC the situation demanded immediate attention. “It’s serious enough that people have to worry. We have to understand it better, and we have to understand the vulnerabilities that are being exposed and fix them quickly,” he said. Bank of England governor Andrew Bailey said governments and financial institutions are being forced to reckon with a new category of risk. “The consequence could be that there is a development of AI, of modelling, which makes it easier to detect existing vulnerabilities in core IT systems, and then obviously cyber criminals could seek to exploit them,” Bailey said.
The US Treasury confirmed it has raised the issue with major American banks, encouraging them to test their systems ahead of any public release of Mythos. Financial industry sources indicated that at least one other prominent US AI company could soon release a similarly powerful model without the same safeguards currently applied to Mythos.
James Wise, a partner at Balderton Capital and chair of the Sovereign AI unit, a venture capital fund backed by £500 million in UK government funding, said Mythos represents a turning point rather than an isolated incident. “This is the first of what will be many more powerful models that can expose systems’ vulnerabilities,” Wise said. His fund is investing in British AI companies focused on security and safety, with the hope that “the models that expose vulnerabilities are also the models which will fix them.”