Anthropic investigating claim of unauthorised access to Mythos AI tool

Anthropic is investigating a report that a small group of individuals accessed its Claude Mythos model, a cutting-edge cybersecurity tool designed to be highly capable yet secure enough for public use. The company said in a statement: ‘We are investigating a report of unauthorized access to the Claude Mythos Preview model via one of our third-party vendor environments.’

This follows a Bloomberg report that users of a private online forum had accessed the model without standard permissions. The incident has raised concerns about the model’s capabilities, even as the UK’s lead cybersecurity authority argued that advanced AI tools could deliver significant benefits if properly safeguarded against misuse.

“When powerful AI tools are accessed beyond their intended controls, the risk extends beyond a single security incident to the widespread dissemination of capabilities that could facilitate fraud, cyber abuse, or other harmful activities,” said Raluca Saceanu, CEO of cybersecurity firm SmartTech.

According to Bloomberg, one individual already had authorization to access Anthropic’s AI models as part of their work with a third-party contractor. The report added that the group has been using the model since gaining access, though not for hacking, in order to avoid detection.

On Wednesday, the head of the UK’s National Cyber Security Centre (NCSC) addressed a major cybersecurity conference, offering a more optimistic view of AI’s potential to improve security and safety. ‘Frontier AI has rapidly accelerated the identification and exploitation of existing vulnerabilities, underscoring how swiftly it can reveal gaps in fundamental cybersecurity practices,’ he said.

Security Minister Dan Jarvis, also at the event, called for AI companies to partner with the government on a ‘generational effort’ to ensure AI is harnessed for defending critical networks against cyber threats. The most potent and sophisticated AI models, referred to as frontier AI, are primarily developed outside the UK, with leading firms situated in the US or China.

This dependency means the UK relies on companies like Anthropic to provide access to Mythos, with no oversight of how such models are developed, trained, or distributed. OpenAI has also introduced a cybersecurity model, named GPT 5.4 Cyber, which it claims is highly effective.

The discussions at CyberUK also emphasized the persistent threat posed by nation-state and hacktivist actors, notably from Russia and China. The NCSC cautioned that ‘cybersecurity has become the home front of defense in the UK,’ citing recent incidents like the Iran attacks as evidence of its growing significance in contemporary conflicts.

Additional reporting by Imran Rahman-Jones.
