Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and exploiting vulnerabilities in software, and could pose a severe risk to economies, public safety and national security. But is this the whole story? Some experts have expressed scepticism about the extent of the model’s capabilities. Ian Sample hears from Aisha Down, a reporter covering artificial intelligence for the Guardian, to find out what the decision to limit access to Mythos reveals about Anthropic’s strategy, and whether the model might finally spur more regulation of the industry. Help support our independent journalism at theguardian.com/sciencepod