This week we talk about Project Glasswing, Anthropic, and Q Day.
We also discuss exploit markets, vulnerabilities, and zero days.
Recommended Book: The Culture Map by Erin Meyer
Transcript
In the world of computer security, a zero-day vulnerability is a flaw in a system that is unknown to the people who developed that system—the name refers to the fact that the developers have had zero days to fix it by the time it's discovered or exploited.
Thus, if Microsoft released a new version of Windows with a security hole that they didn't know about, but that someone else—a hacking group, maybe—discovered before Microsoft did, that group might use the vulnerability in Windows or Word or whatever else to hack the end-users of that software.
While large companies like Microsoft do a pretty good job, considering the scope and scale of their product library, of identifying and fixing the worst of the security holes that might leave their customers prone to such attacks, that same scope and scale also means it's nearly impossible to fill every single possible gap. A truism within the cybersecurity world is that defenders need to get it right every single time, while attackers only need to get it right once. There's never been a perfect piece of software, and as these things expand in capability and complexity, the opportunity to miss something also increases, and thus, so does the range of possible errors and exploitable imperfections.
Because of how damaging zero-days can be for both users of software and the companies that make that software, there are thriving marketplaces, similar to those that deal in other illicit goods, where those who discover such vulnerabilities can sell them, usually for cryptocurrencies or funds derived from stolen credit cards.
Software companies have countered the increasing sophistication of these exploit black markets with white and grey market efforts. The former are direct payouts to hackers—basically saying hey, thanks for finding this bug, here's a lump sum of money, a bug bounty—rather than punishing all hacking of their systems, which is how they would have previously responded. That earlier, punitive approach had the knock-on effect of sending all hackers, even those who weren't looking to cause trouble, either underground or actively hunting for bugs for the black market.
The grey market is more complicated and diverse, and it's also the largest of these marketplaces for those shopping around for exploits. It's populated by the same sorts of ne'er-do-wells who might frequent the exploit black markets, but it also includes all sorts of governments and intelligence agencies, who scoop up these vulnerabilities to use against their opponents, or to deny them to others who might otherwise use them against them.
All sorts of governments, from the US to Russia to North Korea to Iran, are regular shoppers on these computer system exploit grey markets, and that has created a complicated, entangled system of incentives. In some cases, it's better for the US government, or the Iranian government, or whomever, if the company making these systems doesn't know about a bug or other vulnerability, because they just spent several million dollars to buy a map to said bug or gap—one that could, at some point in the future, allow them to tunnel into an enemy's computers and cause damage or steal information.
What I'd like to talk about today is a new AI system that is apparently very, very good at identifying these sorts of exploits, and why some people operating in the zero-day and broader computer security space see this as a milestone moment.
—
On April 7, 2026, US-based AI company Anthropic announced Project Glasswing—a new initiative, currently available to only 11 partner companies, meant to help those companies shore up their cyber defenses before more AI systems like the one that underpins Project Glasswing, a model called Mythos Preview, hit the market.
So these companies—Anthropic itself, plus Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks—make a lot of stuff, and in particular make and maintain a lot of vital online and device-based software infrastructure, like operating systems and all the things that keep our apps and the web secure.
Mythos Preview is a new model created by Anthropic, similar to their existing Claude models, but apparently vastly more powerful. There are benchmarks that AI companies use to compare the potency of their models at a variety of task types, and though those are generally considered to be flawed or game-able in all sorts of ways, the main thing to know here is that Mythos did way better on most of them—especially the coding and programming-related ones—than the other, currently most capable models, the ones most professional programmers are using these days. It was also able to do impressive and worrying things like break out of the sandbox that contained it, accessing the internet when it wasn't supposed to be able to do so.
And because of that leap forward in programming capability, Mythos Preview was tasked by Anthropic with finding vulnerabilities in all sorts of software systems, including operating systems—Windows, macOS, iOS—and browsers, like Chrome and Firefox.
Most AI systems, and most human coders, if they focus enough and look really hard for long enough, will tend to find some kind of vulnerability in just about anything, because this software is just that big and complex. But within a relatively short period of time, Mythos Preview found thousands of vulnerabilities in these systems, indicating that it's a lot better at this kind of task than the other AI available these days. So Anthropic created Project Glasswing to give these entities a head-start, helping them fill these gaps and bolster their defenses before everyone else on the planet—foreign governments, hacker and terrorist groups, but also just everyday people—suddenly has the ability to identify and possibly exploit these vulnerabilities, at scale.
This news hasn’t been super widely reported in the non-tech press quite yet, but within the tech world, it landed like a hand grenade in a crowded room.
And there are already quite a few perspectives on what this all means, including a fair bit of skepticism.
On the skeptic side, many analysts have noted that it's a common tactic amongst AI companies to doomsay—to suggest that their models might end the world, might kill all of humanity, might dramatically change everything, maybe put everyone out of work. Not necessarily because the founders and employees at those companies believe that would be the case, but because the implication is that if these products are that powerful, investors should probably give them gobs of money, since a tool that could end the world or cause that much disruption might be the last tool anyone ever needs to build, or might become the next electricity or internet or whatever else. Claiming philosophical, humanistic concern for the super-weapon you just built, in other words, is one way for AI company leaders to say their product is superior to every other product ever made, while also suggesting that they are the thoughtful, careful leaders we need holding the reins of that sort of capability.
Other skeptics have said that while this might be a step up in terms of the speed at which such vulnerabilities can be identified, other, existing AI systems—even open source, free ones—have been able to do the same for a while now. So while Mythos Preview might be even better at it, and might be capable of running constantly, finding more and more of these things for a government that wants to save the money it might otherwise spend on the grey market—scooping these things up for use against its enemies, or for defensive purposes, perhaps sharing some of them with its homegrown tech companies—smaller, less-moneyed groups can already do the same, if they're smart about how they apply existing, even free, lower-end AI systems.
Others have responded to this announcement similarly to how some have responded to the concept of Q Day, short for Quantum Day: the hypothetical moment at which quantum computers finally become powerful enough to break the encryption that allows the internet, and banking, and government privacy systems to function. Quantum computers should theoretically be able to break these encryption keys a lot more efficiently than conventional computers, because of their very nature. If and when that happens, and if these systems aren't suitably prepared with new encryption that's hardened against quantum attacks, the entire banking sector could collapse—everything hackable, all the money stealable, none of it trustworthy anymore. The same goes for the whole of the web, for apps, for government systems that keep things hidden away and classified, for energy grids. It could be chaos.
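To make the Q Day idea concrete: widely used public-key encryption like RSA is secure only because recovering the two secret primes behind a public key requires factoring a huge number, which classical computers can't do in any reasonable time. A quantum computer running Shor's algorithm could factor efficiently, and the private key falls out immediately. Here's a toy sketch in Python—not real cryptography, with deliberately tiny textbook primes so brute-force factoring stands in for what a quantum machine would do to real, 2048-bit keys:

```python
def trial_factor(n):
    """Brute-force factoring: trivial for toy numbers, hopeless at real key sizes."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no small odd factor found")

# Tiny textbook "RSA" keypair (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                # public modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent: derivable only if you know p and q

msg = 42
cipher = pow(msg, e, n)  # anyone can encrypt using the public key (n, e)

# An attacker who can factor n rebuilds the private key and reads the message:
p2, q2 = trial_factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(cipher, d2, n)
print(recovered)  # 42 — the plaintext, recovered without ever seeing the key
```

The whole scheme collapses the moment `trial_factor` becomes fast, which is exactly what Q Day threatens: not a bug in any one program, but the underlying math assumption failing everywhere at once.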
The theory here, then, is that this type of AI—maybe Mythos Preview, maybe the other systems it portends, because this whole industry seems to leapfrog itself every three or four months at this point, someone coming out with a big, cool, most-powerful new thing, then their competitors releasing something even more powerful within weeks or months—maybe these vulnerability-identifying and exploiting AIs will result in something similar: all the world's software and encryption a lot more vulnerable, all at once, essentially tomorrow.
It’s more of what we’ve already seen with AI, basically, these tools providing anyone who uses them more leverage to do all sorts of things. Not necessarily creating anything new—exploits and vulnerabilities have always existed—but giving a skilled hacker the ability to find and exploit thousands of them in the same time it would have previously taken them to find and exploit just one. And it could also give unskilled, non-hackery people and entities similar capabilities.
That creates a dramatically new cybersecurity landscape essentially overnight, and that’s why, at least according to their press releases on the matter, Anthropic is not releasing Mythos Preview to the public, and instead is taking the Project Glasswing approach: they don’t think other AI companies, like OpenAI or xAI, can be trusted not to just lob that grenade into the crowded room, so since they got there first, they’re going to try to help everyone protect themselves from that grenade when it inevitably lands.
This could, then, be quite the PR coup, giving Anthropic the opportunity to tout their superior products, while also allowing them to portray themselves as sort of the white knight in the AI world, helping everyone protect themselves, even though they probably could have made far more money by either selling the exploits and creating their own new market for them, or by somehow leveraging those exploits themselves.
At the same time, it could be that they're overselling the capabilities of this new model, painting a rosy picture with themselves as the heroes, which in turn makes their products seem more powerful than they are, in order to bolster their public perception and future economic potential.
It could also be a bit of both; even those who are skeptical about this specific announcement and its implications tend to agree that we'll likely see more disruption from these sorts of models soon. Even if Mythos Preview isn't the grenade everyone's worried about, in other words, it's likely we'll face such a threat in the near future, and even if Project Glasswing isn't the defense we need against such a threat, it's probably prudent that we be thinking about whatever it is we do need—and ideally building it, too, so it's ready to go, already in place, when that new threat lands.
Show Notes
https://www.nytimes.com/2026/04/10/briefing/claude-mythos-preview.html
https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html
https://en.wikipedia.org/wiki/Claude_(language_model)#Claude_Mythos_Preview
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted
https://www.anthropic.com/glasswing
https://www.wired.com/story/anthropic-mythos-preview-project-glasswing/
https://stratechery.com/2026/myth-and-mythos/
https://en.wikipedia.org/wiki/Zero-day_vulnerability
https://en.wikipedia.org/wiki/Market_for_zero-day_exploits