Key Points
- Anthropic limited access to Claude Mythos Preview after the model uncovered thousands of critical vulnerabilities, many of them unpatched and decades old.
- AI-powered cyberattacks have surged 72% year-over-year, with 87% of organizations affected in 2025.
- Project Glasswing aims to unite major firms in using AI defensively to identify and patch vulnerabilities ahead of attackers.
Artificial intelligence is rapidly reshaping the cybersecurity landscape, and Anthropic’s latest move underscores how quickly the balance between defense and offense is shifting. The company’s decision to limit access to its new model, Claude Mythos Preview, comes as AI systems demonstrate the ability to identify and exploit software vulnerabilities at a level approaching — and in some cases exceeding — elite human experts. The development raises urgent questions about how markets, enterprises, and regulators will respond to a technology that can both secure and destabilize digital infrastructure.
AI Models Push Beyond Human-Level Vulnerability Detection
Anthropic revealed that Claude Mythos Preview uncovered thousands of critical vulnerabilities across major operating systems, web browsers, and widely used software libraries. Notably, the model identified high-severity flaws in every major operating system tested, including legacy vulnerabilities dating back decades.
Among the findings were a 27-year-old flaw in OpenBSD, a 17-year-old remote code execution vulnerability in FreeBSD, and a 16-year-old issue in FFmpeg. The scale and depth of these discoveries signal a step-change in capability. Historically, identifying zero-day vulnerabilities required scarce, high-cost human expertise. AI is now compressing that timeline dramatically.
Anthropic estimates that 99% of the vulnerabilities identified remain unpatched, highlighting both the magnitude of latent risk and the limitations of current remediation pipelines.
Rising Cyberattack Risks and Market Implications
The timing of this development aligns with a sharp increase in AI-assisted cyber threats. Data from AllAboutAI indicates a 72% year-over-year rise in AI-powered cyberattacks, with 87% of global organizations reporting exposure to such incidents in 2025.
For markets, this introduces a dual dynamic. On one hand, cybersecurity firms stand to benefit from heightened demand, potentially driving capital inflows and valuation expansion. On the other, systemic risk increases as critical infrastructure — from financial systems to cloud environments — becomes more exposed to automated exploitation.
Investors are likely to recalibrate risk models, particularly for companies with large digital attack surfaces. The emergence of AI-driven vulnerability discovery could also accelerate regulatory scrutiny, especially in sectors handling sensitive data such as banking, healthcare, and telecommunications.
Project Glasswing and the Defensive Coalition
In response, Anthropic has launched Project Glasswing, a collaborative initiative involving more than 40 major organizations, including leading cloud providers, financial institutions, and technology firms. The initiative aims to use AI defensively — identifying vulnerabilities, sharing intelligence, and accelerating patch deployment before malicious actors can act.
This approach reflects a broader strategic shift: cybersecurity is becoming increasingly collective rather than siloed. By pooling resources and intelligence, participating companies hope to outpace adversaries in what is effectively an AI-driven arms race.
However, coordination at this scale introduces its own challenges, including data-sharing constraints, competitive tensions, and varying levels of security maturity among participants.
Investor Psychology and Strategic Positioning
From a behavioral standpoint, the narrative around AI and cybersecurity is likely to amplify both fear and opportunity. Investors tend to overreact to tail risks, and the idea of autonomous systems uncovering exploitable flaws at scale could trigger short-term volatility, particularly in tech-heavy indices.
At the same time, long-term capital may increasingly favor firms positioned as “defensive enablers” — those providing infrastructure, threat detection, and automated remediation tools. The distinction between offensive and defensive AI capabilities will become a key lens through which investors evaluate technology exposure.
What Comes Next for AI and Cyber Defense
Anthropic’s restricted rollout signals a recognition that technological capability has outpaced governance frameworks. While AI promises to harden global software systems over time, the transition period may be marked by elevated risk, as vulnerabilities are discovered faster than they can be patched.
The trajectory suggests a future where cybersecurity becomes more proactive, continuous, and AI-driven — but also one where the consequences of misuse are amplified. For enterprises and investors alike, the imperative is clear: adapt quickly, invest in resilience, and prepare for a landscape where the line between protection and threat is increasingly defined by algorithms.