Claude Mythos Preview, Anthropic’s most advanced AI model to date, will not be released to the public; the company cites potential risks to “economies, public safety, and national security.”
The model is capable of autonomously identifying thousands of zero-day vulnerabilities across major operating systems and browsers. Its arrival has fundamentally shifted the baseline assumptions for cybersecurity education and practice nationwide.
The emergence of Mythos prompts us to consider where cybersecurity is headed and how quickly new threats are evolving.
The Scale and Speed of AI-Driven Threats
AI-driven attacks are increasing at an alarming rate. DeepStrike reports that 82.6% of phishing emails now use AI in some form and that 87% of organizations experienced an AI-enabled cyberattack in the past year. IBM's Cost of a Data Breach Report 2025 puts AI-powered attacks at 16% of all reported cyber incidents, with an average breach cost of $5.72 million.
In late 2025, Anthropic's own investigation documented what it called the first large-scale cyberattack executed without substantial human involvement. This cyber espionage campaign was 80–90% AI-driven, probing roughly 30 global targets in tech, finance, energy, and government.
The pace of change is just as significant as the scale. IBM’s 2026 X-Force Threat Intelligence Index reported a 44% rise in attacks targeting public-facing applications, much of it powered by AI-driven vulnerability discovery.
The average time to exfiltrate data has dropped dramatically, from nine days in 2021 to just 30 minutes by 2025. CrowdStrike’s 2026 Global Threat Report likewise noted an 89% year-over-year increase in attacks by adversaries leveraging AI.
AI is not simply another tool in the attacker’s arsenal. It is accelerating the pace of threats and narrowing the window for defenders to respond. Faced with this reality, Mythos underscores why skilled human decision-making is more essential than ever.
How Cybersecurity Education Needs to Evolve
We can no longer approach cybersecurity as a fixed body of knowledge. Instead, we must teach it as a dynamic, evolving practice.
Threat modeling now requires an understanding of how AI accelerates both the discovery and exploitation of vulnerabilities. Students must go beyond simply identifying risks to grasping how these tools change the cybersecurity landscape. Developing fluency with AI-powered tools is no longer optional; it is foundational.
Reasoning must take precedence over rote memorization. Memorizing Common Vulnerabilities and Exposures (CVE) taxonomies is no longer sufficient. Individuals must be able to evaluate AI-generated findings, distinguish between signal and noise, and exercise sound judgment in uncertain situations.
Students need direct, hands-on experience with AI-assisted security workflows, not simply theoretical exposure. This is why lab-based, tool-integrated curricula are so important. Programs that treat AI as an add-on rather than as a core part of the operating environment are already falling behind.
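To make this concrete, here is one way such a lab exercise might be structured: a minimal Python sketch of a human-in-the-loop triage queue for AI-generated vulnerability findings. Every name, field, and threshold here is hypothetical, chosen only to illustrate the workflow, not taken from any real tool's API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One AI-generated vulnerability report (all fields hypothetical)."""
    cve_id: str
    ai_confidence: float    # model's self-reported confidence, 0.0-1.0
    asset_criticality: int  # 1 (lab box) to 5 (production, internet-facing)

def triage(findings, confidence_floor=0.3, review_threshold=0.7):
    """Split AI findings into auto-dismissed noise, items routed to a
    human analyst, and items confident and low-stakes enough to ticket."""
    dismissed, needs_analyst, auto_ticket = [], [], []
    for f in findings:
        if f.ai_confidence < confidence_floor:
            dismissed.append(f)        # likely noise: discard, but log
        elif f.ai_confidence < review_threshold or f.asset_criticality >= 4:
            needs_analyst.append(f)    # uncertain or high-stakes: human judgment
        else:
            auto_ticket.append(f)      # confident and low-stakes: file routinely
    return dismissed, needs_analyst, auto_ticket
```

The point of such an exercise is not the thresholds themselves but the habit they build: AI output is a stream to be audited and routed, with the riskiest calls always reaching a human.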
Taken together, these shifts are already reshaping hiring expectations. Employers are looking for people who can assess AI-generated results, question false positives, and make well-informed decisions when the path forward is unclear. Ultimately, human insight remains the final checkpoint.
Why Ethical Hacking Just Got More Important, Not Less
Some may assume that Mythos signals the end of the penetration tester’s role. That is a misconception.
While Mythos can discover vulnerabilities at scale, it cannot replace the human decision-making needed to interpret business context, prioritize risks, communicate findings to non-technical audiences, or manage the ethical and legal complexities of authorized security work.
Ethical hacking education is fundamentally about developing that judgment. Hands-on offensive-security training builds the intuition that lets a practitioner know when an AI finding is signal versus noise, when a technically exploitable vulnerability isn't worth the remediation cost, and when a situation requires human escalation. These skills cannot be automated. They are deeply situated, experiential, and human.
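One of those judgment calls, whether an exploitable vulnerability is worth the remediation cost, can at least be framed quantitatively with a crude annualized loss expectancy comparison, a staple of risk-management teaching. The sketch below is illustrative only; all numbers are invented, and the real practitioner work lies in estimating the inputs, not in the arithmetic.

```python
def worth_remediating(annual_exploit_probability: float,
                      loss_if_exploited: float,
                      remediation_cost: float) -> bool:
    """Compare annualized expected loss against the one-time fix cost.
    Deliberately crude: the human judgment is in estimating the inputs."""
    expected_annual_loss = annual_exploit_probability * loss_if_exploited
    return expected_annual_loss > remediation_cost

# Exploitable but cheap to accept: 1% yearly exploit chance, $50k impact,
# $20k fix -> expected loss of $500, far below the cost of fixing.
print(worth_remediating(0.01, 50_000, 20_000))   # False
# 30% chance, $500k impact -> $150k expected loss dwarfs a $20k fix.
print(worth_remediating(0.30, 500_000, 20_000))  # True
```

An AI scanner can report that a flaw is technically exploitable; only a person close to the business can supply defensible values for the probability and the impact.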
If anything, Mythos makes it even more critical that we get this judgment right.
What Employers and Students Need to Understand Right Now
For students, the expectations for junior positions have changed. Simply knowing how to operate tools is now the minimum requirement. Employers are seeking individuals who can collaborate with AI systems, critically audit their outputs, and take responsibility for the decisions that result. Building a portfolio that demonstrates this accountability will set you apart.
For employers, it is time to revisit recruitment standards. The key question is no longer whether a candidate knows the OWASP Top 10, but whether they can function effectively in AI-assisted security environments. These are distinct skill sets, and training pipelines need to evolve to reflect this new reality.
For educational programs and institutions, the workforce gap I observed in 2006—graduates lacking the practical skills needed for their roles—has become even more urgent. Mythos highlights how this gap could become a crisis.
If our graduates aren't prepared to work with AI in security contexts, critically evaluating its outputs rather than simply using it, we risk preparing them for a world that no longer exists.
Project Glasswing represents Anthropic’s effort to provide defenders with an early advantage. Our responsibility within education is to ensure that qualified experts are ready to put these tools to effective use.
About the Author: Keith A. Morneau
Dr. Keith Morneau is an experienced cybersecurity professional and the current Dean of Computer and Information Science at ECPI University. He has over 20 years of experience in cybersecurity education and has helped ECPI University become a National Center of Academic Excellence in Cyber Defense Education. Dr. Morneau is also an ABET CAC Commissioner and Team Chair for cyber programs and has secured several grants and published papers to advance cybersecurity education. His research interests focus on workforce issues and bridging the skills gap in cybersecurity professions.