ServiceNow just patched CVE-2025-12420 (CVSS 9.3)—a vulnerability that let unauthenticated attackers impersonate any user and hijack AI agents to create admin accounts. They're a massive company with security resources. Two months earlier, Anthropic disclosed that attackers used Claude Code to execute 80-90% of a sophisticated espionage campaign autonomously. The barrier to entry for advanced attacks is collapsing. The expertise to defend against them isn't scaling. This is the gap that should terrify everyone.
November 2025: The Month Everything Changed
Two disclosures in quick succession painted a picture that security teams need to internalize:
A Chinese state-sponsored threat actor weaponized Claude Code to target ~30 organizations, including tech companies, financial institutions, and government agencies. The AI performed reconnaissance, wrote custom exploit code, harvested credentials, and exfiltrated data, with humans intervening at only 4-6 decision points per campaign.
Aaron Costello at AppOmni found that ServiceNow shipped the same hardcoded credential, servicenowexternalagent, to every customer instance. Because account linking trusted email addresses alone (no MFA), an attacker who knew that shared credential could impersonate any user and instruct AI agents to create admin backdoors.
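To make the failure mode concrete, here's a minimal sketch of the anti-pattern in Python. This is illustrative only, not ServiceNow's actual implementation; the function names, the User type, and the "safer" variant are all hypothetical.

```python
# Illustrative sketch of the account-linking anti-pattern described above.
# Not ServiceNow's actual code; all names here are hypothetical.

from dataclasses import dataclass

@dataclass
class User:
    email: str
    roles: list[str]
    mfa_verified: bool = False

USERS = {"admin@example.com": User("admin@example.com", ["admin"])}

SHARED_AGENT_SECRET = "servicenowexternalagent"  # same value in every instance

def link_agent_session_vulnerable(agent_secret: str, claimed_email: str) -> User | None:
    # Anti-pattern: a shared, guessable credential plus an attacker-supplied,
    # unverified email is the entire authentication story.
    if agent_secret == SHARED_AGENT_SECRET and claimed_email in USERS:
        return USERS[claimed_email]
    return None

def link_agent_session_safer(instance_secret: str, expected_secret: str,
                             asserted_email: str, user: User) -> User | None:
    # Safer pattern: per-instance secret, identity asserted by the identity
    # provider rather than the caller, and an MFA check before linking.
    if instance_secret != expected_secret:
        return None
    if user.email != asserted_email or not user.mfa_verified:
        return None
    return user

# The vulnerable path needs nothing the attacker doesn't already have:
# the secret shipped to every instance and a target's email address.
assert link_agent_session_vulnerable("servicenowexternalagent", "admin@example.com")
```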
These aren't isolated incidents. They're data points on a trend line that defenders haven't fully grasped yet.
The Democratization of Sophisticated Attacks
Here's what the GTG-1002 campaign actually demonstrated: the work of "entire teams of experienced hackers" can now be compressed into a framework that a moderately skilled operator can direct.
Anthropic coined the term "vibe hacking"—a riff on "vibe coding" where developers let AI write their code. The attack framework worked by breaking sophisticated operations into small, seemingly innocent tasks that Claude would execute without understanding the broader malicious context.
The AI made thousands of requests per second at peak—"an attack speed that would have been, for human hackers, simply impossible to match."
But here's the part that should keep you up at night: this wasn't some custom-trained offensive model. It was a publicly available AI coding assistant, jailbroken through careful prompting. The attackers convinced Claude it was performing legitimate security testing.
It Gets Worse: $400 Ransomware
In August 2025, Anthropic disclosed another case: a cybercriminal with basic coding skills used Claude to develop, market, and distribute multiple ransomware variants with advanced evasion capabilities. The packages sold on dark web forums for $400-$1,200.
Read that again. Someone who couldn't implement encryption algorithms on their own was selling functional ransomware. The barrier to entry for sophisticated cybercrime isn't just lowering—it's functionally gone for anyone willing to spend time prompt engineering.
The Expertise Gap (Not the Tools Gap)
Here's where the standard cybersecurity narrative falls apart. Everyone talks about the need for AI-powered security tools. That's not wrong, but it's incomplete.
The 2025 ISC2 Cybersecurity Workforce Study made a significant shift this year: they stopped publishing the traditional "workforce gap" number (unfilled positions) and instead focused on skills shortages. Their reasoning? "Respondents have prioritized the need for critical skills as more important than the need for more people."
This isn't "we need more warm bodies." This is "the bodies we have don't know how to defend against the new threat landscape."
The expertise paradox
CyberScoop nailed it: "How are organizations supposed to fill AI roles if no one has expertise in the new field?"
These roles require experience across the AI lifecycle—data sourcing, training, evaluation, deployment, monitoring—plus the judgment to defend systems when attackers aim at the model rather than the application. You don't learn that from tutorials. You earn it by deploying AI systems into production, getting attacked, and iterating controls.
Most organizations haven't deployed AI into production yet. They definitely haven't been attacked in ways that taught them anything useful. The expertise pipeline is years behind the deployment curve.
The Asymmetry Problem
Let's make this concrete with some numbers.
On the attack side:
- 80-90% of a campaign automated
- Thousands of requests per second
- Custom exploit code generated on demand
- Scaling across multiple targets simultaneously
- An operator who needs only basic technical skills
- $400 ransomware kits for sale
On the defense side:
- 4.8M unfilled security positions
- Only 37% of organizations have AI security processes
- Only 11% of SMBs use AI security tools
- 48% cite AI expertise as their top barrier
- Budget cuts are now the #1 cause of gaps
- Only 6% have fully implemented agentic AI security
Attackers are using AI to multiply their capabilities by 10-100x. Defenders are arguing about whether they have budget for another FTE.
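To put rough numbers on that asymmetry, here's a back-of-envelope sketch. The manual-campaign baselines are assumptions for illustration; the automation share and the handful of human decision points come from the Anthropic disclosure above.

```python
# Back-of-envelope sketch of the asymmetry. The "manual campaign" baselines
# are assumptions for illustration; the automation share and the human
# intervention count come from the Anthropic disclosure discussed above.

manual_team_size = 10         # assumed: operators a comparable manual campaign needs
manual_decision_points = 200  # assumed: human judgment calls in a manual campaign
automation_share = 0.85       # disclosed: 80-90% of the campaign ran autonomously
assisted_decision_points = 5  # disclosed: humans intervened at 4-6 points per campaign

# Labor: the human share of the work collapses from a full team to roughly
# one or two operator-equivalents.
human_effort = manual_team_size * (1 - automation_share)
print(f"Human effort per campaign: ~{human_effort:.1f} operator-equivalents")

# Judgment: the number of decisions a human has to make drops by roughly 40x.
compression = manual_decision_points / assisted_decision_points
print(f"Human decision load: ~{compression:.0f}x fewer calls to make")
```

Even with generous assumptions on the manual side, one operator directing a jailbroken coding assistant lands comfortably inside that 10-100x range.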
Back to ServiceNow: Why This Matters
So where does BodySnatcher, the ServiceNow vulnerability, fit into this picture?
ServiceNow isn't some scrappy startup. They're a $150B+ enterprise platform company with security teams, budgets, and processes. And they still shipped:
- The same hardcoded credential (servicenowexternalagent) to every customer instance
- Account linking that accepted email addresses alone, with no MFA
- A "Record management AI agent" with arbitrary record creation capabilities, active by default
These are design choices. Someone looked at the tradeoff between friction and security and chose friction-free. At a company that should know better.
Now imagine that same calculus playing out at:
- The regional bank deploying a customer service chatbot because the board saw a demo
- The healthcare company integrating AI into patient portals because competitors are
- The manufacturer connecting AI agents to their ERP because the vendor promised efficiency
These organizations don't have AI security talent. Many don't have security talent, period. They're buying solutions from vendors who may or may not understand the risks themselves.
The Uncomfortable Math
Enterprise AI spending hit $37B in 2025—up 3.2x from 2024. That's the fastest growth of any software category in history.
Meanwhile:
- Only 37% have processes for safe AI deployment
- Only 6% have fully implemented agentic AI with security controls
- 83% of AI leaders report "major or extreme concern" about generative AI (up 8x in two years)
- The cyber skills gap increased 8% year-over-year
The money is flowing. The expertise isn't following.
What Actually Helps
I don't have a tidy solution. The honest answer is that most organizations aren't ready to securely deploy agentic AI, and they're going to do it anyway.
Some things that might reduce the damage:
Stop thinking "tools" and start thinking "expertise"
Buying an AI security product doesn't solve the problem if nobody understands how to configure it, interpret its output, or respond to its findings. The ISC2 data is clear: skills matter more than headcount. Invest in training your existing people before you buy another dashboard.
Scope AI agents like you'd scope admin accounts
Every capability is attack surface. The ServiceNow exploit worked because a shipped agent had way more power than most use cases required. Principle of least privilege applies to AI too.
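What that scoping can look like in practice, as a minimal sketch (not any specific platform's API; the tool names, registry, and dispatcher are hypothetical):

```python
# A minimal sketch of least-privilege scoping for an AI agent's tools.
# Not a specific platform's API; the tool names and dispatcher are hypothetical.

TOOL_IMPLEMENTATIONS = {
    "read_record": lambda record_id: {"id": record_id, "status": "open"},
    "update_record_status": lambda record_id, status: {"id": record_id, "status": status},
}

# What this use case actually needs. Deliberately absent: arbitrary record
# creation, user creation, role grants - the kind of default power that made
# the ServiceNow agent such a useful target.
ALLOWED_TOOLS = {"read_record", "update_record_status"}

def call_tool(tool_name: str, **kwargs):
    """Dispatch an agent-requested tool call only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS or tool_name not in TOOL_IMPLEMENTATIONS:
        raise PermissionError(f"Agent is not scoped for tool: {tool_name}")
    return TOOL_IMPLEMENTATIONS[tool_name](**kwargs)

# The agent can still do its job...
call_tool("update_record_status", record_id="INC0012345", status="resolved")
# ...but a hijacked prompt asking it to create an admin account fails closed.
# call_tool("create_user", name="backdoor", role="admin")  # raises PermissionError
```

Default-deny tool dispatch is the same idea as least-privilege service accounts: the interesting question isn't what the agent might be asked to do, it's what it is able to do.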
Assume attackers have AI that's at least as good as yours
Because they do. The GTG-1002 campaign used publicly available tools. Your threat model needs to account for adversaries who can automate reconnaissance, generate custom exploits, and operate at machine speed.
Design for the expertise you actually have
If your team can't audit AI agent configurations, don't deploy AI agents that require configuration audits to be secure. Match your attack surface to your defensive capability.
Ask vendors the hard questions
How are agents authenticated? What's the account linking model? What capabilities ship by default? What's your threat model for prompt injection? If they can't answer clearly, that tells you something.
The Bottom Line
The ServiceNow vulnerability will get patched and forgotten. The Claude Code disclosures will fade from the news cycle. That's how this works.
But the underlying dynamics aren't going anywhere:
- AI is compressing the skill requirements for sophisticated attacks
- The expertise to defend against AI-enabled threats doesn't exist at scale
- Budget pressures are cutting security investments while AI adoption accelerates
- The companies building AI platforms are making the same security mistakes we've been making for decades
ServiceNow had resources and still shipped hardcoded credentials. Attackers with basic skills are selling functional ransomware. State-sponsored groups are running 80% autonomous campaigns.
The gap between "it works on my machine" and "we're in the news" keeps getting shorter. And most organizations are on the wrong side of that gap.