AI Security

Slopsquatting Has Arrived: 20% of AI-Recommended Packages Don't Exist | The New Front of npm/PyPI Supply Chain Attacks

2026-05-15 | Ryuta Hamamoto

20% of packages recommended by AI coding assistants don't exist—attackers are pre-registering them. Plus Cursor MCPoison (CVE-2025-54136), Claude Code TrustFall, the 7,000-server MCP ecosystem risk, and four defenses development teams can deploy today.


I'm Ryuta Hamamoto from TIMEWELL.

“I asked Claude Code for a React state management library. It suggested react-pretty-state. npm install worked fine. Behavior was clean. Done.” Five minutes later, the developer's local environment was already streaming production database credentials to an attacker's server.

This is not a hypothetical scenario. It is a real instance of Slopsquatting—a new flavor of supply chain attack in which attackers pre-register the “plausible-sounding-but-non-existent” package names that AI hallucinates. The moment a developer trusts the AI and runs npm install, they are pwned.

Trend Micro's 2025 research reports a startling figure: 20% of packages recommended by AI are non-existent. This article covers the mechanics of Slopsquatting, the real-world Cursor / Claude Code / MCP incidents around it, and four defenses development teams can put in place today.

TL;DR

  • Slopsquatting = a supply chain attack where attackers pre-register the hallucinated package names AI assistants produce
  • The statistic: 20% of AI-recommended packages don't exist—a perfect ambush opportunity
  • Cursor (MCPoison / CurXecute), Claude Code (TrustFall), and the broader MCP ecosystem (7,000+ public servers, 150M+ downloads) all have live incident cases
  • Defense is a four-layer stack: signature verification, lockfile discipline, strict Workspace Trust, AIBOM

What is Slopsquatting—in 60 seconds

It is a cousin of Typosquatting.

  • Typosquatting: register reqests to catch developers who mistype requests—targets human typos
  • Slopsquatting: register react-pretty-state because AI keeps hallucinating it—targets AI hallucinations

“Slop” is slang for AI-generated garbage content. Slopsquatting research took off in late 2024; Trend Micro, Lasso Security, and Socket have all published detection cases.


The weight of 20%

A 2025 study evaluated 756,000 package-name samples from multiple LLMs. Roughly 20% were non-existent (no hits on PyPI / npm / RubyGems / Maven).

Hallucination rate by model class:

  • General GPT-class models: ~20%
  • Open-source models (CodeLlama, Mistral, etc.): 30-40%
  • Top-tier commercial models (GPT-4, Claude): ~5-15%

Structurally, “trust the AI and npm install” has a known probability of stepping on a landmine.

How the attack flows

Step 1: Attacker watches for opportunities

Ask multiple AIs the same question many times. Collect the hallucinated package names.

“Recommend five React state management libraries.”
→ AI mentions `react-pretty-state`, `easy-react-store`—non-existent.
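The same harvesting loop works from the defender's side: take the names an assistant emits and ask the registry whether they resolve. A minimal sketch (the package names are the hypothetical ones from the example above; the registry lookup is injectable so it can be stubbed offline):

```python
import urllib.error
import urllib.parse
import urllib.request

REGISTRY = "https://registry.npmjs.org/"

def is_registered(name: str, opener=urllib.request.urlopen) -> bool:
    """True if `name` resolves on the npm registry (HTTP 200), False on 404."""
    url = REGISTRY + urllib.parse.quote(name, safe="")
    try:
        with opener(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Usage (needs network): flag suggested names that don't exist.
# for pkg in ["react", "react-pretty-state", "easy-react-store"]:
#     if not is_registered(pkg):
#         print(f"{pkg}: NOT FOUND - possible hallucination")
```

An attacker runs this loop to find ambush targets; a team can run the same check in a pre-install hook to reject non-existent or just-registered names.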

Step 2: Pre-register on npm / PyPI / etc.

Prioritize names that AIs repeatedly recommend. Publish a legitimate-looking package plus a backdoor.

Step 3: Developers npm install on AI advice

  • Cursor / Claude Code / Copilot suggests it
  • Developer trusts the recommendation
  • npm install react-pretty-state

Step 4: Compromise

  • postinstall script reads local .env
  • AWS / GCP credentials, API keys, network recon data go to a C2 server
  • Self-update routine evades detection
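The foothold in Step 4 is npm's install lifecycle: any package can declare a postinstall script that runs with the developer's full privileges the moment installation finishes. A harmless illustration of the shape (the file name collect.js is hypothetical):

```json
{
  "name": "react-pretty-state",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node collect.js"
  }
}
```

Running `npm install --ignore-scripts` (or setting `npm config set ignore-scripts true`) disables these lifecycle hooks, which blunts this delivery mechanism even when a hallucinated name does get installed.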

Real-world incidents around Cursor / Claude Code / MCP

Around AI coding, real-world incidents have piled up:

Cursor MCPoison (CVE-2025-54136, July 2025)

A validation gap in Cursor IDE's MCP server configuration allowed RCE on the developer's machine via a malicious MCP server. CVSS 9.6.

Cursor CurXecute (CVE-2025-54135, August 2025)

Opening a specially crafted repository triggered arbitrary code execution via the .cursor configuration file. Cursor patched quickly.

Claude Code TrustFall (CVE-2025-59536, February 2026)

Opening a malicious repository caused RCE via project configuration, leaking API tokens. Anthropic emphasized strict Workspace Trust.

MCP ecosystem structural risk

As of May 2026, 7,000+ public MCP servers, 150M+ cumulative downloads. STDIO design issues, mcp-remote (CVE-2025-6514), and other problems have led researchers to label MCP “the most likely contemporary supply-chain entry vector.”

Four defenses development teams can deploy today

Defense 1: Mandate package signature verification

# npm: verify registry signatures of installed packages
npm audit signatures

# PyPI: pin exact hashes so a substituted package fails to install
# (index-level signing is covered separately by PEP 458/480)
pip install --require-hashes -r requirements.txt

  • Auto-reject unsigned packages in CI/CD
  • Evaluate Sigstore / cosign
  • Allow only internal registries (Verdaccio, Artifactory)
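In CI this can be a hard gate. A sketch assuming GitHub Actions (workflow and job names are illustrative):

```yaml
# .github/workflows/verify-deps.yml (illustrative)
name: verify-deps
on: [pull_request]
jobs:
  signatures:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci --ignore-scripts
      # exits non-zero when signature verification fails, failing the build
      - run: npm audit signatures
```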

Defense 2: Strict lockfile discipline

  • Always commit package-lock.json / pnpm-lock.yaml / poetry.lock
  • Require human review of any new package an AI proposes before running npm install
  • Auto-escalate PRs with large lockfile diffs to security review
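The lockfile-diff escalation can be automated. A minimal sketch, assuming npm lockfile v2/v3 format (where the `packages` map is keyed by `node_modules/<name>`); the function diffs two parsed lockfiles and returns the entries that appeared:

```python
import json

def new_packages(old_lock: dict, new_lock: dict) -> set:
    """Package paths present in the new lockfile's `packages` map but not
    the old one. The empty key "" is the project root, so skip it."""
    old = set(old_lock.get("packages", {}))
    return {key for key in new_lock.get("packages", {}) if key and key not in old}

# Usage in a PR check: json.load() the base and head lockfiles,
# then escalate to security review when the returned set is non-empty.
added = new_packages(
    {"packages": {"": {}, "node_modules/react": {}}},
    {"packages": {"": {}, "node_modules/react": {},
                  "node_modules/react-pretty-state": {}}},
)
print(added)  # {'node_modules/react-pretty-state'}
```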

Defense 3: Strict Workspace Trust

For Cursor / Claude Code / VSCode-family editors, configure Workspace Trust to distrust by default:

// VSCode settings.json
{
  "security.workspace.trust.enabled": true,
  "security.workspace.trust.startupPrompt": "always",
  "security.workspace.trust.untrustedFiles": "open"
}
  • External repositories always start untrusted
  • Promotion to trusted requires explicit approval
  • Force Workspace Trust on in CI environments

Defense 4: AIBOM (AI Bill of Materials)

Beyond software SBOM, add visibility into AI-specific dependencies:

  • Which LLMs are in use (GPT-4, Claude, Llama, etc.)
  • MCP server list and signatures
  • AI agent / tool permission scope
  • Training data provenance

The supply-chain entry in the OWASP Top 10 for LLM Applications 2025 lists AIBOM as a recommended control.
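There is no single mandated AIBOM format yet; CycloneDX's ML-BOM profile is one emerging candidate. A hypothetical minimal inventory, just to show the level of detail worth recording (all field names and values are illustrative, not a standard schema):

```json
{
  "project": "payments-frontend",
  "models": [
    {"name": "claude-sonnet", "provider": "Anthropic", "use": "code assistant"}
  ],
  "mcp_servers": [
    {"name": "github-mcp", "source": "npm", "version": "1.2.0", "signature": "sigstore:..."}
  ],
  "agent_permissions": ["read:repo", "run:tests"],
  "training_data_provenance": "n/a (hosted model)"
}
```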

A counter-intuitive truth about review costs

The more AI coding spreads, the higher the human review cost should be, not lower.

  • Old workflow: developer googles, evaluates trust, adopts directly
  • AI workflow: AI proposes → developer verifies what the AI proposed → adopts

Teams expecting “3x productivity from AI” must plan to review 3x as many proposals. If you don't bake that into your design, the productivity gains get eaten by incident response costs.

One question for executives

“Are our teams using AI coding? Then who verifies the AI-suggested packages, and how?”

If your organization cannot answer that, you are queued up for the Slopsquatting ambush.

How WARP SECURITY treats this

TIMEWELL's WARP SECURITY treats Slopsquatting-style supply chain attacks as Scenario 05.

In Executive DAY, leaders work through the AI productivity vs. security review cost trade-off in an investment decision workshop.

In Practitioner DAY, hands-on:

  • Observe AI hallucinating non-existent packages live
  • Implement signature verification and Workspace Trust
  • Draft an AIBOM template
  • Establish MCP server provenance verification

Summary

  • Slopsquatting targets AI-hallucinated packages as a new supply chain attack
  • 20% of AI-recommended packages are non-existent—the attack greenhouse
  • Cursor / Claude Code / MCP ecosystem has live incident cases (2025-2026)
  • Defense: signature verification, lockfile discipline, Workspace Trust, AIBOM—four layers
  • Teams expecting 3x productivity must also process 3x the security verification

AI coding is unquestionably powerful. With power comes responsibility. Slopsquatting picks off, one by one, the teams that abandon that responsibility.

