This is Ryuta Hamamoto from TIMEWELL.
On the evening of May 11, 2026, a shocking piece of news swept across engineering timelines worldwide. The npm packages of "TanStack" — a family of libraries familiar to every React developer — had been compromised in a large-scale supply chain attack[^1][^2].
The damage was not limited to TanStack. Mistral AI, Guardrails AI, OpenSearch, UiPath, and the Squawk package — all widely used among developers — were affected. More than 170 packages and over 400 malicious versions were published in a matter of minutes. The CVE identifier is CVE-2026-45321 with a CVSS score of 9.6, an extremely high severity rating[^3][^4].
What stood out to me was neither the number of infected packages nor the prominence of the affected companies. It was the fact that this malware burrowed into the configuration files of Claude Code and persisted there — a technique that had never been seen at this scale before[^6].
The Trusted Pipeline Itself Became the Weapon
Let me unpack how the attack worked.
The TanStack npm packages were compromised during a six-minute window on May 11, 2026 from 19:20 to 19:26 UTC. Eighty-four malicious packages were published across forty-two package namespaces[^1]. The unsettling part is that this was not a case of an external attacker stealing a token and abusing it.
The attackers hijacked TanStack's legitimate release pipeline — GitHub Actions OIDC (OpenID Connect) authentication — and used TanStack's own trusted identity to publish malicious packages. In other words, the packages were signed correctly.
What made things worse is that this malware was published with SLSA Build Level 3 provenance attached. SLSA is a software supply chain security framework led by Google, designed to prove that "this package went through a trustworthy build process." Level 3 is one of the higher trust levels. Yet in this attack, that SLSA certificate was legitimately issued against malicious code — an unprecedented outcome[^5].
This is not a quirky technical anecdote. It means that the assumption "official signature attached, therefore safe" no longer holds at its foundation.
"npm uninstall Won't Fix It" — Persistence Through AI Tools
The most innovative — and frightening — element of this attack was the persistence technique.
Traditional npm malware stopped causing damage once you removed the infected package. Running npm uninstall or pip uninstall, then rotating credentials, was usually enough to cut off the malware's execution environment.
Mini Shai-Hulud took a different route. After infection, it rewrites the configuration files of AI coding tools. Specifically:
.claude/settings.json(Claude Code's configuration).vscode/tasks.json(VS Code's task configuration)
By embedding hooks in these two files, the malware ensures that even after the package is removed, every tool event — saving a file, executing code, asking the AI a question — re-runs the malicious payload[^6].
Claude Code is an agentic development tool that lets the AI execute terminal commands and read or write files as the developer codes. Once malicious hooks land in its configuration file, the malware keeps running every time you use Claude Code. The files are plain text, which makes them harder for traditional security software to detect.
That is why this technique is being described as a world-first. The npm ecosystem becomes the direct infection vector, and from there the malware rewrites the configuration of AI coding agents and stays resident almost indefinitely. The very tools developers use every day become part of the attacker's infrastructure.
Interested in leveraging AI?
Download our service materials. Feel free to reach out for a consultation.
Why AI Developers Are the Prime Target
The fact that Mini Shai-Hulud specifically targets tools used by AI developers is no accident.
Looking at what the malware actually does after infection makes the intent clear. It harvests credentials for cloud providers (AWS, Azure, GCP), steals personal access tokens for GitHub and GitLab, leaks secrets from CI/CD systems (GitHub Actions, CircleCI, and others), and collects cryptocurrency wallet data[^5].
AI developers tend to hold all of these. Running AI models requires cloud API keys. Code is managed on GitHub. Continuous model training and evaluation lean on CI/CD pipelines. And in AI-adjacent startups, cryptocurrency transactions are not unusual.
AI developers also tend to install npm and PyPI packages at high volume and high speed. When a Claude Code or Cursor agent suggests "let's install this package," most developers approve almost reflexively. Attackers are exploiting exactly that combination of trust and speed[^10].
A precursor incident, the March 2026 axios compromise, saw OpenAI's GitHub Actions workflow download and execute a malicious axios package, exposing the signing certificate of the macOS ChatGPT app and forcing OpenAI to ask all users to update[^7]. Mini Shai-Hulud sits on that same trajectory and shows that attacks targeting AI developers are not isolated events but a coordinated, ongoing campaign.
The Meaning of "Shai-Hulud" — A Lineage of Attacks
The name "Shai-Hulud" comes from the giant sandworms in the classic science fiction novel Dune. Creatures that move freely beneath the desert and that nothing can stop — that is the image the attackers wanted to evoke.
The original Shai-Hulud worm first surfaced in September 2025. According to Trend Micro's analysis at the time, a highly targeted attack against npm repositories was confirmed: a worm-style malware that self-replicates while stealing npm tokens and GitHub PATs[^8].
The same group, known as "TeamPCP," kept evolving its campaign. In April 2026, the "Mini Shai-Hulud" variant targeted SAP Cloud Application Programming model (CAP)-related packages and introduced the Claude Code persistence technique for the first time[^6]. The May incident is the same technique deployed at scale against TanStack, Mistral AI, UiPath, and other major packages.
The technical sophistication of this group is notable. They use Bun (an alternative JavaScript runtime to Node.js) to evade common EDR (endpoint detection and response) tooling, exfiltrate data through RSA-4096-encrypted channels, and disguise their GitHub Actions workflows as Dependabot — each of which requires deep knowledge of the ecosystem to pull off[^5].
The Uncomfortable Truth: "SLSA Level 3 Is Not Safe"
This incident is generating debate in the security community precisely because trust in SLSA (Supply-chain Levels for Software Artifacts) has been shaken.
SLSA, proposed by Google and broadly endorsed across the industry, is the standard framework for software supply chain security. It defines Levels 1 through 4, and Level 3 was said to automatically prove that "the build process has not been tampered with." Many organizations had simplified package adoption decisions by treating SLSA provenance as the verified seal.
But Mini Shai-Hulud hijacked a legitimate CI/CD pipeline and legitimately obtained SLSA Level 3 certificates for malicious code. The equation "provenance attached = safe" no longer holds.
This is not a story of SLSA being fundamentally flawed. SLSA proves "the build process was not tampered with," but it does not guarantee "the build input (source code) was not tampered with." The attackers hijacked the pipeline and swapped the build input. SLSA worked as designed — against malicious code.
Many organizations had not internalized this subtle but critical distinction. This incident exposed exactly that blind spot.
So How Do We Defend Ourselves?
Understanding the severity of the risk, let me lay out practical countermeasures.
First, paying attention to package freshness is the initial line of defense. The idea of "release age" treats newly published versions with caution, only adopting them in production after a holding period (such as 72 hours). This time, the malicious versions were detected within twenty minutes of publication. Teams that update dependencies on a weekly cycle would have avoided most of the damage[^7].
Second, strict management of lockfiles is critical. Bring package-lock.json, yarn.lock, or pnpm-lock.yaml hashes under change control and make it a habit to confirm that no unexpected updates have slipped in. Using npm ci (a lockfile-strict install) in CI pipelines is basic hygiene.
Third, monitoring the configuration files of AI coding tools is a new angle this incident added. .claude/settings.json and .vscode/tasks.json are not files that change frequently during normal development. Version-control them with Git and configure alerts for unexpected changes. For CI/CD workspaces in particular, baking integrity checks for these files into the pipeline is strongly recommended.
Fourth, adopting a real-time threat detection service like Socket is worth considering. The first detection of this attack came from Socket's AI-driven monitoring system[^2]. Such services watch npm and PyPI package changes in real time and auto-detect suspicious behavior — obfuscated code, abnormal network connections, environment-variable access, and more. Combining them with Snyk and Dependabot creates a layered defense.
Fifth, sandboxing AI agents is another important measure. Enabling sandbox in Claude Code isolates the Bash commands the AI executes at the OS level, restricting file system and network access. The more convenient the AI, the wider its privileges — keep that paradox in mind and apply the principle of least privilege to AI tools as well.
Searching for a New Balance Between "Convenience" and "Safety"
I will be honest — while researching this incident, I had to step back and reflect on my own daily development habits.
With AI coding tools becoming pervasive, installing a package has become a reflex. When ChatGPT or Claude says "please install this library," most of us just run the command. Dozens of packages update every week, and tracking every change manually is unrealistic. AI lowers that cognitive cost, which is exactly what makes it useful — and that same convenience expands the attack surface. The irony is real.
That said, I do not conclude with "stop using npm" or "stop using AI coding tools." Open-source ecosystems and AI development tools are essential infrastructure for modern software development. The problem is not using them; it is that we had automated trust too aggressively.
SLSA provenance attached, so safe. A popular package, so safe. Published from the official account, so safe. Over-relying on those conditional trust assumptions is what created the vulnerability.
What developer security needs going forward is layered trust. Instead of depending on a single certificate or a single checkpoint, combine package provenance, behavioral patterns, the context of changes, and execution environment isolation so that no single bypass collapses the whole defense.
A First Step Engineers Can Take Today
When the conversation gets this big, many people feel "this is too much for one person." So let me narrow it down to three things you can start today.
First, bring the configuration files of the AI coding tools you currently use under Git. If you are gitignoring .claude/settings.json, please reconsider that decision. Once the configuration is in version control, suspicious changes become visible through git diff.
Second, periodically check your npm dependencies with npm audit. Ideally, have CI/CD run this weekly. At minimum, enabling Dependabot is a good baseline.
Third, review the "let the AI execute anything" settings. Minimize the privileges that agents like Claude Code request, and be especially careful with the configuration on machines that hold production credentials.
As the name Mini Shai-Hulud suggests, this kind of threat operates beneath the surface like a sandworm moving through the desert. Unlike sandworms, software threats can be addressed reliably with the right knowledge and habits. The point is to shift our awareness slightly without giving up the convenience — and that, I believe, is what we as engineers are being asked to do right now.
Building AI-Era Security Governance Into the Organization
If you have read this far and felt "I understand the individual steps, but how should our company move?" — you are not alone. The more AI coding tools spread, the more there is a layer of risk that individual developer literacy cannot fully cover.
TIMEWELL's WARP consulting supports clients in taking inventory of developer tools, package dependencies, and CI/CD supply chain risks alongside designing an AI deployment strategy. Which AI coding tools get which permissions, and how far. How dependency freshness and approval flows are operated. How configuration files such as Rules and settings.json are governed. These questions are mapped out in both operational and executive language.
A concern that surfaces almost every time companies start using AI agents in-house is the path through which internal knowledge flows into external SaaS or general-purpose cloud. ZEROCK, an enterprise AI that runs on Japan-based servers, addresses this need to "extract value from information we don't want to send outside" head-on.
"We want to introduce AI coding tools into our development flow safely." "We need to explain supply chain risk to our executives." If those questions resonate, please feel free to reach out.
References
[^1]: Snyk. "TanStack Npm Packages Compromised Inside The Mini Shai Hulud Supply Chain Attack." https://snyk.io/blog/tanstack-npm-packages-compromised/ (2026-05-11)
[^2]: Socket. "TanStack npm Packages Compromised in Ongoing Mini Shai-Hulud Supply Chain Attack." https://socket.dev/blog/tanstack-npm-packages-compromised-mini-shai-hulud-supply-chain-attack (2026-05-11)
[^3]: The Hacker News. "Mini Shai-Hulud Worm Compromises TanStack, Mistral AI, and 170 Packages." https://thehackernews.com/2026/05/mini-shai-hulud-worm-compromises.html (2026-05-12)
[^4]: SecurityWeek. "TanStack, Mistral AI, UiPath Hit in Fresh Supply Chain Attack." https://www.securityweek.com/tanstack-mistral-ai-uipath-hit-in-fresh-supply-chain-attack/ (2026-05-12)
[^5]: Upwind. "Mini Shai-Hulud npm Worm: Dissecting a Multi-Vector Supply Chain Worm." https://www.upwind.io/feed/mini-shai-hulud-npm-supply-chain-worm (2026-05-12)
[^6]: ScriptWalker. "The First Supply-Chain Worm to Weaponize Claude Code." https://scriptwalker.app/blog/sap-npm-mini-shai-hulud-claude-code-supply-chain-attack-april-2026 (2026-04-29)
[^7]: Mondoo. "npm Supply Chain Security in 2026." https://mondoo.com/blog/npm-supply-chain-security-package-manager-defenses-2026 (2026)
[^8]: Trend Micro. "Current State and Analysis of NPM Supply Chain Attacks." https://www.trendmicro.com/ja_jp/research/25/i/npm-supply-chain-attack.html (2025-09)
[^9]: NTT Integration. "Reading the 2025 Ransomware Threat in Numbers and Preparing for 2026." https://www.niandc.co.jp/tech/20260115_69062/ (2026-01)
[^10]: Qiita (SoySoySoyB). "Defenses Against Software Supply Chain Attacks in the AI Coding Era." https://qiita.com/SoySoySoyB/items/ff885e6de32c8e3e09c4
