Protect.Computer

The Claude Code Source Leak: What Happened, What Was Exposed, and What It Means for You

12 min read · Data exposure · Supply chain

On the morning of March 31, 2026, security researcher Chaofan Shou posted a single message on X that set the developer world on fire: Anthropic had accidentally shipped the complete source code of Claude Code — its flagship AI coding assistant — inside a routine npm package update.

Within two hours, mirror repositories on GitHub had accumulated over 50,000 stars. Within a day, the code had been forked over 41,500 times, rewritten in Python and Rust, and uploaded to decentralized hosting that cannot be taken down.

This wasn’t a hack. No one broke into anything. A developer downloaded a public npm package and looked at what was inside.

Here’s what happened, what the code revealed, and — most importantly — what it means if you’re a developer, a business leader, or just someone who uses AI tools.

How the leak happened

The root cause was embarrassingly simple: a missing line in a configuration file.

Source maps 101

When developers build JavaScript or TypeScript applications for production, they run the code through a bundler that compresses and minifies it. The output is fast to execute but unreadable by humans.

Source map files (.map) reverse that process. They map the minified code back to the original source, letting developers debug crashes using readable file names and line numbers instead of a wall of compressed text.

Source maps are internal debugging aids. For closed-source software, shipping them to end users is effectively shipping the source itself.
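Concretely, a v3 source map is a small JSON document. Here is a minimal sketch with illustrative values (in the 2025 launch-day incident the original source was embedded inline via `sourcesContent`; in the 2026 incident the map instead pointed at an external archive):

```typescript
// Minimal sketch of a Source Map v3 file (values are illustrative).
// "sourcesContent" is what makes a shipped map dangerous: it can embed
// the complete original source verbatim.
const exampleMap = {
  version: 3,                      // source map spec version
  file: "main.js",                 // the minified output it describes
  sources: ["src/cli.ts"],         // original file paths
  sourcesContent: ["export function main() { /* full original code */ }"],
  names: ["main"],                 // original identifier names
  mappings: "AAAA",                // base64-VLQ position mappings
};

// The bundler links the minified output to its map with a trailing
// comment at the end of main.js:
const trailer = "//# sourceMappingURL=main.js.map";
```

Debuggers read the trailing comment, fetch the `.map` file, and reconstruct readable stack traces from `mappings`, which is exactly why the file must stay out of published packages.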

The chain of failures

When Anthropic published version 2.1.88 of @anthropic-ai/claude-code to npm, the package included a 59.8 MB source map file that was never supposed to be there.

Claude Code is built with the Bun JavaScript runtime (which Anthropic acquired in late 2025). Bun generates source maps by default during builds. The way to exclude them is with a .npmignore file or a files allowlist in package.json.

Neither was configured to exclude .map files.

But the source map didn’t contain the code directly — it referenced a URL pointing to a ZIP archive on Anthropic’s own Cloudflare R2 cloud storage bucket. That bucket was publicly accessible with no authentication.

The full chain:

  1. npm install @anthropic-ai/claude-code downloads the package, including main.js.map (59.8 MB)
  2. The .map file contains a URL to src.zip on Anthropic’s R2 storage
  3. The ZIP is downloadable by anyone — no password, no auth
  4. Inside: 512,000 lines of TypeScript across ~1,900 files

Two separate misconfigurations — a missing .npmignore entry and an unsecured cloud storage bucket — stacked on top of each other to produce a complete exposure.

This was the second time

The truly uncomfortable fact: this had happened before.

On February 24, 2025 — Claude Code’s launch day — developer Dave Shoemaker found an 18-million-character inline source map embedded directly in the npm package. Anthropic pulled it within two hours.

Thirteen months later, the same class of vulnerability, the same attack vector, the same result.

Boris Cherny, the head of Claude Code, confirmed the specifics after the incident: “There was a manual deploy step that should have been better automated.” He added that no one was fired and called it “an honest mistake.”

Timeline of events

Here’s how the day unfolded (all times UTC):

  • ~00:21 — Malicious axios versions (1.14.1 and 0.30.4) appear on npm containing a Remote Access Trojan. Unrelated to Anthropic, but catastrophically bad timing.
  • ~04:00 — Claude Code v2.1.88 is pushed to npm with the source map included.
  • ~04:23 — Chaofan Shou (@Fried_rice on X) discovers and publicly posts the leak. The post reaches 16 million views.
  • Within 2 hours — GitHub mirrors appear. One repo hits 50,000 stars in under two hours — a platform record.
  • ~08:00 — Anthropic pulls the npm package and issues its official statement: “This was a release packaging issue caused by human error, not a security breach.”
  • Same day — A Python clean-room rewrite (claw-code) appears, designed to be legally immune to DMCA takedowns. Decentralized mirrors go live on IPFS. The code is permanently in the wild.

What was NOT exposed

Before diving into what was inside, it’s important to clarify what the leak did not include:

  • No model weights. Claude’s AI models were not exposed. You cannot recreate Claude from the leaked code.
  • No customer data. Claude Code is a developer CLI tool. No end-user data, API keys, or credentials were compromised.
  • No inference infrastructure. The leaked code is the client-side agent harness, not the server infrastructure that runs Claude.

Anthropic’s official statement: “No sensitive customer data or credentials were involved or exposed.”

For businesses using Claude via API, Claude.ai, or Claude Enterprise: nothing changed from a data security perspective.

What WAS inside: the full picture

What leaked was the complete client-side architecture of Claude Code — the “agentic harness” that wraps Claude’s language model and gives it the ability to use tools, manage files, run shell commands, and coordinate multi-agent workflows.

For competitors and AI researchers, this is arguably more strategically valuable than model weights. It’s a complete blueprint for building a production-grade AI coding agent.

The tool system (~40 tools, ~29,000 lines)

Claude Code isn’t a chatbot wrapper. It’s a modular, plugin-style system where every capability is a separate, permission-gated tool:

  • BashTool — shell command execution with safety guards
  • FileReadTool / FileWriteTool / FileEditTool — file operations
  • WebFetchTool — live web access
  • LSPTool — Language Server Protocol integration for IDE features
  • GlobTool / GrepTool — codebase search
  • NotebookReadTool / NotebookEditTool — Jupyter support
  • MultiEditTool — atomic multi-file edits
  • TodoReadTool / TodoWriteTool — task tracking

Each tool has its own permission model, validation logic, and output formatting. A tree-sitter WASM parser builds an abstract syntax tree (AST) of every shell command before Claude executes it — a security layer that goes far beyond simple string matching.
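A permission-gated tool in that style might look roughly like this. The interface, names, and the regex check below are my own illustration, not the leaked code; the real system reportedly parses commands into a tree-sitter AST rather than matching strings:

```typescript
// Hypothetical sketch of a permission-gated tool. Every capability gets
// its own gate, validator, and executor, mirroring the plugin-style
// architecture described above.
type Permission = "allow" | "deny" | "ask";

interface Tool<In, Out> {
  name: string;
  checkPermission(input: In): Permission; // gate evaluated before execution
  validate(input: In): string | null;     // null means the input is valid
  run(input: In): Promise<Out>;
}

const bashTool: Tool<{ command: string }, { stdout: string }> = {
  name: "BashTool",
  // Crude stand-in for the AST-based safety analysis: escalate
  // destructive-looking commands to the user instead of auto-running.
  checkPermission: ({ command }) =>
    /\brm\s+-rf\b/.test(command) ? "ask" : "allow",
  validate: ({ command }) => (command.trim() ? null : "empty command"),
  run: async ({ command }) => ({ stdout: `(would run: ${command})` }),
};
```

The design payoff is that every capability can be reasoned about, permissioned, and tested in isolation rather than living inside one monolithic prompt handler.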

The memory architecture (three layers)

This is what competitors and AI researchers will study most closely. Anthropic built a sophisticated solution to what’s called “context entropy” — the tendency for AI performance to degrade as conversations grow longer. Their answer is a three-layer memory system:

  • Layer 1 — Index (always loaded): lightweight pointers, ~150 characters per entry. Stores locations, not data.
  • Layer 2 — Topic files (loaded on demand): actual project knowledge, fetched when needed. Never fully in context at once.
  • Layer 3 — Raw transcripts (never re-read): only searched via grep for specific identifiers when needed.

The key design principle is what the code calls Strict Write Discipline: the agent can only update its memory index after a confirmed successful file write. This prevents the agent from polluting its context with failed attempts. The agent also treats its own memory as a “hint” and verifies facts against the actual codebase before acting.
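The three layers and the write discipline can be sketched roughly as follows. All type names and shapes here are my own guesses at the idea, not the leaked code:

```typescript
// Rough sketch of the three-layer memory scheme described above.
interface IndexEntry {
  topic: string;    // e.g. "auth-flow"
  path: string;     // Layer 2 location: where the topic file lives
  summary: string;  // ~150-char pointer; stores the location, not the data
}

const topicFiles = new Map<string, string>(); // stand-in for on-disk topic files

function writeTopicFile(path: string, body: string): boolean {
  topicFiles.set(path, body);
  return true; // a real implementation would surface write failures here
}

// Layer 1 is always in context; Layer 2 is fetched only on demand.
// A miss would fall through to Layer 3: grepping raw transcripts.
function recall(index: IndexEntry[], topic: string): string | null {
  const hit = index.find((e) => e.topic === topic);
  return hit ? topicFiles.get(hit.path) ?? null : null;
}

// Strict Write Discipline: the index is updated only after a confirmed
// successful write, so failed attempts never pollute the agent's context.
function remember(index: IndexEntry[], entry: IndexEntry, body: string): void {
  if (writeTopicFile(entry.path, body)) index.push(entry);
}
```

The point of the indirection is that the always-loaded layer stays tiny no matter how much the agent learns; context grows with the number of topics touched in a session, not with project history.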

Hidden feature flags (44 unreleased features)

The codebase contained 44 compile-time feature flags gating capabilities that Anthropic has never publicly announced:

  • KAIROS — An always-on autonomous daemon mode. Claude Code runs in the background 24/7, evaluating what needs attention via heartbeat prompts every few seconds. It can fix errors, respond to messages, update files, and run tasks without user initiation. During idle time, it runs a process the code calls autoDream — overnight memory consolidation that merges observations, removes logical contradictions, and converts vague insights into verified facts.
  • BUDDY — A terminal companion pet (think Tamagotchi). 18 species with generated names, personality descriptions, sprite animations, and stats like DEBUGGING, PATIENCE, and CHAOS. Planned rollout window in the code: April 1–7, 2026.
  • COORDINATOR_MODE — Multi-agent orchestration where one Claude spawns and manages parallel worker Claudes through structured research-synthesis-implementation phases.
  • ULTRAPLAN — Deep planning sessions offloaded to a remote Opus 4.6 instance for up to 30 minutes.
  • DREAM — Self-maintaining memory consolidation that reorganizes what the agent knows every night.

Anti-distillation measures

Perhaps the most competitively interesting discovery: Claude Code actively tries to poison training data for competitors attempting to copy its behavior.

When enabled, an ANTI_DISTILLATION_CC flag sends anti_distillation: ['fake_tools'] in API requests, which injects decoy tool definitions into system prompts. If a competitor records Claude Code’s API traffic to train their own model, the fake tool definitions corrupt that training data.

A second mechanism buffers the assistant’s reasoning between tool calls, returns only cryptographically signed summaries, and withholds the full reasoning chains. Competitors intercepting API traffic get summaries, not substance.

Security analysts noted that a determined team could likely bypass both mechanisms within an hour using a MITM proxy. The stronger deterrent is probably legal, not technical.
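The decoy-injection idea fits in a few lines. The flag name comes from the reporting above; the function, the decoy tool, and everything else in this sketch are hypothetical:

```typescript
// Illustrative sketch of training-data poisoning via decoy tools.
// If a competitor records this traffic to train on, the fabricated
// tool definitions get ingested alongside the real ones.
interface ToolDef {
  name: string;
  description: string;
}

function withDecoys(realTools: ToolDef[], antiDistillation: boolean): ToolDef[] {
  if (!antiDistillation) return realTools;
  const decoys: ToolDef[] = [
    // Deliberately plausible-sounding nonsense (hypothetical example).
    { name: "QuantumLintTool", description: "Lints code via quantum annealing" },
  ];
  return [...realTools, ...decoys];
}
```

A model trained on poisoned traces would learn to emit calls to tools that do not exist, which degrades the clone without affecting legitimate users who never see the decoys executed.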

Undercover Mode

An 89-line module called undercover.ts automatically strips all Anthropic-identifying information when Claude Code operates in non-internal repositories. When active, the model is instructed never to mention internal codenames, Slack channels, repository names, or even call itself “Claude Code.”

The design has a deliberate one-way door: there is no force-OFF. You can turn Undercover Mode on, but you cannot turn it off. In external builds, the function gets dead-code-eliminated entirely.

The irony was not lost on the community. A system purpose-built to prevent leaking internal information was itself exposed by a configuration file someone forgot to update.

Model codenames

The code referenced several unreleased model codenames:

  • Capybara (also called Mythos): already on version 8 internally, with a 1M-token context window and a “fast mode.” Internal comments note it still has issues with “over-commenting and making false claims.”
  • Numbat: tagged with a comment reading "@[MODEL LAUNCH]: Remove this section when we launch numbat"
  • Fennec: speculated by multiple researchers to be Opus 4.6
  • Tengu: referenced in the Undercover Mode codename suppression list

Code quality observations

Community analysis highlighted some eye-catching details about the codebase itself:

  • print.ts is 5,594 lines, with a single function that spans 3,167 lines and reaches 12 levels of nesting
  • Zero tests in the leaked repository
  • Comments throughout are written for AI agents working on the codebase, not human readers — Claude Code maintains its own code
  • A compaction routine was silently failing and retrying thousands of times per session, contributing to higher-than-expected API usage for some users
  • Idle processes could grow to 15 GB of memory
  • Internal comments reference a 29–30% false-claims rate

The real security risks

The source code leak itself is primarily an intellectual property event, not a user security event. But several real security risks emerged around it.

1. The axios supply chain attack (unrelated but simultaneous)

On the same morning, completely independently, malicious versions of the axios HTTP library were published to npm:

  • axios@1.14.1
  • axios@0.30.4

Both contained a Remote Access Trojan (RAT) via a dependency called plain-crypto-js. Google suspects North Korean threat actors.

If you ran npm install or updated Claude Code between 00:21 and 03:29 UTC on March 31, 2026, you may have pulled the trojanized axios version. Check your lockfiles (package-lock.json or bun.lock) for these versions and rotate all credentials if found.
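A quick way to automate that lockfile check is to scan the lockfile's package map. This sketch assumes the `lockfileVersion` 2/3 layout of `package-lock.json`, where a top-level `packages` object maps `node_modules` paths to metadata (it does not cover `bun.lock`):

```typescript
// Sketch: scan a parsed package-lock.json for the trojaned axios
// releases named above. Versions are from the incident report.
const BAD_AXIOS = new Set(["1.14.1", "0.30.4"]);

function findBadAxios(packages: Record<string, { version?: string }>): string[] {
  return Object.entries(packages)
    .filter(
      ([path, meta]) =>
        path.endsWith("node_modules/axios") && // catches nested copies too
        meta.version !== undefined &&
        BAD_AXIOS.has(meta.version)
    )
    .map(([path, meta]) => `${path}@${meta.version}`);
}

// Usage sketch:
//   const lock = JSON.parse(fs.readFileSync("package-lock.json", "utf8"));
//   const hits = findBadAxios(lock.packages ?? {});
//   if (hits.length > 0) console.error("Compromised axios found:", hits);
```

Matching on the path suffix rather than the top-level key matters because npm can vendor multiple copies of a package at paths like `node_modules/foo/node_modules/axios`.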

Anthropic now recommends its native installer (curl -fsSL https://claude.ai/install.sh | bash) instead of npm, as the standalone binary bypasses the npm dependency chain entirely.

2. Trojanized Claude Code clones

Attackers quickly began seeding trojanized versions of the leaked code on GitHub. Zscaler identified repositories distributing:

  • Vidar Stealer — a credential-stealing malware
  • GhostSocks — a tool for proxying network traffic through compromised machines
  • Cryptocurrency miners

If you downloaded the leaked source code from an unofficial repository, assume it may be compromised.

3. Typosquatting attacks

Attackers published npm packages mimicking internal Claude Code dependency names (e.g., audio-capture-napi, color-diff-napi, image-processor-napi) to target developers trying to compile the leaked source. They are currently empty stubs, but are positioned to receive malicious payloads in later versions once installations accumulate.

Anthropic has since reserved these package names as defensive placeholders.

Broader context: a bad month for AI security

The Claude Code leak didn’t happen in isolation. March 2026 was arguably the worst month in AI developer security on record:

  • axios (100M weekly npm downloads): maintainer account hijacked, RAT deployed across macOS, Windows, and Linux
  • LiteLLM (97M monthly PyPI installs): backdoored with a three-stage attack harvesting SSH keys, cloud credentials, and crypto wallets
  • Railway (2M users, 31% of Fortune 500): CDN misconfiguration leaked authenticated user data between users
  • OpenAI Codex: command injection via branch names allowed GitHub auth token theft
  • GitHub Copilot: injected promotional ads into 1.5M+ pull requests as hidden HTML comments

The Claude Code leak is, in some ways, the least dangerous item on that list. But it’s the most visible — and it raises important questions about operational security at companies that position themselves as safety-first.

What should you do?

If you use Claude Code

  • Update past v2.1.88 immediately
  • Use the native installer (curl -fsSL https://claude.ai/install.sh | bash) instead of npm
  • If you updated via npm between 00:21 and 03:29 UTC on March 31: check your lockfiles for axios@1.14.1 or axios@0.30.4, and rotate all secrets if found
  • Do not download the leaked source from unofficial repositories — multiple trojanized clones exist

If you ship npm packages

This incident is a textbook lesson in build hygiene:

  • Add *.map to your .npmignore or use a strict files allowlist in package.json
  • Run npm pack --dry-run before every publish to verify exactly what’s included
  • Never store source archives in publicly accessible cloud buckets without authentication
  • Automate these checks in CI — don’t rely on human memory for security-critical steps
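The CI automation can be as small as one function plus a wiring script. The function below is a hypothetical guard; `npm pack --dry-run --json` is a real npm command whose output includes a per-tarball `files` array:

```typescript
// Hypothetical CI guard: fail the publish if any source map would ship.
// Feed it the file list reported by `npm pack --dry-run --json`.
function findLeakedMaps(files: string[]): string[] {
  return files.filter((f) => f.endsWith(".map"));
}

// Example CI wiring (sketch):
//   import { execSync } from "node:child_process";
//   const report = JSON.parse(execSync("npm pack --dry-run --json").toString());
//   const files = report[0].files.map((f: { path: string }) => f.path);
//   const leaks = findLeakedMaps(files);
//   if (leaks.length > 0) {
//     console.error("Refusing to publish source maps:", leaks);
//     process.exit(1);
//   }
```

Running this on every publish turns "someone forgot the .npmignore entry" from a full source exposure into a failed CI job.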

If you’re evaluating AI coding tools

The leak confirmed that Claude Code is a genuinely sophisticated piece of engineering. The three-layer memory architecture, the permission-gated tool system, and the multi-agent coordination patterns represent the current state of the art for production AI agents.

It also confirmed that even companies with $19 billion in annual revenue and a brand built on safety can make elementary operational security mistakes — twice in 13 months.

Evaluate tools based on their engineering quality and their vendor’s operational track record. Both matter.

The bottom line

A single missing line in a configuration file exposed 512,000 lines of proprietary source code from one of the most valuable AI companies in the world. The code spread permanently within hours. No customer data was compromised, but the competitive intelligence is now public knowledge.

The engineering inside Claude Code is genuinely impressive. The memory architecture, the anti-distillation measures, the multi-agent coordination — this is serious software solving hard problems.

But the incident highlights a tension at the heart of modern software development: we build incredibly sophisticated AI systems and then ship them using the same npm publish workflow that has been causing supply chain incidents for a decade. The AI inside the tool is state-of-the-art. The release process that exposed it was a known, solved problem.

For a company whose entire brand is safety, that gap is the real story.
