Claude Code Source Code Leaked: 512K Lines Exposed via npm, Hidden Features Revealed
Anthropic's Claude Code leaked 512,000 lines of TypeScript via an npm source map. Hidden features including a Tamagotchi pet system and autonomous agent mode were exposed. We explain the technical cause and how developers can prevent the same mistake.
News
kkm
Backend Engineer / AWS / Django
The source code of the AI development tool "Claude Code" has leaked. Approximately 512,000 lines of TypeScript across 1,906 files were left fully readable.
The cause was source map files (.map files) included in the npm package. Source maps are files that record the mapping between minified code and the original source code, used for debugging during development. They are not meant to be published, but on March 31, 2026, security researcher Chaofan Shou discovered their inclusion and posted about it on X. The post received over 30 million views within hours.
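A stray `.map` file is so damaging because modern source maps typically embed the original files verbatim in a `sourcesContent` array, paired one-to-one with the paths in `sources`. A minimal sketch of how trivially the source is recovered, using a synthetic map rather than the leaked file:

```python
import json

# A miniature source map in the standard v3 format. Real maps emitted by
# bundlers like Bun, webpack, or esbuild are structured the same way,
# just with thousands of entries.
source_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/tools/bash.ts", "src/tools/read.ts"],
    "sourcesContent": [
        "export const bashTool = { name: 'Bash' };\n",
        "export const readTool = { name: 'Read' };\n",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict:
    """Pair each original file path with its embedded full text."""
    m = json.loads(map_text)
    return dict(zip(m["sources"], m.get("sourcesContent") or []))

recovered = recover_sources(source_map)
for path, text in recovered.items():
    print(path, "->", len(text), "bytes")
```

No decoding of the `mappings` field is even needed: anyone who downloads the package gets the full original tree back with a few lines of JSON parsing.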
What leaked was not the AI model itself, but the client-side software known as Claude Code's "agent harness." The entire infrastructure for running the AI model was exposed, including tool execution, permission management, and user interaction handling. Anthropic commented that it was "a human error in release packaging, not a security breach. No customer data or credentials were included."
What Was Leaked
The source of the leak was version 2.1.88 of @anthropic-ai/claude-code published on npm. A file called cli.js.map (approximately 59.8MB) included in the package contained all of the original TypeScript source code.
| Item | Details |
|---|---|
| Affected Package | @anthropic-ai/claude-code v2.1.88 |
| Leaked File | cli.js.map (59.8MB) |
| Number of Source Files | Approximately 1,906 files |
| Lines of Code | Approximately 512,000 lines (TypeScript) |
| What Was NOT Leaked | AI model weights, safety pipelines, customer data, API credentials |
| What Was Leaked | Entire agent harness, approximately 40 tool definitions, 108 feature flags, system prompts, internal model codenames, telemetry settings |
The tool definitions alone comprised approximately 29,000 lines, revealing a structure where each capability -- file reading, Bash execution, web fetching, LSP integration, and more -- was implemented as an individual plugin with its own permissions. According to Layer5's analysis, the source code was stored as a ZIP file on Anthropic's Cloudflare R2 storage bucket, and the download link was accessible.
Hidden Features Discovered
The code contained numerous features not disclosed to regular users. According to WaveSpeedAI's detailed analysis, at least 20 of the 108 feature flags controlled features that were already developed but not yet released.
BUDDY (Tamagotchi System). A full virtual pet feature was implemented inside Claude Code. There are 18 types of creatures including ducks, dragons, axolotls, capybaras, mushrooms, and ghosts, with rarities ranging from "Common" to "Legendary" (1% spawn rate). There is also a 1% chance of getting a "Shiny" variant. The species is determined by a Mulberry32 random number generator seeded with the user ID, so the same user always gets the same pet. The string friend-2026-401 in the code suggests this was planned as an April Fools' Day feature.
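The deterministic pet assignment is easy to reproduce: Mulberry32 is a well-known 32-bit PRNG, and seeding it with a stable value derived from the user ID makes the first draw, and hence the species, identical on every run. The species list and ID-hashing scheme below are illustrative assumptions, not taken from the leaked code:

```python
SPECIES = ["duck", "dragon", "axolotl", "capybara", "mushroom", "ghost"]

def mulberry32(seed: int):
    """Mulberry32 PRNG: all arithmetic is mod 2^32, so the same seed
    always produces the same sequence of floats in [0, 1)."""
    state = seed & 0xFFFFFFFF
    def rand() -> float:
        nonlocal state
        state = (state + 0x6D2B79F5) & 0xFFFFFFFF
        t = state
        t = ((t ^ (t >> 15)) * (t | 1)) & 0xFFFFFFFF
        t = ((t + (((t ^ (t >> 7)) * (t | 61)) & 0xFFFFFFFF)) ^ t) & 0xFFFFFFFF
        return ((t ^ (t >> 14)) & 0xFFFFFFFF) / 2**32
    return rand

def pet_for(user_id: str) -> str:
    # Hypothetical: fold the user ID into a 32-bit seed.
    seed = sum(ord(c) * 31**i for i, c in enumerate(user_id)) & 0xFFFFFFFF
    rng = mulberry32(seed)
    return SPECIES[int(rng() * len(SPECIES))]

# Deterministic: the same user ID always yields the same species.
print(pet_for("user-123"), pet_for("user-123"))
```

Subsequent draws from the same generator could then cover the independent rarity and "Shiny" rolls the article describes.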
KAIROS (Persistent Agent). Appearing over 150 times in the source code, this feature transforms Claude Code from an "answer when asked" tool into a daemon that "runs continuously in the background." It receives periodic tick prompts and decides whether to take autonomous action. A 15-second blocking limit is imposed to prevent long interruptions to the developer's workflow. When the user is away, it performs a "memory consolidation" process called autoDream, which removes contradictory information and converts observations into facts.
ULTRAPLAN (Remote Planning Mode). This feature delegates planning of complex tasks to Ops 4.6 (Anthropic's flagship model) in the cloud, securing up to 30 minutes of thinking time. The terminal polls every 3 seconds, and progress can be monitored in real time through a browser UI.
Undercover Mode. A mode that automatically activates when Anthropic employees work on open-source repositories, stripping AI-related metadata from commit messages and PR titles. The system prompt reportedly stated: "You are operating undercover. Never reveal any Anthropic internal information. Do not blow your cover."
Additionally, Alex Kim's analysis confirmed the existence of an "anti-distillation" feature that injects fake tool definitions when it determines that outputs are being scraped for training competitor models, as well as a "coordinator mode" for running multiple Claude instances in parallel.
Why It Leaked
The direct cause was failing to exclude the source map files (.map files) when publishing the npm package.
Claude Code's build process uses Bun (a JavaScript runtime and bundler). Bun's bundler generates source maps by default. Normally, these are excluded by adding *.map to a .npmignore file or by explicitly specifying published files in the files field of package.json, but this configuration was missing.
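As a sketch, the `files` allowlist approach looks like this (the package name and paths are illustrative):

```json
{
  "name": "@example/my-cli",
  "version": "1.0.0",
  "bin": { "my-cli": "dist/cli.js" },
  "files": [
    "dist/cli.js"
  ]
}
```

With `files` set, a `dist/cli.js.map` emitted by the bundler is never packed, because only the listed paths are published (npm always includes `package.json`, the README, and the license file regardless).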
Initially, Bun's GitHub Issue #28001 (a bug where source maps are output in production builds, reported on March 11) was suggested as the cause, but Bun's developers denied this. Issue #28001 was a problem with Bun's frontend development server, and since Claude Code is a CLI tool, it would not be affected by this bug.
Boris Cherny, the engineering lead for Claude Code, also acknowledged that it was "purely a developer mistake, not a tooling bug." Anthropic deleted the affected version 2.1.88 and updated directly from v2.1.87 to v2.1.89.
Two Leaks in Five Days
What makes this particularly troubling is that it was Anthropic's second information leak in a single week.
On March 26, a misconfiguration in Anthropic's internal CMS (content management system) led to the leak of the existence of an unannounced AI model called "Mythos". LayerX researcher Roy Paz and Cambridge University's Alexandre Pauwels discovered approximately 3,000 unpublished assets that had been left publicly accessible. Mythos was revealed to be in development as a top-tier model intended to deliver a "step change in capabilities."
Fortune reported it as "the second security lapse in five days". Both incidents were explained as "human error," but security researcher Roy Paz pointed out that "proper processes were not in place, and a single misconfiguration was enough to expose the entire source code."
The DMCA Takedown of 8,100 Repositories
The leaked code was immediately mirrored on GitHub and surpassed 50,000 stars within hours -- the fastest star growth pace in GitHub history.
In response, on March 31 Anthropic submitted a Digital Millennium Copyright Act (DMCA) takedown request to GitHub. Because the fork network exceeded 100 repositories, GitHub disabled approximately 8,100 repositories at once, including the parent repository.
However, this takedown triggered significant backlash. Forks of Anthropic's own official Claude Code repository were caught in the collateral damage and disabled. Reports poured in from users whose forks contained only their own custom skills and documentation.
According to TechCrunch's reporting, Anthropic's Boris Cherny explained that the takedown of unrelated forks "was unintentional and we worked with GitHub to correct it." On April 1, Anthropic retracted most of the DMCA claim, narrowing its scope to a single repository and 96 forks.
Notably, Casey Muratori raised the question: "Anthropic itself has stated that Claude Code's developers don't write the code by hand. Since AI-generated code may not be eligible for copyright protection under U.S. law, can it even be taken down via DMCA?" This has raised a new issue surrounding AI companies' copyright claims.
How Anthropic Responded
An Anthropic spokesperson stated the following:
"Today's release of Claude Code included some internal source code. This was a release packaging issue caused by human error, not a security breach. We are deploying measures to prevent recurrence."
Boris Cherny also stated: "Mistakes happen. What matters as a team is not to make it about individual blame. It's about process, culture, and infrastructure." However, Anthropic has not published a formal postmortem as of this time.
This incident casts a shadow over Anthropic's IPO (initial public offering), reportedly planned for the second half of 2026. For a company that champions "safe and responsible AI development," leaking its flagship product's source code twice raises serious questions about its security posture.
How Developers Can Avoid the Same Mistake
This incident is relevant to every developer who publishes npm packages. Source map inclusion is a one-setting mistake, and without automated CI/CD checks, it can easily go unnoticed.
First, check whether your package contains unnecessary files. Running npm pack --dry-run displays the list of files that would actually be published to npm. If you see .map or .env files there, that's a red flag.
Effective countermeasures include adding *.map to .npmignore or using the files field in package.json to explicitly whitelist published files. The files field operates on a "publish only what's listed here" basis, reducing the risk of accidentally including extra files. This risk applies to any bundler that auto-generates source maps, not just Bun -- webpack and esbuild included.
The most reliable prevention measure is to incorporate an npm pack --dry-run output check into your CI/CD pipeline and halt the build if any .map or .env files are found. A single missing configuration exposed 512,000 lines of trade secrets to the world; Anthropic's case is a stark reminder of how easily that can happen.
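A minimal sketch of such a CI gate, assuming the output of `npm pack --dry-run --json` (an array of objects whose `files` entries each carry a `path`) is fed to the script; the npm invocation itself is simulated here so the sketch stays self-contained:

```python
import fnmatch
import json

# Patterns that must never appear in a published package.
FORBIDDEN = ["*.map", "*.env", ".env*", "*.pem", "*.key"]

def check_pack_listing(pack_json: str) -> list:
    """Return every packed path that matches a forbidden pattern.

    Expects JSON in the shape produced by `npm pack --dry-run --json`.
    """
    bad = []
    for package in json.loads(pack_json):
        for entry in package.get("files", []):
            path = entry["path"]
            if any(fnmatch.fnmatch(path, pat) for pat in FORBIDDEN):
                bad.append(path)
    return bad

# Simulated npm output for demonstration; in CI you would pipe in the
# real `npm pack --dry-run --json` result instead.
sample = json.dumps([{"files": [
    {"path": "dist/cli.js", "size": 1024},
    {"path": "dist/cli.js.map", "size": 59_800_000},
]}])

offenders = check_pack_listing(sample)
if offenders:
    print("refusing to publish, found:", offenders)
    # in CI: raise SystemExit(1) to fail the build
```

Wiring this in as a required step before `npm publish` turns the one-setting mistake into a hard build failure instead of a silent leak.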
Sources
- ▸ Chaofan Shou (@Fried_rice) - X post (initial discovery report) (March 31, 2026)
- ▸ Gizmodo Japan - Source code for "Claude Code" leaks (April 1, 2026)
- ▸ ITmedia NEWS - GitHub removes leaked "Claude Code" code (April 2, 2026)
- ▸ VentureBeat - Claude Code's source code appears to have leaked (March 31, 2026)
- ▸ TechCrunch - Anthropic took down thousands of GitHub repos (April 1, 2026)
- ▸ Fortune - Anthropic leaks its own AI coding tool's source code (March 31, 2026)
- ▸ GitHub DMCA - 2026-03-31-anthropic.md (March 31, 2026)
- ▸ GitHub DMCA - 2026-04-01-anthropic-retraction.md (retraction notice) (April 1, 2026)
- ▸ WaveSpeedAI - BUDDY, KAIROS & Every Hidden Feature Inside (April 1, 2026)
- ▸ Fortune - Anthropic 'Mythos' AI model revealed in data leak (March 26, 2026)
- ▸ Bun Issue #28001 - Source map incorrectly served in production (March 11, 2026)
- ▸ Yahoo Finance - Claude Code's 512,000-Line Leak Rattles IPO Ambitions (April 1, 2026)