Published April 01, 2026
Anthropic on Tuesday, March 31, confirmed that the internal source code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been mistakenly released.
“No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson told CNBC.
The spokesperson attributed the error to human mistake, adding, “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
A source code leak is seen as a major setback for the startup, as it could help competitors by revealing how Anthropic constructed its viral coding assistant.
The leak came to light after security researcher Chaofan Shou publicly posted a link to Anthropic’s code on X (formerly Twitter) on Tuesday at 4:23 a.m. EDT; the post has since been viewed more than 31 million times.
He wrote, “Claude code source code has been leaked via a map file in their npm registry!”
The leak also marks Anthropic’s second major data exposure within a week.
For context, just last week, according to a Fortune report, descriptions of Anthropic’s upcoming AI model and other internal documents were found in a publicly accessible data cache.
The leak surfaced when users noticed that version 2.1.88 of the Claude Code npm package, which Anthropic had just released, contained a source map file that could be used to reconstruct Claude Code’s source code.
That file contained roughly 2,000 TypeScript files and more than 512,000 lines of code. The affected version is no longer available for download from npm.
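To see why a stray source map is so damaging, note that source maps are plain JSON and, when built with the `sourcesContent` field included, embed the full original source text of every file alongside its name. The sketch below (a hypothetical illustration, not Anthropic's actual file or tooling) shows how trivially such embedded sources can be written back out to disk:

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Recover original files embedded in a JavaScript/TypeScript source map.

    Relies on the standard source map v3 fields: ``sources`` (file names)
    and ``sourcesContent`` (the original text, if the bundler included it).
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    written = []
    for name, text in zip(sources, contents):
        if text is None:
            continue
        # Strip bundler-style prefixes like "webpack://" and leading slashes.
        rel = name.split("://")[-1].lstrip("/")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(text)
        written.append(str(dest))
    return written

# Demo with a tiny synthetic source map (invented for illustration).
demo = {
    "version": 3,
    "sources": ["webpack:///src/index.ts"],
    "sourcesContent": ["export const hello = () => 'hi';\n"],
    "mappings": "",
}
Path("demo.js.map").write_text(json.dumps(demo))
print(extract_sources("demo.js.map", "recovered"))
```

In other words, anyone who downloaded the affected package version needed no exploit at all: the original TypeScript was sitting inside the map file, waiting to be unpacked.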
A source code leak of this magnitude is significant because it gives software developers and Anthropic’s market competitors a roadmap of how the popular coding tool operates.
Anthropic was founded in 2021 by a group of former OpenAI executives and researchers, and is globally known for developing the Claude family of AI models.