February 13, 2026
A little over a year after DeepSeek caught the AI world by surprise, OpenAI has reportedly claimed that the Chinese AI startup trained its disruptive AI models by distilling models developed and operated in the US.
The ChatGPT maker has also warned U.S. lawmakers that DeepSeek is targeting it and other leading AI companies in the US to replicate models and use them for its own training, Reuters reported, citing a memo it viewed.
The Sam Altman-led AI firm accused DeepSeek of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs."
The technique is known as distillation. It involves using an older, more established and powerful AI model (the "teacher") to generate outputs that a newer model (the "student") is then trained to reproduce. This approach transfers the teacher model's learnings into the nascent one without the cost of training from scratch.
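At its core, the training objective in distillation can be sketched in a few lines: the student is penalized by the KL divergence between the teacher's temperature-softened output distribution and its own. This is a generic, minimal illustration of the technique itself, not a description of how DeepSeek or OpenAI implement it; all names and values here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # "softer" distribution that exposes more of the teacher's knowledge.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence KL(teacher || student) over softened distributions:
    # the standard knowledge-distillation objective.
    p = softmax(teacher_logits, temperature)   # teacher's "soft labels"
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: the student queries the teacher for outputs on an input
# and is trained (via this loss) to match them.
teacher = np.array([[4.0, 1.0, 0.5]])  # hypothetical teacher logits
student = np.array([[2.0, 2.0, 1.0]])  # hypothetical student logits
loss = distillation_loss(student, teacher)
```

Minimizing this loss over many teacher-generated outputs is what lets a smaller or newer model absorb a larger model's behavior, which is why access to a frontier model's outputs at scale is the contested resource here.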
In the memo sent to the U.S. House Select Committee, OpenAI stated: "We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions and access models through obfuscated third-party routers and other ways that mask their source."
"DeepSeek employees developed code to access U.S. AI models and obtain outputs for distillation in programmatic ways," the memo added.
For those unfamiliar with DeepSeek's groundbreaking entry: the startup shook markets in January 2025 with a set of AI models that rivalled some of the best U.S. offerings, showing China catching up in the AI race despite export restrictions.
OpenAI said that Chinese LLMs are "actively cutting corners when it comes to safely training and deploying new models."
DeepSeek's popular models, DeepSeek-V3 and DeepSeek-R1, also drew praise from Silicon Valley executives and are available globally.
OpenAI said it bans users who appear to be distilling its models to develop rival ones.