January 30, 2026
Moltbot (formerly known as Clawdbot) has recently become one of the fastest-growing open-source AI tools.
But the viral AI assistant endured a chaotic early week, weathering a trademark dispute, a security crisis, and a wave of online scams before emerging as Moltbot.
The chatbot was created by an Austrian developer, Pete Steinberger, who marketed the tool as an AI assistant that “actually does things.”
What sets it apart is its ability to perform tasks across a user’s computer and apps: managing calendars, sending messages, or checking in for flights, primarily through apps such as WhatsApp and Discord.
That capability sparked its explosive growth and made it popular among AI enthusiasts. Its original name, “Clawdbot,” however, drew a legal challenge from Anthropic, maker of Claude.
This forced the developers to rebrand as “Moltbot,” a reference to a lobster molting its shell.
During the rebrand, crypto scammers grabbed the abandoned social media usernames and set up bogus domains and tokens in Steinberger’s name.
The episode illustrates the tool’s underlying tension: the autonomy that makes it useful is also what makes it dangerous.
Running locally is a privacy advantage, but granting an AI system the ability to execute commands on that machine carries considerable risk.
Despite the tumultuous start, Moltbot sits at the cutting edge of what is possible with AI. It embodies a growing developer vision of assistants that are proactive, integrated, and useful rather than merely chatty, even as it raises real security concerns.
For now it remains a product for the tech-savvy, but its frenetic, chaotic start may mark the beginning of a new paradigm for personal computing.