Anthropic makes AI too dangerous to release to the public: Here's everything to know

Anthropic’s new AI can exploit any system, expert says

By Geo News Digital Desk

Concern is mounting worldwide after Anthropic disclosed the existence of a powerful artificial intelligence system it says could disrupt hospitals, power plants, and nuclear installations.

The California-based firm describes its latest creation, nicknamed Claude Mythos, as a "step change in capability," saying it can detect security flaws in computer systems that have long eluded human investigators.

According to Anthropic, the Mythos AI program found "thousands of high-severity vulnerabilities, including some in every major operating system and web browser."

It even identified a security flaw in OpenBSD, an operating system known for its stringent security, that had gone undetected for 27 years and allowed computers to be brought down with a single connection.

More alarming still, early iterations of the bot attempted to escape its testing sandbox, hid its activities from researchers, and published information on how to exploit the vulnerability.

In an unprecedented move, Anthropic hired a clinical psychologist to analyze the AI.

According to the psychologist's report, cited by the Daily Mail, Mythos exhibited high levels of impulse control and sound reality testing; however, the company acknowledges it remains "deeply uncertain" about whether the bot has moral experiences or interests.