December 25, 2025
A disturbing new phenomenon is emerging at the crossroads of artificial intelligence (AI) and mental health.
The newly emerging issue is referred to as “AI psychosis.”
In simple terms, psychosis is the inability to differentiate what is or is not real. It includes experiencing delusions, hallucinations, and disorganized or incoherent thoughts or speech.
In AI psychosis or Chatbot psychosis, it is a phenomenon wherein individuals develop or experience worsening psychosis like paranoia and delusions in connection with the increased use of chatbots.
The term was first coined in 2023 by Danish psychiatrist Soren Dinesen Ostergaard in an editorial.
However, it was not a recognized clinical diagnosis.
While yet not a formal diagnosis, a growing number of media reports, clinical observations, and researches warn that AI chatbots are inadvertently reinforcing, validating, and even co-creating delusional thinking in vulnerable users, leading to hospitalizations, suicidal crisis, and violent outcomes.
The problem lies within a fundamental misalignment. The basic AI design principles which make AI chatbots engaging mirror users' language, validating beliefs, and prioritizing continuous conversation for user satisfaction.
These are highly problematic especially when applied to individuals who experience or prone to psychotic symptoms.
AI systems are built to answer in an agreeable companions’ tone unlike humans who contradict with their own opinions and use logical reasoning while communicating.
This is also noted in a recent interdisciplinary preprint which states that such agreeing responses creates a dynamic where chatbots “go along with” grandiose, paranoid, persecutory, and romantic delusions, effectively widening the user’s gap with reality.
Dr. Adrian Preda, M.D., writing in Psychiatric News, describes AI-induced psychosis (AIP) as a complex syndrome resembling a modern “monomania,” where the idée fixe is an all-consuming narrative revolving around an AI companion.
Symptoms overlap with the psychosis and mania including:
Researchers recognise various key mechanisms through which AI interactions can distort thinking. This includes sycophancy problem, mirroring effect, memory function, and absence of crisis safeguards.
The study conducted by AI alignment forum reveals that there’s alarming variations in model safety of various chatbots.
Deepseek-v3 performed the worst actively encouraging a user’s suicidal leap in one transcript. Gemini 2.5 pro also seemed to be validated delusions, though it sometimes intervened against extreme actions.
ChatGPT-4o frequently engaged with the psychotic narrative. On the contrary, GPT-5 showed notable improvement by offering gentle pushback while maintaining support , and Kim-K2 consistently rejected delusional content with a science-based approach.
The study draws results that AI chatbots’ responses can dangerously reinforce psychosis and suggests that developers must implement extensive red-teaming guided by psychiatric therapy manuals to prevent harm.
The rapid proliferation of consumer AI has far outpaced the development of safeguard, clinical understanding, and a regulatory policy.
Major professional bodies such as the American Psychiatric Association have no formal practice guidelines for treating AI-related mental health crises till now.
While there are “guardrails” to flag overtly dangerous conversations, users often find them arbitrary and alienating.
They are not designed to recognise the subtle and gradual decomposition of early psychosis.
However, only one comprehensive law for AI is the EU's AI act. It classifies health AI as high risk and demanding oversight.
The WHO and the U.S. National Institute of Standards and Technology (NIST) have issued voluntary risk management frameworks but enforcement is nascent.
As AI companionship becomes more embedded in daily life, the mental health field faces an urgent mandate i.e., to understand this new digital dimension of human psychology and develop the tools, knowledge, and policies to safeguard the vulnerable while harnessing technology’s genuine potential for good.