January 26, 2026
The authenticity of ChatGPT-generated content has been called into question after recent investigations uncovered that the latest version of ChatGPT, GPT-5.2, is fetching content from Grokipedia, an AI-generated online encyclopedia launched by Elon Musk in 2025.
The disclosure has raised alarm among researchers and journalists about the reliability of information sourced by artificial intelligence (AI) platforms, a concern amplified by how heavily internet users now rely on these tools for information.
A report by The Guardian found that GPT-5.2 referenced Grokipedia multiple times in its responses to a range of questions, including ones on sensitive topics such as Iran's political landscape and historical issues related to Holocaust denial.
Across more than a dozen test queries, Grokipedia was cited nine times, suggesting it has been integrated into the model's information pool.
Notably, Grokipedia competes with Wikipedia but relies entirely on AI for content creation and updates, raising concerns about the biases and inaccuracies embedded in AI-generated content.
The Musk-owned encyclopedia has previously been flagged by critics for promoting right-wing perspectives on controversial social and political issues.
At the same time, ChatGPT made no reference to Grokipedia when asked about topics dominated by disputed claims, such as the January 6 Capitol attack or HIV/AIDS misinformation.
Instead, Grokipedia surfaced mostly in ChatGPT's responses to obscure questions, where its entries make assertions that go beyond established facts, such as alleged links between an Iranian telecom firm and the supreme leader's office.
This issue is not limited to ChatGPT; other large language models (LLMs), including Anthropic’s Claude, have also cited Grokipedia on various topics.
OpenAI explained that its models draw on a variety of sources and apply safety filters to mitigate the spread of harmful information.
Experts warned that reliance on unreliable sources could mislead users and reinforce misinformation, underscoring the need for rigorous source evaluation in AI development.