October 23, 2025
Artificial intelligence assistants distort or misrepresent news content in almost half their responses, according to research released on Wednesday by the European Broadcasting Union (EBU) and the BBC.
The study reviewed 3,000 answers to news-related questions generated by leading AI-powered assistants. The systems, which included OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity, were tested for factual accuracy, source attribution and the ability to separate fact from opinion.
The research covered 14 languages and found widespread inconsistencies, highlighting risks for users who rely on AI tools for news consumption. The findings come as media regulators and news organisations grow increasingly concerned about misinformation spread by generative AI models.
The EBU and BBC said the study underscores the need for transparency in how AI assistants process and present news content, warning that their growing popularity could blur lines between verified journalism and synthetic information.
Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.
Reuters has contacted the companies for comment on the findings.
Gemini, Google's AI assistant, has stated previously on its website that it welcomes feedback so that it can continue to improve the platform and make it more helpful to users.
OpenAI and Microsoft have previously said hallucinations - when an AI model generates incorrect or misleading information, often due to factors such as insufficient data - are an issue that they are seeking to resolve.
Perplexity says on its website that one of its "Deep Research" modes has 93.9% accuracy in terms of factuality.
A third of AI assistants' responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.
Some 72% of Gemini's responses had significant sourcing issues, compared with below 25% for all the other assistants, it said.
Accuracy issues, including outdated information, were found in 20% of responses across all the AI assistants studied, it said.
Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.
Twenty-two public-service media organisations from 18 countries, including France, Germany, Spain, Ukraine, Britain and the United States, took part in the study.
With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.
"When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation," EBU Media Director Jean Philip De Tender said in a statement.
Some 7% of all online news consumers and 15% of those aged under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
The report urged that AI companies be held accountable and improve how their assistants respond to news-related queries.