Elon Musk's AI chatbot Grok under fire for Gaza child image error

Photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza on August 2, 2025

By AFP
xAI and Grok logos are seen in this illustration taken February 16, 2025. — Reuters

PARIS: A heart-wrenching image of a starving girl in Gaza has sparked an online storm, not just for what it shows, but for how it was wrongly identified by Elon Musk's AI chatbot, Grok.

The photo, taken in war-hit Gaza, was falsely claimed by Grok to be from Yemen, triggering a wave of confusion and accusations of spreading misinformation.

Many are now questioning whether AI tools can be trusted to tell fact from fiction, especially when lives and real stories are at stake.

This image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel’s blockade has fuelled fears of mass famine in the Palestinian territory.

Grok falsely claimed this image of an emaciated Gazan girl by AFP photojournalist Omar al-Qattaa was from Yemen. — AFP/File

But when social media users asked Grok where it came from, X boss Elon Musk’s artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.

The AI bot’s untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of spreading disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are increasingly turning to AI to verify images, the furore highlights the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.

The photo actually shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.

Before the war, Mariam weighed 25 kilogrammes, her mother told AFP.

Today, she weighs only nine. The only nutrition she gets to help her condition is milk, Modallala told AFP — and even that’s “not always available”.

Challenged on its incorrect response, Grok said: “I do not spread fake news; I base my answers on verified sources.”

The chatbot eventually issued a response that recognised the error, but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

‘Radical right bias’

Grok’s mistakes illustrate the limits of AI tools, whose functions are as impenetrable as “black boxes”, said Louis de Diesbach, a researcher in technological ethics.

“We don’t know exactly why they give this or that reply, nor how they prioritise their sources,” said Diesbach, author of a book on AI tools, Hello ChatGPT.

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.

In the researcher’s view, Grok — made by Musk’s xAI start-up — shows “highly pronounced biases which are highly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, said Diesbach.

“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine’.”

AI does not necessarily seek accuracy — “that’s not the goal,” the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.

That error led to internet users accusing the French newspaper Libération, which had published the photo, of manipulation.

‘Friendly pathological liar’

An AI’s bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer.

“Just because you explain to it that the answer’s wrong doesn’t mean it will then give a different one,” Diesbach said.

“Its training data has not changed, and neither has its alignment.”

Grok is not alone in wrongly identifying images.

When AFP asked Mistral AI’s Le Chat — which is in part trained on AFP’s articles under an agreement between the French start-up and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts.

“They are not made to tell the truth,” but to “generate content, whether true or false”, he said.

“You have to look at it like a friendly pathological liar — it may not always lie, but it always could.”