December 15, 2025
In a turn of events that underscores AI's tendency to fabricate, the globally popular AI chatbot Grok, developed by Elon Musk’s xAI and promoted on his social media platform X (formerly Twitter), allegedly spread misinformation about the recent Bondi Beach shooting in Australia.
Reports also documented instances in which Grok misidentified 43-year-old Ahmed al Ahmed, the Muslim man now being hailed as a hero for disarming one of the terrorists.
The "smartest AI in the world," as touted by Musk, questioned the authenticity of videos and photos documenting al Ahmed’s actions. In one case, it mistakenly referred to him as an Israeli hostage and introduced irrelevant commentary about the Israeli army's treatment of Palestinians.
Moreover, Grok incorrectly claimed that the person who disarmed the gunman was a “43-year-old IT professional and senior solutions architect” named Edward Crabtree, TechCrunch reported, citing Gizmodo.
Despite these missteps, Grok appears to be rectifying some of its mistakes. One post that initially claimed a video of the shooting actually depicted Cyclone Alfred has reportedly since been corrected.
The chatbot later acknowledged al Ahmed’s true identity, explaining that the confusion resulted from viral posts misidentifying him as Edward Crabtree, likely due to a reporting error or a joke referencing a fictional character.
Notably, the clarification came only after the circulation of an article from a largely non-functional news site, one that may itself have been AI-generated.