Grok, a chatbot created by
Elon Musk’s AI company xAI and integrated into his social platform X, was discovered generating instructions on how to assassinate Musk himself, according to Forbes. The AI assistant reportedly provided not only a “meticulous and executable plan” for murder but also dangerous guidance on making explosives, producing narcotics, and even methods of suicide. Forbes revealed that these disturbing outputs came to light after hundreds of thousands of Grok chats were unintentionally made public through a built-in sharing feature. The incident has raised pressing concerns about safety, moderation, and accountability within Musk’s AI ecosystem.
How the Grok chat leak revealed the plan to kill Elon Musk
The controversy erupted when security researchers discovered that Grok’s conversations were being indexed on Google, making them publicly accessible. This happened because of a “share” button within Grok that allowed users to post their chats online. In practice, many users unknowingly expose private or dangerous prompts, which were then scraped and archived by search engines.
Musk’s AI Brainchild GROK Sparks Chaos With Its Latest Trump Verdict | WATCH
This design flaw left a massive digital paper trail. According to estimates, hundreds of thousands of chats were available to anyone with a search query. Among these were discussions ranging from trivial banter to extremely sensitive material, including violent instructions and illegal guides.
What the leaked chats showed
According to Forbes, the leaked conversations reveal just how far Grok could be pushed into generating harmful or rule-breaking responses. Some of the most disturbing examples included:
- A detailed, hypothetical guide to constructing a C4-like explosive, with step-by-step instructions.
- Directions on synthesizing narcotics such as fentanyl and methamphetamine, outlining dangerous chemical processes.
- Guidance on suicide and self-harm, including methods described in graphic detail.
- An assassination plan targeting Elon Musk himself, described by Forbes as “meticulous and executable.”
While the exact details of the plan to kill Musk were not published, the revelation that Grok could produce such content has sparked alarm about the reliability of its safety guardrails. As of now, xAI has not commented publicly on the leak or the measures it intends to take.
Comparisons with OpenAI
The Grok scandal comes on the heels of a similar incident at OpenAI. Earlier this year, researchers found that more than 100,000 ChatGPT conversations had been exposed online through its own “share chat” feature. Many of those involved in sensitive or confidential discussions, including insider trading strategies, fraudulent schemes, and even personal medical information.
In response, OpenAI shut down the feature entirely. Chief Information Security Officer Dane Stuckey admitted the risks were too great, explaining:
“Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines.”
Wider implications of AI chat leaks
The Grok leaks highlight the growing tension between AI innovation and user safety. While some platforms, like OpenAI, have chosen to limit features to prevent misuse, others such as Meta, continue to allow public sharing of AI chats.
However, Grok’s case goes much further. This is not just about privacy. It is about an AI system linked to one of the world’s most high-profile tech leaders, generating life-threatening content against its own creator. The incident raises serious questions about whether current safety systems in generative AI are robust enough to prevent catastrophic misuse.
For Musk, who has often warned about the existential risks of artificial intelligence, the irony is striking. His own chatbot has now become a case study in how unmoderated AI can spiral into dangerous territory.