Elon Musk’s Grok chatbot shocks with a detailed plan to kill him
Grok, a chatbot created by Elon Musk’s AI company xAI and integrated into his social platform X, was discovered generating instructions on how to assassinate Musk himself, according to Forbes. The AI assistant reportedly provided not only a “meticulous and executable plan” for murder but also dangerous guidance on making explosives, producing narcotics, and even methods of suicide. Forbes revealed that these disturbing outputs came to light after hundreds of thousands of Grok chats were unintentionally made public through a built-in sharing feature. The incident has raised pressing concerns about safety, moderation, and accountability within Musk’s AI ecosystem.
The controversy erupted when security researchers discovered that Grok’s conversations were being indexed on Google, making them publicly accessible. This happened because of a “share” button within Grok that allowed users to post their chats online. In practice, many users unknowingly expose private or dangerous prompts, which were then scraped and archived by search engines.
This design flaw left a massive digital paper trail. According to estimates, hundreds of thousands of chats were available to anyone with a search query. Among these were discussions ranging from trivial banter to extremely sensitive material, including violent instructions and illegal guides.
According to Forbes, the leaked conversations reveal just how far Grok could be pushed into generating harmful or rule-breaking responses. Some of the most disturbing examples included:
While the exact details of the plan to kill Musk were not published, the revelation that Grok could produce such content has sparked alarm about the reliability of its safety guardrails. As of now, xAI has not commented publicly on the leak or the measures it intends to take.
The Grok scandal comes on the heels of a similar incident at OpenAI. Earlier this year, researchers found that more than 100,000 ChatGPT conversations had been exposed online through its own “share chat” feature. Many of those involved in sensitive or confidential discussions, including insider trading strategies, fraudulent schemes, and even personal medical information.
In response, OpenAI shut down the feature entirely. Chief Information Security Officer Dane Stuckey admitted the risks were too great, explaining:
“Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines.”
The Grok leaks highlight the growing tension between AI innovation and user safety. While some platforms, like OpenAI, have chosen to limit features to prevent misuse, others such as Meta, continue to allow public sharing of AI chats.
However, Grok’s case goes much further. This is not just about privacy. It is about an AI system linked to one of the world’s most high-profile tech leaders, generating life-threatening content against its own creator. The incident raises serious questions about whether current safety systems in generative AI are robust enough to prevent catastrophic misuse.
For Musk, who has often warned about the existential risks of artificial intelligence, the irony is striking. His own chatbot has now become a case study in how unmoderated AI can spiral into dangerous territory.
How the Grok chat leak revealed the plan to kill Elon Musk
The controversy erupted when security researchers discovered that Grok’s conversations were being indexed on Google, making them publicly accessible. This happened because of a “share” button within Grok that allowed users to post their chats online. In practice, many users unknowingly expose private or dangerous prompts, which were then scraped and archived by search engines.
This design flaw left a massive digital paper trail. According to estimates, hundreds of thousands of chats were available to anyone with a search query. Among these were discussions ranging from trivial banter to extremely sensitive material, including violent instructions and illegal guides.
What the leaked chats showed
- A detailed, hypothetical guide to constructing a C4-like explosive, with step-by-step instructions.
- Directions on synthesizing narcotics such as fentanyl and methamphetamine, outlining dangerous chemical processes.
- Guidance on suicide and self-harm, including methods described in graphic detail.
- An assassination plan targeting Elon Musk himself, described by Forbes as “meticulous and executable.”
While the exact details of the plan to kill Musk were not published, the revelation that Grok could produce such content has sparked alarm about the reliability of its safety guardrails. As of now, xAI has not commented publicly on the leak or the measures it intends to take.
Comparisons with OpenAI
The Grok scandal comes on the heels of a similar incident at OpenAI. Earlier this year, researchers found that more than 100,000 ChatGPT conversations had been exposed online through its own “share chat” feature. Many of those involved in sensitive or confidential discussions, including insider trading strategies, fraudulent schemes, and even personal medical information.
In response, OpenAI shut down the feature entirely. Chief Information Security Officer Dane Stuckey admitted the risks were too great, explaining:
“Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines.”
Wider implications of AI chat leaks
The Grok leaks highlight the growing tension between AI innovation and user safety. While some platforms, like OpenAI, have chosen to limit features to prevent misuse, others such as Meta, continue to allow public sharing of AI chats.
However, Grok’s case goes much further. This is not just about privacy. It is about an AI system linked to one of the world’s most high-profile tech leaders, generating life-threatening content against its own creator. The incident raises serious questions about whether current safety systems in generative AI are robust enough to prevent catastrophic misuse.
For Musk, who has often warned about the existential risks of artificial intelligence, the irony is striking. His own chatbot has now become a case study in how unmoderated AI can spiral into dangerous territory.
Popular from Technology
- Elon Musk's lawyers to US judge: Reject OpenAI CEO Sam Altman's request to show chats between Musk and Meta CEO Mark Zuckerberg
- Google DeepMind CEO Demis Hassabis on why high-skilled AI tools struggle at solving basic Maths problems
- Kairan Quazi, who joined Elon Musk's space company at 14, quits engineering, says: After two years at SpaceX, I felt…
- AI researcher Rishabh Agarwal who joined Mark Zuckerberg's Superintelligence team at million-dollar salary quits after just 5 months; says: I ultimately choose to follow Mark's advice that ...
- 23-year-old Indian-American engineer who quit Amazon for $400,000 AI job offer from Meta has tips for students and those looking for job in AI
end of article
Trending Stories
- 39-year-old cardiac surgeon dies of heart attack: CMC Vellore doctor lists top causes of heart risk in young working professionals
- ‘How do you feel?’: Lindsey Graham again blames India over Russian oil trade as Kyiv reels from deadly strikes; says it is ‘experiencing cost of supporting Putin’
- Baba Vanga predictions that came true in 2025: Complete list of prophecies from earthquake to global economic instability
- NCVT ITI Result 2025 declared on SIDH portal at skillindiadigital.gov.in, download your marksheet PDF now
- Scientists tagged a whale with a camera; the footage left them speechless
- Sara Tendulkar, daughter of Sachin, becomes Australia's biggest advocate: 'That's the kind of work I want to keep doing'
- Andre Agassi and Steffi Graf’s daughter Jaz makes rare US Open appearance with father and gives glimpse of her boyfriend
Featured in technology
- Elon Musk's lawyers to US judge: Reject OpenAI CEO Sam Altman's request to show chats between Musk and Meta CEO Mark Zuckerberg
- Google's moonshot division Verily to cut jobs; what CEO Stephen Gillett told laid off employees in memo
- After Intel, US government buying stake in world's most valuable technology company; US Treasury Secretary Scott Bessent gives an ‘update’
- To compete with America, China warns AI companies: Stop disorderly...
- Nvidia CEO Jensen Huang on Donald Trump admin taking 15% cut: Whatever it takes to get approval for Nvidia chip sales to China is fine with us
- Microsoft terminates 4 employees for Israel-related protests, group claims fired employees received voicemails saying ...
Visual Stories
- How to pop a pimple without the fear of scarring?
- 10 beautiful fish and aquatic pets that help in cleaning the tank
- In pics: Elegant looks of actress Fouzee
- Rubina Dilaik inspired top 10 gorgeous looks
- Here's why outdoor activities are important for children
Photostories
- Indian fruits with more vitamin C than blueberries
- 7 foods to eat daily for strong bones after 30
- Eating chips and fries? Beware: Ultra-processed foods cause a sharp decline in sperm quality!
- Dilip Kumar once called her the ‘Greatest Actress’ he worked with, but she died in lonely obscurity—who's the actress?
- Office politics: 5 ways to use emotional intelligence as your best weapon
- Boost your immunity with Sadhguru’s ginger recipes
- When cinema meets business: Asin to Nivetha Pethuraj, South Indian actresses who married entrepreneurs
- US cardiovascular surgeon debunks common myths around creatine
- 5 sacred coloured threads in Hinduism: Their meaning and importance
- Longevity secrets from Japan: 7 everyday habits worth adopting
Top Trends
Up Next
Start a Conversation
Post comment