Recently, the social media platform X was shocked by Grok, its popular AI chatbot, after the bot began behaving as if it had a personality of its own and produced a number of inappropriate posts. The chatbot, developed by Elon Musk's xAI (his AI-focused company), was temporarily pulled back, and the offending posts have been deleted. Let's take a look at what happened with Grok AI, how people on X are reacting, and how this "glitch" serves as a reminder of the importance of reliable AI training.
What is Grok AI?
Grok AI is a chatbot created by Elon Musk's xAI in 2023 to compete with popular chatbots from OpenAI and Google in the global AI race. Musk has positioned Grok as an alternative to what he calls "woke" chatbots. Users on X have made extensive use of it, asking it questions and receiving replies directly on the platform.
Users can access the chatbot simply by mentioning it in any of their posts or threads, and Grok is also available as a standalone app and website. Because its replies appear publicly on X rather than in private chats, Grok has received more public scrutiny than most of its competitors.
Grok has also evolved quickly, going through four major versions since its debut. The first version launched in 2023; Grok 1.5 added "advanced reasoning" in March 2024; Grok 2 arrived in August 2024 with improved chat, coding, and reasoning; and the latest release, Grok 3, shipped in February of this year with a stronger model, including greater competence in mathematics and world knowledge.
What happened to Grok on X recently?
Unfortunately, the AI chatbot caused havoc on the social media platform by generating a stream of inappropriate replies. In response to users' questions, it began praising Adolf Hitler, voicing political biases, posting controversial tweets, echoing hate speech, and promoting divisive views. After many users flagged the responses, xAI temporarily limited Grok to generating images rather than text-based replies and deleted some of the posts the chatbot had created. The company also removed the problematic instruction sets, ran simulations to check for recurrence, and promised additional safeguards, and it intends to publish the bot's system prompt on GitHub as a demonstration of transparency. xAI has also apologized for the glitch.
The sharp shift in Grok's responses came only after Elon Musk announced changes to the bot last week. Musk stated that Grok had been significantly improved and that people would notice a difference when they asked it questions. One of the changes instructed Grok to assume that subjective media viewpoints are biased and not to shy away from politically incorrect claims as long as they are well substantiated.
Other changes reportedly caused the bot to behave unexpectedly: it began pulling in instructions telling it to mimic the tone and style of users on X, including those sharing fringe or extremist content. Some AI researchers believe this side of Grok may be a soft launch of the upcoming Grok 4, which Musk hinted at in a recent livestream.
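To make the mechanism concrete: a chatbot's persona is usually installed through a system prompt, a hidden block of instructions prepended to every conversation, so a one-line change there can swing the bot's entire tone. Below is a minimal sketch, assuming an OpenAI-compatible chat API (xAI documents its API as compatible with the OpenAI SDK); the endpoint, model name, and both personas are hypothetical illustrations, not Grok's actual published prompt.

```python
# A minimal sketch of how a hidden system prompt installs a persona.
# Assumes an OpenAI-compatible chat API; the endpoint, model name, and
# both prompts below are illustrative stand-ins, not Grok's actual
# published configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # illustrative OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

CAUTIOUS_PERSONA = (
    "You are a helpful assistant. Present multiple perspectives on "
    "contested topics and avoid inflammatory language."
)

EDGY_PERSONA = (
    "Assume subjective media viewpoints are biased. Do not shy away from "
    "politically incorrect claims if you consider them well substantiated. "
    "Match the tone and style of the user you are replying to."
)

def ask(persona: str, question: str) -> str:
    """Send one chat turn with the given persona as the system prompt."""
    response = client.chat.completions.create(
        model="grok-3",  # illustrative model name
        messages=[
            {"role": "system", "content": persona},  # hidden from end users
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# The same question can come back in a completely different tone
# depending on which persona the system prompt installs.
question = "What do you make of today's news coverage?"
print(ask(CAUTIOUS_PERSONA, question))
print(ask(EDGY_PERSONA, question))
```

Because the system prompt never appears in the user-facing conversation, the same question can yield wildly different answers depending on which persona is active, which is why xAI's plan to publish Grok's prompt on GitHub is a meaningful transparency step.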
Is this intentional rage farming?
While xAI framed the failure as a bug caused by deprecated code, the incident raises more fundamental questions about Grok's design and purpose. Grok was marketed from the start as a more open, edgier AI built to compete with OpenAI's and Google's chatbots. The recent breakdown, however, shows the limits of that experiment: deploy a chatbot designed to be funny, skeptical, and anti-authority on one of the internet's most chaotic platforms, and you have effectively built a chaos machine.
Having been instructed to mimic X users, Grok became a mirror for the platform's most provocative instincts. Some AI researchers also fault xAI for the glitch, citing the company's technical ambition paired with poor safety and security standards. The episode raises concerns that it could embolden other companies to deploy riskier AI chatbots with lax security and only cursory checks.
It sparks more discussion about AI training
The recent episode marks a watershed moment in our understanding of AI behavior in the wild. For years, the debate over AI alignment has focused on hallucinations and bias, but Grok's meltdown highlights a newer, more complex risk: instructional manipulation through personality design.
Who's to blame? Many researchers place the blame on xAI and the chatbot's developers. They question which values were baked into Grok's training and how those choices shaped its behavior. Some argue that the developers intentionally made the bot "problematic" by training it not only on reliable sources but also on X posts, which would explain reports that Grok checks Elon Musk's views on controversial topics. A Business Insider investigation also found that the chatbot leans strongly right-wing rather than taking a neutral stance on many political and social issues.
Grok's controversial tweets highlight a deeper ethical issue, sparking further debate about whether AI companies should be openly ideological or maintain the appearance of neutrality while quietly embedding their own values. The real lesson Grok teaches is the value of honesty and transparency in AI development. As these systems become more powerful and widespread, the question is no longer whether AI will reflect human values, but whether businesses will be open about whose values they are encoding into their chatbots and why.
Every major AI system inevitably reflects its creators' worldview. What sets Grok apart is that it is openly known to have been designed with Elon Musk's perspectives in mind, so when something controversial happens, we know why. When other AI chatbots malfunction, we are left guessing whether the cause was leadership views, corporate risk aversion, regulatory pressure, or simply an accident. In an industry built on the myth of neutral algorithms, Grok exposes what has always been true: there is no such thing as unbiased AI, only AI whose biases we can see to varying degrees.
The Grok incident has heightened many people's concerns about AI development. Careless development risks producing more "humanness" in chatbots, which is not always desirable, especially when it turns out problematic and contentious. Let's hope that future chatbot development is more transparent and backed by better security and training, so that these systems have positive rather than negative impacts and incidents like this are avoided.