Malaysia is preparing legal action against Elon Musk’s X (formerly Twitter) and the affiliated artificial intelligence company xAI over the misuse of the Grok chatbot. The planned lawsuit follows the platform’s failure to address the generation and distribution of harmful content, including sexually explicit material and non-consensual deepfakes.
Blocking Access and Legal Notices
Days earlier, both Malaysia and Indonesia had blocked access to Grok after reports surfaced that it was being exploited to create disturbing images. The Malaysian Communications and Multimedia Commission (MCMC) stated that it issued formal notices to X and xAI earlier this month demanding the removal of illegal content. After those demands were ignored, the MCMC retained legal counsel and says it will proceed with court action.
The Nature of the Problem
The primary concern is Grok’s accessibility and features, which allow users to generate manipulated images, including depictions of women in sexually explicit poses and, alarmingly, content involving children. A “spicy mode” within Grok Imagine, the chatbot’s image generator, directly facilitated the creation of adult material. This is not a hypothetical risk: real deepfakes have already circulated, prompting intense backlash.
Global Scrutiny and Limited Response
The controversy extends beyond Southeast Asia. The European Union, India, and the United Kingdom have also raised alarms. The UK is considering new laws to criminalize “nudification apps,” and its media regulator, Ofcom, is investigating whether Grok has violated child safety laws.
In response to the outcry, xAI restricted Grok’s image generation and editing features to paying subscribers last week. Critics argue, however, that this measure falls short of resolving the underlying problem. Notably, Musk and his companies have remained silent on the bans in Malaysia and Indonesia, answering media inquiries only with an automated reply reading “Legacy Media Lies.”
Why This Matters
This legal battle highlights the growing challenge of regulating generative AI. While powerful tools like Grok offer creative possibilities, their potential for abuse is significant. The case underscores the need for stronger safeguards and accountability from tech companies regarding the content generated on their platforms. If left unchecked, such technologies could further erode trust in digital media and exacerbate harm to vulnerable populations.