X Deletes 600 Accounts After Obscene Content Appears on Grok AI

X deletes 600 accounts and blocks 3,500 items on Grok AI after obscene content surfaces, assuring compliance with Indian laws and tighter content safeguards.

Manisha Sharma

Elon Musk-owned microblogging platform X has acknowledged a lapse in content moderation after obscene material surfaced on its AI chatbot, Grok. The company blocked roughly 3,500 pieces of content and deleted over 600 accounts following concerns raised by India’s Ministry of Electronics and Information Technology (MeitY). X has assured authorities that it will comply with all applicable Indian laws.


Government Oversight Prompts Swift Action

The IT Ministry had flagged Grok AI’s misuse to generate and circulate sexually explicit and non-consensual content. Users reportedly leveraged the AI tool to produce offensive images of women, including content targeting individuals who had posted legitimate material. The Ministry warned that failure to comply with the IT Act, 2000, and the rules framed under it could result in loss of safe harbour protections under Section 79, exposing X to legal liability.

X responded by removing the flagged content, suspending the accounts involved, and reiterating that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they uploaded illegal content.” The platform’s ‘Safety’ handle emphasised its ongoing collaboration with governments and law enforcement to address violations, including Child Sexual Abuse Material (CSAM).

Global Scrutiny Over AI Content Moderation

The incident has drawn attention beyond India. UK regulator Ofcom, the European Commission, and US lawmakers have all voiced concerns about Grok’s potential for misuse. Ofcom highlighted risks of producing sexualised images of minors and has sought clarity from X and xAI on compliance measures. In the US, Democratic senators have urged Apple and Google to review Grok’s presence on app stores over policy violations.

According to sources, X has assured regulators that future safeguards will prevent the spread of obscene content and that technical and organisational measures are being strengthened. The company is expected to provide detailed reports outlining its content moderation processes, including oversight by its Chief Compliance Officer, and systems for detecting, removing, and reporting violations in line with the IT Rules, 2021.

AI Accountability and Industry Implications

Experts note that this incident highlights broader challenges for AI-powered platforms operating globally. Generative AI tools like Grok can rapidly amplify harmful content if user prompts are unchecked, emphasising the importance of robust moderation frameworks. Platforms now face the dual responsibility of fostering innovation while safeguarding users, particularly in markets with strict digital laws such as India.

With 3,500 pieces of content removed and 600 accounts deleted, X’s actions mark a significant step toward enforcing accountability and demonstrate the heightened scrutiny generative AI platforms are under worldwide. The case serves as a cautionary example for AI developers and social media companies seeking to balance technological advancement with ethical responsibility and regulatory compliance.
