What began as a viral AI feature has quickly escalated into a governance stress test for India’s digital ecosystem. Grok AI’s so-called “Spice Mode”, widely circulated on X, has been used to generate sexualised and semi-nude images of Indian women, including influencers and minors, by simply tagging “grok” under publicly shared photos and prompting visual alterations. The ease with which these images were produced has triggered urgent questions around consent, platform accountability, and the limits of safe-harbour protections when AI systems actively enable harm.
With the Indian government issuing a 72-hour ultimatum to X to remove obscene content and fix Grok’s technical design while warning of potential loss of intermediary protections, the episode has become a live case study in how fast-moving generative AI features can outpace governance. For industry bodies like ISACA, the controversy underscores a larger concern: whether platforms, boards, and regulators are prepared for AI-driven abuse at internet scale.
In an interaction with CiOL, RV Raghu, Director at Versatilist Consulting India Pvt Ltd and ISACA India Ambassador, framed the Grok episode as more than a content moderation lapse. According to Raghu, it reflects deeper structural issues in AI product design, consent interpretation, and the growing mismatch between legacy cyber laws and modern generative systems.
He emphasised that platforms are no longer passive intermediaries but active enablers of content creation and transformation, fundamentally altering where responsibility lies. Raghu also pointed to widening preparedness gaps across enterprises, citing ISACA research that shows organisations racing to adopt AI while remaining underprepared for the reputational, legal, and compliance risks that misuse can trigger.
Interview Excerpts:
At what point do product design choices, rather than user behavior, become the primary enabler of harm, and how should responsibility be distributed between model developers, platforms, and regulators?
In the digital realm, product design choices should always be looked at from a risk lens with the intent to minimize harm. Unlike in the physical world where a knife can be used to slice bread or for more nefarious purposes, digital tools inevitably have a higher probability of being applied for harmful purposes, making design decisions very critical. Technologists have always taken a techno-deterministic approach, where it is argued that the product designer cannot be made fully responsible for the end use of the product, but this argument has time and again proved to be a fallacy.
In today’s world, model developers and platforms should take a user-first approach, which will then give them the right perspective when it comes to minimizing harm from the product or platform. Responsibility should be distributed starting with the developers and platforms to reasonably foresee harms that could arise and build fail safes and warnings to minimize harm. Looking to regulation to provide a framework or guardrails might be foolhardy; often regulation plays a catch-up game, and regulators themselves might need to be educated on the harms that can accrue. Only then can regulation play a role in allocating responsibility among the various players involved. The other aspect to consider, especially when it comes to relying on regulation to protect from or mitigate harm, is the need for effective enforcement, which is not easy to establish and implement.
How should consent be interpreted when AI systems can digitally alter images of individuals without their knowledge, especially when the content becomes sexualised or abusive in nature?
As AI capabilities evolve, making it easy to take anything out of context and generate abusive images or even deepfakes, it becomes imperative to interpret consent in the narrowest sense possible. For example, if an individual shares an image as part of a status update, even though it is in the public realm, platforms and tools should not allow that image to be used for purposes such as deepfake generation. Instead of treating consent as a gate that allows anything to happen (i.e. arguing that once you post on a public platform, it is a free-for-all), consent should be used to ring-fence and protect the individual from abuse. Imagine a reversal of works issued under a Creative Commons license: just because something is in the public realm does not make it “open season” in which everyone can do as they please. Even if something is posted publicly on a platform, all rights remain reserved, and abuse is automatically unacceptable. This would also give platforms more teeth to take down material that is obviously abusive without waiting for the affected individual to raise the alarm.
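To make the “all rights reserved by default” framing concrete, here is a minimal, hypothetical sketch of how a platform might gate AI image transformations on explicit consent rather than on public visibility. The class, function names and permission categories are illustrative assumptions, not any platform’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical consent record: posting an image publicly does NOT
# grant any transformation rights. Uses are denied unless the
# subject has explicitly opted in to that specific use.
@dataclass
class ImageConsent:
    owner_id: str
    publicly_visible: bool = True
    allowed_uses: set = field(default_factory=set)  # e.g. {"reshare"}

def may_transform(consent: ImageConsent, requested_use: str) -> bool:
    """Default-deny check: public visibility alone never authorizes
    AI alteration; only an explicit grant for this use does."""
    return requested_use in consent.allowed_uses

# Example: a publicly shared photo with no explicit grants.
photo = ImageConsent(owner_id="user_123")
print(may_transform(photo, "ai_image_alteration"))  # False -> block the request
print(may_transform(photo, "reshare"))              # False -> still blocked
```

The design choice the sketch illustrates is simply that the default answer is “no”: public visibility and consent to transform are treated as separate questions.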
Are existing Indian cyber and obscenity laws equipped to handle AI-enabled harm, or does generative AI require an entirely new legal and enforcement framework?
Globally, very few laws are ready to handle the potential abuses and AI-enabled harm that can arise, because these laws were written in an era when technology was different by orders of magnitude. In the Indian context, for example, the content on which AI is trained is protected by copyright law first promulgated in 1957 (the Copyright Act, 1957), which offers little protection against AI tools and their ilk. With the rise of GenAI and other forms of artificial intelligence, there is an urgent need to revisit and update existing laws, and to promulgate new ones that can better cope with AI and related technologies, their impacts, and the roles that stakeholders such as model developers, platforms, regulators, the judiciary, users, data brokers and technology vendors will play. The idea should not be to assign blame but to allocate responsibility so that the individual user is protected from harm. While a new legal and enforcement framework may look challenging, involving these stakeholders in the development process will help build buy-in and ensure that future claims of stifling innovation are not raised.
When platforms actively introduce AI tools rather than merely host user content, should safe-harbour protections still apply or does this mark a shift toward shared or strict liability?
Platforms are evolving: with the introduction of AI tools, they are moving from mere hosting services to services that actively support content generation and modification. Traditionally, a platform was akin to a town square where, theoretically at least, all citizens had an equal opportunity to hawk their wares. Platforms today are moving away from this passive role to become active enablers, letting users modify and use not only their own content but also content shared by others, which is where things can go haywire.
The other challenge with platforms that enable users to take action is that the blast radius includes other ‘unwitting’ users, for example a user whose image is sexualized simply because it is on the platform, consent be damned. It is also important to note that platforms now benefit commercially from hosting and otherwise enabling user content. This means they may directly or indirectly enable some of the untoward behavior, making them a party to what is going on and hence also responsible for the implications of user actions, such as abuse. When platforms move from a purportedly ‘passive’ role to one where they actively enable user behavior, it becomes obvious that they can be liable for some of those actions. For example, many platforms today follow subscription models, giving paying users more privileges than non-subscribers. Because of this, a paying subscriber may end up doing things that affect the rights of non-paying users, even inadvertently. While this may seem like an edge case, it has implications for what a platform believes is its responsibility and what is the user’s.
With the DPDP Act in force and AI-specific regulation under discussion, what gaps have incidents like Grok exposed in monitoring, grievance redressal, and rapid enforcement?
Incidents such as the one mentioned above are just the beginning, and more can be expected given the speed at which new tools and capabilities are being publicized. The very nature of these technologies makes contagion inevitable. Once the ball is set rolling on something that is even borderline abuse, the anonymity these technologies afford makes it easy for bad actors to start harassing someone; that content is then picked up by the algorithm and surfaced to more users, which can set off a viral phenomenon. This makes monitoring, grievance redressal and rapid enforcement critical. There is also a need to think differently in such scenarios and reinterpret consent to mean that use limitation is the norm, not open to interpretation.
With the rise of AI and its ability to perform, among other things, superior correlation and pattern recognition, platforms and other entities in the technology stack may find it easy to identify when something is amiss simply by looking at the data, without necessarily waiting for the affected party to complain. Traditionally, enterprises have not been very successful at self-governance and at putting the user or customer above profits, but in the AI era it may be prudent for platforms and model developers to build in a detection mechanism that looks for sudden changes in data trends, monitors them, and implements controls to minimize the blast radius, proactively rather than after a complaint has been registered and/or the regulator is involved. Proactive action will be the name of the game if model developers, platforms and businesses are genuinely interested in protecting users from harm.
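As one illustration of the kind of proactive detection mechanism described above, the sketch below flags a sudden spike in a monitored metric, say, daily AI image-alteration requests targeting a single account, using a simple rolling mean and standard deviation. The metric, window and threshold are assumptions for illustration, not a prescribed design.

```python
import statistics

def spike_detected(daily_counts: list[int], threshold: float = 3.0) -> bool:
    """Flag the latest day if it sits more than `threshold` standard
    deviations above the mean of the preceding days (a basic z-score
    check on a trend; a real system would use richer signals)."""
    history, latest = daily_counts[:-1], daily_counts[-1]
    if len(history) < 7:
        return False  # not enough history to judge a trend
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (latest - mean) / stdev > threshold

# Example: alteration requests against one account suddenly jump.
counts = [2, 3, 1, 2, 4, 2, 3, 48]
if spike_detected(counts):
    print("Anomalous surge detected; throttle and queue for review")
```

The point of the sketch is the posture, not the math: the platform acts on its own telemetry before a complaint arrives, rather than treating the affected user as the alarm system.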
Beyond consumer platforms, how should boards, CISOs, and risk leaders reassess AI adoption strategies to account for reputational, legal, and compliance risks stemming from misuse?
AI adoption is a risky game and needs a well-thought-out approach to be successful. In ISACA’s 2025 AI Pulse Poll, 66 percent of respondents said they expect deepfake cyberthreats to become more sophisticated and widespread in the next 12 months, but only 21 percent said their organizations are actively investing in tools to detect and mitigate deepfake threats, even though 42 percent said AI risks are an immediate priority for their organization. This is very concerning and indicates that organizations are rushing to adopt AI without a full understanding of the risks. Interestingly, in ISACA’s 2026 Tech Trends and Priorities survey, a mere 13 percent said their organization is “very prepared” to manage generative AI risks, while 50 percent are “somewhat prepared” and 25 percent “not very prepared.” This gap should give pause to risk leaders, CISOs and board members as they pursue AI adoption. Failing to take a comprehensive, risk-based approach to AI adoption will expose businesses not just to operational challenges but also to reputational damage and loss of consumer trust.
Some actions enterprises can take to manage the risks and challenges of unfettered AI adoption include:
- Establish robust AI governance and risk frameworks.
- Establish a formal, comprehensive policy governing the use of AI technology at their organization.
- Prioritize AI ethics.
- Accelerate workforce upskilling and talent pipeline development, and invest in continuous learning, certifications and internal mobility.
- Modernize legacy systems and infrastructure to reduce vulnerabilities and improve agility.
- Strengthen cyber resilience and business continuity planning by developing and regularly testing incident response plans, ransomware recovery strategies and cross-functional crisis management protocols.
- Prepare for regulatory complexity and international compliance requirements; monitor regulatory changes, engage with expert communities, and invest in compliance tools and frameworks.