Government Amends IT Rules To Regulate AI Content

The government amends IT Rules to regulate AI content, mandating labelling, metadata tracking and faster takedown timelines from February 20.

Manisha Sharma

India’s regulatory stance on artificial intelligence has moved from advisory to enforceable compliance. The Union Ministry of Electronics and Information Technology (MeitY) has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, formally bringing AI-generated content under a defined regulatory framework. The amended rules will take effect from February 20.


While much of the debate around AI has centred on innovation and scale, the new rules signal a shift towards traceability, accountability, and time-bound enforcement, particularly for large social media platforms and intermediaries that host or enable the creation of synthetic content.

Understanding The Definition Of Synthetic Media

At the heart of the amendment is a formal definition of “synthetically generated information”. The rules now cover audio, visual, or audio-visual content that has been modified using computer resources in a manner that appears authentic and could be perceived as indistinguishable from a natural person or real-world event.

This framing directly addresses growing concerns around deepfakes, AI impersonation, and manipulated media.

Platforms that allow users to create or share such content must ensure it is clearly and prominently labelled as synthetically generated. Large social media intermediaries, including platforms such as Instagram, X, and LinkedIn, will be required to seek user declarations on whether uploaded content is AI-generated. Content identified as synthetic must carry a visible disclosure.

Metadata And Traceability Requirements

The rules go further than surface-level labels.

Where technically feasible, synthetically generated content must also include permanent metadata or provenance mechanisms. This includes a unique identifier that enables identification of the computer resource used to generate or modify the content.

For enterprises building AI tools or enabling user-generated AI workflows, this introduces new compliance architecture requirements. Provenance tracking and content tagging will need to be embedded into platform infrastructure rather than treated as optional safeguards.
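To illustrate what embedding a permanent provenance identifier could look like in practice, here is a minimal sketch in Python that inserts a text metadata chunk into a PNG image using only the standard library. The keyword and identifier values are hypothetical, and real provenance frameworks (such as C2PA-style signed manifests) are considerably more robust than a bare text chunk; this only shows the basic idea of attaching a generator ID to the file itself.

```python
import struct
import zlib

def add_provenance_chunk(png_bytes: bytes, keyword: str, value: str) -> bytes:
    """Insert a PNG tEXt metadata chunk (e.g. a provenance identifier)
    immediately after the IHDR header chunk.

    Illustrative only: production provenance mechanisms would use signed,
    tamper-evident manifests rather than plain text metadata.
    """
    signature = png_bytes[:8]
    assert signature == b"\x89PNG\r\n\x1a\n", "not a PNG file"

    # IHDR is always the first chunk: 4-byte length, 4-byte type, data, 4-byte CRC
    ihdr_len = struct.unpack(">I", png_bytes[8:12])[0]
    ihdr_end = 8 + 4 + 4 + ihdr_len + 4

    # tEXt chunk payload: keyword, NUL separator, Latin-1 text
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    # CRC covers the chunk type and data, per the PNG specification
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF)

    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

Because the identifier lives inside the file rather than alongside it, it survives ordinary copying and re-uploading, which is the kind of persistence the rules appear to envisage, though simple metadata of this sort can still be stripped by re-encoding.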


Sharper Enforcement Timelines

The amendments substantially reduce compliance timelines under Rule 3.

  • Lawful takedown compliance: reduced from 36 hours to 3 hours

  • Grievance disposal period: reduced from 15 days to 7 days

  • Urgent complaints: reduced from 72 hours to 36 hours

  • Certain removal cases: reduced from 24 hours to 2 hours

These compressed timelines signal a regulatory expectation of real-time moderation capability. Platforms will need to deploy faster detection systems and operational escalation frameworks to remain compliant.

At the same time, the government has clarified that platforms using automated tools to remove illegal or harmful AI-generated content will not lose safe harbour protections under the IT Act. This reassurance is significant for intermediaries concerned about liability exposure while scaling AI moderation tools.

Exemptions For Good-Faith Use

The notification carves out exclusions for routine or good-faith activities, including editing, formatting, transcription, translation, accessibility enhancements, educational materials, and research outputs, provided these do not create false or misleading electronic records.

This distinction attempts to balance innovation with accountability, particularly for enterprises and educational institutions that rely on AI-driven productivity tools.

From Draft To Notification

The move follows a draft proposal issued in October, in which MeitY had suggested mandatory labelling and permanent metadata identifiers for AI-generated content. Stakeholder feedback was invited until November 6, 2025.


With formal notification now in place, the government has transitioned from consultation to enforcement.

For social media intermediaries, the rules introduce a compliance-heavy regime requiring:

  • User declarations

  • Clear labelling frameworks

  • Provenance tracking mechanisms

  • Accelerated grievance workflows


For enterprises building AI-enabled tools, especially those integrating generative models into customer-facing products, governance frameworks will need to evolve quickly.

The amendments reflect a broader policy shift: AI is no longer treated as an experimental frontier but as operational infrastructure that demands regulatory oversight.

By defining synthetic media and tightening response timelines, the government has positioned traceability and accountability at the centre of India’s AI governance approach.
