Friday, October 24, 2025

MeitY Proposes Mandatory Labelling, Traceability for AI Generated Content in Draft IT Rules Amendments

The Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at addressing the growing concerns surrounding deepfakes and AI-generated misinformation. The proposed amendments introduce a comprehensive regulatory framework for identifying and labelling synthetic or AI-generated content.

Public comments on the draft will be accepted until November 6, 2025.

Key Provisions of the Draft Amendments

At the heart of the proposal lies the introduction of a new definition—“synthetically generated information.” The draft describes it as any information “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.” This broad definition brings AI-generated text, images, videos, and audio under the same due diligence and takedown obligations that currently apply to unlawful online content.

Mandatory Labelling and Traceability

Under the draft rules, intermediaries offering tools or services that can generate or alter digital media—such as AI platforms or editing applications—will be required to ensure that all synthetic content carries a clear and permanent identifier.

  • For images and videos: A visible label covering at least 10% of the surface area must indicate that the content is AI-generated.
  • For audio content: An audible disclosure must play during the first 10% of the content’s duration.
  • For metadata: A permanent embedded identifier should be included to ensure the content’s traceability.

The rules also prohibit intermediaries from suppressing or removing such identifiers, preventing attempts to conceal the synthetic origin of digital material.

Obligations for Significant Social Media Intermediaries

Large social media platforms, categorized as significant social media intermediaries (SSMIs), will face more stringent compliance requirements. Before users upload content, these platforms must:

  • Require users to declare whether the content being uploaded is synthetically generated.
  • Deploy “reasonable and proportionate technical measures,” such as automated detection systems, to verify such declarations.

Once identified or verified, synthetic content must be clearly labelled or accompanied by a visible notice to help users differentiate between authentic and manipulated media.

Failure to comply with these provisions could result in loss of safe-harbour protections under Section 79 of the Information Technology Act, 2000, exposing platforms to potential legal and regulatory action.

Protection for Good-Faith Actions

To encourage proactive compliance, MeitY has included a safeguard clause stating that intermediaries will not lose safe-harbour protection if they act in good faith to remove or disable access to synthetic content as part of grievance redressal or harm-prevention measures.

Policy Rationale and Broader Context

According to MeitY’s explanatory note, the amendments are designed to promote an “open, safe, trusted, and accountable Internet” by ensuring greater transparency in AI-generated media. The ministry emphasized that the rules aim to empower users to identify synthetic content while maintaining a balance between innovation and responsibility.

The note highlighted a surge in deepfake-related incidents—including non-consensual imagery, impersonation, and misinformation—both in India and globally. The ministry pointed out that concerns over such content have also been raised in Parliament, leading MeitY to issue advisories in the past urging social media companies to act against deepfake harms.

Expert Reactions

Commenting on the draft, Dhruv Garg, Partner at the Indian Governance & Policy Project, observed, “It is interesting to note that India has implicitly chosen to regulate generative AI platforms as intermediaries, giving them plausible safe-harbour protections. While other jurisdictions have established similar disclosure norms, it is crucial that India’s framework balances transparency with scalability, innovation, and creative expression.”

Scope and Applicability

The proposed obligations will apply only to publicly available or published AI-generated content—not to private or unpublished material. This distinction ensures that private use or experimental content remains outside the scope of regulatory oversight.


Mariya Paliwala
Mariya is the Senior Editor at Juris Hour. She has more than five years of experience covering tax litigation from the Supreme Court, High Courts, and various tribunals, including CESTAT, ITAT, NCLAT, and NCLT. Mariya graduated from MLSU Law College, Udaipur (Rajasthan) with a B.A. LL.B. and also holds an LL.M. She began her career as a freelance tax reporter for leading online legal news platforms such as LiveLaw and Taxscan.