Introduction
As artificial intelligence tools grow increasingly advanced, deepfakes—synthetic audio, video, or image content that convincingly mimics real individuals—have emerged as a significant legal concern. In India, courts are beginning to encounter such content in both criminal and civil matters, yet the existing legal framework offers little clarity. The lack of targeted laws leaves courts grappling with questions of admissibility, authenticity, and accountability.
The Current Legal Landscape: Partial Coverage, Major Gaps
India lacks a deepfake-specific law. Instead, courts and investigators rely on provisions scattered across older laws:
- Information Technology Act, 2000
Sections such as 66C (identity theft), 66D (cheating via impersonation), and 66E (privacy violations) may apply. However, these were not drafted with deepfakes in mind and often fall short.
- Indian Penal Code (IPC) & Bharatiya Nyaya Sanhita (BNS), 2023
Sections related to defamation, voyeurism, or cyberstalking can be invoked. Yet their scope remains reactive rather than preventative.
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These guidelines place content moderation responsibilities on platforms, but do not directly define deepfake violations.
- Digital Personal Data Protection Act, 2023
While this act addresses unauthorized use of personal data, it only indirectly tackles deepfakes generated without consent.
Why Is This a Legal Grey Zone?
- No Clear Legal Definition
Deepfakes sit in an undefined space under Indian law. There is no consensus on what constitutes a deepfake or how to classify its various uses—satirical, pornographic, political, or malicious.
- Authentication Challenges in Court
Section 65B of the Indian Evidence Act requires specific certification for electronic records. With deepfakes, verifying authenticity becomes far more complex, often requiring expert analysis or forensic technology.
- Lack of Precedents
Courts have only recently begun commenting on deepfakes. In one Delhi family court case, a judge rejected photographic evidence submitted in a divorce case, citing concerns about the prevalence of fabricated media.
- Cross-Border Barriers
Many deepfakes originate outside India, complicating enforcement. Legal processes are slow, while deepfake creation and distribution are almost instantaneous.
Judicial Attitudes and Recent Observations
- Delhi High Court: In 2023, the court ruled in favor of actor Anil Kapoor, restraining misuse of his likeness through AI or manipulated content—an early step in recognizing digital persona rights.
- Supreme Court, May 2025: In response to a plea involving a viral deepfake video of a military officer, the apex court declined to intervene directly but acknowledged the need for legislative action.
- Justice Hima Kohli (2023): The Supreme Court judge publicly highlighted deepfakes as a serious invasion of digital privacy and warned of their potential to destabilize trust in media.
Deepfake Evidence in Courts: A Risky Proposition
Courts have historically leaned toward admitting digital evidence, especially in cases involving public interest. However, with AI-generated media, the risk of deception is unprecedented. Evidence that appears genuine may, in fact, be synthetic and misleading.
While tools like video forensics or blockchain timestamping are starting to enter Indian courts, they remain underutilized and lack regulatory backing.
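The integrity-verification idea behind hash-based timestamping can be illustrated with a short Python sketch. This is a simplified, hypothetical example (the file name is illustrative, and no specific court procedure or blockchain service is assumed): a cryptographic fingerprint of a media file is recorded at the time of collection, so any later alteration of the exhibit, however subtle, produces a mismatching hash.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_file(path: str) -> dict:
    """Compute a SHA-256 digest of a file and pair it with a UTC timestamp.

    If this record is anchored in a trusted log (or a blockchain) when the
    evidence is first collected, a court can later verify that the file
    presented is bit-for-bit identical to the one originally seized.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "sha256": sha256.hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage with a video exhibit:
# record = fingerprint_file("exhibit_video.mp4")
```

Note that a hash only proves the file has not changed since the fingerprint was taken; it cannot show whether the content was synthetic to begin with, which is why forensic analysis remains a separate step.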
The Way Forward: Reform Recommendations
| Objective | Legal/Policy Recommendation |
|---|---|
| Legal Definition | Introduce a statutory definition for deepfakes, clearly differentiating between malicious and benign uses. |
| Evidence Verification | Require independent certification and forensic examination for suspected deepfake evidence. |
| Platform Accountability | Update IT rules to enforce takedown obligations and liability for content hosts. |
| Judicial Training | Establish AI and cyber forensic training for judges, prosecutors, and law enforcement. |
| International Collaboration | Create treaties or agreements for identifying and prosecuting cross-border deepfake creators. |
India’s legal apparatus is reactive, not proactive, in dealing with these sophisticated threats. Without a systemic update, victims will continue to struggle with proving harm, while perpetrators remain difficult to trace or punish.
Real-World Impact: Why This Can’t Wait
Deepfakes are already being used in India to:
- Falsely implicate individuals in criminal or sexual acts.
- Manipulate political narratives during elections.
- Defraud businesses through impersonation.
- Harass individuals by misusing personal data.
If the judiciary does not evolve its standards of evidence to account for AI manipulation, innocent individuals may be wrongly incriminated—or guilty parties may go free.
Conclusion: The Need for Tech-Aware Justice
As India moves deeper into the digital age, deepfakes present a serious threat to truth itself. Courts must walk the fine line between maintaining trust in evidence and protecting against synthetic manipulation. Without clear legal standards and updated evidentiary procedures, justice risks being derailed by digital deceit.
Efforts like Vastav AI, an Indian initiative aimed at deepfake detection, signal progress—but legislative support, forensic innovation, and judicial awareness must move in tandem to restore faith in the judicial process.
Deepfake Evidence in Indian Courts: Navigating a Legal Grey Zone - August 3, 2025