Deepfakes recently made headlines in India after a video of the actress Rashmika Mandanna circulated online, in which her face had been superimposed on another woman’s body. Following the incident, the Ministry of Electronics and IT (MeitY) issued a slew of advisories directing platforms to take down such content across the internet.
The term ‘deepfake’ came to light in 2017 on Reddit, where several users were found to be superimposing the faces of popular celebrities onto the bodies of individuals engaged in sexual acts. In common usage, a deepfake refers to a video created by sophisticated means in which one person’s face is superimposed onto another’s body, making the subject appear to do something they have not done in reality.
Detecting these manipulations is a difficult task, and the underlying technology has applications ranging from the production of lifelike videos without sophisticated equipment, to high-resolution images of imaginary people, fabricated audio or text, and altered satellite imagery. Such an encompassing view is considered essential for legal and policy deliberations, which should focus on outcomes rather than on specific technical procedures.
Laws in India
In India, the principal legal measures against deepfakes are found in Sections 66E and 66D of the Information Technology Act, 2000. Section 66E addresses violations of privacy arising from the capture, publication, or transmission of a person’s images, which extends to images manipulated using deepfake technology. Violations of this provision can result in imprisonment of up to three years, a fine of up to ₹2 lakh, or both. Section 66D targets those who cheat by personation using communication devices or computer resources, covering identity fraud and deception. An offence under this provision can lead to imprisonment of up to three years and a fine of up to ₹1 lakh.
Moreover, Section 51 of the Copyright Act, 1957 offers protection against the unauthorized use of copyrighted works, enabling copyright owners to pursue legal recourse.
While there is no dedicated legislation on deepfakes, the Ministry of Information and Broadcasting issued an advisory on January 9, 2023, urging media organisations to identify manipulated content and exercise caution before disseminating it.
India has also sketched out possible regulatory structures, hinting at a risk-based framework and advocating for a dedicated legal authority. Major tech companies such as Alphabet, Meta, and OpenAI are implementing measures like watermarking to combat deepfakes. As a crucial participant in the worldwide advancement of AI, India has a responsibility to help shape the regulatory environment, striking a balance between fostering innovation and addressing regulatory concerns.
Admissibility as Evidence
The rise of deepfake technology poses significant challenges in legal proceedings, particularly in criminal trials, impacting individuals personally and professionally. Most legal systems lack robust mechanisms to verify evidence, placing the responsibility on the accused to challenge potential manipulations, turning a widespread problem into a private burden. To address this, proposed regulations might mandate evidence verification, potentially through entities like the Directorate of Forensic Science Services, though this could entail additional costs.
In India, while current laws provide some measures to tackle the problems posed by deepfakes, a precise legal definition is needed to enable focused legal action. The rapidly evolving nature of deepfake technology compounds the difficulties faced by automated detection systems, especially when dealing with context-specific complexities. This presents a considerable risk to legal processes, potentially prolonging trials and increasing the likelihood of erroneous conclusions.
Apart from the immediate legal implications, deepfakes intensify problems such as slut-shaming and revenge pornography, with severe repercussions for individuals’ reputations and self-perception. These complex issues necessitate comprehensive legal frameworks to counter emerging threats and protect individuals from potential harm.
Significantly, communities may continue to assert claims based on the original fabricated message even after it is debunked, influencing public opinion. Sensational fake news tends to attract far more attention than its subsequent refutation, leaving individuals with residual uncertainty. With the increasing prevalence of deepfakes, the potential implications for the legal system and societal trust highlight the pressing need for strong mechanisms to verify evidence and address these emerging challenges.
Conclusion
Deepfakes have emerged as a significant technological development with far-reaching implications for society, particularly in the legal realm. The ability to manipulate video and audio with such finesse poses challenges for the admissibility of evidence in court, potentially jeopardizing the integrity of legal proceedings and individual reputations. While existing laws in India provide some protection against deepfakes, the absence of a precise legal definition and the dynamic nature of the technology hinder effective legal action.
To address these challenges, India must implement a comprehensive regulatory framework encompassing legal definitions, verification mechanisms, and clear guidelines for evidence admissibility. Moreover, public awareness campaigns and educational initiatives are crucial to empower individuals to identify and counter deepfake content. By striking a balance between fostering innovation and addressing the regulatory issues surrounding deepfakes, India can safeguard its legal system and protect its citizens from the potential harm posed by this nascent technology.
This article is written and submitted by Devam Krishnan during his course of internship at B&B Associates LLP. Devam is a B.A. LLB 4th year student at National University of Study and Research in Law, Ranchi.