On 18th November 2023, Electronics & Information Technology Minister Ashwini Vaishnaw said a meeting between major social media platforms and government officials would be held to address growing concerns over deepfake technology. He asserted that the protective safe harbour immunity would be revoked if the platforms failed to implement sufficient measures to counter the use of deepfakes in media.
In his statement to the press, the minister revealed that the government had recently issued a notice to companies regarding the deepfake issue. While acknowledging some efforts by the social media platforms, he emphasised the need for a more proactive approach to combat such content.
#WATCH | Delhi: On deepfake issue, Union Minister for Communications, Electronics & IT Ashwini Vaishnaw says, "Deepfake is a big issue for all of us. We recently issued notices to all the big social media forms, asking them to take steps to identify deepfakes, for removing those… pic.twitter.com/FTsvRo9JQS
— ANI (@ANI) November 18, 2023
He said, “While they are taking some steps, we believe more significant measures must be implemented. We will soon convene a meeting with all the platforms, possibly within the next 3-4 days, to engage in a brainstorming session and ensure that platforms take adequate measures to prevent and remove deep fakes from their systems.”
Asked whether major companies such as Meta and Google would participate in the upcoming meeting, the minister confirmed their involvement.
Additionally, he underscored that the safe harbour immunity currently available under the IT Act would not apply to platforms unless they took concrete action against the deepfake threat. He said, “The safe harbour clause, enjoyed by most social media platforms, will not be applicable if they fail to take sufficient steps to remove deep fakes from their platforms.”
The recent deepfake videos of actresses including Rashmika Mandanna, Katrina Kaif and Kajol Devgan have heightened concerns about the misuse of the technology to create false narratives.
In a related development, Prime Minister Narendra Modi also cautioned about the potential threat stemming from AI-generated deep fakes. He urged the media to raise awareness about the matter and stressed the importance of educating people about the risks involved.
What is a deepfake, and how can it be detected?
Deepfakes are AI-manipulated photographs or videos that can convincingly depict events or statements that never occurred. Some apps use machine learning and face-swapping technology to create these forgeries by superimposing one person’s face onto another’s. The results are often realistic and hard to detect.
Deepfakes, often created with the help of Generative Adversarial Networks (GANs), pose serious privacy risks. They propagate false narratives and erode trust in visual media, with serious implications for politics and journalism. Detecting deepfakes is difficult, and no definitive solution exists as of now. However, signs such as unnatural facial features, irregular shadows, and mismatched lip movements can help identify a deepfake video.
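The adversarial setup behind GANs can be illustrated with a toy example. The sketch below is purely an illustrative assumption, not a real deepfake pipeline: a one-parameter "generator" and a logistic-regression "discriminator" play the GAN game on 1-D numbers instead of images. The discriminator learns to tell real samples from generated ones, while the generator learns to produce samples that fool it.

```python
import numpy as np

# Toy GAN on 1-D data (illustrative only; real deepfakes use deep networks
# on images, but the adversarial training loop has the same shape).
rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: a Gaussian the generator must learn to imitate.
    return rng.normal(loc=4.0, scale=1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_w, g_b = np.array([[1.0]]), np.array([0.0])  # generator: fake = g_w*z + g_b
d_w, d_b = np.array([[0.1]]), np.array([0.0])  # discriminator: logistic regression

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(batch, 1))        # noise input
    fake = z @ g_w + g_b                   # generated samples
    real = real_samples(batch)

    # Discriminator update: push real scores toward 1, fake scores toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                   # dLoss/dlogit for binary cross-entropy
        d_w -= lr * (x.T @ grad) / batch
        d_b -= lr * grad.mean(axis=0)

    # Generator update: make fakes score as real (label 1).
    p = sigmoid(fake @ d_w + d_b)
    grad_fake = (p - 1.0) @ d_w.T          # backprop through the discriminator
    g_w -= lr * (z.T @ grad_fake) / batch
    g_b -= lr * grad_fake.mean(axis=0)

fake_mean = float((rng.normal(size=(1000, 1)) @ g_w + g_b).mean())
print(f"mean of generated samples: {fake_mean:.2f} (real mean is 4.0)")
```

After training, the generator's output distribution drifts toward the real one, which is exactly why GAN forgeries become hard to distinguish from authentic media.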
The case of Rashmika Mandanna
On 6th November, actress Rashmika Mandanna expressed distress over a deepfake video, which went viral on social media, in which her face was superimposed on another woman’s body. She highlighted the urgent need to address technology misuse, emphasised the potential harm, particularly for vulnerable individuals, and called for community action against such incidents.
The video in question featured Instagram model Zara Patel with Mandanna’s face superimposed using deepfake technology. The incident sparked calls for legal frameworks against such content in India. Bollywood veteran Amitabh Bachchan also endorsed the call for legal action in the matter.
Government of India issued guidelines to tackle deepfake videos and photos
Soon after the incident, the Ministry of Electronics and Information Technology of India (MeitY) issued an advisory to social media companies to tackle the deepfake menace. The ministry reaffirmed the existing guidelines. Three main legal provisions apply to such fake videos: Section 66D of the Information Technology Act, 2000; Rule 3(1)(b)(vii) of the IT Intermediary Rules, 2021; and Rule 3(2)(b) of the IT Intermediary Rules, 2021.
According to Section 66D of the Information Technology Act, 2000, “Whoever, by means of any communication device or computer resource, cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees.”
Rule 3(1)(b)(vii) of the IT Intermediary Rules 2021 states that social media intermediaries must implement due diligence measures to ensure platform functionality and user safety, and that their rules, privacy policies, and user agreements must prohibit certain activities, including impersonation.
According to Rule 3(2)(b) of the IT Intermediary Rules 2021, “The intermediary shall, within twenty-four hours from the receipt of a complaint made by an individual or any person on his behalf under this sub-rule, in relation to any content which is prima facie in the nature of any material which exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual, take all reasonable and practicable measures to remove or disable access to such content which is hosted, stored, published or transmitted by it.” This particular rule binds the intermediary, or the social media companies, to act on the complaint of deep fakes and remove them from the platform within 24 hours.
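As a rough illustration of how an intermediary's complaint-handling system might track this 24-hour window, consider the sketch below. The function names and timestamps are hypothetical assumptions, not anything prescribed by the rules themselves.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper sketch: tracking the 24-hour takedown window of
# Rule 3(2)(b) in a complaint-handling system. Names are illustrative.

def takedown_deadline(received_at: datetime) -> datetime:
    """Latest time by which the intermediary must remove or disable access."""
    return received_at + timedelta(hours=24)

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """True if the complaint has been pending longer than 24 hours."""
    return now > takedown_deadline(received_at)

complaint = datetime(2023, 11, 18, 10, 0, tzinfo=timezone.utc)
print(takedown_deadline(complaint).isoformat())  # 2023-11-19T10:00:00+00:00
print(is_overdue(complaint, datetime(2023, 11, 19, 9, 0, tzinfo=timezone.utc)))  # False
```

Using timezone-aware timestamps avoids ambiguity about when the 24-hour clock starts and expires.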