Karnataka Police arrested a 22-year-old techie, Manthan Patil, for using Artificial Intelligence to morph photographs of his love interest and her friends and circulate them on social media. As per reports, Patil was irked after the 19-year-old victim rejected his proposal. He used a deepfake app to morph her photographs and create nude images.
OpIndia accessed the FIR, registered under Section 67A of the Information Technology Act 2008 and Sections 506 and 509 of the Indian Penal Code at Khanapur Police Station in Belagavi district, Karnataka. The FIR was registered on 2nd November, days before the deepfake video of actress Rashmika Mandanna went viral on social media. The incident took place between 17th and 18th October.
The FIR was registered based on the complaint of the 19-year-old victim, who works as a house help. Apart from the victim, Manthan morphed images of at least seven of her friends and circulated them on social media. All of the victims are under 22 years old.
According to the complainant, Manthan approached the victim and proposed to her. When she refused to reciprocate his affection, he threatened to circulate objectionable photographs of her on social media. Later, on 17th October, the accused created fake IDs on Instagram and sent friend requests to the victim and her friends. Manthan allegedly used one of the fake IDs to send deepfaked images of the victim and her friend.
On 18th October, another of the victim’s friends received a friend request from the same account, and the accused repeated the pattern with other friends. Their photographs were also morphed and sent to other people in the village.
Speaking to Republic TV, Bheemashankar Guled, Superintendent of Police, Belagavi Rural, said, “In the incident from Khanapur, the accused, 22-year-old Manthan Patil, was in love with a girl, and she was not responding to his love. Angered by this, Manthan Patil warned her that she and her friends would have to pay dearly for it. He created a fake profile in the girl’s name on social media and uploaded morphed pictures of the girl and her friends, as they had not supported him in taking his love forward. The accused had morphed the pictures of the girls using deepfake software, which can edit photos and videos. He morphed their faces onto nude images and uploaded the nude pictures on social media. To put pressure on the girl he loved, he uploaded the nude pictures of her friends from the fake social media profile.”
What is a deepfake, and how can it be detected?
Deepfakes are AI-manipulated photographs or videos that can convincingly depict events or statements that never occurred. Some apps use machine learning and face-swapping technology to create these forgeries by superimposing one person’s face onto another’s. The end results are often realistic and hard to detect.
Deepfakes, aided by Generative Adversarial Networks (GANs), pose serious privacy risks. They propagate false narratives and erode trust in visual media, with serious consequences for politics and journalism. Detecting deepfakes is challenging, and there is no definitive solution as of now. However, signs such as unnatural facial features, irregular shadows, and mismatched lip movements can help identify a deepfake.
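For readers curious about the underlying technique, the sketch below shows the adversarial setup at the heart of a GAN: a generator that produces fake images from random noise, and a discriminator that learns to tell real images from generated ones, each training against the other. This is a minimal, hypothetical illustration written with the PyTorch library on placeholder data; the model sizes, image dimensions, and training loop are assumptions for demonstration only and do not reflect the code of any app mentioned in this report.

```python
# Illustrative sketch only: a minimal generator/discriminator pair of the kind
# trained adversarially in a GAN. All shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64 * 3  # a small 64x64 RGB face crop, flattened

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Placeholder batch standing in for real face crops, scaled to [-1, 1].
real_images = torch.rand(32, IMG_PIXELS) * 2 - 1

for step in range(100):
    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    fake_images = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In real deepfake tools, far larger convolutional networks are trained on many images of the target face, which is what makes the forged output convincing and hard to spot with the naked eye.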
The case of Rashmika Mandanna
On 6th November, actress Rashmika Mandanna expressed distress over a deepfake video of her that went viral on social media. She highlighted the urgent need to address the misuse of technology and emphasised the potential harm, particularly for vulnerable individuals. She also called for community action against such incidents.
The video in question featured Instagram model Zara Patel with Mandanna’s face superimposed using deepfake technology. The post sparked calls for a legal framework against such incidents in India. Bollywood veteran actor Amitabh Bachchan also endorsed the call for legal action in the matter.
Government issued guidelines to tackle deepfake videos and photos
Soon after the incident, the Ministry of Electronics and Information Technology of India (MeitY) issued an advisory to social media companies to tackle the deepfake menace. The ministry reaffirmed the existing guidelines. Three main legal provisions cover such fake videos: Section 66D of the Information Technology Act, 2000; Rule 3(1)(b)(vii) of the IT Intermediary Rules, 2021; and Rule 3(2)(b) of the IT Intermediary Rules, 2021.
According to Section 66D of the Information Technology Act, 2000, “Whoever, by means of any communication device or computer resource, cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees.”
Rule 3(1)(b)(vii) of the IT Intermediary Rules, 2021, requires social media intermediaries to exercise due diligence to keep their platforms functional and safe for users. Their rules, privacy policies, and user agreements must prohibit certain activities, including impersonating another person.
According to Rule 3(2)(b) of the IT Intermediary Rules 2021, “The intermediary shall, within twenty-four hours from the receipt of a complaint made by an individual or any person on his behalf under this sub-rule, in relation to any content which is prima facie in the nature of any material which exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual, take all reasonable and practicable measures to remove or disable access to such content which is hosted, stored, published or transmitted by it.” This rule binds the intermediary, that is, the social media company, to act on a complaint about deepfakes and remove such content from the platform within 24 hours.