
Deep fake videos: Everything you need to know about them, how to detect them, government guidelines and more

Deep fakes, a blend of "deep learning" and "fake," represent a concerning and rapidly evolving aspect of artificial intelligence manipulation.

On 7th November 2023, the Ministry of Electronics and Information Technology of India (MeitY) issued an advisory to social media companies following the controversy that erupted after a deep fake video of actress Rashmika Mandanna went viral. The ministry reaffirmed the existing guidelines. Three main legal provisions apply to such fake videos: Section 66D of the Information Technology Act, 2000, Rule 3(1)(b)(vii) of the IT Intermediary Rules 2021, and Rule 3(2)(b) of the IT Intermediary Rules 2021.

According to Section 66D of the Information Technology Act, 2000, “Whoever, by means of any communication device or computer resource, cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees.”

Rule 3(1)(b)(vii) of the IT Intermediary Rules 2021 requires social media intermediaries to exercise due diligence to keep their platforms functional and safe for users. Their rules, privacy policies, and user agreements must prohibit certain activities, including impersonating another person.

According to Rule 3(2)(b) of the IT Intermediary Rules 2021, “The intermediary shall, within twenty-four hours from the receipt of a complaint made by an individual or any person on his behalf under this sub-rule, in relation to any content which is prima facie in the nature of any material which exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual, take all reasonable and practicable measures to remove or disable access to such content which is hosted, stored, published or transmitted by it.” This rule binds the intermediary, i.e., the social media company, to act on a complaint about a deep fake and remove the content from its platform within 24 hours.

What are deep fakes?

Deep fakes, a blend of “deep learning” and “fake,” represent a concerning and rapidly evolving aspect of artificial intelligence manipulation. These sophisticated forgeries use digital software, machine learning, and face-swapping technologies to generate artificial videos that look strikingly similar to original videos of a person or an event. These videos can depict events, statements or actions that have never happened.

The term “deep fake” comes from using deep learning neural networks to swap one person’s face with another, creating a convincing illusion. Deep fakes involve training machine learning algorithms on extensive datasets of images and videos of a target individual. The algorithms analyse and learn the person’s facial expressions, gestures, and vocal nuances, enabling them to superimpose the target’s image onto another person’s body or environment. The results are incredibly realistic, making it difficult for viewers to tell that the content is fake.
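
For readers curious about the mechanics, the sketch below illustrates the shared-encoder autoencoder idea behind classic face-swapping: a single encoder learns a common face representation, while separate decoders learn to reconstruct person A and person B, and the swap happens when a frame of A is decoded with B’s decoder. This is a minimal, illustrative Python/PyTorch example; the image size, network shapes and the random stand-in “datasets” are assumptions for demonstration, not the code of any actual deep fake application.

    import torch
    import torch.nn as nn

    IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (assumed size)

    # One shared encoder, two person-specific decoders.
    encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
    decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))
    decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))

    params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.MSELoss()

    # Dummy stand-ins for real datasets of face crops of person A and person B.
    faces_a = torch.rand(256, IMG)
    faces_b = torch.rand(256, IMG)

    for epoch in range(10):
        # Each decoder learns to reconstruct its own person from the shared code.
        opt.zero_grad()
        loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
             + loss_fn(decoder_b(encoder(faces_b)), faces_b)
        loss.backward()
        opt.step()

    # The "swap": encode a frame of person A, then decode it with B's decoder,
    # producing B's face with A's expression and pose.
    with torch.no_grad():
        swapped = decoder_b(encoder(faces_a[:1]))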

Compounding the problem, today’s deep fake technology uses Generative Adversarial Networks (GANs), which add another dimension to machine learning: a generator network produces fakes while a discriminator network tries to spot them, and each round of this contest refines the output, making it extremely hard to detect. GANs utilise extensive data to replicate reality with precision. Several user-friendly apps, accessible even to beginners, can produce deep fakes within seconds at the press of a button. One well-known app, FaceApp, gained immense popularity after social media influencers promoted it during viral trends.
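
To make the adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It learns a toy one-dimensional distribution rather than faces, and every network size, learning rate and constant in it is an arbitrary assumption for demonstration, not the setup of any real deep fake tool.

    import torch
    import torch.nn as nn

    latent_dim = 8

    # Generator: turns random noise into a fake sample.
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: outputs a probability that a sample is real.
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real_label = torch.ones(64, 1)
    fake_label = torch.zeros(64, 1)

    for step in range(2000):
        # "Real" data: samples from the distribution the generator must imitate.
        real = torch.randn(64, 1) * 0.5 + 2.0
        fake = G(torch.randn(64, latent_dim))

        # 1) Train the discriminator to separate real from fake.
        opt_d.zero_grad()
        loss_d = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
        loss_d.backward()
        opt_d.step()

        # 2) Train the generator to fool the updated discriminator.
        opt_g.zero_grad()
        loss_g = bce(D(fake), real_label)
        loss_g.backward()
        opt_g.step()

With every iteration the generator gets better at fooling the discriminator, which is exactly why GAN-produced fakes become progressively harder to spot.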

There are also highly problematic apps, such as the now-deleted DeepNude, that pose serious privacy concerns, and such tools are still available on platforms like GitHub. Though their creators claim the apps were made for recreational purposes, they pose a serious threat to the privacy of people across the globe, even those who are not on any social media platform.

One of the most concerning aspects of deep fakes is that they can propagate false narratives, mislead the public and erode trust in visual media. They pose a unique challenge compared to traditional forms of misinformation because they are exceptionally difficult to identify as inauthentic. Spotting one requires looking for many subtle signs of computer-generated imagery, and as the technology is fine-tuned over time, distinguishing a fake from an original video becomes even harder. Their deceptive nature puts various domains at risk, including politics, journalism and personal privacy.

To counter the threat of deep fakes, ongoing research focuses on developing robust detection methods and enhancing media literacy among the public. However, with the pace at which AI technology is advancing, keeping up with newly created deep fakes will not be easy, especially during General Elections in a country like India.

It is therefore essential to stay vigilant and critical when consuming content on social media. Distrust in the authenticity of visual media is growing, and understanding how deep fakes work is essential to safeguarding the integrity of information and to consuming it responsibly.

How to detect deep fakes?

According to MIT, when it comes to deep fakes or other AI-generated media, there is no single tell-tale sign that shows whether a piece of content is original or fake. Nonetheless, several characteristics of deep fakes give them away; all you need to do is stay vigilant before sharing the content.

First of all, pay attention to the face: high-end deep fake manipulations are almost always facial transformations. Look closely for skin that appears unnaturally smooth or overly wrinkled; deep fakes often exhibit irregularities in these areas.

Always look at the eyes and eyebrows, and check whether shadows fall where you would expect them to. If shadows appear in irregular places or look flattened in the image or video, there is a chance the content is fake. The same goes for glasses: if the subject is wearing them, look at the glare on the lenses and check whether it moves naturally. Deep fakes often fail to reproduce glare correctly because, at least for now, they do not capture the natural physics of lighting in the real world.

Consider the facial hair. Deep fakes can add or remove moustaches, sideburns or beards, but in most cases these transformations are visibly artificial and look unnatural.

Check for moles on the subject’s face. In most cases, they will not look genuine.

Pay attention to blinking. Deep fakes, which rely heavily on lip-syncing, often fail to reproduce a natural blinking rate, so the subject is frequently seen either blinking too much or not blinking at all (a simple programmatic check for this is sketched after these pointers).

Pay attention to lip movement. A deep fake may look real at first, but on closer inspection you will often find discrepancies: in general, the words spoken and the lip movements do not match perfectly.

Deep fakes are also not very good at generating tongue movement and teeth, and dull shadows around the eyes are another feature to check while authenticating a video.
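
Some of these cues can even be checked programmatically. The sketch below is a rough, assumption-heavy Python example of the blinking check mentioned above: it counts blinks in a video clip using the eye aspect ratio (EAR) computed from dlib’s publicly available 68-point facial landmark model. The video file name and the thresholds are hypothetical placeholders, and an unusually low or high blink count is only a hint, not proof, of manipulation.

    import cv2
    import dlib
    from scipy.spatial import distance

    def eye_aspect_ratio(eye):
        # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink.
        a = distance.euclidean(eye[1], eye[5])
        b = distance.euclidean(eye[2], eye[4])
        c = distance.euclidean(eye[0], eye[3])
        return (a + b) / (2.0 * c)

    detector = dlib.get_frontal_face_detector()
    # Requires the 68-point landmark model file to be present locally.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
    blinks, closed_frames = 0, 0
    EAR_THRESHOLD, MIN_CLOSED_FRAMES = 0.21, 2  # illustrative thresholds

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 trace the two eye contours.
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_THRESHOLD:
                closed_frames += 1
            else:
                if closed_frames >= MIN_CLOSED_FRAMES:
                    blinks += 1
                closed_frames = 0

    cap.release()
    print("Blinks counted:", blinks)  # unusually low or high counts are a red flag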

Please note that there are no perfect tools to detect deep fakes yet. Several companies, including Intel and Microsoft, are working on such tools, but it will take time to perfect them. Till then, make sure to verify the source of a video and look for the signs of a deep fake before sharing any content online.


Anurag (https://lekhakanurag.com), B.Sc. Multimedia, a journalist by profession.
