On 24th February, Union Minister Rajeev Chandrasekhar warned Google over the malicious responses its AI chatbot Gemini was generating. In a post on X, Chandrasekhar warned the tech giant not to run experiments on India’s ‘Digital Nagriks’ with “unreliable” platforms, algorithms and models. He emphasised that safety and trust are the legal obligations of a digital platform, and that saying “Sorry, the AI tool is not reliable” will not exempt the company from the law.
Govt has said this bfr- I repeat for attn of @GoogleIndia.
— Rajeev Chandrasekhar 🇮🇳 (@Rajeev_GoI) February 24, 2024
➡️Our DigitalNagriks are NOT to be experimented on with "unreliable" platforms/algos/model⛔️
➡️Safety & Trust is platforms legal obligation✅️
➡️"Sorry Unreliable" does not exempt from law⛔️https://t.co/LcjJZmZ3Qp
The minister’s response came to posts published by X users Sreemoy Talukdar and Arnab Ray, who drew attention to biased responses generated by Gemini AI. Ray published three screenshots in which he asked whether Prime Minister Narendra Modi, Ukraine’s President Zelenskyy and former US President Donald Trump were fascists. While Gemini gave diplomatic answers in the cases of Zelenskyy and Trump, the AI chatbot outright called PM Modi a “fascist” leader.
This #GeminiAI from @google is not just woke, it's downright malicious @GoogleIndia. The GOI should take note. https://t.co/AHLv2Frz0D
— Sreemoy Talukdar (@sreemoytalukdar) February 22, 2024
Responding to Talukdar, Chandrasekhar said, “These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code,” and later warned the company that being “unreliable” would not save the platform from consequences. As reported, MeitY has demanded an explanation from Google for the biased responses.
Recently, Google faced severe criticism for its AI chatbot Gemini (formerly Bard) after it failed to generate images of White people and produced distorted pictures of historical figures. Responding to the criticism, Google apologised and paused the image generation capability. In a blog post, Google’s senior vice president Prabhakar Raghavan acknowledged that Gemini was not producing the desired results and that the images generated were inaccurate or “even offensive”. He wrote, “We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version.”
He added, “One thing to remember: Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when generating images or text about current events, evolving news or hot-button topics. It will make mistakes.”
AI tools’ bias against India and Hindus
This is not the first time AI tools have been scrutinised for bias. In January 2023, ChatGPT faced criticism when the platform readily made jokes about Hindu Gods but refused to do the same about Christianity and Islam.