If you use social media platforms, you must have heard of ChatGPT, an AI-based platform that can answer almost anything. Recently, there was a controversy around the platform. While it happily replied with jokes on Hinduism and Hindu Gods, it refrained from making jokes about Christianity and Islam or their Gods.
Maybe it fears ‘Sar Tan Se Juda,’ but it was enough to cause outrage on social media platforms. Recent changes have, however, made ChatGPT more sensitive. There are still many issues and some bias against Hindus, but it appears the makers are listening and tuning the algorithms to be more careful. By the way, when we asked ChatGPT, “What is ‘Sar Tan Se Juda’?”, it claimed the slogan was related to the Farmer Protests in India, meaning that the farmers were not ready to separate their head (the leadership) from the body (the farmers) and would stand united for their rights and demands.
Anyway, let us come back to ChatGPT and how it can be used. ChatGPT is a chatbot developed by OpenAI using a Large Language Model based on GPT-3.5. GPT stands for Generative Pre-trained Transformer, an autoregressive language model that uses deep learning to produce human-like responses to queries. In simple terms, ChatGPT can be described as a deep learning program that gathers information from different sources and answers questions in a human-like way.
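To make the “autoregressive” idea concrete, here is a minimal sketch of text generation using the openly available GPT-2 model through the Hugging Face transformers library. GPT-2 is a much smaller predecessor of the GPT-3.5 model behind ChatGPT, and the library, model name, and prompt here are our own illustrative choices, not part of ChatGPT itself; the point is only that such a model extends a prompt one predicted token at a time.

```python
# Minimal sketch of autoregressive generation with GPT-2 (an earlier, openly
# available relative of GPT-3.5). Illustrative only -- ChatGPT itself is
# accessible only through OpenAI's service, not through this library.
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts the next token and appends it to the prompt.
result = generator("ChatGPT is a chatbot that", max_new_tokens=30)
print(result[0]["generated_text"])
```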
Who created ChatGPT?
San Francisco-based artificial intelligence company OpenAI Inc created ChatGPT. It is the non-profit parent of the for-profit OpenAI LP. The company is best known for DALL-E, a deep-learning model that generates images from text instructions. Microsoft is a partner and investor in the company, and the two together developed the Azure AI Platform.
How are Large Language Models trained?
Large Language Models are trained on vast amounts of data. The system uses this data to predict responses as accurately as possible and to form understandable sentences. The more data it is fed, the more accurate it gets. Because an LLM predicts the next word in a sentence based on the information it has, it can be thought of as a type of auto-complete. However, the scale at which LLMs operate is far more sophisticated and massive than simple auto-complete, and that scale gives them the ability to produce substantial responses from a short sentence or even a single word.
For example, you can ask it to write a poem on ‘cat’ or just write ‘cat’, and it will respond accordingly.
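As a toy illustration of this auto-complete idea (using an invented one-line “corpus” of our own, nothing from ChatGPT), the sketch below predicts the next word simply by counting which word has followed which in the training text. Real LLMs do the same kind of prediction with billions of parameters and entire conversations as context.

```python
# Toy next-word predictor: count which word follows which in a tiny,
# made-up training text, then suggest the most frequent continuation.
# Real LLMs do this at a vastly larger scale and over whole contexts.
from collections import defaultdict, Counter

training_text = "the cat sat on the mat the cat chased the mouse".split()

# Count word -> next-word occurrences.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', the most frequent follower of 'the'
```

The more (and more varied) text such a model sees, the better its guesses become, which is the sense in which feeding more data makes the system more accurate.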
The limitations of ChatGPT
The algorithms behind ChatGPT restrict it from answering “toxic” questions. For example, it will not respond to prompts that could be offensive to a religion or a person, and it will not make jokes or hurtful remarks about them.
Furthermore, it is not always possible to get an accurate answer from the system. As stated above, it linked ‘Sar Tan Se Juda’ to the farmers’ protests. We had to ask in a complete sentence to get the correct answer: only when we asked, “What is Gustakh e Nabi ki ek saza sar tan se juda sar tan se juda,” did it respond correctly.
Also, if you ask questions like “Which mammal lays the largest egg,” it gives a weird response and says it’s an elephant. A Twitter user reported the issue.
didn’t know this, TIL pic.twitter.com/7yqJBB1lxS
— Fiora (@FioraAeterna) December 5, 2022
However, it gave a better answer when we asked the same question. This suggests the system is learning, or that the team behind it is consistently making it “better” suited for the world.
The anti-Hindu bias
Ever since ChatGPT became famous, it has been accused of being anti-Hindu. The responses it threw at users were shocking, to say the least. Though things have changed now, earlier the system was okay with making jokes about Hindu Gods and Hinduism but not about Islam or Christianity.
Twitter user Mahesh Vikram Hegde explained how jokes on Hindu deities were allowed on the platform but not on Jesus and Prophet Muhammad.
ChatGPT is a chatbot launched by OpenAI
ChatGPT is allowed to comment on Hindu deities
But it is not permitted to speak on Isl@m & Christi@nity
Amazing hatred towards Hinduism! pic.twitter.com/ev0LTrhPU6
— Mahesh Vikram Hegde 🇮🇳 (@mvmeet) January 7, 2023
When we checked, the system had been modified. It no longer tells jokes about Bhagwan Krishna.
Another prominent Twitter user, Arun Pudur, noted how jokes on Bhagwan Ram were allowed.
How Woke is #ChatGPT ?
When it asked to make a Joke about Muslim & Christian Gods, it said it is against its Policy, but it had no problem mocking Hindu Gods.
Still, think AI is unbiased and can’t be programmed to target certain communities? pic.twitter.com/eynsJ4kQlt
— Arun Pudur (@arunpudur) January 9, 2023
However, jokes on Bhagwan Ram have also stopped now.
CoHNA (Coalition of Hindus of North America) asked some interesting questions: whether Hinduphobia, Islamophobia, and Antisemitism are real. Look at the answers.
One of the 1st questions we tested a few days ago was “Is Hinduphobia real?” We found the #AI response to be biased and sending mixed messages, especially when compared to its clear measured responses to questions against #Antisemitism and #Islamophobia. 2/n pic.twitter.com/1NfOuqgmBg
— CoHNA (Coalition of Hindus of North America) (@CoHNAOfficial) January 6, 2023
In our case, it gave short answers, but the difference between the answers on Hinduphobia, Islamophobia, and Antisemitism was evident. For both Islamophobia and Antisemitism, it mentioned that any kind of discrimination is unacceptable, a statement missing from its answer on Hinduphobia.
When we tried getting a response from ChatGPT, it consistently said that it could not speak against a religion or a person. It also refused to make jokes about Elon Musk. However, when we asked it about the Swastika and whether it is different from the Hakenkreuz, ChatGPT suggested that the Swastika is a Nazi symbol and that there is no difference between the two.
When asked what the Swastika is.
When asked what the Hakenkreuz is.
When asked if there is a difference between the Swastika and the Hakenkreuz.
CoHNA also raised the issue of the Swastika.
On #Swastika, the #AI responses go above & beyond to ensure it remains associated as a primary symbol of Nazis, by falsely suggesting #Hakenkreuz (Hooked Cross) is just “another symbol”. For the cross it rightfully advises nuances. Why is @OpenAI creating #doublestandards ?4/n pic.twitter.com/vA5ZBAgzpm
— CoHNA (Coalition of Hindus of North America) (@CoHNAOfficial) January 6, 2023
Because the information on the internet is largely anti-Hindu, especially when it comes to the Swastika, ChatGPT failed to find the correct answer. Hindu leadership is trying to change the perspective on the Swastika and detach it from its untruthful association with the Hakenkreuz.
Ethics in AI
It is a matter of fact that Artificial Intelligence is created by human programmers, and it is impossible to be 100 percent unbiased while feeding information into a system. This is why ethical guidelines exist for creating AI-based systems.
Ethics is a set of moral principles that help humans differentiate between right and wrong. In AI, ethics is a set of guidelines that advise on the design of systems and the outcomes of queries. As humans have biases, these can be inherited from the data we feed to AI. Since data is the foundation of all machine learning algorithms, giving AI a structure that avoids bias is essential. However, it is a learning process; no company can claim to have created a perfect AI system. This is why you may get different responses depending on when you ask and other factors.
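To illustrate how bias can creep in from data alone, here is a deliberately simplified sketch with invented, neutrally named topics and made-up labels (none of this comes from ChatGPT or its training data). A naive word-count classifier trained on skewed examples ends up judging the same neutral sentence differently depending only on the topic word.

```python
# Sketch of bias inherited from training data. The sentences and labels are
# invented for illustration: "topic_a" examples are mostly labelled negative,
# "topic_b" examples mostly positive, so the model learns a skewed view.
from collections import Counter

training_data = [
    ("topic_a festival crowd", "negative"),
    ("topic_a temple visit", "negative"),
    ("topic_a celebration", "positive"),
    ("topic_b festival crowd", "positive"),
    ("topic_b church visit", "positive"),
    ("topic_b celebration", "positive"),
]

# Count how often each word appears under each label.
word_label_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_label_counts[label].update(text.split())

def score(text):
    """Label a sentence by which class its words co-occurred with more often."""
    pos = sum(word_label_counts["positive"][w] for w in text.split())
    neg = sum(word_label_counts["negative"][w] for w in text.split())
    return "positive" if pos >= neg else "negative"

# The same neutral word "festival" is judged differently per topic,
# purely because of the skew in the training examples.
print(score("topic_a festival"))  # -> 'negative'
print(score("topic_b festival"))  # -> 'positive'
```

This is the sense in which the data “fed to the algorithm” decides its behaviour: correcting such skew requires curating the data and the guidelines, which is what ethical frameworks for AI try to formalize.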
Established principles of AI ethics include respect for persons, principles borrowed from healthcare ethics, and dealing with issues with fairness and equality. As more information is fed into the system and the algorithms improve, the system becomes more sensitive toward religions, groups, and individuals. Interestingly, such control over the algorithms can also introduce unprecedented bias.
For example, we asked if Joe Biden was a good President, if Donald Trump was a good President, and if Narendra Modi was a good Prime Minister, and the answers clearly showed its bias.
In the case of Biden, it did not mention if and why he was criticized.
However, in the case of Trump and PM Modi, a paragraph about their criticism was added.
AI is the future. However, the programmers behind AI machines will define how they function. If not taken care of now, applications like ChatGPT will end up like Wikipedia, which is known for its left-liberal stance and often works against the interests of countries like India and of people on the right end of the political spectrum.