Image created by AI

Google Curtails AI Chatbot Gemini on Election Queries to Counter Misinformation

Published March 13, 2024


Google has implemented constraints on its artificial intelligence chatbot, Gemini, for queries related to the 2024 elections taking place around the world, in an effort to minimize potential disruption from misinformation. The company stresses that this update, announced amid rapid advances in generative AI technology such as image and video creation, is a preemptive measure to safeguard the integrity of elections worldwide.


Heightened concerns about misinformation have not only affected public discourse but have also caught the attention of business leaders and regulators, prompting governments to consider stricter regulation of AI technologies. As preparations heat up for pivotal elections such as the US presidential race, with a potential rematch between Joe Biden and Donald Trump, tech giants like Google are treading with caution. A Google spokesperson outlined the company's approach on Tuesday, saying that in light of the many elections set for 2024 and out of an "abundance of caution," Gemini would be restricted from returning election-related responses.


The influence of AI on election-related information has become a point of concern for countries beyond the US, such as South Africa and India, the latter being the world's most populous democracy. India, in particular, has urged technology companies to obtain government approval before publicly releasing AI tools deemed "unreliable" or still in an experimental phase, and to clearly label such tools as potentially error-prone.


Google's stringent approach comes after the company ran into trouble with Gemini when historical depictions of people generated by the chatbot were marred by inaccuracies, prompting Google to suspend Gemini's image-generation feature in late February. CEO Sundar Pichai called the chatbot's responses "biased" and "completely unacceptable" and committed to addressing the issues.


The move follows similar steps by other tech giants: Meta Platforms said last month that it would establish a specialized team to counter disinformation and the misuse of generative AI ahead of the European Parliament elections in June.


Google's decision has raised the question of how large technology corporations can effectively balance innovation in AI with the responsibility of minimizing potential harm, particularly in the delicate context of electoral politics. The ongoing debate around AI and misinformation is likely to intensify as the global community proceeds into an election-heavy year, and the need for responsible AI deployment becomes ever more pressing.


