Image created by AI

Google's Gemini AI Embroiled in Bias Controversy; Revamp Imminent

Published February 27, 2024

In a world increasingly governed by the capabilities of artificial intelligence (AI), racially biased technology raises profound ethical alarms. Google, one of the leading tech giants, is at the center of just such a controversy: its AI tool Gemini has shown a disturbing tendency to generate historical images that exclude depictions of white people.


This startling revelation about Gemini, which is developed by Alphabet's Google DeepMind division, was addressed publicly at the Mobile World Congress in Barcelona by DeepMind CEO Demis Hassabis. The tool's image-generation feature, launched to create pictures of people, displayed an unintended bias so pronounced that even figures historically understood to be white, such as popes and Vikings, were depicted as Black or Asian.


This fault in the AI's judgment is not just perplexing; it also fuels the debate over AI ethics and racial bias. Alongside these image-generation errors, critics flagged an apparently opposite pattern: a simple Google search for terms like "shoplifting" predominantly returned images of white individuals, raising further red flags about Google's content moderation and algorithmic fairness.


In response to these reports on social media, and anticipating the broader implications of such inconsistency, Hassabis confirmed Google's decision to pause the feature. The intention is to fix the issue and relaunch a revised version of Gemini within the coming weeks.


The gravity of the situation is reflected not only in the ethical concerns it raises but also in its economic repercussions. Alphabet's shares dipped 4.5% on Tuesday morning following the controversy, sharply underperforming the S&P 500 index. The blow comes amid the tech giant's scramble to compete with OpenAI's Microsoft-backed ChatGPT, which has defined the pace of generative AI since its launch in November 2022.


Google has had its brush with misinformation before: a promotional video for its generative AI chatbot Bard, as Gemini was then known, inaccurately portrayed facts about an extrasolar planet. The mix-up sent Alphabet's shares down a steep 9%, shaking investor confidence.


In the aftermath, and following Bard's recent rebranding as Gemini, Google introduced subscription plans in an effort to fine-tune the AI's reasoning capabilities. These plans are an attempt to make Gemini a more refined, accurate, and unbiased tool for generating information.


Yet the fundamental concern remains: are the foundations of AI, especially at influential giants like Google, tainted with prejudice, skewed data, or a "wokeness" that deviates from factual accuracy? Critics and industry experts argue that these platforms bear a significant responsibility to ensure the integrity and neutrality of their products.


In light of the biases surfacing in Gemini, calls for an independent investigation have gained momentum. Such an inquiry could unveil the root causes of the racial prejudices evident in the model, including how the biases were introduced and why they initially went unnoticed. Given the vast influence Google wields over the internet, ensuring that its AI-driven products promote factual knowledge rather than perpetuate harmful stereotypes is not just good practice but a moral obligation.


To Sunday AI, commitment to ethical AI development is an imperative that cannot be ignored. The implications of AI systems that reinforce racial biases are too significant to overlook, and robust solutions are critical to the future credibility and efficacy of products like Gemini.

