Google pledges to fix Gemini’s inaccurate and biased image generation
Google’s Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.
The controversy arose as users on social media flooded feeds with examples of Gemini generating images of racially diverse Nazis, black medieval English kings, and other historically improbable scenarios.
Google Gemini Image generation model receives criticism for being ‘Woke’.
Gemini generated diverse images for historically specific prompts, sparking debates on accuracy versus inclusivity. pic.twitter.com/YKTt2YY265
— Darosham (@Darosham_) February 22, 2024
Meanwhile, critics also pointed to Gemini’s refusal to depict Caucasians, its declining to generate images of churches in San Francisco out of respect for indigenous sensitivities, and its avoidance of sensitive historical events such as Tiananmen Square in 1989.
In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, took to social media platform X to acknowledge the issue and pledge to rectify it.
For now, Google says it is pausing Gemini’s generation of images of people while it works on a fix.
While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.
Marc Andreessen, the co-founder of Netscape and a16z, recently highlighted Goody-2, an “outrageously safe” parody AI model that refuses to answer any question it deems problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.
Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.
Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models, likening it to the need for a free and diverse press.
Bindu Reddy, CEO of Abacus.AI, has expressed similar concerns about the concentration of power in the absence of a healthy ecosystem of open-source models.
As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.