TLDR:
- Google’s AI tool Gemini faced backlash for generating historically inaccurate images and producing politically correct but absurd responses.
- The root of the problem lies in biased training data and the difficulty of representing the complexity of human history and culture.
In recent days, Google’s AI tool Gemini has drawn criticism for inaccurate image generation and for politically correct but absurd responses. The tool produced historically inaccurate images, such as depicting a black man among the US Founding Fathers and among German soldiers from World War Two. Google apologized and paused the feature, but then faced further criticism over the chatbot’s answers to questions about political correctness. The problem stems in part from the biased data that AI tools are trained on, which leads to embarrassing mistakes and oversimplified output. Fixing the image generator is a complex task that may take longer than anticipated.
The tech sector as a whole, including Google’s competitor OpenAI, struggles with bias in AI, and correcting it requires human input. Google’s attempts to offset these biases have backfired, creating new problems of their own. Despite its lead in AI technology, Google’s missteps with Gemini have raised concerns about the company’s approach to AI development.