Google has temporarily suspended its Gemini AI image-generation feature following a controversy over historical inaccuracies that sparked strong public backlash. The setback is the latest obstacle in the Alphabet-owned company's efforts to keep pace with rivals OpenAI and Microsoft.
Earlier this month, Google began offering image generation through its Gemini AI model, but in recent days several social media users pointed out that the model was returning historically inaccurate images. Earlier this week, Gemini generated images of Vikings, knights, founding figures of the United States, and even Nazi soldiers depicted as people of various races and ethnicities.
Artificial intelligence systems learn from the data available to them, and researchers have long warned that AI tends to replicate the racism, sexism, and other biases present in its creators and in society at large.
In this case, Google may have overcorrected in its effort to address such discrimination: some users issued prompt after prompt in a failed attempt to get the AI to produce images of white individuals.
"We acknowledge that Gemini is producing inaccuracies in some historical image generations," Google said on Wednesday (21/2/2024).
Since the launch of OpenAI's ChatGPT in November 2022, Google has been racing to ship AI software that can rival the offerings of the Microsoft-backed company.
When Google released its generative AI chatbot Bard a year ago, a promotional video showed the bot sharing inaccurate information about a planet outside Earth's solar system, sending the company's stock down as much as 9%.
Bard was rebranded as Gemini earlier this month, and Google launched a paid subscription tier that gives users access to a version of the model with stronger reasoning abilities.
"Historical contexts have more nuance, and we will further tune the model to accommodate that," said Jack Krawczyk, Senior Product Director for Gemini at Google.