Google blames users for badly inaccurate ‘AI Overview’ results
“AI Overview,” a recently unveiled artificial intelligence feature from search giant Google, has been producing inaccurate and dangerous summaries in response to user searches, and Google doesn't appear to have a definitive solution to the problem.
As of this writing, Google has disabled some queries for its AI Overview feature after it became widely known that the system was producing inaccurate and harmful results.
Reports began circulating across social media and news networks that, when a user asked the search engine how to keep cheese from sliding off a pizza, the AI system reportedly returned text suggesting the user add glue. In another troubling display, the AI told users that at least two dogs have owned hotels, pointing to a nonexistent statue as evidence.
Although many of the erroneous results may seem funny or harmless, the main concern is that the consumer-facing model generating AI Overview content presents inaccurate results with the same outward confidence as accurate ones.
So far, Google's response has been to remove queries that produce inaccurate results from the system as it gathers data, Google representative Meghann Farnsworth told The Verge in an email. Essentially, Google appears to be playing a metaphorical game of whack-a-mole with the problem.
Confusingly, Google also appears to be shifting blame for the faulty queries onto the users who made them.
Per Farnsworth:
“Most of the examples we've looked at are unusual queries, and we've also seen examples that have been doctored or that we haven't been able to reproduce.”
It's not currently clear how users are supposed to avoid making “unusual queries,” and, as is common with large language models, Google's AI system tends to give different answers to the same question when asked multiple times.
Cointelegraph reached out to Google for further clarification and did not receive an immediate response.
Although it seems Google's AI system still needs considerable work, Elon Musk, founder of a rival AI company, believes such machines will surpass human capability before the end of 2025.
Cointelegraph recently reported that Musk told conference-goers at the VivaTech 2024 event in Paris that he believed xAI could catch up with OpenAI and Google's DeepMind by the end of 2024.
Related: Political correctness in AI systems is a big concern: Elon Musk