Google's newly released AI-powered search results are garnering a lot of attention—for the wrong reasons. After the tech behemoth announced a host of new AI-powered tools last week as part of a new "Gemini era," its trademark web search underwent a major overhaul, with natural-language responses to queries now appearing above the web results.
"Last year, we answered billions of questions as part of the search experience," Google CEO Sundar Pichai told the audience. "People are using it to search in completely new ways and ask new kinds of questions, longer and more complex questions, even searching through photos, and using it to get the best of what the web has to offer."
But the answers can be incomplete, wrong, or dangerous—whether it's delving into eugenics or failing to identify a poisonous mushroom.
Many of the "AI Overview" answers are drawn from social media posts and even joke pages where wrong answers are the whole point. Google users have shared countless problematic responses they've received from Google's AI.
When one user told Google, "I'm feeling depressed," the AI suggested that one way to deal with depression is to "jump off the Golden Gate Bridge."
Another user asked, "If I run off a cliff, can I stay in the air until I look down?" Citing a cartoon-inspired Reddit thread, Google's AI Overview confirmed the gravity-defying ability.
The strong representation of Reddit threads among the examples follows a deal struck earlier this year under which Google will use Reddit data to make it easier to "find and access the communities and conversations people want." Earlier this month, ChatGPT developer OpenAI announced that it would similarly draw on data from Reddit.
In another nonsensical answer, a Google user asked, "How many stones should a child eat?" "At least one small rock a day," Google replied, attributing the advice to a Berkeley geologist.
Many of the most absurd examples have since been removed or corrected by Google. Google did not immediately respond to a request for comment from Decrypt.
An ongoing issue with generative AI models is their tendency to invent answers, or "hallucinate." Hallucinations are arguably a form of lying, since the AI asserts something that is not true. In cases like the Reddit-sourced answers, however, the AI didn't lie: it simply relayed, accurately, the bad information its sources provided.
Thanks to a more than decade-old Reddit comment, Google suggested that adding glue is a good way to keep cheese from sliding off pizza.
Google AI overview suggests adding glue to get cheese to stick to pizza, and the source is an 11-year-old Reddit comment from user f*xsmith 😂 pic.twitter.com/uDPAbsAKeO
— Peter Yang (@petergyang) May 23, 2024
OpenAI's flagship AI chatbot, ChatGPT, has a long history of fabricating facts, including last April, when it falsely accused law professor Jonathan Turley of sexual harassment, citing a trip he never took.
The chatbot's overconfidence has led it to present anything found on the internet as fact, to pin the blame for a crisis on a former Google executive, and to accuse the company of wrongdoing under antitrust law.
The feature has also sparked jokes and memes as users peppered Google with pop-culture queries.
The rock-eating answer, meanwhile, has since been updated: it now notes that "curiosity, emotional processing problems, or eating disorders" may be behind such consumption.