John,
There’s something funny going on at the top of that Google Search page.
It’s called “AI Overview,” and it’s supposed to be Artificial Intelligence’s best summary of what the internet is saying about your query.
In reality, AI Overview has a tendency to hallucinate as if it were on psilocybin. It can’t tell the difference between lies and facts, and it doesn’t account for irony or humor. It treats satire as if it were legitimate news, and it may pull things out of thin air just because the words happen to appear together in its training data.
Recently reported results that have gone viral include the assertions that eating rocks is good for you and that glue makes an excellent pizza sauce.
It’s not all fun and games, though. AI Overview also has a propensity to disseminate harmful conspiracy theories. Asked how many U.S. presidents have been Muslim, for example, it replies, “The United States has had one Muslim president, Barack Hussein Obama.”
The Head of Search at Google, Liz Reid, has some explaining to do. It turns out, she says, that the rock-eating advice comes straight from the satirical pages of The Onion, which reported that U.C. Berkeley geologists have found the American diet “‘severely lacking’ in the proper amount of sediment.”
Experiments are all well and good, but “AI Overview” is the first thing that pops up when you do a search looking for something you can actually rely on. AI Overview is coming to be seen as a joke at best, misleading and dangerous at worst.
Tell Google CEO Sundar Pichai it’s time to take responsibility for the bizarre, chronic inaccuracy of AI Overview. A general warning that AI Overview is experimental is not enough: it should be dropped altogether from Google’s search results until it actually works.
What happens if we ask AI Overview itself about the controversy?
AI Overview does admit that it has “had some problems,” listing specific examples where it has given incorrect information, culled from inadequate sources. But in conclusion, AI Overview insists its errors are minimal, as it claims that “potentially harmful content only appears in response to less than one in every 7 million unique queries.”
How did it come up with that strange -- and frankly, unbelievable -- statistic? Ask AI Overview to explain, and it quits.
Google reassures us that “users can also turn off AI Overviews if they're concerned about their accuracy.” But when we click on More Information to find out how, we are taken to a Google FAQ page (not an AI summary) that says,
“Turning off the ‘AI Overviews and more’ experiment in Search Labs will not disable all AI Overviews. AI Overviews are a core Google Search feature, like knowledge panels. Features can’t be turned off.”
The page goes on to tell us we can filter out AI Overview results after we have performed a search, but this means the unreliable information has already reached our screens.
Tell Google to remove AI Overview from its search results, and to return to what Search does so well: identifying relevant internet sources that we need to check for accuracy ourselves.
Google should probably just admit the AI Overview experiment has revealed mainly that the internet is full of questionable information, rather than presenting these summaries as if they were facts.
Thanks for taking action today.
- Amanda
Amanda Ford, Director
Democracy for America
Advocacy Fund