
Google admits its AI search feature screwed up

AI Overviews: Where Do They Come From, and Why Do They Get Things Wrong?

Nothing in the post suggested Google would roll back the summaries. The company says it will continue to monitor user feedback and adjust the feature as needed.

Google says the feature will rely less on user-generated content from sites such as Reddit, and will show AI Overviews less often for the kinds of queries where users haven't found them helpful.

OpenAI's release of its chatbot ChatGPT in November 2022 set off an industry-wide race; Google's response included a generative AI upgrade to its most popular and lucrative product, search. Microsoft, an OpenAI partner, used the startup's technology to upgrade its own search engine, Bing, two months after ChatGPT's debut. The upgraded Bing was beset by AI-generated errors and odd behavior, but the company's CEO, Satya Nadella, said the move was designed to challenge Google: "I want people to know we made them dance."

Not every viral example was real, either: some reports circulating on social media were faked. The New York Times clarified its reporting on AI Overviews, noting that the feature never told users to jump off the Golden Gate Bridge. Google, for its part, said that although some people have implied it returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression, those AI Overviews never appeared.

According to Liz Reid, Google's head of search, some of the errors stemmed from a failure to detect humor. "We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she wrote. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza."

For a question like how many rocks one should eat, there are few serious sources for a search engine to draw on. According to Reid, the AI tool found an article from The Onion, a satirical website, that had been reposted by a software company, and misinterpreted it as factual.

Richard Socher, an AI researcher who last year launched an LLM-powered search offering at his company You.com, says that building a search prototype that does not tell you to eat rocks takes a lot of effort. His company, he says, has developed about a dozen tricks to keep LLMs from messing up in search.

Socher says wrangling LLMs takes considerable effort because the underlying technology has no real understanding of the world and because the web is riddled with untrustworthy information. "In some cases it is better to actually not just give you an answer, or to show you multiple different viewpoints," he says.
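One of the safeguards Socher alludes to can be sketched in a few lines: when retrieved sources disagree, decline to synthesize a single answer and surface the competing viewpoints instead. The stance labels, data, and function below are invented for illustration; this is a minimal sketch of the idea, not You.com's actual implementation.

```python
# Hedged sketch: if retrieved snippets agree, return one answer;
# if they disagree, return multiple viewpoints rather than guessing.
# Assumes each snippet has already been tagged with a "stance" label
# (in practice this classification would itself be a model's job).

def answer_or_viewpoints(snippets):
    """Return a single answer on consensus, else a list of viewpoints."""
    stances = {s["stance"] for s in snippets}
    if len(stances) == 1:  # all sources agree: safe to give one answer
        return {"answer": snippets[0]["text"]}
    # sources conflict: show the disagreement instead of picking a side
    return {"viewpoints": [s["text"] for s in snippets]}

consensus = [
    {"stance": "no", "text": "Rocks are not food."},
    {"stance": "no", "text": "Do not eat rocks."},
]
conflict = [
    {"stance": "yes", "text": "Geologists recommend one rock a day."},
    {"stance": "no", "text": "Rocks are not food."},
]

print(answer_or_viewpoints(consensus))  # single answer
print(answer_or_viewpoints(conflict))   # multiple viewpoints
```

The design choice here is refusal to over-commit: a search product can degrade gracefully to "here are differing views" instead of confidently synthesizing one wrong answer.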

After its new generative AI search feature advised people to eat rocks, put glue on pizza, and made other bizarre suggestions, Google admitted Thursday that it needed to make adjustments. The episode shows the risks of the race to deploy generative AI, as well as some of the limitations of the technology.

Google's AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI's ChatGPT, to generate written answers to some search queries by summarizing information found online. Despite LLMs' impressive fluency with text, the software can put a convincing gloss on errors and untruths. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people may use the information to make important decisions.
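The retrieve-then-summarize pattern described above, and its failure mode, can be sketched simply: gather snippets for a query, filter out low-trust domains, and only build a summarization prompt when enough trusted evidence remains. The blocklist, trust rules, and data below are invented for illustration; this is a minimal sketch of the general pattern, not Google's actual pipeline.

```python
# Hedged sketch of an AI Overview-style pipeline: summarize retrieved web
# snippets, but only when enough trustworthy sources survive filtering.
# The domain blocklist and example snippets are hypothetical.

LOW_TRUST_DOMAINS = {"theonion.com"}  # satire should not be read as fact

def gather_evidence(snippets, min_sources=2):
    """Keep snippets from trusted domains; refuse on thin evidence."""
    trusted = [s for s in snippets if s["domain"] not in LOW_TRUST_DOMAINS]
    if len(trusted) < min_sources:
        # a "data void": better to show plain results than summarize noise
        return None
    return trusted

def build_prompt(query, evidence):
    """Compose a grounded summarization prompt from surviving snippets."""
    sources = "\n".join(f"- ({s['domain']}) {s['text']}" for s in evidence)
    return (f"Summarize an answer to '{query}' using ONLY these sources, "
            f"citing each claim:\n{sources}")

snippets = [
    {"domain": "theonion.com",
     "text": "Geologists recommend eating one rock per day."},
    {"domain": "health.example.org",
     "text": "Rocks are not food and should not be eaten."},
]

evidence = gather_evidence(snippets)
if evidence is None:
    print("Too little trusted evidence; skip the AI overview.")
else:
    print(build_prompt("should I eat rocks", evidence))
```

With the sample data only one trusted snippet survives, so the sketch skips the overview entirely: exactly the fallback a sparse-source query like the rock-eating one calls for.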