Google To “Ground” Its AI Models In Truth

Google is launching a new initiative aimed at enhancing the reliability of generative AI by leveraging its search engine to ground AI-generated content. Announced as part of a broader set of updates at Cloud Next, the feature is designed to mitigate “hallucinations,” or inaccuracies in AI outputs, by giving users more current information and verified sources.

Google also announced that it’s integrating its search capabilities into Vertex AI (its machine-learning platform for businesses), allowing AI query results to be grounded on both public internet data and proprietary company data. This move parallels Microsoft’s efforts with its Bing search engine and Copilot chatbot, underscoring a growing industry trend toward improving AI reliability through grounding.
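
For readers who want to see what this looks like in practice, the Vertex AI Python SDK exposes Google Search grounding as a tool attached to a generation request. The sketch below is a minimal, non-authoritative example; the project ID, region, and model name are placeholders, and the class names (`Tool.from_google_search_retrieval`, `grounding.GoogleSearchRetrieval`) assume the SDK's grounding interface at the time of writing and may vary by version.

```python
# Minimal sketch: grounding a Gemini request on Google Search via Vertex AI.
# Assumes the Vertex AI Python SDK's grounding interface; names and
# availability may differ by SDK version and region.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Placeholder project and region -- substitute your own.
vertexai.init(project="your-project-id", location="us-central1")

# Attach Google Search retrieval as a grounding tool.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.0-pro")
response = model.generate_content(
    "What did Google announce about grounding at Cloud Next?",
    tools=[search_tool],
)

# Grounded responses also carry attribution metadata pointing at the web
# sources used -- the "verified sources" part of the pitch.
print(response.text)
```

Grounding on proprietary company data follows the same pattern, except the retrieval tool points at a Vertex AI Search data store rather than the public web.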

Limiting hallucinations in LLMs is a good goal, unless the variability of probabilistic output is exactly what you need to enhance the outcome you are seeking. Said differently, sometimes a little nonsense is a feature, not a bug. Generally speaking, however, the more accurate and deterministic the output, the better for most business use cases.
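
In practice, that dial usually shows up as sampling parameters on the request. Below is a minimal sketch, again assuming the Vertex AI Python SDK and placeholder project and model names: pushing the temperature toward zero makes output more repeatable and conservative, while raising it buys back some of that useful “nonsense” for creative tasks.

```python
# Sketch: trading determinism for creativity with sampling parameters.
# Assumes the Vertex AI Python SDK; parameter names mirror most LLM APIs.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="your-project-id", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.0-pro")  # placeholder model name

# Near-deterministic settings: suited to extraction, summarization,
# and other business workflows where accuracy matters most.
factual = model.generate_content(
    "Summarize Google's grounding announcement in one sentence.",
    generation_config=GenerationConfig(temperature=0.0, top_p=1.0),
)

# Higher temperature: more varied, occasionally "nonsensical" output,
# which can be a feature for brainstorming or creative drafting.
creative = model.generate_content(
    "Pitch three playful taglines for a search-grounded chatbot.",
    generation_config=GenerationConfig(temperature=1.0, top_p=0.95),
)

print(factual.text)
print(creative.text)
```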


Disclosure: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
