hallucinations

google

Deeper insights into retrieval augmented generation: The role of sufficient context

Google Research has introduced "sufficient context" as a new metric for evaluating Retrieval Augmented Generation (RAG) systems, arguing that relevance alone is an inadequate measure of retrieval quality. By focusing on whether a retrieved context contains all the information needed to definitively answer a query, the researchers developed an LLM-based autorater that classifies context sufficiency with 93% accuracy. This framework reveals that many RAG failures, hallucinations in particular, occur because models fail to abstain from answering when the retrieved information is incomplete or contradictory.

## Defining and Measuring Sufficient Context

* Sufficient context is defined as containing all information necessary to provide a definitive answer; insufficient context is relevant but incomplete, inconclusive, or contradictory.
* The researchers built an "autorater" on Gemini 1.5 Pro, using chain-of-thought prompting and a 1-shot example to evaluate query-context pairs (a hedged sketch appears at the end of this post).
* Benchmarked against human expert "gold standard" labels, the autorater achieved 93% accuracy, outperforming specialized models such as FLAMe (a fine-tuned PaLM 24B) and NLI-based methods.
* Unlike traditional metrics, this approach does not require ground-truth answers to evaluate the quality of the retrieved information.

## RAG Failure Modes and Abstention Challenges

* State-of-the-art models (Gemini, GPT, Claude) perform well when given sufficient context but struggle when context is lacking.
* The primary driver of hallucinations in RAG systems is the abstention problem: a model attempts to answer from insufficient context rather than saying "I don't know."
* Analyzing responses through the lens of sufficiency lets developers distinguish "knowledge" (the model knows the answer internally) from "grounding" (the model correctly uses the provided context).

## Implementation in Vertex AI

* The insights from this research have been integrated into the Vertex AI RAG Engine via a new LLM Re-Ranker feature.
* The re-ranker prioritizes retrieved snippets by their likelihood of supporting a sufficient answer, significantly improving retrieval metrics such as normalized Discounted Cumulative Gain (nDCG); a sketch of the general idea closes this post.
* By filtering for sufficiency during retrieval, the system reduces the chance that the LLM is forced to reason over misleading or incomplete data.

To minimize hallucinations and improve the reliability of RAG applications, developers should move beyond keyword-based relevance and add re-ranking stages that specifically evaluate context sufficiency. Ensuring that an LLM actually has grounds to answer from the provided data, and training it to abstain when that data is missing, is essential for building production-grade generative AI tools.
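
The paper does not publish its exact autorater prompt, so the following is only a minimal sketch of the approach as described: a chain-of-thought, 1-shot classification prompt sent to an LLM. The `generate` function and the prompt wording are assumptions, not the authors' artifacts.

```python
# Minimal sketch of a sufficient-context autorater. `generate(prompt) -> str`
# is a hypothetical stand-in for your LLM client (e.g. a Gemini 1.5 Pro call);
# the 1-shot example and chain-of-thought instructions paraphrase the idea,
# not the paper's actual prompt.

AUTORATER_PROMPT = """\
You judge whether a context is SUFFICIENT to definitively answer a question.
Think step by step, then end with exactly one line: "Verdict: SUFFICIENT"
or "Verdict: INSUFFICIENT".

Example:
Question: When did Marie Curie win her second Nobel Prize?
Context: Marie Curie won the Nobel Prize in Chemistry in 1911.
Reasoning: The context gives the year of her Chemistry prize, which was her
second Nobel, so the question is answerable from the context alone.
Verdict: SUFFICIENT

Question: {question}
Context: {context}
Reasoning:"""


def is_context_sufficient(question: str, context: str, generate) -> bool:
    """Classify a query-context pair; True means the context is sufficient."""
    response = generate(AUTORATER_PROMPT.format(question=question, context=context))
    # Parse only verdict lines so intermediate reasoning cannot confuse us.
    verdicts = [line for line in response.splitlines()
                if line.strip().startswith("Verdict:")]
    return bool(verdicts) and "INSUFFICIENT" not in verdicts[-1].upper()
```

Note that this classifier needs no ground-truth answer, which is exactly what makes it usable at retrieval time.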
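The abstention problem suggests an obvious guardrail: gate generation on the sufficiency verdict. This is a hedged illustration of that idea, not the paper's method; it reuses `is_context_sufficient` from the sketch above and the same assumed `generate` call.

```python
def answer_with_abstention(question: str, context: str, generate) -> str:
    """Answer only when the autorater deems the context sufficient;
    otherwise abstain instead of risking a hallucinated answer."""
    if not is_context_sufficient(question, context, generate):
        return ("I don't know: the retrieved context does not contain "
                "enough information to answer this question.")
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return generate(prompt).strip()
```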
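Vertex AI's LLM Re-Ranker is a managed feature whose internals are not public; the sketch below only illustrates the general recipe of re-ranking snippets by a sufficiency score and measuring the improvement with nDCG. The 0-to-1 `sufficiency_score` callable is a hypothetical stand-in for an LLM-based scorer.

```python
import math
from typing import Callable, List, Sequence


def rerank_by_sufficiency(
    question: str,
    snippets: List[str],
    sufficiency_score: Callable[[str, str], float],  # hypothetical LLM scorer in [0, 1]
) -> List[str]:
    """Order snippets so those most likely to support a sufficient answer come first."""
    return sorted(snippets, key=lambda s: sufficiency_score(question, s), reverse=True)


def ndcg(relevances: Sequence[float], k: int) -> float:
    """Normalized Discounted Cumulative Gain at rank k for one ranked list.

    DCG@k = sum over ranks i=1..k of rel_i / log2(i + 1); nDCG divides by
    the DCG of the ideal (descending-relevance) ordering.
    """
    def dcg(rels: Sequence[float]) -> float:
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels[:k], start=1))

    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

As a quick sanity check: if the single sufficient snippet sits at rank 3 of 3 (relevances `[0, 0, 1]`), nDCG@3 is 1/log2(4) = 0.5; moving it to rank 1 raises nDCG@3 to 1.0, which is the kind of gain a sufficiency-aware re-ranker is meant to deliver.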