synthetic-data

3 posts

google

A picture's worth a thousand (private) words: Hierarchical generation of coherent synthetic photo albums

Researchers at Google have developed a hierarchical method for generating differentially private (DP) synthetic photo albums, providing a way to share representative datasets while protecting sensitive individual information. By using an intermediate text representation and a two-stage generation process, the approach maintains thematic coherence across multiple images in an album, a significant challenge for traditional synthetic data methods. This framework allows organizations to apply standard, non-private analytical techniques to safe synthetic substitutes rather than modifying every individual analysis method for differential privacy.

## The Hierarchical Generation Process

* The workflow begins by converting original photo albums into structured text: an AI model generates detailed captions for each image and a summary for the entire album.
* Two large language models (LLMs) are privately fine-tuned using DP-SGD: the first is trained to produce album summaries, and the second generates individual photo captions based on those summaries.
* Synthetic data is then produced hierarchically: the model first generates a global album summary to serve as context, followed by a series of individual photo captions that remain consistent with that context (see the sketch at the end of this summary).
* The final step uses a text-to-image model to transform the private, synthetic text captions back into a set of coherent images.

## Benefits of Intermediate Text Representations

* Text summarization is inherently privacy-enhancing because it is a "lossy" operation; the text description is unlikely to capture the exact unique details of an original photo.
* Using text as a midpoint allows for more efficient resource management, since generated albums can be filtered and curated at the text level before undergoing the computationally expensive image-generation step.
* The hierarchical approach ensures that photos within a synthetic album share the same characters and themes, because every caption in a set is derived from the same contextual summary.
* Training two separate models with shorter context windows is significantly more efficient than training one large model, because the computational cost of self-attention scales quadratically with context length.

This hierarchical, text-mediated approach demonstrates that high-level semantic information and thematic coherence can be preserved in synthetic datasets without sacrificing individual privacy. Organizations should consider this workflow, translating complex multi-modal data into structured text before synthesis, to scale differentially private data generation for advanced modeling and analysis.
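A minimal Python sketch of the two-stage sampling loop described above. The `summary_lm`, `caption_lm`, and `text_to_image` callables are hypothetical stand-ins for the two DP-SGD fine-tuned models and a text-to-image model; this is an illustration of the workflow, not the published implementation.

```python
# Sketch of hierarchical album generation, assuming `summary_lm`,
# `caption_lm`, and `text_to_image` are callables wrapping the two
# privately fine-tuned LLMs and a text-to-image model (all hypothetical).

def generate_synthetic_album(summary_lm, caption_lm, text_to_image, n_photos=5):
    # Stage 1: sample one global album summary to serve as shared context.
    album_summary = summary_lm("Write a one-paragraph photo album summary:")

    # Stage 2: sample each caption conditioned on the same summary; this
    # shared context is what keeps characters and themes coherent.
    captions = [
        caption_lm(
            f"Album summary: {album_summary}\n"
            f"Write a caption for photo {i + 1} of {n_photos}:"
        )
        for i in range(n_photos)
    ]

    # Filter and curate at the text level, before the expensive image step.
    captions = [c for c in captions if c.strip()]

    # Final step: render the private synthetic captions into images.
    images = [text_to_image(c) for c in captions]
    return album_summary, captions, images
```

Keeping the filtering step at the text level reflects the efficiency argument above: discarding a bad album costs one string comparison rather than several image generations.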

google

Benchmarking LLMs for global health

Google Research has introduced a benchmarking pipeline and a dataset of over 11,000 synthetic personas to evaluate how large language models (LLMs) handle tropical and infectious diseases (TRINDs). While LLMs excel at standard medical exams like the USMLE, this study reveals significant performance gaps when models encounter the regional context shifts and localized health data common in low-resource settings. The research concludes that integrating specific environmental context and advanced reasoning techniques is essential for making LLMs reliable decision-support tools for global health.

## Development of the TRINDs Synthetic Dataset

* Researchers created a dataset of more than 11,000 personas covering 50 tropical and infectious diseases to address the lack of rigorous evaluation data for out-of-distribution medical tasks.
* The process began with "seed" templates based on factual data from the WHO, CDC, and PAHO, which were then reviewed by clinicians for clinical relevance.
* The dataset was expanded using LLM prompting to include diverse demographic, clinical, and consumer-focused augmentations.
* To test linguistic distribution shifts, the seed set was manually translated into French to evaluate how language changes affect diagnostic accuracy.

## Identifying Critical Performance Drivers

* Evaluations of Gemini 1.5 models showed that accuracy on TRINDs is lower than reported performance on standard U.S. medical benchmarks, indicating a struggle with out-of-distribution disease types.
* Contextual information is the primary driver of accuracy; the highest performance was achieved only when specific symptoms were combined with location and risk factors (the sketch at the end of this summary illustrates this kind of prompt composition).
* The study found that symptoms alone are often insufficient for an accurate diagnosis, emphasizing that LLMs require localized environmental data to differentiate between similar tropical conditions.
* Linguistic shifts pose a significant challenge: model performance dropped by approximately 10% on the French version of the dataset compared to the English version.

## Optimization and Reasoning Strategies

* Implementing chain-of-thought (CoT) prompting, where the model is directed to explain its reasoning step by step, led to a roughly 10% increase in diagnostic accuracy.
* Researchers used an LLM-based "autorater" to scale the evaluation process, scoring answers as correct if the predicted diagnosis was meaningfully similar to the ground truth.
* In tests of social bias, the study found no statistically significant difference in performance across race or gender identifiers within this specific TRINDs context.
* Performance remained stable even when clinical language was swapped for consumer-style descriptions, suggesting the models are robust to variations in how patients describe their symptoms.

To improve the utility of LLMs for global health, developers should prioritize the inclusion of regional risk factors and location-specific data in prompts. Using reasoning-heavy strategies like chain-of-thought and expanding multilingual training sets are critical steps for bridging the performance gap in underserved regions.
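To make the context and reasoning variables concrete, here is a hedged Python sketch of the kind of prompt composition the study varies: symptoms alone versus symptoms plus location and risk factors, with an optional chain-of-thought instruction. The `build_prompt` helper, the example persona, and the `llm`/`autorater` callables are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch of the evaluation variables described above. The
# `llm` and `autorater` arguments are hypothetical callables (e.g. thin
# wrappers around an LLM API); `build_prompt` is an assumed helper.

def build_prompt(persona, with_context=True, chain_of_thought=True):
    parts = [f"Symptoms: {persona['symptoms']}"]
    if with_context:
        # Location and risk factors were the primary drivers of accuracy.
        parts.append(f"Location: {persona['location']}")
        parts.append(f"Risk factors: {persona['risk_factors']}")
    if chain_of_thought:
        parts.append("Explain your reasoning step by step, "
                     "then name the most likely diagnosis.")
    else:
        parts.append("Name the most likely diagnosis.")
    return "\n".join(parts)

def evaluate_persona(llm, autorater, persona, ground_truth, **prompt_kwargs):
    prediction = llm(build_prompt(persona, **prompt_kwargs))
    # The study scores an answer correct if an LLM autorater judges the
    # prediction meaningfully similar to the ground-truth diagnosis.
    return autorater(prediction, ground_truth)
```

Toggling `with_context` and `chain_of_thought` off reproduces the ablations reported above: dropping location and risk factors, or the step-by-step instruction, is where the accuracy losses appear.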

google

Generating synthetic data with differentially private LLM inference

Researchers at Google have developed an inference-only method for generating differentially private (DP) synthetic data that avoids the high costs and data requirements associated with private fine-tuning. By prompting off-the-shelf large language models (LLMs) with sensitive examples in parallel and aggregating their outputs, the approach can generate thousands of high-quality synthetic data points while maintaining rigorous privacy guarantees. This method allows synthetic data to serve as a secure interface for model development, enabling teams to collaborate without requiring specialized knowledge of differential privacy.

## Differentially Private Prediction and Aggregation

The core of this method relies on "private prediction," where privacy is applied to the model's output rather than the model itself.

* Sensitive data points are distributed across multiple independent prompts, ensuring that no single individual's record can significantly influence the final output.
* The LLM generates next-token predictions for each prompt in parallel, and these predictions are then aggregated to mask individual contributions.
* The researchers designed a DP token sampling algorithm that treats the standard LLM softmax sampling process as a version of the exponential mechanism, a mathematical framework for selecting an option from a set while maintaining privacy (see the sketch at the end of this summary).

## Enhancing Efficiency via KV Caching

Previous attempts at private prediction were computationally expensive because they required a fresh batch of sensitive examples for every single token generated.

* A new privacy analysis allows the system to reuse a fixed batch of sensitive examples across an entire generation sequence.
* By maintaining the same context for each generation step, the system becomes compatible with standard inference optimization techniques like KV (key-value) caching.
* This improvement enables the generation of synthetic data at a scale two to three orders of magnitude larger than prior methods.

## Optimizing Privacy Spend with Public Drafters

To preserve the "privacy budget," the limited amount of information that can be released before privacy guarantees degrade, the method introduces a public drafter model.

* The drafter model predicts the next token based solely on previously generated synthetic text, without ever seeing the sensitive data.
* Using the sparse vector technique, the system only consumes privacy budget when the public drafter's suggestion disagrees with the private aggregate over the sensitive data.
* This is particularly useful for structured data, where the drafter can handle formatting and syntax tokens, saving the privacy budget for the actual content.

By leveraging off-the-shelf models like Gemma, this approach provides a scalable way to transform sensitive datasets into useful synthetic versions. The resulting synthetic datasets are high-quality enough to replace real data in downstream machine learning tasks, such as in-context learning or fine-tuning models like BERT, without the risk of leaking individual user information.
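As a rough illustration of the private-prediction step, the sketch below samples one token from the softmax of aggregated per-prompt logits, read as an exponential mechanism over the vocabulary. The epsilon-to-temperature calibration and the fixed sensitivity value are simplifying assumptions for exposition; the paper's actual privacy analysis is more involved.

```python
import numpy as np

def dp_sample_token(per_prompt_logits, epsilon, sensitivity=1.0, rng=None):
    """Sample one synthetic token via an exponential-mechanism reading of
    softmax sampling (a simplified sketch, not the paper's exact algorithm).

    per_prompt_logits: shape (n_prompts, vocab_size); each row comes from
    running the LLM on one prompt holding a disjoint batch of sensitive
    examples, so averaging masks any single record's contribution.
    """
    rng = rng or np.random.default_rng()
    mean_logits = per_prompt_logits.mean(axis=0)

    # Exponential mechanism: sample token t with probability proportional
    # to exp(epsilon * score(t) / (2 * sensitivity)).
    scaled = (epsilon / (2.0 * sensitivity)) * mean_logits
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

In the full method, a public drafter proposes each token first, and a private step like this one is only invoked, and privacy budget only spent, when the drafter's proposal disagrees with the private aggregate, per the sparse vector technique.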