differential-privacy

10 posts

google

A differentially private framework for gaining insights into AI chatbot use

Google Research has introduced Urania, a novel framework designed to extract high-level usage insights from AI chatbot conversations while maintaining rigorous differential privacy (DP) guarantees. Unlike previous heuristic methods that rely on simple redaction or LLM-based PII stripping, this pipeline ensures that no individual user's data can be reconstructed from the resulting summaries. By combining DP clustering and keyword extraction with LLM-based summarization, the system provides a formal, auditable approach to understanding platform trends without compromising sensitive information.

## Limitations of Heuristic Privacy

* Existing frameworks often rely on large language models to heuristically strip personally identifiable information (PII) from text before analysis.
* These heuristic protections are difficult to formalize or audit, and their effectiveness may diminish as models evolve or face sophisticated prompt injection attacks.
* The Urania framework addresses these weaknesses by using mathematical privacy budgets (the epsilon parameter) to measure and limit the influence of any single user's data on the final output.

## The Differentially Private Pipeline

* **DP Clustering**: The framework first converts conversation data into numerical embeddings. These are grouped using a DP clustering algorithm, ensuring that cluster centers reflect broad trends rather than specific individual inputs.
* **DP Keyword Extraction**: The system identifies keywords for each cluster and generates a histogram of their frequency. By adding mathematical noise to these counts, the framework masks individual contributions and ensures that only keywords common to many users are retained.
* **Keyword Generation Methods**: The researchers explored three methods for extraction: LLM-guided selection of relevant terms, a differentially private version of TF-IDF, and an LLM-guided approach that selects terms from a pre-defined list of public keywords.
* **LLM Summarization**: In the final stage, an LLM generates a high-level summary of the cluster using only the noisy, anonymized keywords. Because the LLM never sees the raw conversation text, the "post-processing" property of DP guarantees that the final summary remains private.

## Privacy and Utility Trade-offs

* The framework was tested against a non-private baseline (Simple-CLIO) to evaluate how privacy constraints affect the quality of the insights generated.
* Stronger privacy settings (lower epsilon values) inherently result in a utility trade-off, as the added noise can obscure some niche usage patterns.
* Despite these trade-offs, the framework provides a robust defense against data leakage, as the summarization model is structurally prevented from seeing sensitive original text, making it resilient to prompt injection.

This framework offers a scalable way for platform providers to analyze chatbot usage patterns and enforce safety policies while providing mathematical certainty regarding user privacy. For organizations handling sensitive conversation data, moving from heuristic redaction to formal DP pipelines like Urania provides a more robust and auditable path for service improvement.
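To make the keyword-histogram step concrete, here is a minimal sketch of the general pattern, assuming Gaussian noise, a fixed per-user keyword cap, and a hand-picked release threshold. The function and parameter names are illustrative and do not reflect Urania's actual implementation; in particular, releasing items from an open vocabulary strictly requires a threshold calibrated via DP partition selection.

```python
import numpy as np
from collections import Counter

def dp_keyword_histogram(per_user_keywords, epsilon=1.0, delta=1e-6,
                         max_kw_per_user=5, release_threshold=20.0, seed=0):
    """Bound each user's contribution, add Gaussian noise calibrated to that
    bound, and release only keywords whose noisy count clears a threshold."""
    counts = Counter()
    for kws in per_user_keywords:
        # Contribution bounding: at most `max_kw_per_user` distinct keywords per user.
        for kw in list(dict.fromkeys(kws))[:max_kw_per_user]:
            counts[kw] += 1

    # One user changes at most `max_kw_per_user` counts by 1 each.
    l2_sensitivity = np.sqrt(max_kw_per_user)
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    rng = np.random.default_rng(seed)
    released = {}
    for kw, count in counts.items():
        noisy = count + rng.normal(0.0, sigma)
        if noisy >= release_threshold:  # keep only keywords shared by many users
            released[kw] = noisy
    return released
```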

google

Differentially private machine learning at scale with JAX-Privacy

Google DeepMind and Google Research have announced the release of JAX-Privacy 1.0, a high-performance library designed to scale differentially private (DP) machine learning. By leveraging JAX’s native parallelization and functional programming model, the toolkit enables researchers to train large-scale foundation models while maintaining rigorous privacy guarantees. This version introduces modular components for advanced algorithms and empirical auditing, making private training both computationally efficient and verifiable across distributed environments.

### Scaling Differential Privacy with JAX

* The library is built directly on the JAX ecosystem, integrating seamlessly with Flax for neural network architectures and Optax for optimization.
* It utilizes JAX’s `vmap` for automatic vectorization and `shard_map` for single-program multiple-data (SPMD) parallelization, allowing DP primitives to scale across multiple accelerators.
* By using just-in-time (JIT) compilation, the library mitigates the traditional performance overhead associated with per-example gradient clipping and noise addition.

### Core Components and Advanced Algorithms

* The toolkit provides fundamental building blocks for implementing standard DP algorithms like DP-SGD and DP-FTRL, including specialized modules for data batch construction.
* It supports state-of-the-art methods such as DP matrix factorization, which improves performance by injecting correlated noise across training iterations.
* Features like micro-batching and padding are included to handle the massive, variable-sized batches often required to achieve an optimal balance between privacy and model utility.

### Verification and Privacy Auditing

* JAX-Privacy incorporates rigorous privacy accounting based on Rényi Differential Privacy to provide precise tracking of privacy budgets.
* The library includes tools for empirical auditing, allowing developers to validate their privacy guarantees through techniques like membership inference attacks and data poisoning.
* The design ensures correctness in distributed settings, specifically focusing on consistent noise generation and gradient synchronization across clusters.

JAX-Privacy 1.0 is a robust solution for researchers and engineers who need to deploy production-grade private models. Its modular architecture and integration with high-performance computing primitives make it a primary choice for training foundation models on sensitive datasets without compromising on scalability or security.
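As a rough illustration of the pattern the library optimizes, here is a from-scratch sketch of a noisy, clipped gradient computation using `jax.vmap` and `jax.grad`. It does not use JAX-Privacy's API, assumes a hypothetical `loss_fn(params, example)` that returns a scalar loss for a single example, and omits the privacy accounting that the library provides.

```python
import jax
import jax.numpy as jnp

def noisy_clipped_gradient(loss_fn, params, batch, key,
                           l2_clip=1.0, noise_multiplier=1.1):
    """Per-example gradients via vmap, per-example L2 clipping, summation,
    Gaussian noise scaled to the clip norm, then averaging."""
    # Per-example gradients: vmap the gradient of the single-example loss.
    grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0))(params, batch)

    # Clip each example's gradient to L2 norm <= l2_clip.
    def clip(g):
        norm = jnp.sqrt(sum(jnp.sum(x ** 2) for x in jax.tree_util.tree_leaves(g)))
        factor = jnp.minimum(1.0, l2_clip / (norm + 1e-12))
        return jax.tree_util.tree_map(lambda x: x * factor, g)

    clipped = jax.vmap(clip)(grads)

    # Sum over the batch, add Gaussian noise per parameter array, and average.
    batch_size = jax.tree_util.tree_leaves(batch)[0].shape[0]
    summed = jax.tree_util.tree_map(lambda x: jnp.sum(x, axis=0), clipped)
    flat, treedef = jax.tree_util.tree_flatten(summed)
    keys = jax.random.split(key, len(flat))
    noisy = [g + noise_multiplier * l2_clip * jax.random.normal(k, g.shape)
             for g, k in zip(flat, keys)]
    return jax.tree_util.tree_map(lambda x: x / batch_size,
                                  jax.tree_util.tree_unflatten(treedef, noisy))
```

Wrapping a function like this in `jax.jit` is what removes most of the overhead of per-example clipping, which is the point the post makes about JIT compilation.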

google

Toward provably private insights into AI use

Google Research has introduced Provably Private Insights (PPI), a framework designed to analyze generative AI usage patterns while providing mathematical guarantees of user privacy. By integrating Large Language Models (LLMs) with differential privacy and trusted execution environments (TEEs), the system enables developers to derive aggregate trends from unstructured data without exposing individual user content. This approach ensures that server-side processing remains limited to privacy-preserving computations that are fully auditable by external parties.

### The Role of LLMs in Structured Summarization

The system employs "data expert" LLMs to transform unstructured generative AI data into actionable, structured insights.

* The framework utilizes open-source Gemma 3 models to perform specific analysis tasks, such as classifying transcripts into topics or identifying user frustration levels.
* This "structured summarization" occurs entirely within a TEE, ensuring that the model processes raw data in an environment inaccessible to human operators or external processes.
* Developers can update LLM prompts frequently to answer new research questions without compromising the underlying privacy architecture.

### Confidential Federated Analytics (CFA) Infrastructure

The PPI system is built upon Confidential Federated Analytics, a technique that isolates data through hardware-based security and cryptographic verification.

* User devices encrypt data and define specific authorized processing steps before uploading it to the server.
* A TEE-hosted key management service only releases decryption keys to processing steps that match public, open-source code signatures.
* System integrity is verified using Rekor, a public, tamper-resistant transparency log that allows external parties to confirm that the code running in the TEE is exactly what was published.

### Anonymization via Differential Privacy

Once the LLM extracts features from the data, the system applies differential privacy (DP) to ensure that the final output does not reveal information about any specific individual.

* The extracted categories are aggregated into histograms, with DP noise added to the final counts to prevent the identification of single users.
* Because the privacy guarantee is applied at the aggregation stage, the system remains secure even if a developer uses a prompt specifically designed to isolate a single user's data.
* All aggregation algorithms are open-source and reproducibly buildable, allowing for end-to-end verifiability of the privacy claims.

By open-sourcing the PPI stack through the Google Parfait project and deploying it in applications like Pixel Recorder, this framework establishes a new standard for transparent data analysis. Developers should look to integrate similar TEE-based federated analytics to balance the need for product insights with the necessity of provable, hardware-backed user privacy.
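The snippet below sketches only the final DP aggregation stage under simple assumptions: a fixed public category list, one LLM-assigned label per device, and Laplace noise under add/remove adjacency. It leaves out the TEE attestation, key release, and transparency-log machinery that the deployed system relies on, and all names are illustrative.

```python
import numpy as np

# Illustrative public category list; real categories come from the developer's prompt.
CATEGORIES = ["summarization", "coding help", "creative writing", "other"]

def dp_topic_histogram(device_labels, epsilon=1.0, seed=0):
    """Aggregate one LLM-assigned label per device into a histogram and add
    Laplace noise so no single device is identifiable in the released counts."""
    counts = np.zeros(len(CATEGORIES))
    for label in device_labels:
        counts[CATEGORIES.index(label)] += 1

    # Adding or removing one device changes one count by 1, so Laplace noise
    # with scale 1/epsilon suffices for epsilon-DP under add/remove adjacency.
    rng = np.random.default_rng(seed)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return dict(zip(CATEGORIES, noisy))
```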

google

A picture's worth a thousand (private) words: Hierarchical generation of coherent synthetic photo albums

Researchers at Google have developed a hierarchical method for generating differentially private (DP) synthetic photo albums, providing a way to share representative datasets while protecting sensitive individual information. By utilizing an intermediate text representation and a two-stage generation process, the approach maintains thematic coherence across multiple images in an album—a significant challenge for traditional synthetic data methods. This framework allows organizations to apply standard, non-private analytical techniques to safe synthetic substitutes rather than modifying every individual analysis method for differential privacy.

## The Hierarchical Generation Process

* The workflow begins by converting original photo albums into structured text; an AI model generates detailed captions for each image and a summary for the entire album.
* Two large language models (LLMs) are privately fine-tuned using DP-SGD: the first is trained to produce album summaries, and the second generates individual photo captions based on those summaries.
* Synthetic data is then produced hierarchically, where the model first generates a global album summary to serve as context, followed by a series of individual photo captions that remain consistent with that context.
* The final step uses a text-to-image AI model to transform the private, synthetic text captions back into a set of coherent images.

## Benefits of Intermediate Text Representations

* Text summarization is inherently privacy-enhancing because it is a "lossy" operation, meaning the text description is unlikely to capture the exact unique details of an original photo.
* Using text as a midpoint allows for more efficient resource management, as generated albums can be filtered and curated at the text level before undergoing the computationally expensive process of image generation.
* The hierarchical approach ensures that photos within a synthetic album share the same characters and themes, as every caption in a set is derived from the same contextual summary.
* Training two separate models with shorter context windows is significantly more efficient than training one large model, because the computational cost of self-attention scales quadratically with the length of the context.

This hierarchical, text-mediated approach demonstrates that high-level semantic information and thematic coherence can be preserved in synthetic datasets without sacrificing individual privacy. Organizations should consider this workflow—translating complex multi-modal data into structured text before synthesis—to scale differentially private data generation for advanced modeling and analysis.
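A minimal sketch of the hierarchical sampling loop follows. Here `summary_model`, `caption_model`, and `image_model` are placeholder callables standing in for the two DP fine-tuned LLMs and the text-to-image model, and the prompts are invented for illustration.

```python
def generate_synthetic_album(summary_model, caption_model, image_model,
                             photos_per_album=5):
    """Hierarchical sampling: a single album-level summary provides shared
    context, each caption is generated conditioned on that summary, and the
    captions are finally rendered to images."""
    # Both text models are assumed to have been fine-tuned with DP-SGD, so
    # sampling from them (and rendering images from their outputs) is DP
    # post-processing and spends no additional privacy budget.
    summary = summary_model("Write a one-paragraph summary of a photo album:")
    captions = [
        caption_model(f"Album summary: {summary}\nCaption for photo {i + 1}:")
        for i in range(photos_per_album)
    ]
    images = [image_model(caption) for caption in captions]
    return summary, captions, images
```

Because the loop is text-only until the final line, generated albums can be filtered or curated before paying the cost of image generation, which is the resource-management point the post makes.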

google

VaultGemma: The world's most capable differentially private LLM

VaultGemma represents a significant milestone in privacy-preserving AI as the most capable large language model trained from scratch using differential privacy (DP). By establishing new scaling laws specifically for DP training, researchers have optimized the complex trade-offs between compute, privacy budgets, and model utility. The resulting 1-billion-parameter model demonstrates that high-performance generative AI can be achieved while maintaining rigorous mathematical guarantees against data memorization.

## Scaling Laws for Differentially Private Training

* Performance in DP-trained models is primarily governed by the "noise-batch ratio," which measures the magnitude of the random privacy noise relative to the batch size over which gradients are averaged.
* Research suggests that for any given compute and privacy budget, there exists an optimal training configuration that balances model size, iterations, and batch size to achieve the lowest possible training loss.
* A critical finding indicates that DP training requires a departure from standard scaling practices, favoring significantly larger batch sizes and smaller model architectures than traditional non-DP training.

## Synergies in Privacy, Compute, and Data

* Increasing the privacy budget (epsilon) in isolation leads to diminishing returns unless it is paired with a proportional increase in compute (FLOPs) or data (tokens).
* Visualizations of the scaling laws show that different model sizes can provide similar utility if the number of training iterations and batch sizes are correctly adjusted.
* The optimal configuration shifts between investing in larger models versus more iterations depending on the specific constraints of the data and privacy budgets.

## Training at Scale with Algorithmic Advancements

* VaultGemma is built on the Gemma 2 architecture and utilizes a 1B parameter setup optimized for the unique constraints of DP.
* To overcome hardware limitations when processing the massive batch sizes required for DP training, the team developed a "Virtual Batch" technique in JAX to aggregate gradients across multiple steps.
* Training from scratch allows the model to outperform traditional DP-finetuned models, which often struggle to balance utility with the noise introduced during the fine-tuning process.

## Performance and Evaluation

* VaultGemma achieves competitive results against standard 1B parameter models while providing formal privacy protections.
* The model demonstrates superior privacy-utility trade-offs, proving that carefully scaled DP models can retain high levels of reasoning and language capability.
* The release includes the model weights and a comprehensive technical report to assist the community in developing the next generation of private-by-design AI.

VaultGemma provides a practical blueprint for developers who need to balance the power of large language models with strict data confidentiality requirements. By leveraging the provided scaling insights, organizations can now train models that are mathematically resistant to data leakage without sacrificing significant performance.
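The "Virtual Batch" idea can be sketched as ordinary gradient accumulation: sum clipped per-example gradients over several micro-batches and add noise once per logical batch. The helper `clipped_grad_sum_fn` below is hypothetical (assumed to return the sum of clipped per-example gradients as a flat array), and this is a conceptual sketch rather than VaultGemma's actual JAX implementation.

```python
import numpy as np

def accumulated_dp_gradient(micro_batches, clipped_grad_sum_fn,
                            l2_clip=1.0, noise_multiplier=1.1, seed=0):
    """Accumulate sums of clipped per-example gradients over micro-batches,
    then add Gaussian noise once for the full logical batch. A larger logical
    batch lowers the noise-batch ratio without requiring the whole batch to
    fit in accelerator memory."""
    total, n_examples = None, 0
    for micro_batch in micro_batches:
        g = clipped_grad_sum_fn(micro_batch)  # sum of clipped per-example grads
        total = g if total is None else total + g
        n_examples += len(micro_batch)

    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_multiplier * l2_clip, size=total.shape)
    return (total + noise) / n_examples  # noisy mean gradient for the logical batch
```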

google

Securing private data at scale with differentially private partition selection

Google Research has introduced a novel parallel algorithm called MaxAdaptiveDegree (MAD) to enhance differentially private (DP) partition selection, a critical process for identifying common data items in massive datasets without compromising individual privacy. By utilizing an adaptive weighting mechanism, the algorithm optimizes the utility-privacy trade-off, allowing researchers to safely release significantly more data than previous non-adaptive methods. This breakthrough enables privacy-preserving analysis on datasets containing hundreds of billions of items, scaling up to three orders of magnitude larger than existing sequential approaches.

## The Role of DP Partition Selection

* DP partition selection identifies a meaningful subset of unique items from large collections based on their frequency across multiple users.
* The process ensures that no single individual's data can be identified in the final list by adding controlled noise and filtering out items that are not sufficiently common.
* This technique is a foundational step for various machine learning tasks, including extracting n-gram vocabularies for language models, analyzing private data streams, and increasing efficiency in private model fine-tuning.

## The Weight, Noise, and Filter Paradigm

* The standard approach to private partition selection begins by computing a "weight" for each item, typically representing its frequency, while ensuring "low sensitivity" so no single user has an outsized impact.
* Random Gaussian noise is added to these weights to obfuscate exact counts, preventing attackers from inferring the presence of specific individuals.
* A threshold determined by DP parameters is then applied; only items whose noisy weights exceed this threshold are included in the final output.

## Improving Utility via Adaptive Weighting

* Traditional non-adaptive methods often result in "wastage," where highly popular items receive significantly more weight than necessary to cross the selection threshold.
* The MaxAdaptiveDegree (MAD) algorithm introduces adaptivity by identifying items with excess weight and rerouting that weight to "under-allocated" items sitting just below the threshold.
* This strategic reallocation allows a larger number of less-frequent items to be safely released, significantly increasing the utility of the dataset without compromising privacy or computational efficiency.

## Scalability and Parallelization

* Unlike sequential algorithms that process data one piece at a time, MAD is designed as a parallel algorithm to handle the scale of modern user-based datasets.
* The algorithm can process datasets with hundreds of billions of items by breaking the problem down into smaller parts computed simultaneously across multiple processors.
* Google has open-sourced the implementation on GitHub to provide the research community with a tool that maintains robust privacy guarantees even at a massive scale.

Researchers and data scientists working with large-scale sensitive datasets should consider implementing the MaxAdaptiveDegree algorithm to maximize the amount of shareable data while strictly adhering to user-level differential privacy standards.
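The following sketch implements the non-adaptive "weight, noise, filter" baseline described above, assuming Gaussian noise and a heuristic threshold; the constants are illustrative rather than the paper's exact calibration. MAD's adaptive rerouting of excess weight, which is the paper's contribution, is intentionally omitted.

```python
import numpy as np
from collections import defaultdict

def basic_partition_selection(user_items, epsilon=1.0, delta=1e-6, seed=0):
    """Non-adaptive weight-noise-filter baseline: each user spreads one unit
    of L2 weight across its items, Gaussian noise is added to each item's
    total weight, and only items above a threshold are released."""
    weights = defaultdict(float)
    for items in user_items:
        items = sorted(set(items))
        # Splitting weight as 1/sqrt(k) keeps each user's L2 contribution at 1.
        for item in items:
            weights[item] += 1.0 / np.sqrt(len(items))

    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Threshold set so items held by very few users are unlikely to appear;
    # the exact constant depends on the accounting used.
    threshold = 1.0 + sigma * np.sqrt(2.0 * np.log(1.0 / (2.0 * delta)))

    rng = np.random.default_rng(seed)
    released = {}
    for item, weight in weights.items():
        noisy = weight + rng.normal(0.0, sigma)
        if noisy > threshold:
            released[item] = noisy
    return released
```

MAD starts from this baseline and then reroutes the weight of items far above the threshold toward items sitting just below it, which is why it can release more items under the same privacy budget.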

google

Beyond billion-parameter burdens: Unlocking data synthesis with a conditional generator

The CTCL (Data Synthesis with ConTrollability and CLustering) framework provides a lightweight alternative to the computationally expensive process of fine-tuning billion-parameter models for differentially private synthetic data generation. By utilizing a 140-million parameter generator and a universal topic model, the system achieves high-quality distribution matching while remaining accessible for resource-constrained applications. This approach allows for the generation of unlimited synthetic samples without incurring additional privacy costs, consistently outperforming existing API-based and large-scale baselines under strict privacy guarantees.

### Pre-training Universal Components

The framework relies on two core components developed using large-scale public corpora, which can be reused across different private domains:

* **CTCL-Topic:** A universal topic model derived from Wikipedia documents. It uses BERTopic to embed and cluster data into approximately 1,000 distinct topics, each represented by 10 descriptive keywords.
* **CTCL-Generator:** A conditional language model based on the 140M-parameter BART-base architecture. It was pre-trained on 430 million description–document pairs from the SlimPajama dataset, with descriptions generated by Gemma-2-2B to ensure the model can generate text based on specific input conditions.

### Learning the Private Domain

Once the universal components are established, the framework learns the specific characteristics of a private dataset through a two-step process:

* **Differentially Private (DP) Histograms:** The system captures high-level distributional information by creating a DP-protected histogram that represents the percentage of each topic present in the private corpus.
* **DP Fine-Tuning:** Each document in the private dataset is associated with its corresponding keywords from the CTCL-Topic model. The CTCL-Generator is then fine-tuned on these keyword-document pairs using differential privacy to ensure individual data points are protected.

### Controllable Data Generation

The final stage involves producing the synthetic dataset by sampling from the fine-tuned generator:

* **Proportional Sampling:** The system generates data by targeting the exact topic proportions found in the private domain histogram.
* **Keyword Conditioning:** For each topic, the model uses the associated 10 keywords as input to prompt the DP fine-tuned generator to produce relevant documents.
* **Post-Processing Efficiency:** Because the generator is already fine-tuned with DP, the framework can generate an unlimited number of synthetic samples without further privacy budget expenditure, a significant advantage over iterative selection algorithms.

CTCL offers a highly scalable and efficient solution for organizations needing to synthesize private text data without the infrastructure requirements of massive LLMs. Its ability to maintain topic-wise distribution through keyword conditioning makes it an ideal choice for specialized domains where maintaining the statistical utility of the data is as critical as protecting user privacy.
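The generation stage can be sketched as sampling a plan from the DP topic histogram, assuming the noisy histogram values have already been clipped to be nonnegative. The names `dp_topic_histogram` and `topic_keywords` are illustrative and do not correspond to CTCL's actual code.

```python
import numpy as np

def sample_generation_plan(dp_topic_histogram, topic_keywords, n_samples, seed=0):
    """Turn the DP topic histogram into a generation plan: draw topics in
    proportion to their noisy share of the private corpus, and pair each draw
    with that topic's keywords as the conditioning input for the generator."""
    topics = list(dp_topic_histogram)
    shares = np.clip([dp_topic_histogram[t] for t in topics], 0.0, None)
    probs = shares / shares.sum()

    rng = np.random.default_rng(seed)
    plan = []
    for topic in rng.choice(topics, size=n_samples, p=probs):
        prompt = "Keywords: " + ", ".join(topic_keywords[topic])
        plan.append((topic, prompt))  # prompt is fed to the DP fine-tuned generator
    return plan
```

Because the histogram and the generator are the only DP-protected artifacts, sampling an arbitrarily long plan from this function is pure post-processing and costs no additional privacy budget.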

google

Fine-tuning LLMs with user-level differential privacy

Researchers from Google investigated scaling user-level differential privacy (DP) to the fine-tuning of large language models in datacenter environments. While traditional example-level DP protects individual data points, user-level DP provides a stronger guarantee by masking the presence of an entire user's dataset, which is critical for privacy-sensitive, domain-specific tasks. The study explores how the flexibility of datacenter training can be used to optimize sampling strategies and contribution bounds to minimize the noise typically required for these stringent privacy guarantees.

## Limitations of Example-Level Privacy

* Standard differential privacy focuses on "example-level" protection, which prevents attackers from learning about specific individual data points.
* In many real-world scenarios, a single user contributes many examples to a dataset; if an attacker can analyze these multiple points together, they may still learn private information about the user even under example-level DP.
* User-level DP addresses this by ensuring a model remains essentially the same whether or not a specific user’s entire data collection was used during training.
* While more robust, user-level DP is "strictly harder" to implement because it requires injecting significantly more noise into the training process, a problem that scales with the size of the model.

## Methodologies for User-Level DP Fine-Tuning

* Both primary algorithms require a "contribution bound" during pre-processing, which strictly limits the number of examples any single user can provide to the training set.
* Example-Level Sampling (ELS) involves sampling random individual examples for a batch and then applying a modified version of DP-SGD with high noise to compensate for the potential presence of multiple examples from the same user.
* User-Level Sampling (ULS) involves sampling random users and including all of their (bounded) examples in a batch, which more closely resembles the structure of federated learning.
* The datacenter environment offers a unique advantage over federated learning because researchers can perform precise queries on both individual examples and whole users, allowing for better optimization of the noise-to-utility ratio.

## Optimization and Datacenter Flexibility

* The researchers focused on fine-tuning rather than full training because DP requires additional computation that is often unaffordable for base model training.
* A central challenge in this research is determining the optimal "contribution bound"—if the bound is too low, valuable data is discarded, but if it is too high, more noise must be added to maintain privacy.
* Because the datacenter allows for random sampling of any user at any time (unlike federated learning where devices must be online), the ULS algorithm can be tuned more effectively to achieve quality gains in the final model.

To maximize the utility of LLMs fine-tuned on private data, developers should prioritize User-Level Sampling (ULS) strategies and carefully calibrate the contribution bounds of their datasets. By leveraging the controlled environment of a datacenter to optimize these parameters, it is possible to achieve high-performance models that respect user privacy more effectively than traditional example-level methods.
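A minimal sketch of contribution bounding and ULS-style batch construction follows. It illustrates only the data handling (the names are invented), and omits the DP-SGD noise calibration and privacy accounting that the actual algorithms pair with this sampling scheme.

```python
import random

def bound_contributions(user_examples, max_examples_per_user):
    """Pre-processing for user-level DP: cap how many examples any single
    user can contribute to the training set."""
    return {user: examples[:max_examples_per_user]
            for user, examples in user_examples.items()}

def user_level_sample(bounded, users_per_batch, seed=0):
    """ULS-style batch construction: sample whole users and include all of
    their (bounded) examples, so noise can be calibrated per user rather
    than per example."""
    rng = random.Random(seed)
    users = rng.sample(list(bounded), users_per_batch)
    batch = [example for user in users for example in bounded[user]]
    return users, batch
```

The contribution bound is the knob the post highlights: a low cap discards data, while a high cap forces more noise, so the cap itself has to be tuned alongside the rest of the training configuration.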

google

Differential privacy on trust graphs

Researchers from Google have introduced Trust Graph Differential Privacy (TGDP), a framework that models privacy based on varying trust relationships between users represented as vertices in a graph. By allowing users to share data with trusted neighbors who then aggregate and privatize the information, TGDP bridges the gap between the highly accurate central DP model and the high-privacy local DP model. This approach enables more practical and accurate data analysis in scenarios where users exhibit nuanced privacy preferences rather than binary trust assumptions.

## Defining Trust Graph DP

* The model represents users as vertices and mutual trust as edges, ensuring that a user’s data remains statistically indistinguishable to any party they do not trust.
* This guarantee holds even if non-trusted parties pool their data or collaborate with a user's trusted neighbors to attempt re-identification.
* TGDP serves as a mathematical interpolation: a "star graph" topology corresponds to the central DP model, while a fully unconnected graph corresponds to the local DP model.

## Private Aggregation and Error Metrics

* The research evaluates TGDP through the fundamental task of private aggregation, where the goal is to estimate the sum of all users' private values ($\sum_i x_i$).
* Accuracy is quantified using mean-squared error, allowing researchers to establish theoretical upper and lower bounds for algorithm performance.
* These bounds demonstrate that the utility of a privacy-preserving algorithm is directly tied to the specific structure of the trust relationships within the network.

## The Dominating Set Algorithm

* The proposed algorithm utilizes the concept of a "dominating set"—a subset of users $T$ such that every user in the graph is either in $T$ or adjacent to someone in $T$.
* In this mechanism, each user sends their raw data to a trusted neighbor within the dominating set.
* The members of the dominating set aggregate the data they receive and add specific statistical noise to satisfy differential privacy before sharing the results.
* This method reduces the total noise required compared to the local model, as the number of noise-adding entities is limited to the size of the dominating set rather than the entire population.

By leveraging existing trust networks, TGDP provides a rigorous way to optimize the trade-off between privacy and utility. This framework suggests that identifying small dominating sets within a community can significantly improve the accuracy of data analytics and machine learning without requiring a single, universally trusted central curator.
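The dominating-set mechanism can be sketched as follows, assuming an undirected trust graph given as an adjacency dictionary and user values bounded in [0, 1]; this is a simplified illustration of the idea, not the paper's exact algorithm or its error analysis.

```python
import numpy as np

def greedy_dominating_set(adjacency):
    """Greedily pick a set T so every vertex is in T or adjacent to T."""
    uncovered = set(adjacency)
    dominators = []
    while uncovered:
        best = max(adjacency, key=lambda v: len(({v} | adjacency[v]) & uncovered))
        dominators.append(best)
        uncovered -= {best} | adjacency[best]
    return dominators

def tgdp_sum(values, adjacency, epsilon=1.0, seed=0):
    """Each user sends its raw value (assumed in [0, 1]) to one trusted
    dominator; each dominator sums what it received and adds Laplace noise
    before releasing, so only |T| noise terms enter the final estimate."""
    dominators = greedy_dominating_set(adjacency)
    in_t = set(dominators)

    assignment = {}
    for v in adjacency:
        if v in in_t:
            assignment[v] = v
        else:
            # Every non-dominator has at least one neighbor in T by construction.
            assignment[v] = next(t for t in dominators if t in adjacency[v])

    rng = np.random.default_rng(seed)
    total = 0.0
    for t in dominators:
        partial = sum(values[v] for v, d in assignment.items() if d == t)
        # One user's bounded value affects only its own dominator's partial sum,
        # so Laplace(1/epsilon) noise per dominator suffices in this sketch.
        total += partial + rng.laplace(scale=1.0 / epsilon)
    return total
```

The error has one Laplace term per dominator, so the smaller the dominating set the closer the estimate gets to central-DP accuracy, which is the trade-off the post describes.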

google

Generating synthetic data with differentially private LLM inference

Researchers at Google have developed an inference-only method for generating differentially private (DP) synthetic data that avoids the high costs and data requirements associated with private fine-tuning. By prompting off-the-shelf large language models (LLMs) with sensitive examples in parallel and aggregating their outputs, the approach can generate thousands of high-quality synthetic data points while maintaining rigorous privacy guarantees. This method allows synthetic data to serve as a secure interface for model development, enabling teams to collaborate without requiring specialized knowledge of differential privacy.

## Differentially Private Prediction and Aggregation

The core of this method relies on "private prediction," where privacy is applied to the model's output rather than the model itself.

* Sensitive data points are distributed across multiple independent prompts, ensuring that no single individual's record can significantly influence the final output.
* The LLM generates next-token predictions for each prompt in parallel, which are then aggregated to mask individual contributions.
* The researchers designed a DP token sampling algorithm that treats the standard LLM "softmax" sampling process as a version of the exponential mechanism, a mathematical framework used to select the best option from a set while maintaining privacy.

## Enhancing Efficiency via KV Caching

Previous attempts at private prediction were computationally expensive because they required a fresh batch of sensitive examples for every single token generated.

* A new privacy analysis allows the system to reuse a fixed batch of sensitive examples across an entire generation sequence.
* By maintaining the same context for each generation step, the system becomes compatible with standard inference optimization techniques like KV (Key-Value) caching.
* This improvement enables the generation of synthetic data at a scale two to three orders of magnitude larger than prior methods.

## Optimizing Privacy Spend with Public Drafters

To preserve the "privacy budget"—the limited amount of information that can be released before privacy is compromised—the method introduces a public drafter model.

* The drafter model predicts the next token based solely on previously generated synthetic text, without ever seeing the sensitive data.
* Using the sparse vector technique, the system only consumes the privacy budget when the public drafter’s suggestion disagrees with the private aggregate of the sensitive data.
* This is particularly useful for structured data, where the drafter can handle formatting and syntax tokens, saving the privacy budget for the actual content.

By leveraging off-the-shelf models like Gemma, this approach provides a scalable way to transform sensitive datasets into useful synthetic versions. These synthetic datasets are high-quality enough to replace real data in downstream machine learning tasks, such as in-context learning or fine-tuning models like BERT, without the risk of leaking individual user information.
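A heavily simplified sketch of the per-token aggregation is shown below, assuming per-prompt scores are clipped to a range of width 1 so the mean has sensitivity 1/n across prompts. The calibration is illustrative rather than the paper's exact analysis, and the KV-caching and public-drafter optimizations described above are omitted.

```python
import numpy as np

def dp_next_token(per_prompt_logprobs, epsilon_per_token=0.5, seed=0):
    """One step of private prediction: average next-token log-probabilities
    across prompts that hold disjoint sensitive examples, then sample via the
    exponential mechanism (a softmax whose temperature is set by the budget)."""
    per_prompt_logprobs = np.asarray(per_prompt_logprobs)
    n_prompts, _ = per_prompt_logprobs.shape

    # Aggregate over prompts; with per-prompt scores clipped to a range of
    # width 1, changing one sensitive record shifts the mean by at most 1/n.
    scores = per_prompt_logprobs.mean(axis=0)
    sensitivity = 1.0 / n_prompts

    logits = epsilon_per_token * scores / (2.0 * sensitivity)
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits)
    probs /= probs.sum()

    rng = np.random.default_rng(seed)
    return int(rng.choice(len(scores), p=probs))
```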