jax

4 posts

google

Differentially private machine learning at scale with JAX-Privacy

Google DeepMind and Google Research have announced the release of JAX-Privacy 1.0, a high-performance library designed to scale differentially private (DP) machine learning. By leveraging JAX’s native parallelization and functional programming model, the toolkit enables researchers to train large-scale foundation models while maintaining rigorous privacy guarantees. This version introduces modular components for advanced algorithms and empirical auditing, making private training both computationally efficient and verifiable across distributed environments.

### Scaling Differential Privacy with JAX

* The library is built directly on the JAX ecosystem, integrating seamlessly with Flax for neural network architectures and Optax for optimization.
* It utilizes JAX’s `vmap` for automatic vectorization and `shard_map` for single-program multiple-data (SPMD) parallelization, allowing DP primitives to scale across multiple accelerators.
* By using just-in-time (JIT) compilation, the library mitigates the traditional performance overhead associated with per-example gradient clipping and noise addition.

### Core Components and Advanced Algorithms

* The toolkit provides fundamental building blocks for implementing standard DP algorithms like DP-SGD and DP-FTRL, including specialized modules for data batch construction.
* It supports state-of-the-art methods such as DP matrix factorization, which improves performance by injecting correlated noise across training iterations.
* Features like micro-batching and padding are included to handle the massive, variable-sized batches often required to achieve an optimal balance between privacy and model utility.

### Verification and Privacy Auditing

* JAX-Privacy incorporates rigorous privacy accounting based on Rényi Differential Privacy to provide precise tracking of privacy budgets.
* The library includes tools for empirical auditing, allowing developers to validate their privacy guarantees through techniques such as membership inference attacks and data poisoning.
* The design ensures correctness in distributed settings, with particular attention to consistent noise generation and gradient synchronization across clusters.

JAX-Privacy 1.0 is a robust solution for researchers and engineers who need to deploy production-grade private models. Its modular architecture and integration with high-performance computing primitives make it a strong choice for training foundation models on sensitive datasets without compromising scalability or security.
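The per-example clipping and noising pattern that the post says JAX's `vmap` and JIT make efficient can be sketched in a few lines. This is a minimal illustration of the DP-SGD gradient step, not JAX-Privacy's actual API; the linear loss, shapes, and hyperparameters are assumptions for demonstration.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model with squared error; stands in for any Flax model.
    return (x @ params - y) ** 2

def dp_sgd_gradient(params, xs, ys, clip_norm, noise_multiplier, key):
    # Per-example gradients via vmap over the batch dimension.
    grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))(params, xs, ys)
    # Clip each example's gradient to bound its influence (L2 sensitivity).
    norms = jnp.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * jnp.minimum(1.0, clip_norm / (norms + 1e-12))
    # Sum, add Gaussian noise calibrated to the clip norm, and average.
    noise = noise_multiplier * clip_norm * jax.random.normal(key, params.shape)
    return (clipped.sum(axis=0) + noise) / xs.shape[0]

key = jax.random.PRNGKey(0)
params = jnp.zeros(3)
xs = jax.random.normal(key, (8, 3))
ys = jnp.ones(8)
g = jax.jit(dp_sgd_gradient)(params, xs, ys, 1.0, 1.1, key)
```

Because `jax.grad` is itself a transformable function, `vmap` yields all per-example gradients in one vectorized pass, which is the overhead-mitigation trick the post alludes to.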

google

Exploring a space-based, scalable AI infrastructure system design

Project Suncatcher is a Google moonshot initiative aimed at scaling machine learning infrastructure by deploying solar-powered satellite constellations equipped with Tensor Processing Units (TPUs). By leveraging the nearly continuous energy of the sun in specific orbits and utilizing high-bandwidth free-space optical links, the project seeks to bypass the resource constraints of terrestrial data centers. Early research suggests that a modular, tightly clustered satellite design can achieve the compute density and communication speeds required for modern AI workloads.

### Data-Center Bandwidth via Optical Links

* To match terrestrial performance, inter-satellite links must support tens of terabits per second using multi-channel dense wavelength-division multiplexing (DWDM) and spatial multiplexing.
* The system addresses signal power loss (the link budget) by maintaining satellites in extremely close proximity (kilometers or less), in contrast to traditional long-range satellite deployments.
* Initial bench-scale demonstrations have achieved 800 Gbps each-way transmission (1.6 Tbps total) using a single transceiver pair, validating the feasibility of high-speed optical networking.

### Orbital Mechanics of Compact Constellations

* The proposed system uses a sun-synchronous low-Earth orbit (LEO) at an altitude of approximately 650 km to maximize solar exposure and minimize the weight of onboard batteries.
* Researchers use Hill-Clohessy-Wiltshire equations and JAX-based differentiable models to manage the gravitational perturbations and atmospheric drag affecting satellites flying in tight 100–200 m formations.
* Simulations of 81-satellite clusters indicate that only modest station-keeping maneuvers are required to maintain stable, "free-fall" trajectories within the orbital plane.
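The Hill-Clohessy-Wiltshire (HCW) relative dynamics mentioned above can be sketched as a small differentiable model in JAX. The constants, integrator, and initial state below are illustrative assumptions, not Project Suncatcher's actual code; the example shows the standard linearized equations for a deputy satellite relative to a chief in a circular orbit near 650 km.

```python
import jax
import jax.numpy as jnp

# Assumed constants for a circular orbit at ~650 km altitude.
MU = 3.986004418e14           # Earth's gravitational parameter, m^3/s^2
R = 6371e3 + 650e3            # orbital radius, m
N = jnp.sqrt(MU / R**3)       # mean motion, rad/s

def hcw_derivatives(state):
    # state = [x, y, z, vx, vy, vz] in the rotating Hill frame:
    # x radial, y along-track, z cross-track.
    x, y, z, vx, vy, vz = state
    ax = 3 * N**2 * x + 2 * N * vy
    ay = -2 * N * vx
    az = -N**2 * z
    return jnp.array([vx, vy, vz, ax, ay, az])

@jax.jit
def rk4_step(state, dt):
    # Classical fourth-order Runge-Kutta integration step.
    k1 = hcw_derivatives(state)
    k2 = hcw_derivatives(state + 0.5 * dt * k1)
    k3 = hcw_derivatives(state + 0.5 * dt * k2)
    k4 = hcw_derivatives(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Propagate a satellite starting 100 m ahead along-track, at rest
# relative to the chief: this is an equilibrium of the HCW equations,
# which is why tight along-track formations need little station-keeping.
state = jnp.array([0.0, 100.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(60):
    state = rk4_step(state, 1.0)
```

Because the dynamics are written as a pure JAX function, gradients of trajectory costs with respect to initial conditions or maneuvers come for free via `jax.grad`, which is the appeal of differentiable models for formation design.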
### Hardware Resilience in Space Environments

* The project tests Google’s Trillium (v6e) Cloud TPUs to determine whether terrestrial AI accelerators can survive the radiation found in LEO.
* Hardware is subjected to 67 MeV proton beams to analyze the impact of Total Ionizing Dose (TID) and Single Event Effects (SEEs) on processing reliability.
* Preliminary testing indicates promising radiation tolerance for high-performance accelerators, suggesting that standard TPU architectures may be viable for orbital deployment with minimal modification.

While still in the research and development phase, Project Suncatcher suggests that the future of massive AI scaling may involve shifting infrastructure away from terrestrial limits and toward modular, energy-rich orbital environments. Organizations should monitor progress in free-space optical communication and radiation-hardened accelerators, as these technologies will be the primary gatekeepers for space-based computation.
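The link-budget advantage of close formation flight discussed earlier can be quantified with the standard free-space path loss formula, FSPL = (4πd/λ)². The 1550 nm telecom-band wavelength below is an assumption for illustration, not a figure from the article.

```python
import math

def free_space_path_loss_db(distance_m, wavelength_m):
    # Free-space path loss (4 * pi * d / lambda)^2, expressed in dB.
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

# Compare a 1 km intra-cluster link with a 1000 km long-range link at an
# assumed 1550 nm optical carrier: every 10x in distance costs 20 dB,
# so the close formation saves 60 dB (a factor of one million in power).
close = free_space_path_loss_db(1e3, 1550e-9)
far = free_space_path_loss_db(1e6, 1550e-9)
savings_db = far - close
```

This geometric scaling is why kilometer-scale spacing, rather than exotic transmitter power, is the article's answer to achieving data-center-class bandwidth in orbit.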

google

Graph foundation models for relational data

Google researchers have introduced Graph Foundation Models (GFMs) as a solution to the limitations of traditional tabular machine learning, which often ignores the rich connectivity of relational databases. By representing tables as interconnected graphs where rows are nodes and foreign keys are edges, this approach enables a single model to generalize across entirely different schemas and feature sets. This shift allows for transferable graph representations that can perform inference on unseen tasks without the costly need for domain-specific retraining.

### Transforming Relational Schemas into Graphs

The core methodology involves a scalable data preparation step that converts standard relational database structures into a single heterogeneous graph. This process preserves the underlying logic of the data while making it compatible with graph-based learning:

* **Node Mapping:** Each unique table is treated as a node type, and every individual row within that table is converted into a specific node.
* **Edge Creation:** Foreign key relationships are transformed into typed edges that connect nodes across different tables.
* **Feature Integration:** Standard columns containing numerical or categorical data are converted into node features, while temporal data can be preserved as features on either nodes or edges.

### Overcoming the Generalization Gap

A primary hurdle in developing GFMs is the lack of a universal tokenization method, unlike the word pieces used in language models or the patches used in vision models. Traditional Graph Neural Networks (GNNs) are typically locked to the specific graph they were trained on, but GFMs address this through several technical innovations:

* **Schema Agnosticism:** The model avoids hard-coded embedding tables for specific node types, allowing it to interpret database schemas it has never encountered during training.
* **Feature Interaction Learning:** Instead of training on "absolute" features (such as specific price distributions), the model captures how different features interact with one another across diverse tasks.
* **Generalizable Encoders:** The architecture uses transferable methods to derive fixed-size representations for nodes, whether they contain three continuous float features or dozens of categorical values.

### Scaling and Real-World Application

To handle the requirements of enterprise-level data, the GFM framework is built to operate at massive scale on Google’s specialized infrastructure:

* **Massive Throughput:** The system uses JAX and TPU infrastructure to process graphs containing billions of nodes and edges.
* **Internal Validation:** The model has been tested on complex internal Google tasks, such as spam detection in advertisements, which requires analyzing dozens of interconnected relational tables simultaneously.
* **Performance Benefits:** By considering the connections between rows, a factor that traditional tabular baselines such as decision trees ignore, the GFM delivers superior downstream performance in high-stakes prediction services.

Transitioning from domain-specific models to Graph Foundation Models allows organizations to leverage relational data more holistically. By focusing on the connectivity of data rather than isolated table features, GFMs provide a path toward a single, generalist model capable of handling diverse enterprise tasks.
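The node-mapping and edge-creation steps described above can be sketched with a tiny hypothetical schema. The `users`/`orders` tables, column names, and the `placed_by` edge type are all invented for illustration and bear no relation to the framework's actual data model.

```python
# Hypothetical two-table schema: every row becomes a node, every
# foreign-key reference becomes a typed edge in a heterogeneous graph.
users = [
    {"user_id": 0, "age": 34.0},
    {"user_id": 1, "age": 27.0},
]
orders = [
    {"order_id": 0, "user_id": 1, "amount": 19.99},
    {"order_id": 1, "user_id": 0, "amount": 5.50},
    {"order_id": 2, "user_id": 1, "amount": 42.00},
]

def tables_to_graph(users, orders):
    return {
        # Node mapping: one node set per table; numeric columns
        # become node feature vectors.
        "nodes": {
            "user": [[row["age"]] for row in users],
            "order": [[row["amount"]] for row in orders],
        },
        # Edge creation: each foreign key yields a typed (src, dst) pair
        # keyed by (source table, relation name, destination table).
        "edges": {
            ("order", "placed_by", "user"): [
                (row["order_id"], row["user_id"]) for row in orders
            ],
        },
    }

graph = tables_to_graph(users, orders)
```

A graph model consuming this structure can then propagate information from a user's orders back to the user node, which is exactly the row-to-row connectivity that tabular baselines discard.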

google

The evolution of graph learning

The evolution of graph learning has transformed from classical mathematical puzzles into a cornerstone of modern machine learning, enabling the modeling of complex relational data. By bridging the gap between discrete graph algorithms and neural networks, researchers have unlocked the ability to generate powerful embeddings that capture structural similarities. This progression, spearheaded by milestones like PageRank and DeepWalk, has established graph-based models as essential tools for solving real-world challenges ranging from traffic prediction to molecular analysis.

**Foundations of Graph Theory and Classical Algorithms**

* Graph theory originated in 1736 with Leonhard Euler’s analysis of the Seven Bridges of Königsberg, which established the mathematical framework for representing connections between entities.
* Pre-deep-learning efforts focused on structural properties, such as community detection and centrality, or on solving discrete problems like shortest paths and maximum flow.
* The 1996 development of PageRank by Google’s founders applied these principles at scale, treating the internet as a massive graph of nodes (pages) and edges (hyperlinks) to revolutionize information retrieval.

**Bridging Graph Data and Neural Networks via DeepWalk**

* A primary challenge in the field was the difficulty of integrating discrete graph structures into neural network architectures, which typically favor feature-based embeddings over relational ones.
* Developed in 2014, DeepWalk became the first practical method to bridge this gap by utilizing a neural network encoder to create graph embeddings.
* These embeddings convert complex relational data into numeric representations that preserve the structural similarity between objects, allowing graph data to be processed by modern machine learning pipelines.
**The Rise of Graph Convolutional Networks and Message Passing**

* Following the success of graph embeddings, the field moved toward Graph Convolutional Networks (GCNs) in 2016 to better handle non-Euclidean data.
* Modern frameworks now utilize Message Passing Neural Networks (MPNNs), which allow nodes to aggregate information from their neighbors to learn more nuanced representations.
* These advancements are supported by specialized libraries in TensorFlow and JAX, enabling the application of graph learning to diverse fields such as physics simulations, disease spread modeling, and fake news detection.

To effectively model complex systems where relationships are as important as the entities themselves, practitioners should transition from traditional feature-based models to graph-aware architectures. Utilizing contemporary libraries like those available for JAX and TensorFlow allows for the integration of relational structure directly into the learning process, providing more robust insights into interconnected data.
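The neighbor-aggregation idea behind MPNNs can be sketched framework-free in JAX. This is a minimal single-layer illustration with mean aggregation, an assumed residual-plus-ReLU update, and a toy three-node graph; it is not code from TensorFlow GNN or any particular library.

```python
import jax
import jax.numpy as jnp

def message_passing_layer(node_feats, senders, receivers, weights):
    # One round of message passing: each node averages its in-neighbors'
    # linearly transformed features and combines them with its own state.
    messages = node_feats[senders] @ weights           # transform per edge
    num_nodes = node_feats.shape[0]
    summed = jnp.zeros_like(node_feats).at[receivers].add(messages)
    degree = jnp.zeros((num_nodes, 1)).at[receivers].add(1.0)
    aggregated = summed / jnp.maximum(degree, 1.0)     # mean over in-edges
    return jax.nn.relu(node_feats + aggregated)        # residual update

# Tiny directed 3-node cycle: edges 0->1, 1->2, 2->0.
senders = jnp.array([0, 1, 2])
receivers = jnp.array([1, 2, 0])
feats = jnp.eye(3)      # one-hot initial node features
w = jnp.eye(3)          # identity transform, for a readable result
out = message_passing_layer(feats, senders, receivers, w)
```

After one round, each node's representation encodes both its own identity and its predecessor's, which is how stacked message-passing layers build up multi-hop structural context.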