Google Research / machine-learning


google

NeuralGCM harnesses AI to better simulate long-range global precipitation

NeuralGCM represents a significant evolution in atmospheric modeling by combining traditional fluid dynamics with neural networks to solve the long-standing challenge of simulating global precipitation. By training the AI component directly on high-quality NASA satellite observations rather than biased reanalysis data, the model achieves unprecedented accuracy in predicting daily weather cycles and extreme rainfall events. This hybrid approach offers a faster, more precise tool for both medium-range weather forecasting and multi-decadal climate projections.

## The Limitations of Cloud Parameterization

* Precipitation is driven by cloud processes occurring at scales as small as 100 meters, which is far below the kilometer-scale resolution of global weather models.
* Traditional models rely on "parameterizations," or mathematical approximations, to estimate how these small-scale events affect the larger atmosphere.
* Because these approximations are often simplified, traditional models struggle to accurately capture the complexity of water droplet formation and ice crystal growth, leading to errors in long-term forecasts.

## Training on Direct Satellite Observations

* Unlike previous AI models trained on "reanalyses"—which are essentially simulations used to fill observational gaps—NeuralGCM is trained on NASA satellite-based precipitation data spanning 2001 to 2018.
* The model utilizes a differentiable dynamical core, an architecture that allows the neural network to learn the effects of small-scale events directly from physical observations.
* By bypassing the weaknesses inherent in reanalysis data, the model effectively creates a machine-learned parameterization that is more faithful to real-world cloud physics.

## Performance in Weather and Climate Benchmarks

* At a resolution of 280 km, NeuralGCM outperforms leading operational models in medium-range forecasts (up to 15 days) and matches the precision of sophisticated multi-decadal climate models.
* The model shows a marked improvement in capturing precipitation extremes, particularly for the top 0.1% of rainfall events.
* Evaluation through WeatherBench 2 demonstrates that NeuralGCM accurately reproduces the diurnal (daily) weather cycle, a metric where traditional physics-based models frequently fall short.

NeuralGCM provides a highly efficient and accessible framework for researchers and city planners who need to simulate long-range climate scenarios, such as 100-year storms or seasonal agricultural cycles. Its ability to maintain physical consistency while leveraging the speed of AI makes it a powerful candidate for the next generation of global atmospheric modeling.
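The hybrid physics-plus-learning idea can be illustrated with a toy one-dimensional system: a known, differentiable "physics" step plus a fitted correction that stands in for the neural parameterization. Everything here (the linear-decay dynamics, the sine-shaped small-scale effect, the single-feature fit) is an illustrative assumption, not NeuralGCM's actual dynamical core.

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_step(state, dt=0.1):
    # Differentiable "dynamical core" stand-in: simple linear decay.
    return state * (1.0 - dt)

def true_step(state):
    # The real system has a small-scale effect the coarse physics misses.
    return physics_step(state) + 0.05 * np.sin(state)

states = rng.normal(size=(500, 1))
observations = true_step(states)          # stand-in for satellite observations

# The learned parameterization targets the residual the physics leaves behind.
residuals = observations - physics_step(states)
features = np.sin(states)                 # one-feature stand-in for a network
w = np.linalg.lstsq(features, residuals, rcond=None)[0].item()

def hybrid_step(state):
    # Physics plus machine-learned correction, as in the hybrid approach.
    return physics_step(state) + w * np.sin(state)
```

Because the physics step is differentiable, the correction can be fitted directly against observations of the full system, which is the key property the post attributes to the differentiable core.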

Spotlight on innovation: Google-sponsored Data Science for Health Ideathon across Africa

Google Research, in partnership with several pan-African machine learning communities, recently concluded the Africa-wide Data Science for Health Ideathon to address regional medical challenges. By providing access to specialized open-source health models and technical mentorship, the initiative empowered local researchers to develop tailored solutions for issues ranging from maternal health to oncology. The event demonstrated that localized innovation, supported by high-performance AI foundations, can effectively bridge healthcare gaps in resource-constrained environments.

## Collaborative Framework and Objectives

* The Ideathon was launched at the 2025 Deep Learning Indaba in Kigali, Rwanda, in collaboration with SisonkeBiotik, Ro’ya, and DS-I Africa.
* The primary goal was to foster capacity building within the African AI community, moving beyond theoretical research toward the execution of practical healthcare tools.
* Participants received hands-on training on Google’s specialized health models and were supported with Google Cloud Vertex AI compute credits and mentorship from global experts.
* Submissions were evaluated based on their innovation, technical feasibility, and contextual relevance to African health systems.

## Technical Foundations and Google Health Models

* Developers focused on a suite of open health AI models, including MedGemma for clinical reasoning, TxGemma for therapeutics, and MedSigLIP for medical vision-language tasks.
* The competition utilized a two-phase journey: an initial "Idea Development" stage where teams defined clinical problems and outlined AI approaches, followed by a "Prototype & Pitch" phase.
* Technical implementations frequently involved advanced techniques such as Retrieval-Augmented Generation (RAG) to ensure alignment with local medical protocols and WHO guidelines.
* Fine-tuning methods, specifically Low-Rank Adaptation (LoRA), were utilized by teams to specialize large-scale models like MedGemma-27B-IT for niche datasets.

## Innovative Solutions for Regional Health

* **Dawa Health:** This first-place winner developed an AI-powered cervical cancer screening tool that uses MedSigLIP to identify abnormalities in colposcopy images uploaded via WhatsApp, combined with Gemini RAG for clinical guidance.
* **Solver (CerviScreen AI):** This team built a web application for automated cervical-cytology screening by fine-tuning MedGemma-27B-IT on the CRIC dataset to assist cytopathologists with annotated images.
* **Mkunga:** A maternal health call center that adapts MedGemma and Gemini to provide advice in Swahili using Speech-to-Text (STT) and Text-to-Speech (TTS) technologies.
* **HexAI (DermaDetect):** Recognized for the best proof-of-concept, this offline-first mobile app allows community health workers to triage skin conditions using on-device versions of MedSigLIP, specifically designed for low-connectivity areas.

The success of the Ideathon underscores the importance of "local solutions for local priorities." By making sophisticated models like MedGemma and MedSigLIP openly available, the technical barrier to entry is lowered, allowing African developers to build high-impact, culturally and linguistically relevant medical tools. For organizations looking to implement AI in global health, this model of providing foundational tools and cloud resources to local experts remains a highly effective strategy for sustainable innovation.
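The LoRA technique the teams relied on can be sketched in a few lines of numpy: the pretrained weight matrix stays frozen while only a low-rank pair of matrices is trained. The dimensions below are toy stand-ins (not MedGemma-27B-IT's), and the layer is a bare matrix multiply rather than a real transformer block.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                              # toy layer width and LoRA rank

W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01        # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero init

def adapted_forward(x):
    # Effective weight is W + B @ A; only A and B would receive updates.
    return x @ (W + B @ A).T

x = rng.normal(size=(2, d))
full_params = d * d                       # parameters a full fine-tune touches
lora_params = 2 * d * r                   # parameters LoRA actually trains
```

Zero-initializing `B` means the adapted layer starts out identical to the pretrained one, and the trainable parameter count drops from `d*d` to `2*d*r`, which is what makes specializing a 27B-parameter model on a niche dataset tractable.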

From Waveforms to Wisdom: The New Benchmark for Auditory Intelligence

Google Research has introduced the Massive Sound Embedding Benchmark (MSEB) to unify the fragmented landscape of machine sound intelligence. By standardizing the evaluation of eight core auditory capabilities across diverse datasets, the framework reveals that current sound representations are far from universal and have significant performance "headroom" for improvement. Ultimately, MSEB provides an open-source platform to drive the development of general-purpose sound embeddings for next-generation multimodal AI.

### Diverse Datasets for Real-World Scenarios

The benchmark utilizes a curated collection of high-quality, accessible datasets designed to reflect global diversity and complex acoustic environments.

* **Simple Voice Questions (SVQ):** A foundational dataset featuring 177,352 short spoken queries across 17 languages and 26 locales, recorded in varying conditions like traffic and media noise.
* **Speech-MASSIVE:** Used for multilingual spoken language understanding and intent classification.
* **FSD50K:** A large-scale dataset for environmental sound event recognition containing 200 classes based on the AudioSet Ontology.
* **BirdSet:** A massive-scale benchmark specifically for avian bioacoustics and complex soundscape recordings.

### Eight Core Auditory Capabilities

MSEB is structured around "super-tasks" that represent the essential functions an intelligent auditory system must perform within a multimodal context.

* **Retrieval and Reasoning:** These tasks simulate voice search and the ability of an assistant to find precise answers within documents based on spoken questions.
* **Classification and Transcription:** Standard perception tasks that categorize sounds by environment or intent and convert audio signals into verbatim text.
* **Segmentation and Clustering:** These involve identifying and localizing salient terms with precise timestamps and grouping sound samples by shared attributes without predefined labels.
* **Reranking and Reconstruction:** Advanced tasks that reorder ambiguous text hypotheses to match spoken queries and test embedding quality by regenerating original audio waveforms.

### Unified Evaluation and Performance Goals

The framework is designed to move beyond fragmented research by providing a consistent structure for evaluating different model architectures.

* **Model Agnostic:** The open framework allows for the evaluation of uni-modal, cascade, and end-to-end multimodal embedding models.
* **Objective Baselines:** By establishing clear performance goals, the benchmark highlights specific research opportunities where current state-of-the-art models fall short of their potential.
* **Multimodal Integration:** Every task assumes sound is the critical input but incorporates other modalities, such as text context, to better simulate real-world AI interactions.

By providing a comprehensive roadmap for auditory intelligence, MSEB encourages the community to move toward universal sound embeddings. Researchers can contribute to this evolving standard by accessing the open-source GitHub repository and utilizing the newly released datasets on Hugging Face to benchmark their own models.
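As a minimal sketch of what one such super-task evaluation looks like, the snippet below scores a retrieval task with recall@1 under cosine similarity. The "embeddings" are synthetic stand-ins rather than outputs of a real sound encoder, and the metric choice is an assumption for illustration, not MSEB's specified scoring.

```python
import numpy as np

def recall_at_1(query_emb, doc_emb):
    """Fraction of queries whose nearest document (by cosine similarity)
    is the correct one, assuming query i should retrieve document i."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    nearest = (q @ d.T).argmax(axis=1)
    return float((nearest == np.arange(len(q))).mean())

rng = np.random.default_rng(0)
docs = rng.normal(size=(50, 16))                         # document embeddings
queries = docs + rng.normal(scale=0.1, size=docs.shape)  # noisy spoken queries
score = recall_at_1(queries, docs)
```

A benchmark like MSEB standardizes exactly this kind of harness, so that different embedding models can be swapped in behind the same task definitions and scores.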

Reducing EV range anxiety: How a simple AI model predicts port availability

Google Research has developed a lightweight AI model designed to predict the probability of EV charging port availability at specific future intervals, directly addressing the "range anxiety" experienced by electric vehicle drivers. By co-designing the model with deployment infrastructure, researchers found that a simple linear regression approach outperformed more complex architectures like neural networks and decision trees. The resulting system effectively predicts availability changes during high-turnover periods, providing more reliable navigation and planning data than traditional "no-change" assumptions.

### Model Architecture and Feature Selection

* The development team prioritized a minimal feature set to ensure low-latency deployment and high speed in real-world navigational applications.
* After testing various architectures, a straightforward linear regression model was selected for its robustness and superior performance in this specific predictive task.
* The model was trained using real-time availability data from diverse geographical regions, specifically California and Germany, with an emphasis on larger charging stations that reflect high-traffic usage patterns.

### Temporal Feature Weights and Occupancy Trends

* The model uses the hour of the day as a primary feature, treating each hour as an independent variable to capture specific daily cycles.
* Learned numerical "weights" dictate the predicted rate of occupancy change: positive weights indicate ports are becoming occupied (e.g., during morning rush), while negative weights indicate ports are being freed up (e.g., during evening hours).
* The system is designed to only deviate from the current occupancy state when the change rate is statistically significant or when a station's large size amplifies the likelihood of a status change.

### Performance Benchmarking and Validation

* The model was evaluated against a "Keep Current State" baseline, which assumes future availability will be identical to the present status—a difficult baseline to beat since port status remains unchanged roughly 90% of the time over 30-minute windows.
* Accuracy was measured using Mean Squared Error (MSE) and Mean Absolute Error (MAE) over 30-minute and 60-minute time horizons across 100 randomly selected stations.
* Testing confirmed that the linear regression model provides its greatest value during infrequent but critical moments of high turnover, successfully identifying when a station is likely to become full or available.

The success of this model demonstrates that sophisticated deep learning is not always the optimal solution for infrastructure challenges. By combining intuitive real-world logic—such as driver schedules and station capacity—with simple machine learning techniques, developers can create highly efficient tools that significantly improve the EV user experience without requiring massive computational overhead.
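A minimal sketch of this kind of model, assuming one-hot hour-of-day features, synthetic occupancy data, and an illustrative significance threshold (none of which are the production system's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-hour occupancy change rates for one station: positive in the
# morning rush (ports filling up), negative in the evening (ports freeing up).
true_rates = np.array([0.0] * 6 + [0.3] * 4 + [0.0] * 8 + [-0.25] * 4 + [0.0] * 2)

hours = rng.integers(0, 24, size=2000)
X = np.eye(24)[hours]                     # one-hot: each hour is its own feature
y = X @ true_rates + rng.normal(0.0, 0.05, size=2000)

# Ordinary least squares yields one learned weight per hour of day.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_occupancy(current, hour, threshold=0.1):
    """Deviate from the current state only when the learned change rate
    for this hour is large enough to matter (illustrative threshold)."""
    rate = weights[hour]
    if abs(rate) < threshold:
        return current                    # fall back to "keep current state"
    return float(np.clip(current + rate, 0.0, 1.0))
```

Because each hour is an independent feature, the learned weights directly read off the daily cycle, and the thresholded prediction reproduces the design choice of deviating from the "Keep Current State" baseline only at high-turnover hours.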

Real-time speech-to-speech translation

Google DeepMind and Google Core ML have developed an innovative end-to-end speech-to-speech translation (S2ST) model that enables real-time, voice-preserved communication with only a two-second delay. By replacing traditional cascaded pipelines with a streaming architecture trained on time-synchronized data, the system overcomes long-standing issues of high latency and accumulated errors. This advancement represents a significant shift toward natural, fluid cross-language dialogue that retains the original speaker's personality.

## Limitations of Cascaded S2ST

Traditional real-time translation systems typically rely on a cascaded chain of three distinct AI models: Automatic Speech Recognition (ASR), Automatic Speech Translation (AST), and Text-to-Speech (TTS). This approach suffers from several critical drawbacks:

* **High Latency:** Processing through three separate stages results in a 4–5 second delay, forcing users into unnatural, turn-based interactions.
* **Error Propagation:** Inaccuracies in the initial transcription or translation phase accumulate, often leading to garbled or incorrect final audio output.
* **Loss of Identity:** General-purpose TTS engines generate generic voices, stripping the communication of the original speaker’s unique vocal characteristics.

## Time-Synced Data Acquisition Pipeline

To train an end-to-end model capable of low-latency output, researchers created a scalable pipeline that transforms raw audio into a specialized time-synchronized dataset.

* **Alignment Multi-mapping:** The process uses forced alignment algorithms to map source audio to source text, source text to translated text, and finally, translated text to generated speech.
* **Voice Preservation:** A custom TTS engine generates the target language audio while intentionally preserving the vocal characteristics of the original speaker.
* **Strict Validation:** Automated filters discard any segments where alignments fail or where the translated audio cannot meet specific real-time delay requirements.
* **Data Augmentation:** The training set is further refined using techniques such as sample rate reduction, denoising, and reverberation to ensure the model performs well in real-world environments.

## End-to-End Streaming Architecture

The model’s architecture is designed for continuous audio streams, leveraging the AudioLM framework and fundamental transformer blocks to make real-time decisions.

* **Streaming Encoder:** This component summarizes source audio data by focusing on the preceding 10-second window of input.
* **Streaming Decoder:** This module predicts translated audio autoregressively, utilizing compressed encoder states and previous predictions to maintain flow.
* **RVQ Audio Tokens:** The system represents audio as a 2D set of Residual Vector Quantization (RVQ) tokens, where the X-axis represents time and the Y-axis represents audio quality/fidelity.
* **SpectroStream Integration:** By using SpectroStream codec technology, the model manages hierarchical audio representations, allowing it to prioritize the sequential output of audio segments for immediate playback.

This technology effectively bridges the gap between high-quality translation and real-time responsiveness. For developers and researchers in the field, the transition from modular cascaded systems to end-to-end streaming architectures—supported by rigorous time-aligned datasets—is the recommended path for achieving truly seamless human-to-human cross-language communication.
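The RVQ token idea can be sketched as follows: each audio frame is encoded as a stack of codebook indices, with every level quantizing the residual left by the previous one. The depth, codebook size, and random (untrained) codebooks here are illustrative assumptions, not SpectroStream's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, codebook_size, dim = 4, 32, 8               # illustrative sizes
codebooks = rng.normal(size=(depth, codebook_size, dim))

def rvq_encode(frame):
    """One column of the 2D token grid: `depth` indices for a single frame."""
    residual, tokens = frame.copy(), []
    for level in range(depth):                     # Y-axis: fidelity levels
        idx = int(np.linalg.norm(codebooks[level] - residual, axis=1).argmin())
        tokens.append(idx)
        residual = residual - codebooks[level][idx]  # quantize what remains
    return tokens

def rvq_decode(tokens):
    # Summing the selected codes reconstructs the frame coarse-to-fine.
    return sum(codebooks[level][idx] for level, idx in enumerate(tokens))

frame = rng.normal(size=dim)
tokens = rvq_encode(frame)
reconstruction = rvq_decode(tokens)
```

Stacking these per-frame token columns over time yields the 2D grid the post describes, and emitting the coarse levels first is what lets a streaming decoder start playback before the fine-fidelity levels arrive.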

Separating natural forests from other tree cover with AI for deforestation-free supply chains

Researchers from Google DeepMind and Google Research have developed "Natural Forests of the World 2020," an AI-powered global map that distinguishes natural ecosystems from commercial tree plantations. By utilizing high-resolution satellite data and machine learning, the project provides a critical 10-meter resolution baseline to support deforestation-free supply chain regulations like the EUDR. This tool enables governments and companies to monitor biodiversity-rich areas with unprecedented accuracy, ensuring that natural forests are protected from industrial degradation.

**The Limitation of Traditional Tree Cover Maps**

* Existing maps frequently conflate all woody vegetation into a generic "tree cover" category, leading to "apples-to-oranges" comparisons between different land types.
* This lack of distinction makes it difficult to differentiate between the harvesting of short-term plantations and the permanent loss of ancient, biodiversity-rich natural forests.
* Precise mapping is now a legal necessity due to regulations like the European Union Regulation on Deforestation-free Products (EUDR), which bans products from land deforested or degraded after December 31, 2020.

**The MTSViT Modeling Approach**

* To accurately identify forest types, researchers developed the Multi-modal Temporal-Spatial Vision Transformer (MTSViT).
* Rather than relying on a single snapshot, the AI "observes" 1280 x 1280 meter patches over the course of a year to identify seasonal, spectral, and textural signatures.
* The model integrates multi-modal data, including Sentinel-2 satellite imagery, topographical information (such as elevation and slope), and specific geographical coordinates.
* This temporal-spatial analysis allows the AI to recognize the complex patterns of natural forests that distinguish them from the uniform, fast-growing structures of commercial plantations.

**Dataset Scale and Global Validation**

* The model was trained on a massive dataset comprising over 1.2 million global patches at 10-meter resolution.
* The final map provides seamless global coverage, achieving a best-in-class validation accuracy of 92.2% against an independent global dataset.
* The research was a collaborative effort involving the World Resources Institute and the International Institute for Applied Systems Analysis to ensure scientific rigor and practical utility.

The "Natural Forests of the World 2020" dataset is publicly available via Google Earth Engine and other open repositories. Organizations should leverage this high-resolution baseline to conduct environmental due diligence, support government monitoring, and target conservation efforts in preparation for global climate milestones like COP30.
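A toy illustration of the temporal signal such a model can exploit: over a year, a natural-forest pixel shows seasonal spectral variation while a uniform plantation canopy stays comparatively flat. The synthetic reflectance series and the amplitude threshold are assumptions for illustration, not MTSViT's learned features.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12)

# Synthetic yearly reflectance series for a single 10 m pixel.
natural = 0.6 + 0.15 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.01, 12)
plantation = 0.6 + rng.normal(0, 0.01, 12)   # uniform canopy, little seasonality

def seasonal_amplitude(series):
    return float(series.max() - series.min())

# Illustrative decision rule: a strong seasonal swing suggests natural forest.
is_natural = seasonal_amplitude(natural) > 0.1
```

A single-snapshot classifier sees two pixels with the same mean reflectance; only the year-long view separates them, which is why the model ingests a full annual time series per patch rather than one image.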

Differentially private machine learning at scale with JAX-Privacy

Google DeepMind and Google Research have announced the release of JAX-Privacy 1.0, a high-performance library designed to scale differentially private (DP) machine learning. By leveraging JAX’s native parallelization and functional programming model, the toolkit enables researchers to train large-scale foundation models while maintaining rigorous privacy guarantees. This version introduces modular components for advanced algorithms and empirical auditing, making private training both computationally efficient and verifiable across distributed environments.

### Scaling Differential Privacy with JAX

* The library is built directly on the JAX ecosystem, integrating seamlessly with Flax for neural network architectures and Optax for optimization.
* It utilizes JAX’s `vmap` for automatic vectorization and `shard_map` for single-program multiple-data (SPMD) parallelization, allowing DP primitives to scale across multiple accelerators.
* By using just-in-time (JIT) compilation, the library mitigates the traditional performance overhead associated with per-example gradient clipping and noise addition.

### Core Components and Advanced Algorithms

* The toolkit provides fundamental building blocks for implementing standard DP algorithms like DP-SGD and DP-FTRL, including specialized modules for data batch construction.
* It supports state-of-the-art methods such as DP matrix factorization, which improves performance by injecting correlated noise across training iterations.
* Features like micro-batching and padding are included to handle the massive, variable-sized batches often required to achieve an optimal balance between privacy and model utility.

### Verification and Privacy Auditing

* JAX-Privacy incorporates rigorous privacy accounting based on Rényi Differential Privacy to provide precise tracking of privacy budgets.
* The library includes tools for empirical auditing, allowing developers to validate their privacy guarantees through techniques like membership inference attacks and data poisoning.
* The design ensures correctness in distributed settings, specifically focusing on consistent noise generation and gradient synchronization across clusters.

JAX-Privacy 1.0 is a robust solution for researchers and engineers who need to deploy production-grade private models. Its modular architecture and integration with high-performance computing primitives make it a primary choice for training foundation models on sensitive datasets without compromising on scalability or security.
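The per-example clipping and noise addition that the library accelerates can be sketched in plain numpy. This is a conceptual sketch of the DP-SGD gradient primitive, not JAX-Privacy's API; in the library, per-example gradients would come from `vmap` and the noise from carefully synchronized PRNG state.

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each example's gradient to `clip_norm`, sum, add Gaussian noise
    scaled by the clip norm, and average over the batch."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale            # every row now has norm <= C
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10)) * 5.0            # batch of 32 example gradients
private_grad = dp_sgd_gradient(grads, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Clipping bounds each example's influence on the update, so the Gaussian noise calibrated to `clip_norm` yields a formal privacy guarantee; doing this per example is exactly the overhead that `vmap` plus JIT compilation amortizes.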

Introducing Nested Learning: A new ML paradigm for continual learning

Google Research has introduced Nested Learning, a paradigm that treats machine learning models as systems of interconnected, multi-level optimization problems rather than separate architectures and training rules. By unifying structure and optimization through varying update frequencies, this approach aims to mitigate "catastrophic forgetting," the tendency for models to lose old knowledge when acquiring new skills. The researchers validated this framework through "Hope," a self-modifying architecture that outperforms current state-of-the-art models in long-context memory and language modeling.

### The Nested Learning Paradigm

This framework shifts the view of machine learning from a single continuous process to a set of coherent, nested optimization problems. Each component within a model is characterized by its own "context flow"—the specific set of information it learns from—and its own update frequency.

* The paradigm argues that architecture (structure) and optimization (training rules) are fundamentally the same concept, differing only by their level of computational depth and update rates.
* Associative memory is used as a core illustrative concept, where the training process (backpropagation) is modeled as a system mapping data points to local error values.
* By defining an update frequency rate for each component, researchers can order these problems into "levels," allowing for a more unified and efficient learning system inspired by the human brain's neuroplasticity.

### Deep Optimizers and Refined Objectives

Nested Learning provides a principled way to improve standard optimization algorithms by viewing them through the lens of associative memory modules.

* Existing momentum-based optimizers often rely on simple dot-product similarity, which fails to account for how different data samples relate to one another.
* By replacing these simple similarities with standard loss metrics, such as L2 regression loss, the researchers derived new formulations for momentum that are more resilient to imperfect or noisy data.
* This approach turns the optimizer itself into a deeper learning component with its own internal optimization objective.

### Continuum Memory Systems and the "Hope" Architecture

The paradigm addresses the limitations of Large Language Models (LLMs), which are often restricted to either their immediate input window or static pre-trained knowledge.

* The researchers developed "Hope," a proof-of-concept architecture that utilizes multi-time-scale updates for its internal components.
* While standard Transformers act primarily as short-term memory, the Nested Learning approach allows for "continuum memory" that manages long-context information more effectively.
* Experimental results show that this self-modifying architecture achieves superior performance in language modeling compared to existing state-of-the-art models.

By recognizing that every part of a model is essentially an optimizer operating at a different frequency, Nested Learning offers a path toward AI that can adapt to new experiences in real-time. This structural shift moves away from the "static pre-training" bottleneck and toward systems capable of true human-like neuroplasticity and lifelong learning.
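The update-frequency idea can be illustrated with a deliberately tiny example: two scalar "components" tracking the same signal, one updated every step and one every ten steps. After a distribution shift, the fast level adapts immediately while the slow level retains the older knowledge longer, which is the flavor of forgetting mitigation the paradigm targets. This is a toy analogy under stated assumptions, not the Hope architecture.

```python
# Two "levels" tracking one stream: the fast parameter updates every step,
# the slow one consolidates only every 10 steps (a lower update frequency).
fast, slow = 0.0, 0.0
fast_lr, slow_lr = 0.5, 0.5
history = []

signal = [1.0] * 50 + [-1.0] * 50        # abrupt distribution shift at t = 50

for t, x in enumerate(signal):
    fast += fast_lr * (x - fast)          # level 0: learns from every sample
    if t % 10 == 0:
        slow += slow_lr * (fast - slow)   # level 1: consolidates the fast level
    history.append((fast, slow))
```

Ten steps after the shift, the fast level has already flipped to the new regime while the slow level still encodes the old one, so the system as a whole holds both recent and older knowledge at once.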

DS-STAR: A state-of-the-art versatile data science agent

DS-STAR is an advanced autonomous data science agent developed to handle the complexity and heterogeneity of real-world data tasks, ranging from statistical analysis to visualization. By integrating a specialized file analysis module with an iterative planning and verification loop, the system can interpret unstructured data and refine its reasoning steps dynamically based on execution feedback. This architecture allows DS-STAR to achieve state-of-the-art performance on major industry benchmarks, effectively bridging the gap between natural language queries and executable, verified code.

## Comprehensive Data File Analysis

The framework addresses a major limitation of current agents—the over-reliance on structured CSV files—by implementing a dedicated analysis stage for diverse data formats.

* The system automatically scans a directory to extract context from heterogeneous formats, including JSON, unstructured text, and markdown files.
* A Python-based analysis script generates a textual summary of the data structure and content, which serves as the foundational context for the planning phase.
* This module ensures the agent can navigate complex, multi-file environments where critical information is often spread across non-relational sources.

## Iterative Planning and Verification Architecture

DS-STAR utilizes a sophisticated loop involving four specialized roles to mimic the workflow of a human expert conducting sequential analysis.

* **Planner and Coder:** A Planner agent establishes high-level objectives, which a Coder agent then translates into executable Python scripts.
* **LLM-based Verification:** A Verifier agent acts as a judge, assessing whether the generated code and its output are sufficient to solve the problem or if the reasoning is flawed.
* **Dynamic Routing:** If the Verifier identifies gaps, a Router agent guides the refinement process by adding new steps or correcting errors, allowing the cycle to repeat for up to 10 rounds.
* **Intermediate Review:** The agent reviews intermediate results before proceeding to the next step, similar to how data scientists use interactive environments like Google Colab.

## Benchmarking and State-of-the-Art Performance

The effectiveness of the DS-STAR framework was validated through rigorous testing against existing agents like AutoGen and DA-Agent.

* The agent secured the top rank on the public DABStep leaderboard, raising accuracy from 41.0% to 45.2% compared to previous best-performing models.
* Performance gains were consistent across other benchmarks, including KramaBench (39.8% to 44.7%) and DA-Code (37.0% to 38.5%).
* DS-STAR showed a significant advantage in "hard" tasks—those requiring the synthesis of information from multiple, varied data sources—demonstrating its superior versatility in complex environments.

By automating the time-intensive tasks of data wrangling and verification, DS-STAR provides a robust template for the next generation of AI assistants. Organizations looking to scale their data science capabilities should consider adopting iterative agentic workflows that prioritize multi-format data understanding and self-correcting execution loops.
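The Planner/Coder/Verifier/Router cycle can be sketched as a plain Python loop. The four roles below are stubbed with trivial callables rather than LLM calls, and the "dataset" and "code" are toy stand-ins; only the control flow mirrors the description above.

```python
MAX_ROUNDS = 10

def run_agent(question, planner, coder, verifier, router, executor):
    """Plan -> code -> execute -> verify, refining for up to MAX_ROUNDS."""
    plan = planner(question)
    result = None
    for _ in range(MAX_ROUNDS):
        result = executor(coder(plan))
        feedback = verifier(question, plan, result)
        if feedback is None:              # verifier judges the result sufficient
            return result
        plan = router(plan, feedback)     # add or correct steps, then retry
    return result

# Toy roles: the dataset is a dict and "code" is a plan-selected expression.
data = {"sales": [3, 5, 7]}
planner = lambda q: ["len"]                            # deliberately flawed plan
coder = lambda plan: {"len": "len(data['sales'])",
                      "sum": "sum(data['sales'])"}[plan[-1]]
executor = lambda code: eval(code)
verifier = lambda q, plan, r: None if r == 15 else "need the total, not a count"
router = lambda plan, fb: plan + ["sum"]               # refine the plan

answer = run_agent("What are total sales?", planner, coder, verifier, router, executor)
```

The first round produces a count, the Verifier rejects it, the Router extends the plan, and the second round returns the correct total: a two-round instance of the self-correcting loop described above.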

Forecasting the future of forests with AI: From counting losses to predicting risk

Research from Google DeepMind and Google Research introduces ForestCast, a deep learning-based framework designed to transition forest management from retrospective loss monitoring to proactive risk forecasting. By utilizing vision transformers and pure satellite data, the team has developed a scalable method to predict future deforestation that matches or exceeds the accuracy of traditional models dependent on inconsistent manual inputs. This approach provides a repeatable, future-proof benchmark for protecting biodiversity and mitigating climate change on a global scale.

### Limitations of Traditional Forecasting

* Existing state-of-the-art models rely on specialized geospatial maps, such as infrastructure development, road networks, and regional economic indicators.
* These traditional inputs are often "patchy" and inconsistent across different countries, requiring manual assembly that is difficult to replicate globally.
* Manual data sources are not future-proof; they tend to go out of date quickly with no guarantee of regular updates, unlike continuous satellite streams.

### A Scalable Pure-Satellite Architecture

* The ForestCast model adopts a "pure satellite" approach, using only raw inputs from Landsat and Sentinel-2 satellites.
* The architecture is built on vision transformers (ViTs) that process an entire tile of pixels in a single pass to capture critical spatial context and landscape-level trends.
* The model incorporates a satellite-derived "change history" layer, which identifies previously deforested pixels and the specific year the loss occurred.
* By avoiding socio-political or infrastructure maps, the method can be applied consistently to any region on Earth, allowing for meaningful cross-regional comparisons.

### Key Findings and Benchmark Release

* Research indicates that "change history" is the most information-dense input; a model trained on this data alone performs almost as well as those using raw multi-spectral data.
* The model successfully predicts tile-to-tile variation in deforestation amounts and identifies the specific pixels most likely to be cleared next.
* Google has released the training and evaluation data as a public benchmark dataset, focusing initially on Southeast Asia to allow the machine learning community to verify and improve upon the results.

The release of ForestCast provides a template for scaling predictive modeling to Latin America, Africa, and boreal latitudes. Conservationists and policymakers should utilize these forecasting tools to move beyond counting historical losses and instead direct resources toward "frontline" areas where the model identifies imminent risk of habitat conversion.
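A toy sketch of why the "change history" layer is so informative: deforestation tends to spread from existing frontiers, so even a crude count of already-cleared neighbors yields a usable risk ranking. The grid, kernel, and scoring below are illustrative assumptions, not ForestCast's learned model.

```python
import numpy as np

# 1 = pixel deforested in a prior year, 0 = intact forest.
history = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])

def risk_map(hist):
    """Score each intact pixel by the fraction of its 8 neighbors already lost."""
    n = hist.shape[0]
    padded = np.pad(hist, 1)
    neighbors = np.zeros_like(hist, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                neighbors += padded[1 + dy:1 + dy + n, 1 + dx:1 + dx + n]
    return np.where(hist == 1, 0.0, neighbors / 8.0)  # skip already-lost pixels

risk = risk_map(history)
```

The intact pixel adjacent to the cleared cluster scores far higher than pixels deep in untouched forest, which mirrors the finding that change history alone carries most of the predictive signal.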