geospatial-analysis

3 posts


Separating natural forests from other tree cover with AI for deforestation-free supply chains

Researchers from Google DeepMind and Google Research have developed "Natural Forests of the World 2020," an AI-powered global map that distinguishes natural ecosystems from commercial tree plantations. By utilizing high-resolution satellite data and machine learning, the project provides a critical 10-meter-resolution baseline to support deforestation-free supply chain regulations like the EUDR. This tool enables governments and companies to monitor biodiversity-rich areas with unprecedented accuracy, ensuring that natural forests are protected from industrial degradation.

### The Limitation of Traditional Tree Cover Maps

* Existing maps frequently conflate all woody vegetation into a generic "tree cover" category, leading to "apples-to-oranges" comparisons between different land types.
* This lack of distinction makes it difficult to differentiate between the harvesting of short-term plantations and the permanent loss of ancient, biodiversity-rich natural forests.
* Precise mapping is now a legal necessity due to regulations like the European Union Regulation on Deforestation-free Products (EUDR), which bans products from land deforested or degraded after December 31, 2020.

### The MTSViT Modeling Approach

* To accurately identify forest types, researchers developed the Multi-modal Temporal-Spatial Vision Transformer (MTSViT).
* Rather than relying on a single snapshot, the AI "observes" 1280 x 1280 meter patches over the course of a year to identify seasonal, spectral, and textural signatures.
* The model integrates multi-modal data, including Sentinel-2 satellite imagery, topographical information (such as elevation and slope), and specific geographical coordinates.
* This temporal-spatial analysis allows the AI to recognize the complex patterns of natural forests that distinguish them from the uniform, fast-growing structures of commercial plantations.
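A multi-modal temporal-spatial input of this kind can be sketched as a single stacked tensor. The dimensions and variable names below are illustrative assumptions, not details from the paper: a 1280 m patch at 10 m resolution is 128 × 128 pixels, monthly composites give 12 time steps, and 10 Sentinel-2 bands are assumed.

```python
import numpy as np

# Assumed dimensions: 1280 m / 10 m = 128 pixels per side; monthly
# composites over one year give 12 time steps.
T, H, W = 12, 128, 128
N_BANDS = 10  # assumed number of Sentinel-2 bands used

rng = np.random.default_rng(0)

# Stand-ins for real inputs (random here; a real pipeline would read rasters).
s2_series = rng.random((T, N_BANDS, H, W))   # monthly Sentinel-2 composites
elevation = rng.random((1, H, W))            # static DEM layer
slope = rng.random((1, H, W))                # static slope layer
lat, lon = -3.1, 114.5                       # hypothetical patch-centre coords

# Broadcast the static layers and coordinates across the time axis so the
# transformer consumes one (time, channels, height, width) tensor.
static = np.concatenate([elevation, slope,
                         np.full((1, H, W), lat),
                         np.full((1, H, W), lon)])
static_series = np.broadcast_to(static, (T, *static.shape))
patch = np.concatenate([s2_series, static_series], axis=1)
print(patch.shape)  # (12, 14, 128, 128)
```

Whether MTSViT fuses modalities by channel concatenation or by separate encoders is not stated in the post; this sketch only shows the simplest option.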
### Dataset Scale and Global Validation

* The model was trained on a massive dataset comprising over 1.2 million global patches at 10-meter resolution.
* The final map provides seamless global coverage, achieving a best-in-class validation accuracy of 92.2% against an independent global dataset.
* The research was a collaborative effort involving the World Resources Institute and the International Institute for Applied Systems Analysis to ensure scientific rigor and practical utility.

The "Natural Forests of the World 2020" dataset is publicly available via Google Earth Engine and other open repositories. Organizations should leverage this high-resolution baseline to conduct environmental due diligence, support government monitoring, and target conservation efforts in preparation for global climate milestones like COP30.
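Validating a map like this against an independent reference dataset comes down to a per-pixel confusion matrix. A minimal sketch with synthetic labels, where the three-class scheme and the ~8% error rate are assumptions chosen purely for illustration:

```python
import numpy as np

# Hypothetical 3-class scheme: 0 = non-forest, 1 = natural forest, 2 = plantation.
rng = np.random.default_rng(1)
reference = rng.integers(0, 3, size=100_000)   # independent validation labels
predicted = reference.copy()
flip = rng.random(reference.size) < 0.08       # mimic ~8% model error
predicted[flip] = (predicted[flip] + 1) % 3

# Confusion matrix: rows = reference class, columns = predicted class.
n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(cm, (reference, predicted), 1)       # unbuffered in-place counting

overall = np.trace(cm) / cm.sum()              # overall accuracy
producers = np.diag(cm) / cm.sum(axis=1)       # per-class producer's accuracy
print(f"overall accuracy: {overall:.1%}")
print("producer's accuracy per class:", np.round(producers, 3))
```

The actual 92.2% figure comes from the paper's independent global validation set, not from a synthetic exercise like this one.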


Forecasting the future of forests with AI: From counting losses to predicting risk

Research from Google DeepMind and Google Research introduces ForestCast, a deep learning-based framework designed to transition forest management from retrospective loss monitoring to proactive risk forecasting. By utilizing vision transformers and pure satellite data, the team has developed a scalable method to predict future deforestation that matches or exceeds the accuracy of traditional models dependent on inconsistent manual inputs. This approach provides a repeatable, future-proof benchmark for protecting biodiversity and mitigating climate change on a global scale.

### Limitations of Traditional Forecasting

* Existing state-of-the-art models rely on specialized geospatial maps, such as infrastructure development, road networks, and regional economic indicators.
* These traditional inputs are often "patchy" and inconsistent across different countries, requiring manual assembly that is difficult to replicate globally.
* Manual data sources are not future-proof; they tend to go out of date quickly with no guarantee of regular updates, unlike continuous satellite streams.

### A Scalable Pure-Satellite Architecture

* The ForestCast model adopts a "pure satellite" approach, using only raw inputs from Landsat and Sentinel-2 satellites.
* The architecture is built on vision transformers (ViTs) that process an entire tile of pixels in a single pass to capture critical spatial context and landscape-level trends.
* The model incorporates a satellite-derived "change history" layer, which identifies previously deforested pixels and the specific year the loss occurred.
* By avoiding socio-political or infrastructure maps, the method can be applied consistently to any region on Earth, allowing for meaningful cross-regional comparisons.

### Key Findings and Benchmark Release

* Research indicates that "change history" is the most information-dense input; a model trained on this data alone performs almost as well as those using raw multi-spectral data.
* The model successfully predicts tile-to-tile variation in deforestation amounts and identifies the specific pixels most likely to be cleared next.
* Google has released the training and evaluation data as a public benchmark dataset, focusing initially on Southeast Asia to allow the machine learning community to verify and improve upon the results.

The release of ForestCast provides a template for scaling predictive modeling to Latin America, Africa, and boreal latitudes. Conservationists and policymakers should utilize these forecasting tools to move beyond counting historical losses and instead direct resources toward "frontline" areas where the model identifies imminent risk of habitat conversion.
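A "change history" input of the kind described above can be encoded very simply. The sketch below assumes a Hansen-style per-pixel year-of-loss raster and derives two channels a model could consume; the layout, values, and channel choices are illustrative, not ForestCast's actual encoding.

```python
import numpy as np

# Hypothetical per-pixel year-of-loss raster: 0 = still forested,
# otherwise the year the pixel was cleared.
loss_year = np.array([
    [0,    0,    2015],
    [2019, 0,    2015],
    [2019, 2021, 0   ],
])

reference_year = 2022  # the year the forecast is issued

# Two simple channels a model could consume:
#   1) binary mask of previously deforested pixels
#   2) years elapsed since the loss (0 where still forested)
deforested = (loss_year > 0).astype(np.float32)
years_since = np.where(loss_year > 0,
                       reference_year - loss_year, 0).astype(np.float32)

change_history = np.stack([deforested, years_since])  # shape (2, H, W)
print(change_history[1])
```

The appeal of such a layer is that it is derived entirely from the satellite record itself, so it stays current anywhere on Earth without manual data assembly.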


Introducing Mobility AI: Advancing urban transportation

Google Research has introduced Mobility AI, a comprehensive program designed to provide transportation agencies with data-driven tools for managing urban congestion, road safety, and evolving transit patterns. By leveraging advancements in measurement, simulation, and optimization, the initiative translates decades of Google's geospatial research into actionable technologies for infrastructure planning and real-time traffic management. The program aims to empower policymakers and engineers to mitigate gridlock and environmental impacts through high-resolution modeling and continuous monitoring of urban transportation systems.

### Measurement: Understanding Mobility Patterns

The measurement pillar focuses on establishing a precise baseline of current transportation conditions using real-time and historical data.

* **Congestion Functions:** Researchers utilize machine learning and floating car data to develop city-wide models that mathematically describe the relationship between vehicle volume and travel speeds, even on roads with limited data.
* **Geospatial Foundation Models:** By applying self-supervised learning to movement patterns, the program creates embeddings that capture local spatial characteristics. This allows for better reasoning about urban mobility in data-sparse environments.
* **Analytical Formulation:** Specific research explores how adjusting traffic signal timing influences the distribution of flow across urban networks, revealing patterns in how congestion propagates.

### Simulation: Forecasting and Scenario Analysis

Mobility AI uses simulation technologies to create digital twins of cities, allowing planners to test interventions before implementing them physically.

* **Traffic Simulation API:** This tool enables the modeling of complex "what-if" scenarios, such as the impact of closing a major bridge or reconfiguring lane assignments on a highway.
* **High-Fidelity Calibration:** The simulations are calibrated using large-scale, real-world data to ensure that the virtual models accurately reflect local driver behavior and infrastructure constraints.
* **Scalable Evaluation:** These digital environments provide a risk-free way to assess how new developments, such as the rise of autonomous vehicles or e-commerce logistics, will reshape existing traffic patterns.

### Optimization: Improving Urban Flow

The optimization pillar focuses on applying AI to solve large-scale coordination problems, such as signal timing and routing efficiency.

* **Project Green Light:** This initiative uses AI to provide traffic signal timing recommendations to city engineers, specifically targeting a reduction in stop-and-go traffic to lower greenhouse gas emissions.
* **System-Wide Coordination:** Optimization algorithms work to balance the needs of multiple modes of transport, including public transit, cycling, and pedestrian infrastructure, rather than focusing solely on personal vehicles.
* **Integration with Google Public Sector:** Research breakthroughs from this program are being integrated into Google Maps Platform and Google Public Sector tools to provide agencies with accessible, enterprise-grade optimization capabilities.

Transportation agencies and researchers can leverage these foundational AI technologies to transition from reactive traffic management to proactive, data-driven policymaking. By participating in the Mobility AI program, public sector leaders can gain access to advanced simulation and measurement tools designed to build more resilient and efficient urban mobility networks.
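For a concrete sense of what a congestion function looks like, the classic Bureau of Public Roads (BPR) volume-delay curve relates travel time to the volume-to-capacity ratio. The post does not specify the functional form Google's learned models take, so this is only the textbook baseline with its standard parameters:

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):
    """Classic BPR volume-delay function: travel time grows
    polynomially with the volume-to-capacity ratio."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

# A hypothetical 1 km link with a 60 s free-flow time and a
# capacity of 1800 vehicles per hour:
for volume in (900, 1800, 2700):
    t = bpr_travel_time(60.0, volume, 1800)
    speed_kmh = 3600.0 / t  # km/h over the 1 km link
    print(f"{volume:>4} veh/h -> {t:6.1f} s, {speed_kmh:5.1f} km/h")
```

At capacity (volume/capacity = 1) the BPR curve adds 15% to free-flow travel time; beyond capacity, delay grows with the fourth power of the ratio, which is why speeds collapse quickly on oversaturated links.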