How Headspace built an AI companion that fosters trust and transparency
The Amplify Initiative by Google Research addresses the critical lack of linguistic and cultural diversity in generative AI training data by establishing an open, community-based platform for localized data collection. By partnering with regional experts to co-create structured, high-quality datasets, the initiative aims to ensure AI models are both representative and effective in solving local challenges across health, finance, and education. This approach shifts data collection from a top-down model to a participatory framework that prioritizes responsible, locally respectful practices in the Global South.

## The Amplify Platform Framework

The initiative is designed to bridge the gap between global AI capabilities and local needs through three core pillars:

* **Participatory Co-creation:** Researchers and local communities collaborate to define specific data needs, ensuring the resulting datasets address region-specific problems like financial literacy or localized health misinformation.
* **Open Access for Innovation:** The platform provides high-quality, multilingual datasets suitable for fine-tuning and evaluating models, specifically empowering developers in the Global South to build tools for their own communities.
* **Author Recognition:** Contributors receive tangible rewards, including professional certificates, research acknowledgments, and data authorship attribution, creating a sustainable ecosystem for expert participation.

## Pilot Implementation in Sub-Saharan Africa

To test the methodology, Google Research partnered with Makerere University's AI Lab in Uganda to conduct an on-the-ground pilot program.

* **Expert Onboarding:** The program trained 259 experts across Ghana, Kenya, Malawi, Nigeria, and Uganda through a combination of in-person workshops and app-based modules.
* **Dataset Composition:** The pilot resulted in 8,091 annotated adversarial queries across seven languages, covering salient domains such as education and finance.
* **Adversarial Focus:** By focusing on adversarial queries, the team captured localized nuances of potential AI harms, including regional stereotypes and specialized advice that generic models often miss.

## Technical Workflow and App-Based Methodology

The initiative utilizes a structured technical pipeline to scale data collection while maintaining high quality and privacy.

* **Privacy-Preserving Android App:** A dedicated app serves as the primary interface for training, data creation, and annotation, allowing experts to contribute from their own environments.
* **Automated Validation:** The app includes built-in feedback loops that use automated checks to ensure queries are relevant and to prevent the submission of semantically similar or duplicate entries.
* **Domain-Specific Annotation:** Experts are provided with specialized annotation topics tailored to their professional backgrounds, ensuring that the metadata for each query is technically accurate and contextually relevant.

The Amplify Initiative provides a scalable blueprint for building inclusive AI by empowering experts in the Global South to define their own data needs. As the project expands to India and Brazil, it offers a vital resource for developers seeking to fine-tune models for local contexts and improve the safety and relevance of AI on a global scale.
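The post does not describe how the app's automated checks are implemented. The sketch below shows one plausible shape for the near-duplicate rejection step, using bag-of-words cosine similarity as a stand-in for real sentence embeddings; `QueryValidator` and the 0.8 threshold are illustrative assumptions, not part of the Amplify platform.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

class QueryValidator:
    """Rejects submissions that are too similar to already-accepted queries."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.accepted: list[Counter] = []

    def submit(self, query: str) -> bool:
        vec = Counter(query.lower().split())
        # Reject if any accepted query is within the similarity threshold.
        if any(cosine_similarity(vec, prev) >= self.threshold
               for prev in self.accepted):
            return False
        self.accepted.append(vec)
        return True

validator = QueryValidator()
validator.submit("What herbal remedy cures malaria fast?")   # accepted
validator.submit("What herbal remedy cures malaria fast")    # rejected as near-duplicate
```

A production system would replace the bag-of-words vectors with embeddings from a multilingual encoder, since the pilot spanned seven languages.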
Discord CEO and co-founder Jason Citron has announced his transition out of the chief executive role, moving into a position on the Board of Directors and acting as a strategic advisor. Humam Sakhnini, a veteran of the gaming and live-services industry, has been appointed as the new CEO to lead the company through its next phase of growth and its eventual transition into a public company. This leadership shift is intended to align Discord’s executive expertise with the demands of public market operations and large-scale business expansion.

**Leadership Transition and Strategic Rationale**

* Jason Citron will step down as CEO and move to a Board Member and Advisor role to focus on long-term strategy.
* Humam Sakhnini is scheduled to officially begin his tenure as CEO on Monday, April 28.
* The transition is framed as a proactive move to "hire out of a job," placing a leader with specific experience in public markets at the helm as Discord prepares for an eventual IPO.
* Co-founder Stan Vishnevskiy and the existing executive team will remain in place to ensure continuity during the onboarding process.

**Humam Sakhnini’s Industry Background**

* Sakhnini brings over 15 years of experience in the gaming sector, specifically in scaling high-growth businesses and managing live services.
* He previously served as the Chief Strategy Officer at Activision Blizzard, where he provided strategic guidance for massive franchises including *World of Warcraft* and *Call of Duty*.
* He later led King (the creators of *Candy Crush*) as President, succeeding the original founders and overseeing the company’s growth and performance within the public market.
* His leadership style emphasizes long-term value creation, collaborative creative environments, and a focus on the user experience.

**Future Outlook and Operational Focus**

* The company will continue to prioritize its core mission of connecting users through games while exploring and growing new business lines.
* Leadership remains committed to maintaining Discord's culture of "giving a shit" about craft and customer experience while navigating more complex corporate milestones.
* The transition aims to stabilize the platform’s infrastructure for hundreds of millions of users while building the financial and operational rigor required for a public listing.

This transition marks a significant evolution for Discord as it moves from a founder-led startup to a mature organization eyeing the public markets. By installing a CEO with deep experience in the Activision Blizzard and King ecosystems, Discord is signaling a focus on professionalizing its operations and scaling its revenue models to meet the expectations of institutional investors.
Discord's initial message search architecture was designed to handle billions of messages using a sharded Elasticsearch configuration spread across two clusters. By sharding data by guilds and direct messages, the system prioritized fast querying and operational manageability for its growing user base. While this approach utilized lazy indexing and bulk processing to remain cost-effective, the rapid growth of the platform eventually revealed scalability limitations within the existing design.

### Sharding and Cluster Management

* The system utilized Elasticsearch as the primary engine, with messages sharded across indices based on the logical namespace of the Discord server (guild) or direct message (DM).
* This sharding strategy ensured that all messages for a specific guild were stored together, allowing for localized, high-speed query performance.
* Infrastructure was split across two distinct Elasticsearch clusters to keep individual indices smaller and more manageable.

### Optimized Indexing via Bulk Queues

* To minimize resource overhead, Discord implemented lazy indexing, only processing messages for search when necessary rather than indexing every message in real-time.
* A custom message queue allowed background workers to aggregate messages into chunks, maximizing the efficiency of Elasticsearch’s bulk-indexing API.
* This architecture allowed the system to remain performant and cost-effective by focusing compute power on active guilds rather than idling on unused data.

For teams building large-scale search infrastructure, Discord's early experience suggests that sharding by logical ownership (like guilds) and utilizing bulk-processing queues can provide significant initial scalability. However, as data volume reaches the multi-billion message threshold, it is essential to monitor for architectural "cracks" where sharding imbalances or indexing delays may require a transition to more robust distributed systems.
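Discord's exact queue and worker internals aren't published, but the batching step can be sketched. The function below (an illustrative assumption, not Discord's code) groups queued messages by their owning guild's index and emits NDJSON bodies in the action-plus-document format that Elasticsearch's `_bulk` endpoint expects:

```python
import json
from collections import defaultdict

def build_bulk_bodies(messages, chunk_size=500):
    """Group queued messages by guild index and emit Elasticsearch-style
    bulk request bodies (one action line + one document line per message)."""
    by_index = defaultdict(list)
    for msg in messages:
        # Shard by the owning guild so all of a guild's messages
        # land together in the same index.
        by_index[f"messages-guild-{msg['guild_id']}"].append(msg)

    bodies = []
    for index, docs in by_index.items():
        # Cap each bulk request at chunk_size documents.
        for start in range(0, len(docs), chunk_size):
            lines = []
            for doc in docs[start:start + chunk_size]:
                lines.append(json.dumps({"index": {"_index": index,
                                                   "_id": doc["id"]}}))
                lines.append(json.dumps({"content": doc["content"]}))
            # The bulk API requires a trailing newline.
            bodies.append("\n".join(lines) + "\n")
    return bodies
```

A background worker would drain the queue, call this, and POST each body to `_bulk`, amortizing indexing overhead across hundreds of messages per request.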
Discord’s Patch Notes series serves as a transparent log of the platform's continuous improvements in performance, reliability, and general usability. By combining developer-led bug squishing with direct community feedback, the team aims to deliver a more stable experience across all supported platforms.

### Community-Driven Bug Tracking

- Discord leverages the community-run r/DiscordApp subreddit to host a Bimonthly Bug Megathread.
- This channel allows users to report specific issues directly to the Engineering team, ensuring that user-facing bugs are identified and prioritized for future sprints.

### Early Access and Beta Testing

- iOS users have the option to join Discord’s TestFlight program to test upcoming features before they reach the general public.
- This experimental environment allows power users to help "squish" bugs in a live mobile context, providing the engineering team with critical data before a global release.

### Commit and Rollout Procedures

- The series clarifies that all listed fixes are officially committed and merged into the codebase.
- Because Discord uses a staged rollout system, these changes may take time to propagate to individual platforms and users even after the notes are published.

Users looking to contribute to the platform's stability should utilize the dedicated Reddit megathread for bug reporting or join the TestFlight program to provide early feedback on upcoming mobile builds.
The three Cs of Figma: A beginner’s guide to success
Google Research and DeepMind have introduced multimodal AMIE, an advanced research AI agent designed to conduct diagnostic medical dialogues that integrate text, images, and clinical documents. By building on Gemini 2.0 Flash and a novel state-aware reasoning framework, the system can intelligently request and interpret visual data such as skin photos or ECGs to refine its diagnostic hypotheses. This evolution moves AI diagnostic tools closer to real-world clinical practice, where visual evidence is often essential for accurate patient assessment and management.

### Enhancing AMIE with Multimodal Perception

To move beyond text-only limitations, researchers integrated vision capabilities that allow the agent to process complex medical information during a conversation.

* The system uses Gemini 2.0 Flash as its core component to interpret diverse data types, including dermatology images and laboratory reports.
* By incorporating multimodal perception, the agent can resolve diagnostic ambiguities that cannot be addressed through verbal descriptions alone.
* Preliminary testing with Gemini 2.5 Flash suggests that further scaling the underlying model continues to improve the agent's reasoning and diagnostic accuracy.

### Emulating Clinical Workflows via State-Aware Reasoning

A key technical contribution is the state-aware phase transition framework, which helps the AI mimic the structured yet flexible approach used by experienced clinicians.

* The framework orchestrates the conversation through three distinct phases: History Taking, Diagnosis & Management, and Follow-up.
* The agent maintains a dynamic internal state that tracks known information about the patient and identifies specific "knowledge gaps."
* When the system detects uncertainty, it strategically requests multimodal artifacts—such as a photo of a rash or an image of a lab result—to update its differential diagnosis.
* Transitions between conversation phases are only triggered once the system assesses that the objectives of the current phase have been sufficiently met.

### Evaluation through Simulated OSCEs

To validate the agent’s performance, the researchers developed a robust simulation environment to facilitate rapid iteration and standardized testing.

* The system was tested using patient scenarios grounded in real-world datasets, including the SCIN dataset for dermatology and PTB-XL for ECG measurements.
* Evaluation was conducted using a modified version of Objective Structured Clinical Examinations (OSCEs), the global standard for assessing medical students and professionals.
* In comparative studies, AMIE's performance was measured against primary care physicians (PCPs) to ensure its behavior, accuracy, and tone aligned with clinical standards.

This research demonstrates that multimodal AI agents can effectively navigate the complexities of a medical consultation by combining linguistic empathy with the technical ability to interpret visual clinical evidence. As these systems continue to evolve, they offer a promising path toward high-quality, accessible diagnostic assistance that mirrors the multimodal nature of human medicine.
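The state-aware phase transition framework described above can be sketched as a small state machine. This is a minimal illustration under stated assumptions: the per-phase objectives, the `request_image` action, and the class names are invented for the example and are not AMIE's internal schema.

```python
from dataclasses import dataclass, field

PHASES = ["history_taking", "diagnosis_management", "follow_up"]

@dataclass
class DialogueState:
    """Tracks what the agent knows and which phase objectives remain open."""
    phase_index: int = 0
    known_facts: set = field(default_factory=set)
    # Hypothetical per-phase objectives standing in for AMIE's real goals.
    objectives: dict = field(default_factory=lambda: {
        "history_taking": {"symptom_onset", "symptom_description"},
        "diagnosis_management": {"differential", "management_plan"},
        "follow_up": {"safety_netting"},
    })

    @property
    def phase(self) -> str:
        return PHASES[self.phase_index]

    def knowledge_gaps(self) -> set:
        return self.objectives[self.phase] - self.known_facts

    def record(self, fact: str) -> None:
        """Record new information; advance phase only once objectives are met."""
        self.known_facts.add(fact)
        while self.phase_index < len(PHASES) - 1 and not self.knowledge_gaps():
            self.phase_index += 1

    def next_action(self) -> str:
        gaps = self.knowledge_gaps()
        if "symptom_description" in gaps:
            # Visual uncertainty: request a multimodal artifact (e.g. a photo).
            return "request_image"
        return f"ask_about:{sorted(gaps)[0]}" if gaps else "close"
```

The key property mirrored here is that `record` never skips ahead: the phase index advances only when the current phase's knowledge gaps are empty, matching the "transitions only once objectives are met" behavior.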
Google Research has introduced a benchmarking pipeline and a dataset of over 11,000 synthetic personas to evaluate how Large Language Models (LLMs) handle tropical and infectious diseases (TRINDs). While LLMs excel at standard medical exams like the USMLE, this study reveals significant performance gaps when models encounter the regional context shifts and localized health data common in low-resource settings. The research concludes that integrating specific environmental context and advanced reasoning techniques is essential for making LLMs reliable decision-support tools for global health.

## Development of the TRINDs Synthetic Dataset

* Researchers created a dataset of 11,000+ personas covering 50 tropical and infectious diseases to address the lack of rigorous evaluation data for out-of-distribution medical tasks.
* The process began with "seed" templates based on factual data from the WHO, CDC, and PAHO, which were then reviewed by clinicians for clinical relevance.
* The dataset was expanded using LLM prompting to include diverse demographic, clinical, and consumer-focused augmentations.
* To test linguistic distribution shifts, the seed set was manually translated into French to evaluate how language changes impact diagnostic accuracy.

## Identifying Critical Performance Drivers

* Evaluations of Gemini 1.5 models showed that accuracy on TRINDs is lower than reported performance on standard U.S. medical benchmarks, indicating a struggle with "out-of-distribution" disease types.
* Contextual information is the primary driver of accuracy; the highest performance was achieved only when specific symptoms were combined with location and risk factors.
* The study found that symptoms alone are often insufficient for an accurate diagnosis, emphasizing that LLMs require localized environmental data to differentiate between similar tropical conditions.
* Linguistic shifts pose a significant challenge, as model performance dropped by approximately 10% when processing the French version of the dataset compared to the English version.

## Optimization and Reasoning Strategies

* Implementing Chain-of-Thought (CoT) prompting—where the model is directed to explain its reasoning step-by-step—led to a significant 10% increase in diagnostic accuracy.
* Researchers utilized an LLM-based "autorater" to scale the evaluation process, scoring answers as correct if the predicted diagnosis was meaningfully similar to the ground truth.
* In tests regarding social biases, the study found no statistically significant difference in performance across race or gender identifiers within this specific TRINDs context.
* Performance remained stable even when clinical language was swapped for consumer-style descriptions, suggesting the models are robust to variations in how patients describe their symptoms.

To improve the utility of LLMs for global health, developers should prioritize the inclusion of regional risk factors and location-specific data in prompts. Utilizing reasoning-heavy strategies like Chain-of-Thought and expanding multilingual training sets are critical steps for bridging the performance gap in underserved regions.
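The two levers the study highlights — adding location and risk-factor context, and eliciting step-by-step reasoning — both live in the prompt. The helper below is a minimal sketch of how such a prompt might be assembled; the field names and wording are illustrative assumptions, not the paper's actual prompt templates.

```python
def build_diagnostic_prompt(symptoms, location=None, risk_factors=None,
                            chain_of_thought=True):
    """Assemble a diagnostic prompt from a TRINDs-style persona.

    Context fields (location, risk factors) are optional so the same
    helper can reproduce the study's ablations: symptoms alone vs.
    symptoms plus full regional context.
    """
    parts = [f"Patient presentation: {symptoms}"]
    if location:
        parts.append(f"Location: {location}")
    if risk_factors:
        parts.append("Risk factors: " + ", ".join(risk_factors))
    parts.append("Question: What is the most likely diagnosis?")
    if chain_of_thought:
        # Chain-of-Thought instruction: ask for explicit reasoning
        # before the final answer.
        parts.append("Reason step by step about the symptoms, local disease "
                     "prevalence, and risk factors before giving a final answer.")
    return "\n".join(parts)

prompt = build_diagnostic_prompt(
    "fever, joint pain, and rash for three days",
    location="Lagos, Nigeria",
    risk_factors=["recent mosquito exposure"],
)
```

Toggling `location`/`risk_factors` off reproduces the "symptoms alone" condition that the study found insufficient, while `chain_of_thought=True` corresponds to the CoT variant that improved accuracy by roughly 10%.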
Figma's 2025 AI report: Perspectives from designers and developers

Figma’s AI report tells us how designers and developers are navigating the changing landscape.
Google Research, in collaboration with HHMI Janelia and Harvard, has introduced ZAPBench, a first-of-its-kind whole-brain activity dataset and benchmark designed to improve the accuracy of brain activity models. Using the larval zebrafish as a model organism, the project provides single-cell resolution recordings of approximately 70,000 neurons, capturing nearly the entire vertebrate brain in action. This resource allows researchers to bridge the gap between structural connectomics and dynamic functional activity to better understand how neural wiring generates complex behavior.

## Whole-Brain Activity in Larval Zebrafish

* The dataset focuses on the six-day-old larval zebrafish because it is small, transparent, and capable of complex behaviors like motor learning, hunting, and memory.
* Researchers used light-sheet microscopy to scan the brain in 3D slices, recording two hours of continuous activity.
* The fish were engineered with GCaMP, a genetically encoded calcium indicator that emits light when neurons fire, allowing for the visualization of real-time neural impulses.
* To correlate neural activity with behavior, the fish were placed in a virtual reality environment where stimuli—such as shifting water currents and light changes—were projected around them while tail muscle activity was recorded via electrodes.

## The ZAPBench Framework

* ZAPBench standardizes the evaluation of machine learning models in neuroscience, following the tradition of benchmarks in fields like computer vision and language modeling.
* The benchmark provides a high-quality dataset of 70,000 neurons, whereas previous efforts in other species often covered less than 0.1% of the brain.
* It challenges models to predict how neurons will respond to specific visual stimuli and behavioral patterns.
* Initial results presented at ICLR 2025 demonstrate that while simple linear models provide a baseline, advanced architectures like Transformers and Convolutional Neural Networks (CNNs) significantly improve prediction accuracy.

## Integrating Structure and Function

* While previous connectomics projects mapped physical neural connections, ZAPBench adds the "dynamic" layer of how those connections are used over time.
* The team is currently generating a comprehensive structural connectome for the exact same specimen used in the activity recordings.
* This dual approach will eventually allow scientists to investigate the direct relationship between precise physical wiring and the resulting patterns of neural activity across an entire vertebrate brain.

By providing an open-source dataset and standardized benchmark, ZAPBench enables the global research community to develop and compare more sophisticated models of neural dynamics, potentially leading to breakthroughs in how we simulate and understand vertebrate cognition.
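To make the "simple linear baseline" concrete: the crudest forecaster for a single neuron's calcium trace is an AR(1) model fit by least squares and rolled forward. The sketch below is a toy illustration of that baseline idea only — ZAPBench's actual linear baselines operate over many neurons and time lags.

```python
def fit_ar1(trace):
    """Fit y[t+1] ≈ a*y[t] + b to one activity trace by closed-form OLS."""
    x, y = trace[:-1], trace[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var if var else 0.0
    b = my - a * mx
    return a, b

def forecast(trace, steps, a, b):
    """Roll the fitted linear model forward from the last observed value."""
    preds, last = [], trace[-1]
    for _ in range(steps):
        last = a * last + b
        preds.append(last)
    return preds

# A decaying trace that exactly follows y[t+1] = 0.5*y[t] + 1.
trace = [4.0, 3.0, 2.5, 2.25, 2.125]
a, b = fit_ar1(trace)          # recovers a = 0.5, b = 1.0
```

Benchmarks like ZAPBench exist precisely to quantify how much architectures such as Transformers and CNNs improve on baselines of this kind across all ~70,000 neurons simultaneously.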
The art of art direction

Though they’re instrumental in shaping the look of everything a brand puts out, art directors rarely get the credit they deserve.
Google Research has introduced Mobility AI, a comprehensive program designed to provide transportation agencies with data-driven tools for managing urban congestion, road safety, and evolving transit patterns. By leveraging advancements in measurement, simulation, and optimization, the initiative translates decades of Google’s geospatial research into actionable technologies for infrastructure planning and real-time traffic management. The program aims to empower policymakers and engineers to mitigate gridlock and environmental impacts through high-resolution modeling and continuous monitoring of urban transportation systems.

### Measurement: Understanding Mobility Patterns

The measurement pillar focuses on establishing a precise baseline of current transportation conditions using real-time and historical data.

* **Congestion Functions:** Researchers utilize machine learning and floating car data to develop city-wide models that mathematically describe the relationship between vehicle volume and travel speeds, even on roads with limited data.
* **Geospatial Foundation Models:** By applying self-supervised learning to movement patterns, the program creates embeddings that capture local spatial characteristics. This allows for better reasoning about urban mobility in data-sparse environments.
* **Analytical Formulation:** Specific research explores how adjusting traffic signal timing influences the distribution of flow across urban networks, revealing patterns in how congestion propagates.

### Simulation: Forecasting and Scenario Analysis

Mobility AI uses simulation technologies to create digital twins of cities, allowing planners to test interventions before implementing them physically.

* **Traffic Simulation API:** This tool enables the modeling of complex "what-if" scenarios, such as the impact of closing a major bridge or reconfiguring lane assignments on a highway.
* **High-Fidelity Calibration:** The simulations are calibrated using large-scale, real-world data to ensure that the virtual models accurately reflect local driver behavior and infrastructure constraints.
* **Scalable Evaluation:** These digital environments provide a risk-free way to assess how new developments, such as the rise of autonomous vehicles or e-commerce logistics, will reshape existing traffic patterns.

### Optimization: Improving Urban Flow

The optimization pillar focuses on applying AI to solve large-scale coordination problems, such as signal timing and routing efficiency.

* **Project Green Light:** This initiative uses AI to provide traffic signal timing recommendations to city engineers, specifically targeting a reduction in stop-and-go traffic to lower greenhouse gas emissions.
* **System-Wide Coordination:** Optimization algorithms work to balance the needs of multiple modes of transport, including public transit, cycling, and pedestrian infrastructure, rather than focusing solely on personal vehicles.
* **Integration with Google Public Sector:** Research breakthroughs from this program are being integrated into Google Maps Platform and Google Public Sector tools to provide agencies with accessible, enterprise-grade optimization capabilities.

Transportation agencies and researchers can leverage these foundational AI technologies to transition from reactive traffic management to proactive, data-driven policymaking. By participating in the Mobility AI program, public sector leaders can gain access to advanced simulation and measurement tools designed to build more resilient and efficient urban mobility networks.
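The article does not give the mathematical form of its learned congestion functions, but the classic Bureau of Public Roads (BPR) volume-delay function illustrates what such a function relates: travel time rises slowly with volume until a road nears capacity, then degrades sharply. The sketch below implements the standard BPR formula, not Google's model.

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    """Classic BPR volume-delay function:
        t = t0 * (1 + alpha * (v / c) ** beta)

    free_flow_time: travel time on the empty road (minutes)
    volume/capacity: demand relative to what the road can carry
    alpha, beta: the conventional BPR defaults (0.15, 4)
    """
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

# At half capacity the delay is tiny; at 1.5x capacity it dominates.
light = bpr_travel_time(10.0, 500, 1000)    # ~10.1 minutes
jammed = bpr_travel_time(10.0, 1500, 1000)  # ~17.6 minutes
```

The machine-learned congestion functions described above generalize this idea: rather than fixing one parametric curve per road class, they are fit city-wide from floating car data, including for roads where little data exists.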
Discord has introduced Nameplates, a new visual customization feature that expands how users can express their identity across the platform. This addition builds on the existing suite of profile tools, moving beyond the standard profile card to personalize how a user’s display name appears in various lists. By integrating these designs into the Discord Shop, the platform provides a new way for users to distinguish themselves within their communities.

**Enhancing Global and Server Identity**

* Nameplates serve as a decorative layer for display names, making them stand out in member lists and chat interfaces where profiles are listed.
* This feature complements existing identity tools such as Avatar Decorations, Profile Effects, custom bios, and Nitro-exclusive profile colors.
* Like other decorative elements, Nameplates can be used to coordinate a specific aesthetic or "setup" alongside server-specific profiles.

**Platform Availability and Implementation**

* Users can browse and purchase Nameplates through the Discord Shop specifically on the desktop application.
* While purchasing is currently limited to the desktop client, the visual effects are cross-platform and will be visible to all users on both desktop and mobile devices.
* The feature is designed to be highly visible, reaching beyond the confines of the traditional profile pop-out or full-page view.

Users interested in further customizing their digital presence should check the desktop Shop to see how Nameplates integrate with their current Avatar Decorations and Profile Effects.
From screen to zine: Meet the makers using Figma for digital DIY
The upcoming month of April marks a significant milestone for video game media, with a dense schedule of high-profile film and television adaptations set to debut. This surge in content is framed through the perspectives of contributors Veronica, Cody, and Emi, who evaluate the success of past adaptations while identifying gaming intellectual properties that are ripe for the screen.

### April Adaptation Release Calendar

* **A Minecraft Movie:** The sandbox phenomenon makes its big-screen debut on April 4th, representing a major cinematic expansion for the franchise.
* **The Last of Us Season 2:** Following its critically acclaimed debut, the second season of the HBO series is scheduled to premiere on April 13th.
* **Until Dawn:** The cinematic horror-adventure game rounds out the month with a theatrical film release on April 25th.

### Community Insights and Future Outlook

* Contributors Veronica, Cody, and Emi share their personal favorite adaptations, highlighting the creative choices that successfully bridge the gap between interactive and passive storytelling.
* The discussion explores untapped gaming franchises that possess the narrative depth and world-building necessary to succeed beyond the console environment.

As the pipeline between the gaming and film industries continues to accelerate, viewers should monitor these April releases to see if they can maintain the high production standards set by recent genre successes.