line

A Business Trip to Japan After Only One Week

Joining the Developer Relations (DevRel) team at LINE Plus, a new employee was thrust into a high-stakes business trip to Japan just one week after onboarding to support major global tech events. This immersive experience allowed the new hire to rapidly absorb the company's engineering culture by facilitating cross-border collaboration and helping run large-scale technical conferences. Ultimately, the journey highlights how a proactive onboarding strategy and a culture of creative freedom enable DevRel professionals to bridge the gap between complex engineering feats and community engagement.

### Global Collaboration at Tech Week

* The trip centered on **Tech-Verse**, a global conference with simultaneous interpretation in Korean, English, and Japanese, where the focus was on keeping operations running smoothly across diverse technical sessions.
* Operational support was provided for **Hack Day**, an in-house hackathon that brought together engineers from multiple countries for rapid prototyping and technical problem-solving.
* The experience enabled direct coordination with DevRel teams from Japan, Thailand, Taiwan, and Vietnam, establishing a unified approach to technical branding and regional community support.
* Post-event responsibilities included translating the live experience into digital assets, such as "Shorts" video content and technical blog recaps, to sustain engagement after the physical event concluded.

### Modernizing Internal Technical Sharing

* The **Tech Talk** series, a long-standing tradition with over 78 sessions, became a platform for experimenting with "B-grade" humorous marketing (including quirky posters and cup holders) to drive offline participation in a remote-friendly work environment.
* In response to engineer feedback, the format shifted from passive lectures to **hands-on practical sessions** focused on AI implementation.
* Specific workshops demonstrated how to use tools like **Claude Code** and **ChatGPT** to automate workflows, such as generating weekly reports by integrating **Jira tickets with internal wikis** (a sketch of this kind of automation follows at the end of this entry).
* Preparation for these sessions involved detailed environment setup guides and troubleshooting protocols to ensure a seamless experience for participating developers.

### Scaling AI Literacy via AI Campus Day

* **AI Campus Day** was a large-scale event designed for over 3,000 participants, aimed at lowering the barrier to AI adoption across all departments.
* The "Event & Operation" role involved creating interactive AI photo zones using **Gemini** to familiarize employees with new internal AI tools in a low-pressure setting.
* Event production used AI-driven assets, including AI-generated voices and icons, to demonstrate the practical utility of these tools in standard business communication and video guides.
* The event's success relied on "participation design," ensuring that even non-technical staff could engage with AI concepts through hands-on play and peer mentoring.

For organizations looking to strengthen their technical culture, this experience suggests that integrating new hires into high-impact global projects immediately can be a powerful onboarding tool. Giving DevRel teams the psychological safety to experiment with unconventional marketing and hands-on technical workshops is essential for maintaining developer engagement in a hybrid work era.
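The Jira-to-report workflow described above can be pictured with a short script. This is a minimal sketch, not the team's actual tooling: it assumes the Jira Cloud REST API and an OpenAI-compatible client, and the instance URL, project key, and credentials are placeholders.

```python
"""Sketch: draft a weekly report from recent Jira tickets with an LLM."""
import os

import requests
from openai import OpenAI

JIRA_BASE_URL = "https://example.atlassian.net"  # hypothetical instance
JQL = "project = DEVREL AND updated >= -7d ORDER BY updated DESC"  # hypothetical project


def fetch_recent_issues() -> list[dict]:
    """Pull issues updated in the last 7 days via Jira's search endpoint."""
    resp = requests.get(
        f"{JIRA_BASE_URL}/rest/api/2/search",
        params={"jql": JQL, "fields": "summary,status", "maxResults": 50},
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]


def draft_weekly_report(issues: list[dict]) -> str:
    """Ask an LLM to turn the ticket list into a short status report."""
    tickets = "\n".join(
        f"- [{i['key']}] {i['fields']['summary']} ({i['fields']['status']['name']})"
        for i in issues
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Write a concise weekly status report."},
            {"role": "user", "content": f"Tickets updated this week:\n{tickets}"},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(draft_weekly_report(fetch_recent_issues()))
```

In the workshops the same idea was driven conversationally through Claude Code and ChatGPT; the script form just makes the moving parts (ticket query, prompt, model call) explicit.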

line

We held AI Campus Day to improve AI literacy

LY Corporation recently hosted "AI Campus Day," a large-scale internal event designed to bridge the gap between AI theory and practical workplace application for over 3,000 employees. By transforming their office into a learning campus, the company fostered a culture of "AI Transformation" through peer-led mentorship and task-specific experimentation. The event demonstrated that internal context and hands-on participation are far more effective than traditional external lectures at driving meaningful AI literacy and productivity gains.

## Hands-on Experience and Technical Support

* The curriculum featured 10 specialized sessions across three tracks (Common, Creative, and Engineering) to ensure relevance for every job function.
* Sessions ranged from foundational prompt engineering for non-developers to advanced technical topics like building Model Context Protocol (MCP) servers for engineers (see the minimal server sketch at the end of this entry).
* To ensure smooth execution, the organizers provided comprehensive "Session Guides" containing pre-configured account settings and specific prompt templates.
* The event maintained a high support ratio, with 26 teaching assistants (TAs) troubleshooting technical hurdles in real time and dedicated Slack channels for sharing live AI outputs.

## Peer-Led Mentorship and Internal Context

* Instead of hiring external consultants, the program featured 10 internal "AI Mentors" who shared how they integrated AI into their actual daily workflows at LY Corporation.
* Training focused exclusively on company-approved tools, including ChatGPT Enterprise, Gemini, and Claude Code, ensuring all demonstrations complied with internal security protocols.
* Internal mentors could provide the specific "company context" that external lecturers lack, such as integrating AI with existing proprietary systems and data.
* A rigorous three-stage quality control process (initial flow review, final end-to-end dry run, and technical rehearsal) ensured the educational quality of mentor-led sessions.

## Gamification and Cultural Engagement

* The event was framed as a "festival" rather than mandatory training, using campus-themed motifs like "enrollment" and "school attendance" to reduce psychological barriers.
* A "Stamp Rally" system encouraged participation by offering tiered rewards, including welcome kits, refreshments, and subscriptions to premium AI tools.
* Interactive exhibition booths let employees experience AI utility firsthand, such as an AI photo zone using Gemini to generate "campus-style" portraits and an AI Agent Contest booth.
* Strong executive support played a crucial role, with leadership encouraging staff to pause routine tasks for the day and focus entirely on AI experimentation and "playing" with new technologies.

To effectively scale AI literacy within a large organization, move away from passive, one-size-fits-all lectures. Success lies in leveraging internal experts who understand the specific security and operational constraints of the business, and in creating a low-pressure environment where employees can experiment with hands-on tasks relevant to their roles.
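To make the engineering-track topic concrete, here is a minimal MCP server using the official `mcp` Python SDK. The tool itself (a toy ticket lookup) is a hypothetical stand-in; the post does not describe the actual servers built in the session.

```python
"""Minimal Model Context Protocol (MCP) server sketch.

Requires the official SDK: pip install "mcp[cli]".
"""
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("campus-demo")


@mcp.tool()
def lookup_ticket(ticket_id: str) -> str:
    """Return the status of a ticket (toy stand-in for a real backend)."""
    fake_db = {"DEMO-1": "In Progress", "DEMO-2": "Done"}
    return fake_db.get(ticket_id, "Unknown ticket")


if __name__ == "__main__":
    # stdio transport lets MCP-capable clients such as Claude Code connect.
    mcp.run()
```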

google

Gemini provides automated feedback for theoretical computer scientists at STOC 2026

Google Research launched an experimental program for the STOC 2026 conference, using a specialized Gemini model to provide automated, rigorous feedback on theoretical computer science submissions. By identifying critical logical errors and proof gaps within a 24-hour window, the tool demonstrated that advanced AI can serve as a powerful pre-vetting collaborator for high-level mathematical research. The overwhelmingly positive reception from authors indicates that AI can effectively augment human peer review by improving paper quality before formal submission.

## Advanced Reasoning via Inference Scaling

- The tool used an advanced version of Gemini 2.5 Deep Think specifically optimized for mathematical rigor.
- It employed inference-scaling methods, allowing the model to explore and combine multiple candidate solutions and reasoning traces simultaneously (the sketch at the end of this entry illustrates the general pattern).
- This non-linear approach to problem-solving helps the model focus on the most salient technical issues while significantly reducing the likelihood of hallucinations.

## Structured Technical Feedback

- Feedback was delivered in a structured format that included a high-level summary of the paper's core contributions.
- The model provided a detailed analysis of potential mistakes, specifically targeting errors in lemmas, theorems, and logical proofs.
- Authors also received a categorized list of minor corrections, such as inconsistent variable naming and typographical errors.

## Identified Technical Issues and Impact

- The pilot saw high engagement, with over 80% of STOC 2026 submitters opting in to the AI-generated review.
- The tool identified "critical bugs" and calculation errors that had evaded human authors for months.
- Survey results showed that 97% of participants found the feedback helpful, and 81% reported that the tool improved the overall clarity and readability of their work.

## Expert Verification and Hallucinations

- Because the users were domain experts, they could act as a filter, distinguishing deep technical insights from occasional model hallucinations.
- While the model sometimes struggled to parse complex notation or interpret figures, authors valued its "neutral tone" and the speed of the two-day turnaround.
- The feedback served as a starting point for human verification, allowing researchers to refine their arguments rather than blindly following the model's output.

## Future Outlook and Educational Potential

- Beyond professional research, 75% of surveyed authors see significant educational value in using the tool to train students in mathematical rigor.
- The experiment's success has led 88% of participants to express interest in continuous access to such a tool throughout their research and drafting process.

The success of the STOC 2026 pilot suggests that researchers should consider integrating specialized LLMs early in the drafting phase to catch "embarrassing" or logic-breaking errors. While the human expert remains the final arbiter of truth, these tools provide a layer of automated verification that can accelerate the pace of scientific discovery.
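Deep Think's internals are not public, so the following is only an illustrative sketch of the general inference-scaling pattern the post describes: sample several independent reasoning traces, then have the model reconcile them. It assumes the `google-genai` SDK and uses a placeholder model name.

```python
"""Illustrative inference-scaling sketch: sample and reconcile reviews."""
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment
MODEL = "gemini-2.5-flash"  # placeholder, not the Deep Think variant used


def review_paper(paper_text: str, n_traces: int = 4) -> str:
    prompt = (
        "Review this theoretical CS paper. List potential errors in lemmas, "
        "theorems, and proofs, justifying each finding:\n\n" + paper_text
    )
    # Sample several independent reasoning traces.
    traces = [
        client.models.generate_content(model=MODEL, contents=prompt).text
        for _ in range(n_traces)
    ]
    # Reconcile the traces, keeping only well-supported issues.
    merge_prompt = (
        "Here are independent draft reviews of one paper. Merge them, keep "
        "only issues backed by a concrete argument, and drop likely "
        "hallucinations:\n\n" + "\n\n---\n\n".join(traces)
    )
    return client.models.generate_content(model=MODEL, contents=merge_prompt).text
```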

kakao

What the AI TOP 100

The Kakao AI Native Strategy team built a complex competition system for the "AI TOP 100" event in just two weeks by replacing traditional waterfall methodology with an AI-centric approach. Using tools like Cursor and Claude Code, the team shifted the developer's role from manual coding to high-level orchestration and validation. The experiment demonstrates that AI does not replace developers but redefines the "standard" of productivity, moving the focus from execution speed to strategic decision-making.

### Rapid Prototyping as the New Specification

* The team eliminated traditional, lengthy planning documents and functional specifications.
* Every team member was tasked with creating a working prototype with AI based on their own interpretation of the project goals.
* One developer independently produced six different versions of the system, allowing the team to "see" ideas rather than read about them.
* Final requirements were established by reviewing and merging the best features of these functional prototypes, significantly reducing communication overhead.

### AI-Native Development and 99% Delegation

* Over 99% of the codebase was generated by AI tools like Claude Code and Cursor, with developers focusing on intent and review.
* One developer recorded an extreme 200 million tokens of usage in a single day to accelerate system completion.
* AI-driven productivity allowed a single frontend developer to manage the entire UI for both the preliminary and main rounds, a task that typically requires a much larger team.
* The development flow moved away from the linear "think-code-test" pattern to "dialogue-based" implementation in which ideas were instantly turned into code.

### PoC-Driven Development (PDD)

* The team adopted a "Proof of Concept (PoC) Driven Development" model to handle high uncertainty and tight deadlines.
* Abstract concepts were immediately fed into AI to generate functional PoC code and architectural drafts.
* The human role shifted from "writing from scratch" to "judging and selecting" the most viable AI-generated outputs.
* This approach let the team bypass resource limitations by prioritizing speed and functional verification over perfectionist documentation.

### Human Governance and the Role of Experience

* Internal conflicts occasionally arose when different AI models suggested equally "logical" but conflicting architectural solutions.
* Senior developers played a critical role in breaking these deadlocks by applying real-world experience with long-term maintainability and system constraints.
* While AI provided the "engine" for speed, human intuition remained the "steering wheel" that kept the system aligned with organizational standards.
* The project highlighted that as AI handles more of the implementation, a developer's ability to judge code quality and architectural fit becomes their most valuable asset.

This project serves as a blueprint for the future of software engineering, where AI is treated as a peer programmer rather than a simple tool. To stay competitive, development teams should move away from rigid waterfall processes and embrace a PoC-centric workflow that uses AI to collapse the distance between ideation and deployment.

google

Generative UI: A rich, custom, visual interactive user experience for any prompt

Google Research has introduced a novel Generative UI framework that enables AI models to dynamically construct bespoke, interactive user experiences (web pages, games, and functional tools) in response to any natural language prompt. This shift from static, predefined interfaces to AI-generated environments allows for highly customized digital spaces that adapt to a user's specific intent and context. In human evaluations, these custom-generated interfaces were strongly preferred over traditional, text-heavy LLM outputs, signaling a fundamental evolution in human-computer interaction.

### Product Integration in Gemini and Google Search

The technology is currently being deployed as an experimental feature across Google's main AI consumer platforms to enhance how users visualize and interact with data.

* **Dynamic View and Visual Layout:** These experiments in the Gemini app use agentic coding capabilities to design and code a complete interactive response for every prompt.
* **AI Mode in Google Search:** Available to Google AI Pro and Ultra subscribers, this feature uses Gemini 3's multimodal understanding to build instant, bespoke interfaces for complex queries.
* **Contextual Customization:** The system differentiates between user needs, such as providing a simplified interface for a child learning about the microbiome versus a data-rich layout for an adult.
* **Task-Specific Tools:** Beyond text, the system generates functional applications like fashion advisors, event planners, and science simulations for topics like RNA transcription.

### Technical Architecture and Implementation

The Generative UI implementation relies on a multi-layered approach centered on the Gemini 3 Pro model to ensure the generated code is both functional and accurate. (A toy sketch of this generate-then-check loop follows at the end of this entry.)

* **Tool Access:** The model is connected to server-side tools, including image generation and real-time web search, to enrich the UI with external data.
* **System Instructions:** Detailed guidance gives the model specific goals, formatting requirements, and technical specifications to avoid common coding errors.
* **Agentic Coding:** The model acts as both designer and developer, writing the code needed to render the UI on the fly based on its interpretation of the user's prompt.
* **Post-Processing:** Outputs undergo a series of automated checks to address common issues and refine the final visual experience before it reaches the browser.

### The Shift from Static to Generative Interfaces

This research represents a move away from the traditional software paradigm in which users must navigate a fixed catalog of applications to find the tool they need.

* **Prompt-Driven UX:** Interfaces are generated from prompts as simple as a single word or as complex as multi-paragraph instructions.
* **Interactive Comprehension:** By building simulations on the fly, the system creates a dynamic environment optimized for deep learning and task completion.
* **Preference Benchmarking:** Research indicates that when generation speed is excluded as a factor, users significantly prefer these custom-built visual tools over standard, static AI responses.

To experience this new paradigm, users can select the "Thinking" option from the model menu in Google Search's AI Mode or try the Dynamic View experiment in the Gemini app to generate tailored tools for specific learning or productivity tasks.
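As a rough illustration of the generate-then-check loop (not Google's actual pipeline, whose checks are not public), the sketch below asks a model for one self-contained HTML document and applies toy post-processing before rendering. It assumes the `google-genai` SDK and a placeholder model name.

```python
"""Toy generative-UI loop: generate a page, then sanity-check it."""
from google import genai

client = genai.Client()

SYSTEM = (
    "You are both designer and developer. Return ONE self-contained HTML "
    "document (inline CSS/JS, no external resources) that best serves the "
    "user's intent as an interactive tool. Return only the HTML."
)


def generate_ui(user_prompt: str) -> str:
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents=f"{SYSTEM}\n\nUser prompt: {user_prompt}",
    )
    html = response.text
    # Toy post-processing: reject obviously broken output before rendering.
    looks_like_html = "<html" in html.lower()
    no_external_js = "<script src=" not in html.lower()
    if not (looks_like_html and no_external_js):
        raise ValueError("Generated UI failed post-processing checks")
    return html


if __name__ == "__main__":
    with open("out.html", "w") as f:
        f.write(generate_ui("explain the microbiome to a nine-year-old"))
```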

google

StreetReaderAI: Towards making street view accessible via context-aware multimodal AI

StreetReaderAI is a research prototype designed to make immersive street-level imagery accessible to the blind and low-vision community through multimodal AI. By integrating real-time scene analysis with context-aware geographic data, the system transforms visual mapping data into an interactive, audio-first experience. This framework allows users to virtually explore environments and plan routes with a level of detail and independence previously unavailable through traditional screen readers.

### Navigation and Spatial Awareness

The system offers an immersive, first-person exploration interface that mimics the mechanics of accessible gaming.

* Users navigate with keyboard shortcuts or voice commands, taking "virtual steps" forward or backward and panning their view in 360 degrees.
* Real-time audio feedback provides cardinal and intercardinal directions, such as "Now facing North," to maintain spatial orientation.
* Distance tracking tells the user how far they have traveled between panoramic images, while "teleport" features allow quick jumps to specific addresses or landmarks.

### Context-Aware AI Describer

At the core of the tool is a Gemini-backed subsystem that synthesizes visual and geographic data to generate descriptions (a minimal sketch of this fusion appears at the end of this entry).

* The AI Describer combines the current field-of-view image with dynamic metadata about nearby roads, intersections, and points of interest.
* Two distinct modes cater to different user needs: a "Default" mode focused on pedestrian safety and navigation, and a "Tour Guide" mode that provides historical and architectural details.
* The system uses Gemini to proactively predict and suggest follow-up questions relevant to the specific scene, such as details about crosswalks or building entrances.

### Interactive Dialogue and Session Memory

StreetReaderAI uses the Multimodal Live API to facilitate real-time, natural language conversations about the environment.

* The AI Chat agent maintains a context window of roughly one million (1,048,576) tokens, allowing it to retain a "memory" of up to 4,000 previous images and interactions.
* This memory lets users ask retrospective spatial questions, such as "Where was that bus stop I just passed?", with the agent providing relative directions based on the user's current location.
* By tracking every pan and movement, the agent can surface details about the environment captured in earlier steps of the virtual walk.

### User Evaluation and Practical Application

Testing with blind screen reader users confirmed the system's utility in practical, real-world scenarios.

* Participants successfully used the prototype to evaluate potential walking routes, identifying critical environmental features like the presence of benches or shelters at bus stops.
* The study highlighted the importance of multimodal inputs (combining image recognition with structured map data) in producing descriptions more accurate and reliable than image analysis alone.

While StreetReaderAI remains a proof of concept, it demonstrates that integrating multimodal LLMs with spatial data can bridge significant accessibility gaps in digital mapping. Future implementations of these technologies could transform how visually impaired individuals interact with the world, turning static street imagery into a functional tool for independent mobility and exploration.
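The describer's key move, fusing the field-of-view image with map metadata in a single prompt, can be sketched as follows. This assumes the `google-genai` SDK with a placeholder model name; the metadata format and prompt wording are illustrative, not the prototype's actual implementation (which uses the Multimodal Live API for streaming dialogue).

```python
"""Sketch of a context-aware scene describer for a blind pedestrian."""
from google import genai
from google.genai import types

client = genai.Client()


def describe_scene(image_bytes: bytes, heading: str, nearby: list[str]) -> str:
    """Fuse one street-view frame with nearby map data in a single prompt."""
    prompt = (
        f"The user is facing {heading}. Nearby places from map data: "
        f"{', '.join(nearby)}. Describe this street scene for a blind "
        "pedestrian, prioritizing crossings, obstacles, and entrances."
    )
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            prompt,
        ],
    )
    return response.text


# Hypothetical call:
# describe_scene(open("fov.jpg", "rb").read(), "North",
#                ["Main St & 3rd Ave intersection", "bus stop 12 m ahead"])
```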

google

Google Earth AI: Unlocking geospatial insights with foundation models and cross-modal reasoning

Google Earth AI introduces a framework of geospatial foundation models and reasoning agents designed to solve complex, planetary-scale challenges through cross-modal reasoning. By integrating Gemini-powered orchestrators with specialized imagery, population, and environmental models, the system deconstructs multifaceted queries into actionable multi-step plans. This approach enables a holistic understanding of real-world events, such as disaster response and disease forecasting, by grounding AI insights in diverse geospatial data.

## Geospatial Reasoning Agents

* Uses Gemini models as intelligent orchestrators to manage complex queries that require data from multiple domains.
* The agent deconstructs a high-level question, such as predicting hurricane landfall and community vulnerability, into a sequence of smaller, executable tasks (the sketch at the end of this entry illustrates the pattern).
* It executes these plans by autonomously calling specialized foundation models, querying vast datastores, and using geospatial tools to fuse disparate data points into a single, cohesive answer.

## Remote Sensing and Imagery Foundations

* Employs vision-language models and open-vocabulary object detection trained on a large corpus of high-resolution overhead imagery paired with text descriptions.
* Enables "zero-shot" capabilities, allowing users to find specific objects like "flooded roads" or "building damage" using natural language, without retraining the model for specific classes.
* Technical evaluations show a 16% average improvement on text-based image search tasks and more than double the baseline accuracy for detecting novel objects in a zero-shot setting.

## Population Dynamics and Mobility

* Focuses on the interplay between people and places using globally consistent embeddings across 17 countries.
* Includes monthly updated embeddings that capture shifting human activity patterns, essential for time-sensitive forecasting.
* Research conducted with the University of Oxford showed that incorporating these population embeddings into a dengue fever forecasting model in Brazil improved the R² metric from 0.456 to 0.656 for long-range, 12-month predictions.

## Environmental and Disaster Forecasting

* Integrates established Google research on weather nowcasting, flood forecasting, and wildfire boundary mapping.
* Provides the reasoning agent with the data needed to evaluate environmental risks alongside population density and infrastructure imagery.
* Aims to give Search and Maps users real-time, accurate alerts about natural disasters, grounded in planetary-scale environmental data.

Developers and enterprises looking to solve high-level geospatial problems can now express interest in accessing these capabilities through Google Earth and Google Cloud. By leveraging these foundation models, organizations can automate the analysis of satellite imagery and human mobility data to better prepare for environmental and social challenges.
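The orchestration pattern, a Gemini planner calling domain tools, can be sketched with the `google-genai` SDK's automatic function calling. The two tool functions below are hypothetical stand-ins for Earth AI's specialized models, which are not publicly callable this way.

```python
"""Sketch: a reasoning agent fusing hypothetical geospatial tools."""
from google import genai
from google.genai import types

client = genai.Client()


def flood_risk(region: str) -> str:
    """Hypothetical stand-in for a flood-forecasting model."""
    return f"Flood risk for {region}: high along the coastal lowlands."


def population_density(region: str) -> str:
    """Hypothetical stand-in for a population-dynamics datastore."""
    return f"Population of {region}: 1.2M, densest near the river delta."


response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder model name
    contents=(
        "A hurricane is forecast to make landfall in the Azure Coast region. "
        "Which communities are most vulnerable, and why?"
    ),
    config=types.GenerateContentConfig(
        # The SDK calls the Python functions on the model's behalf and feeds
        # results back until the model can compose a final answer.
        tools=[flood_risk, population_density],
    ),
)
print(response.text)
```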

google

Teaching Gemini to spot exploding stars with just a few examples

Researchers have demonstrated that Google's Gemini model can classify cosmic events with 93% accuracy, rivaling specialized machine learning models while providing human-readable explanations. Using few-shot learning with only 15 examples per survey, the model addresses the "black box" limitation of the convolutional neural networks traditionally used in astronomy. This approach enables scientists to efficiently process the millions of alerts generated by modern telescopes while maintaining a transparent, interactive reasoning process.

## Bottlenecks in Modern Transient Astronomy

* Telescopes like the Vera C. Rubin Observatory are expected to generate up to 10 million alerts per night, making manual verification impossible.
* The vast majority of these alerts are "bogus" signals caused by satellite trails, cosmic rays, or instrumental artifacts rather than real supernovae.
* Existing specialized models often emit binary "real" or "bogus" labels without context, forcing astronomers either to trust the output blindly or to spend hours on manual verification.

## Multimodal Few-Shot Learning for Classification

* The research used few-shot learning, providing Gemini with only 15 annotated examples for each of three major surveys: Pan-STARRS, MeerLICHT, and ATLAS (a prompt-construction sketch follows at the end of this entry).
* Input data consisted of image triplets, each 100x100 pixels: a "new" alert image, a "reference" image of the same sky patch, and a "difference" image.
* The model generalized across telescopes with very different pixel scales, from 0.25" per pixel for Pan-STARRS to 1.8" per pixel for ATLAS.
* Beyond simple labels, Gemini generates a textual description of observed features and an interest score to help astronomers prioritize follow-up observations.

## Expert Validation and Self-Assessment

* A panel of 12 professional astronomers evaluated the model using a 0–5 coherence rubric, confirming that Gemini's logic aligned with expert reasoning.
* The study found that Gemini can effectively assess its own uncertainty; low self-assigned "coherence scores" were strong indicators of likely classification errors.
* This ability to flag its own potential mistakes allows the model to act as a reliable partner, alerting scientists when a specific case requires human intervention.

The transition from "black box" classifiers to interpretable AI assistants allows the astronomical community to scale with the data flood from next-generation telescopes. By combining high-accuracy classification with transparent reasoning, researchers can maintain scientific rigor while processing millions of cosmic events in real time.
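The few-shot setup can be pictured as one long multimodal prompt that interleaves labeled triplets with a query triplet. This is a sketch under assumptions (the `google-genai` SDK, a placeholder model name, PNG cutouts on disk), not the paper's exact prompt.

```python
"""Few-shot sketch: classify an alert triplet from labeled examples."""
from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client()


def img(path: Path) -> types.Part:
    return types.Part.from_bytes(data=path.read_bytes(), mime_type="image/png")


def classify(examples: list[tuple[Path, Path, Path, str]],
             query: tuple[Path, Path, Path]) -> str:
    """`examples` holds (new, reference, difference, label) per alert."""
    contents: list = [
        "Each alert is a triplet: new image, reference image, difference "
        "image. Classify the final triplet as REAL or BOGUS, describe the "
        "visual features, and give an interest score from 0 to 10."
    ]
    for new, ref, diff, label in examples:  # e.g., 15 labeled triplets
        contents += [img(new), img(ref), img(diff), f"Label: {label}"]
    contents += [img(query[0]), img(query[1]), img(query[2]),
                 "Label this final triplet:"]
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents=contents,
    )
    return response.text
```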

google

XR Blocks: Accelerating AI + XR innovation

XR Blocks is an open-source, cross-platform framework designed to bridge the gap between mature AI development ecosystems and high-friction extended reality (XR) prototyping. By providing a modular architecture and high-level abstractions, the toolkit lets creators rapidly build and deploy intelligent, immersive web applications without managing low-level system integration. Ultimately, the framework empowers developers to move from concept to interactive prototype across both desktop simulators and mobile XR devices using a unified codebase.

### Core Design Principles

* **Simplicity and Readability:** Drawing inspiration from the "Zen of Python," the framework prioritizes human-readable abstractions: a developer's script reads as a high-level description of the experience rather than boilerplate code.
* **Creator-Centric Workflow:** The architecture handles the "plumbing" of XR (sensor fusion, AI model integration, and cross-platform logic) so creators can focus entirely on user interaction and experience.
* **Pragmatic Modularity:** Rather than attempting to be a perfect, all-encompassing system, XR Blocks favors a simple, adaptable architecture that can evolve alongside the rapidly changing fields of AI and spatial computing.

### The Reality Model Abstractions

* **The Script Primitive:** Acts as the logical center of an application, separating the "what" of an interaction from the "how" of its underlying technical implementation.
* **User and World:** Provides built-in support for tracking hands, gaze, and avatars, while letting the system query the physical environment for depth, estimated lighting conditions, and object recognition.
* **AI and Agents:** Facilitates the integration of intelligent assistants, such as the "Sensible Agent," which can offer proactive, context-aware suggestions within the XR environment.
* **Virtual Interfaces:** Offers tools to augment blended reality with virtual UI elements that respond to the user's physical context.

### Technical Implementation and Integration

* **Web-Based Foundation:** The framework is built on accessible, standard technologies, including WebXR, three.js, and LiteRT (formerly TFLite), to ensure a low barrier to entry for web developers.
* **Advanced AI Support:** It features native integration with Gemini for high-level reasoning and context-aware applications.
* **Cross-Platform Deployment:** Developers can prototype depth-aware, physics-based interactions in a desktop simulator and deploy the exact same code to Android XR devices.
* **Open-Source Resources:** The project includes a comprehensive suite of templates and live demos covering use cases like depth mapping, gesture modeling, and lighting estimation.

By lowering the barrier to entry for intelligent XR development, XR Blocks offers a practical starting point for researchers and developers exploring the next generation of human-centered computing. Interested creators can access the source code on GitHub to begin building immersive, AI-driven applications that run seamlessly across the web and specialized XR hardware.