Hello, this is Hirano from LY Corporation. I develop the frontend for Yahoo! Finance and also serve as a Scrum Master. In addition, I am a member of the Orchestration guild, which runs the Orchestration Development Workshop (see reference), a company-wide workshop to promote AI adoption among all LY Corporation engineers. The Orchestration guild brings together engineers selected by the CTO to use AI more actively in the field…
Introducing Anthropic's Claude Opus 4.7 model in Amazon Bedrock
Today, we're announcing Claude Opus 4.7 in Amazon Bedrock, Anthropic's most intelligent Opus model for advancing performance across coding, long-running agents, and professional work. Claude Opus 4.7 is powered by A…
Google Research has introduced a novel Generative UI framework that enables AI models to dynamically construct bespoke, interactive user experiences, including web pages, games, and functional tools, in response to any natural language prompt. This shift from static, predefined interfaces to AI-generated environments allows for highly customized digital spaces that adapt to a user's specific intent and context. In human evaluations, these custom-generated interfaces were strongly preferred over traditional, text-heavy LLM outputs, signaling a fundamental evolution in human-computer interaction.
### Product Integration in Gemini and Google Search
The technology is currently being deployed as an experimental feature across Google’s main AI consumer platforms to enhance how users visualize and interact with data.
* **Dynamic View and Visual Layout:** These experiments in the Gemini app use agentic coding capabilities to design and code a complete interactive response for every prompt.
* **AI Mode in Google Search:** Available for Google AI Pro and Ultra subscribers, this feature uses Gemini 3’s multimodal understanding to build instant, bespoke interfaces for complex queries.
* **Contextual Customization:** The system differentiates between user needs, such as providing a simplified interface for a child learning about the microbiome versus a data-rich layout for an adult.
* **Task-Specific Tools:** Beyond text, the system generates functional applications like fashion advisors, event planners, and science simulations for topics like RNA transcription.
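The contextual-customization idea above can be illustrated with a small sketch. This is not Google's implementation; the `audience_instructions` helper and its audience profiles are hypothetical stand-ins for however the system actually conditions its UI-generation brief on the user:

```python
def audience_instructions(topic: str, audience: str) -> str:
    """Hypothetical sketch: tailor the UI-generation brief to the user.

    The same topic yields different briefs: a child gets a simplified,
    visual interface, while an adult gets a data-rich layout.
    """
    profiles = {
        "child": "Use large friendly visuals, minimal text, and simple words.",
        "adult": "Use a dense, data-rich layout with charts and references.",
    }
    # Fall back to the data-rich profile when the audience is unknown.
    style = profiles.get(audience, profiles["adult"])
    return f"Build an interactive page about {topic}. {style}"
```

For example, `audience_instructions("the microbiome", "child")` asks for a simplified interface, while passing `"adult"` requests a data-rich one; the point is only that the generation prompt, not a fixed template, carries the per-user adaptation.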
### Technical Architecture and Implementation
The Generative UI implementation relies on a multi-layered approach, centered on the Gemini 3 Pro model, to ensure the generated code is both functional and accurate.
* **Tool Access:** The model is connected to server-side tools, including image generation and real-time web search, to enrich the UI with external data.
* **System Instructions:** Detailed guidance provides the model with specific goals, formatting requirements, and technical specifications to avoid common coding errors.
* **Agentic Coding:** The model acts as both a designer and a developer, writing the necessary code to render the UI on the fly based on its interpretation of the user’s prompt.
* **Post-Processing:** Outputs undergo a series of automated checks to address common issues and refine the final visual experience before it reaches the browser.
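The layers above could be wired together roughly as follows. This is a speculative sketch, not Google's implementation: the `model_call` hook, the tool registry, and the tag-closing check are all illustrative stand-ins for the real system instructions, server-side tools, and automated post-processing checks:

```python
from html.parser import HTMLParser


def generate_ui(prompt: str, model_call, tools: dict) -> str:
    """Sketch of a generative-UI pipeline: system instructions plus tool
    access feed the model, and its output passes post-processing checks."""
    system_instructions = (
        "You are both designer and developer. Return a complete, "
        "self-contained HTML document tailored to the user's intent. "
        f"Available tools: {', '.join(tools)}."
    )
    # Agentic coding step: the model writes the UI code on the fly.
    html = model_call(system_instructions, prompt, tools)
    return post_process(html)


class _TagChecker(HTMLParser):
    """Minimal post-processing check: track container tags left open."""

    VOID = {"br", "img", "hr", "meta", "link", "input"}

    def __init__(self):
        super().__init__()
        self.open_tags = []

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID:
            self.open_tags.append(tag)

    def handle_endtag(self, tag):
        if self.open_tags and self.open_tags[-1] == tag:
            self.open_tags.pop()


def post_process(html: str) -> str:
    """Automated check before the UI reaches the browser: close any
    container tags the model forgot to close."""
    checker = _TagChecker()
    checker.feed(html)
    for tag in reversed(checker.open_tags):
        html += f"</{tag}>"
    return html
```

In practice the real checks would go far beyond unbalanced tags, but the shape is the same: the model output is treated as untrusted code and repaired or rejected before rendering, rather than sent straight to the browser.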
### The Shift from Static to Generative Interfaces
This research represents a move away from the traditional software paradigm where users must navigate a fixed catalog of applications to find the tool they need.
* **Prompt-Driven UX:** Interfaces are generated from prompts as simple as a single word or as complex as multi-paragraph instructions.
* **Interactive Comprehension:** By building simulations on the fly, the system creates a dynamic environment optimized for deep learning and task completion.
* **Preference Benchmarking:** Research indicates that when generation speed is excluded as a factor, users significantly prefer these custom-built visual tools over standard, static AI responses.
To experience this new paradigm, users can select the "Thinking" option from the model menu in Google Search’s AI Mode or engage with the Dynamic View experiment in the Gemini app to generate tailored tools for specific learning or productivity tasks.