Recreating the User's Voice
The development of NSona, an LLM-based multi-agent persona platform, addresses the persistent gap between user research and service implementation by transforming static data into real-time collaborative resources. By recreating user voices through a multi-party dialogue system, the project demonstrates how AI can serve as an active participant in the daily design and development process. Ultimately, the initiative highlights a fundamental shift in cross-functional collaboration, where traditional role boundaries dissolve in favor of a shared starting point centered on AI-driven user empathy.
Bridging UX Research and Daily Collaboration
- The project was born from the realization that traditional UX research often remains isolated from the actual development cycle, leading to a loss of insight during implementation.
- NSona transforms static user research data into dynamic "persona bots" that can interact with project members in real-time.
- The platform aims to turn the user voice into a "live" resource, allowing designers and developers to consult the persona during the decision-making process (a rough data-model sketch follows this list).
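The article does not show NSona's data model, but as a rough illustration of what "turning static research data into a persona bot" could look like, the following Python sketch renders interview findings as a system prompt that keeps the bot's voice anchored to real quotes. The `PersonaProfile` class, its field names, and the example data are hypothetical, not NSona's actual schema.

```python
# Hypothetical sketch: a persona definition built from static research
# findings, rendered as a system prompt for an LLM-backed persona bot.
from dataclasses import dataclass, field


@dataclass
class PersonaProfile:
    name: str
    goals: list[str]                 # what the user is trying to achieve
    pain_points: list[str]           # friction observed in research sessions
    quotes: list[str]                # verbatim quotes that anchor the voice
    behaviors: dict[str, str] = field(default_factory=dict)

    def to_system_prompt(self) -> str:
        """Render the research-backed profile as an LLM system prompt."""
        return (
            f"You are {self.name}, a user persona grounded in real research.\n"
            f"Goals: {'; '.join(self.goals)}\n"
            f"Pain points: {'; '.join(self.pain_points)}\n"
            f"Speak in a voice consistent with these quotes: "
            f"{' / '.join(self.quotes)}\n"
            "Stay in character and only claim experiences supported above."
        )


# Example: a profile assembled from interview notes.
commuter = PersonaProfile(
    name="Jiyoon, 34, daily transit commuter",
    goals=["check route delays in under 10 seconds"],
    pain_points=["too many taps to reach the live timetable"],
    quotes=["I just want the app to tell me if I should run for the bus."],
)
print(commuter.to_system_prompt())
```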
Agent-Centric Engineering and Multi-Party UX
- The system architecture is built on an agent-centric structure designed to handle the complexities of specific user behaviors and motivations.
- It utilizes a Multi-Party dialogue framework, enabling a collaborative environment where multiple AI agents and human stakeholders can converse simultaneously.
- Technical implementation focused on bridging the gap between qualitative UX requirements and LLM orchestration, ensuring the persona's responses remained grounded in actual research data (a dialogue-loop sketch follows this list).
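NSona's orchestration code is not published in the article, so the sketch below is only one minimal way a multi-party dialogue loop could be structured: several persona agents and a human stakeholder share a single conversation history. The `call_llm` placeholder, `PersonaAgent` class, and `multi_party_turn` function are illustrative assumptions, not NSona's API.

```python
# Hypothetical sketch of a multi-party dialogue loop: multiple persona agents
# answer in the same shared conversation as a human stakeholder.
from typing import Callable

LLMFn = Callable[[str, list[dict]], str]  # (system_prompt, history) -> reply


def call_llm(system_prompt: str, history: list[dict]) -> str:
    """Placeholder model call; replace with a real model client."""
    return f"[stub reply grounded in: {system_prompt[:40]}...]"


class PersonaAgent:
    def __init__(self, name: str, system_prompt: str, llm: LLMFn = call_llm):
        self.name = name
        self.system_prompt = system_prompt  # built from research data
        self.llm = llm

    def respond(self, history: list[dict]) -> dict:
        reply = self.llm(self.system_prompt, history)
        return {"speaker": self.name, "text": reply}


def multi_party_turn(message: str, agents: list[PersonaAgent],
                     history: list[dict]) -> list[dict]:
    """One round: the human speaks, then every persona answers in turn,
    each seeing the replies made earlier in the same round."""
    history.append({"speaker": "human", "text": message})
    for agent in agents:
        history.append(agent.respond(history))
    return history


# Usage: two personas weigh in on the same design question.
agents = [
    PersonaAgent("power_user", "You are a power user who values speed."),
    PersonaAgent("first_timer", "You are a first-time user who values clarity."),
]
log = multi_party_turn("Should onboarding skip the tutorial?", agents, [])
for turn in log:
    print(f"{turn['speaker']}: {turn['text']}")
```

Letting each agent see the replies made earlier in the same round is what makes the exchange genuinely multi-party rather than a set of parallel one-on-one chats.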
Service-Specific Evaluation and Quality Metrics
- The team moved beyond generic LLM benchmarks to establish a "Service-specific" evaluation process tailored to the project's unique UX goals.
- Model quality was measured by how vividly and accurately the system recreated the intended persona, focusing on the degree of "immersion" it elicited from human users.
- Insights from these evaluations helped refine the prompt design and agent logic to ensure the AI's output provided genuine value to the product development lifecycle (a scoring sketch follows this list).
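The article does not specify the rubric, but a service-specific evaluation of this kind might aggregate reviewer ratings on persona-oriented criteria instead of generic benchmark scores. The criteria names, 1-5 scale, and quality threshold below are illustrative assumptions, not the team's published metrics.

```python
# Hypothetical sketch of a service-specific evaluation pass: conversation
# transcripts are scored on UX-oriented criteria (voice fidelity,
# groundedness in research data, immersion) rather than generic benchmarks.
from statistics import mean

CRITERIA = ("voice_fidelity", "groundedness", "immersion")


def score_transcript(ratings: list[dict[str, int]]) -> dict[str, float]:
    """Average per-criterion ratings (1-5) collected from human reviewers."""
    return {c: mean(r[c] for r in ratings) for c in CRITERIA}


def passes_quality_bar(summary: dict[str, float], threshold: float = 4.0) -> bool:
    """A prompt or agent revision ships only if every criterion clears the bar."""
    return all(score >= threshold for score in summary.values())


# Usage: three reviewers rate one persona conversation.
reviews = [
    {"voice_fidelity": 5, "groundedness": 4, "immersion": 4},
    {"voice_fidelity": 4, "groundedness": 4, "immersion": 5},
    {"voice_fidelity": 4, "groundedness": 3, "immersion": 4},
]
summary = score_transcript(reviews)
print(summary, "ship" if passes_quality_bar(summary) else "iterate")
```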
Redefining Cross-Functional Collaboration
- The AI development process reshaped traditional roles and responsibilities (R&R): designers became prompt engineers, while researchers translated qualitative logic into agentic structures.
- Front-end developers evolved their roles to act as critical reviewers of the AI, treating the model as a subject of critique rather than a static asset.
- The workflow shifted from a linear "relay" model to a concentric one, where all team members influence the product's core from the same starting point.
To successfully integrate AI into the product lifecycle, organizations should move beyond using LLMs as simple tools and instead view them as a medium for interdisciplinary collaboration. By building multi-agent systems that reflect real user data, teams can ensure that the "user's voice" is not just a research summary, but a tangible participant in the development process.