cursor

toss

Tax Refund Automation: AI

At Toss Income, QA Manager Suho Jung automated complex E2E testing for a range of tax refund services by treating AI models as specialized virtual team members. By shifting from manual coding to a "human-as-orchestrator" model, a single person reached the output of a four-to-five-person automation team within five months. This approach overcame the inherent brittleness of testing long, React-based flows that are subject to frequent policy changes and external system dependencies.

### Challenges in Tax Service Automation

The complexity of tax refund services presented hurdles that made traditional manual automation unsustainable:

* **Multi-Step Dependencies:** Each refund flow averages 15–20 steps spanning internal systems, authentication providers, and HomeTax scraping servers, where a single timing glitch can fail the entire test.
* **Frequent UI and Policy Shifts:** Minor UI updates or new tax laws forced full scenario reconfigurations, making hard-coded tests obsolete almost immediately.
* **Environmental Instability:** "Target closed" errors during scraping, differing domain environments, and React-specific hydration delays caused constant test flakiness.

### Building an AI-Driven QA Team

Rather than using AI as a simple autocomplete tool, the project assigned specific "personas" to different AI models, each handling a distinct part of the lifecycle:

* **SDET Agent (Claude Sonnet 4.5):** Acted as the lead developer, responsible for designing the Page Object Model (POM) architecture, writing test logic, and creating utility functions.
* **Documentation Specialist:** Generated daily retrospectives and updated technical guides by analyzing the day's git commits.
* **Git Master:** Managed commit history and PR descriptions to keep the project's evolution well documented.
* **Pair Programmers (Cursor & Codex):** Handled real-time troubleshooting, type errors, and comparative analysis of different test scripts.

### Technical Solutions for React and Policy Logic

Several technical strategies kept the tests stable:

* **React Interaction Readiness:** To resolve "Element is not clickable" errors, the suite waits not just for visibility but for event handlers to bind to the DOM (hydration).
* **Safe Interaction Fallbacks:** A standard `click` utility attempts a Playwright click, then a native keyboard Enter press, and finally a JS event dispatch, so interactions succeed even during UI transitions (a sketch follows this list).
* **Dynamic Consent Flow Utility:** A single unified function automatically detects and handles the varying "Terms of Service" agreements across sub-services (Tax Secretary, Hidden Refund, etc.).
* **Test Isolation:** Automated scripts prevent `userNo` (test ID) collisions, allowing 35+ complex scenarios to run in parallel without data interference.

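The post does not publish the utility itself, so the following is only a minimal sketch of the described fallback chain using Playwright's public API; the function name `safeClick` and the `data-hydrated` readiness flag are illustrative assumptions, not Toss's actual code:

```typescript
import { Locator, Page } from '@playwright/test';

// Hypothetical sketch of the fallback click strategy described above.
// The hydration check is app-specific; the data attribute polled here
// is an assumed flag the app would set once React finishes mounting.
export async function safeClick(page: Page, locator: Locator): Promise<void> {
  // Wait for visibility first, as a plain locator action would.
  await locator.waitFor({ state: 'visible' });

  // Approximate "hydration readiness" by polling an assumed marker.
  await page
    .waitForFunction(() => document.body.dataset.hydrated === 'true', null, {
      timeout: 5_000,
    })
    .catch(() => {}); // proceed anyway; the fallbacks below still apply

  try {
    // 1st attempt: normal Playwright click (actionability checks included).
    await locator.click({ timeout: 3_000 });
  } catch {
    try {
      // 2nd attempt: focus the element and press Enter natively.
      await locator.press('Enter', { timeout: 3_000 });
    } catch {
      // Last resort: dispatch a synthetic click event via JS.
      await locator.dispatchEvent('click');
    }
  }
}
```

Swallowing the readiness-wait timeout keeps the utility usable on pages that never set the flag; the layered click fallbacks still provide the safety net in that case.
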
### Integrated Feedback and Reporting

The automation was wired directly into internal communication channels to create a tight feedback loop:

* **Messenger Notifications:** Every test run sends a report with execution time, test IDs, and environment data to the team's messenger.
* **Automated Failure Analysis:** When a test fails, the AI posts the error log, the specific failed step, a tracking EventID, and a screenshot as a thread reply for immediate debugging (a sketch follows this list).
* **Human-AI Collaboration:** This structure shifted the QA role from writing code to discussing failures and policy changes directly in the messenger threads.

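As an illustration of that thread-reply flow, here is a hedged sketch; Toss's internal messenger API is not public, so the webhook contract, field names, and `FailureReport` shape below are all assumptions:

```typescript
// Hypothetical failure reporter; the webhook contract and field names
// are assumptions, since the internal messenger API is not public.
interface FailureReport {
  testId: string;
  failedStep: string;
  eventId: string; // tracking EventID used to correlate server logs
  errorLog: string;
  screenshotUrl: string;
}

async function replyWithFailure(
  webhookUrl: string,
  threadId: string, // the run's summary message, posted earlier
  report: FailureReport,
): Promise<void> {
  // Post the failure details as a reply, keeping one thread per run.
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      threadId,
      text:
        `FAILED ${report.testId} at step "${report.failedStep}"\n` +
        `EventID: ${report.eventId}\n` +
        report.errorLog.slice(0, 1000), // trim long logs for readability
      attachments: [report.screenshotUrl],
    }),
  });
}
```
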
The success of this five-month experiment suggests that for high-complexity environments, the future of QA lies in "AI orchestration": instead of writing selectors, QA engineers should focus on defining problems and managing the AI agents that build the architecture.

kakao

What AI TOP 100

The Kakao AI Native Strategy team built a complex competition system for the "AI TOP 100" event in just two weeks by replacing traditional waterfall methodology with an AI-centric approach. Using tools like Cursor and Claude Code, the team shifted the developer's role from manual coding to high-level orchestration and validation. The experiment suggests that AI does not replace developers but redefines the "standard" of productivity, moving the focus from execution speed to strategic decision-making.

### Rapid Prototyping as the New Specification

* The team eliminated traditional, lengthy planning documents and functional specifications.
* Every team member built a working prototype with AI based on their own interpretation of the project goals.
* One developer produced six different versions of the system independently, letting the team "see" ideas rather than read about them.
* Final requirements were set by reviewing and merging the best features of these functional prototypes, significantly reducing communication overhead.

### AI-Native Development and 99% Delegation

* Over 99% of the codebase was generated by AI tools such as Claude Code and Cursor, with developers focusing on intent and review.
* One developer recorded an extreme 200 million tokens of usage in a single day to accelerate system completion.
* AI productivity allowed a single frontend developer to own the entire UI for both the preliminary and main rounds, a task that typically requires a much larger team.
* Development moved away from the linear "think-code-test" pattern toward "dialogue-based" implementation, where ideas were turned into code instantly.

### PoC-Driven Development (PDD)

* The team adopted a "Proof of Concept (PoC) Driven Development" model to handle high uncertainty and tight deadlines.
* Abstract concepts were fed directly into AI to generate functional PoC code and architectural drafts.
* The human role shifted from "writing from scratch" to "judging and selecting" the most viable AI-generated outputs.
* This let the team bypass resource limits by prioritizing speed and functional verification over perfectionist documentation.

### Human Governance and the Role of Experience

* Conflicts occasionally arose when different AI models suggested equally "logical" but incompatible architectural solutions.
* Senior developers broke these deadlocks by applying real-world experience with long-term maintainability and system constraints.
* AI provided the "engine" for speed, but human intuition remained the "steering wheel" that kept the system aligned with organizational standards.
* As AI handles more of the implementation, a developer's ability to judge code quality and architectural fit becomes their most valuable asset.

This project serves as a blueprint for a future of software engineering in which AI is treated as a peer programmer rather than a simple tool. To stay competitive, development teams should move away from rigid waterfall processes and adopt a PoC-centric workflow that collapses the distance between ideation and deployment.

line

A month-long task in just five days

This post explains how LY Corporation cut a month-long development task down to five days by "vibe coding" with generative AI tools such as ChatGPT and Cursor. By shifting from rigid up-front documentation to an iterative, demo-first approach, developers rapidly validated multiple UI/UX solutions for a complex problem: restaurant menu registration. The author concludes that because AI absorbs the cost of frequent rework, it is more efficient to "build fast and iterate" than to chase perfection through long-form specifications.

### Strategic Shift to Rapid Prototyping

* Traditional development cycles (spec → design → dev → fix) are often too slow for market trends because of heavy documentation and impact analysis.
* The "vibe coding" approach prioritizes "working demos" over perfect specifications, finding "good enough" answers through rapid feedback loops.
* AI removes the psychological and logistical burden of "starting over," letting developers refine the context and quality of outputs through repeated interaction without the friction of manual re-documentation.

### Defining Requirements and Solution Ideation

* Initial requirements stay minimal, covering only the core mission, top priorities, and essential data structures (e.g., product name, image, description), so the AI's solution space is not constrained.
* ChatGPT generates a wide range of solution candidates, which are filtered into five distinct approaches: stepper wizards, live previews with quick add, template/cloning, chat input, and OCR-based photo scanning.
* This stage emphasizes volume and variety, using AI-generated pros and cons to set selection criteria and surface potential UX bottlenecks early.

### Detailed Design and Multi-Solution Wireframing

* Each of the five chosen solutions is expanded into detailed screen flows and UI elements such as progress bars, bottom sheets, and validation logic.
* Prompting is iterative: if an AI-generated result lacks a feature like "temporary storage" or "mandatory field validation," the prompt is adjusted and the design regenerated instantly.
* The focus stays on defining the "what" (UI elements) and the "how" (user flow) in text before any code is written.

### Implementation with Cursor and Flutter

* Cursor generates functional code from the refined wireframes, with Flutter as the framework so one codebase covers both iOS and Android.
* Development follows a "skeleton-first" approach: first a main navigation hub with five entry points, then each solution module populated one by one (see the sketch at the end of this post).
* Architecture decisions, such as Riverpod for state management or SQLite for storage, are layered onto the demo afterward, reversing the traditional "stack-first" order to prioritize functional validation.

### Recommendation

To maximize efficiency, treat AI as a partner for high-speed iteration rather than a one-shot tool. By building functional demos quickly and refining them through direct feedback, teams can bypass the bottlenecks of traditional requirements processes and deliver user-centric products in a fraction of the time.

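The team implemented this in Flutter; purely to keep one language across this digest's sketches, here is a framework-agnostic TypeScript sketch of the skeleton-first idea, with module names assumed from the five approaches above:

```typescript
// Skeleton-first sketch (illustrative; the actual project used Flutter).
// Step 1: the hub ships with all five entry points as stubs, so the app
// is navigable end to end before any single solution is finished.
interface SolutionModule {
  title: string;
  render(): string; // stands in for a real screen/widget
}

const stub = (title: string): SolutionModule => ({
  title,
  render: () => `${title}: coming soon`,
});

const hub: Record<string, SolutionModule> = {
  stepperWizard: stub('Stepper Wizard'),
  livePreview: stub('Live Preview with Quick Add'),
  templateClone: stub('Template / Cloning'),
  chatInput: stub('Chat Input'),
  ocrScan: stub('OCR Photo Scan'),
};

// Step 2: populate modules one by one, replacing stubs with real flows.
hub.stepperWizard = {
  title: 'Stepper Wizard',
  render: () => 'Step 1 of 4: enter the product name',
};

// The hub never changes shape while modules are filled in.
console.log(
  Object.values(hub)
    .map((m) => `${m.title} -> ${m.render()}`)
    .join('\n'),
);
```

The point of the pattern is that navigation and module boundaries are fixed on day one, so each of the five solutions can be validated (or discarded) independently without touching the rest of the demo.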