
toss

Tax Refund Automation: AI

At Toss Income, QA Manager Suho Jung automated complex E2E testing for diverse tax refund services by treating AI models as specialized virtual team members. By shifting from manual coding to a "human-as-orchestrator" model, a single person achieved the productivity of a four-to-five-person automation team within just five months. This approach overcame the inherent brittleness of testing long, React-based flows that are subject to frequent policy changes and external system dependencies.

### Challenges in Tax Service Automation

The complexity of tax refund services presented unique hurdles that made traditional manual automation unsustainable:

* **Multi-Step Dependencies:** Each refund flow averages 15–20 steps involving internal systems, authentication providers, and HomeTax scraping servers, where a single timing glitch can fail the entire test.
* **Frequent UI and Policy Shifts:** Minor UI updates or new tax laws required total scenario reconfigurations, making hard-coded tests obsolete almost immediately.
* **Environmental Instability:** Issues such as "Target closed" errors during scraping, differing domain environments, and React-specific hydration delays caused constant test flakiness.

### Building an AI-Driven QA Team

Rather than using AI as a simple autocomplete tool, the project assigned specific "personas" to different AI models to handle distinct parts of the lifecycle:

* **SDET Agent (Claude Sonnet 4.5):** Acted as the lead developer, responsible for designing the Page Object Model (POM) architecture, writing test logic, and creating utility functions.
* **Documentation Specialist:** Automatically generated daily retrospectives and updated technical guides by analyzing daily git commits.
* **Git Master:** Managed commit history and PR descriptions to ensure high-quality documentation of the project's evolution.
* **Pair Programmers (Cursor & Codex):** Handled real-time troubleshooting, type errors, and comparative analysis of different test scripts.

### Technical Solutions for React and Policy Logic

The team implemented several technical strategies to ensure test stability:

* **React Interaction Readiness:** To solve "Element is not clickable" errors, they developed a strategy that waits not just for visibility, but for event handlers to bind to the DOM (hydration).
* **Safe Interaction Fallbacks:** A standard `click` utility attempts a Playwright click, then a native keyboard 'Enter' press, and finally a JS event dispatch, so interactions succeed even during UI transitions (see the sketch after this list).
* **Dynamic Consent Flow Utility:** A specialized system automatically detects and handles varying "Terms of Service" agreements across different sub-services (Tax Secretary, Hidden Refund, etc.) through a single unified function.
* **Test Isolation:** Automated scripts prevent `userNo` (test ID) collisions, ensuring 35+ complex scenarios can run in parallel without data interference.
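The post describes the first two utilities but does not publish their code, so the following is a minimal Playwright/TypeScript sketch combining the hydration wait with the layered click fallback. The helper name `safeClick`, the timeout values, and the React-internals check are assumptions, not Toss's actual implementation.

```typescript
import { expect, type Locator } from '@playwright/test';

// Hydration-aware click helper (sketch). Waits until the element is visible
// AND React appears to have bound event handlers to the node, then tries a
// normal click, a keyboard 'Enter', and finally a synthetic JS click event.
export async function safeClick(locator: Locator, timeout = 10_000): Promise<void> {
  await expect(locator).toBeVisible({ timeout });

  // Assumption: React marks hydrated DOM nodes with internal keys such as
  // "__reactProps$..." / "__reactFiber$...". This relies on undocumented
  // internals and may need adjusting per React version.
  const handle = await locator.elementHandle();
  await locator.page().waitForFunction(
    (el) =>
      el !== null &&
      Object.keys(el).some(
        (k) => k.startsWith('__reactProps$') || k.startsWith('__reactFiber$'),
      ),
    handle,
    { timeout },
  );

  try {
    await locator.click({ timeout: 3_000 }); // 1) regular Playwright click
  } catch {
    try {
      await locator.focus();
      await locator.press('Enter'); // 2) keyboard-activation fallback
    } catch {
      await locator.dispatchEvent('click'); // 3) synthetic event as last resort
    }
  }
}
```

In a test, `await safeClick(page.getByRole('button', { name: 'Next' }))` would replace a bare `click()`, so a tap that lands mid-hydration is retried through the keyboard and synthetic-event paths instead of failing an entire 15–20 step flow.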
### Integrated Feedback and Reporting

The automation was integrated directly into internal communication channels to create a tight feedback loop:

* **Messenger Notifications:** Every test run sends a report including execution time, test IDs, and environment data to the team's messenger.
* **Automated Failure Analysis:** When a test fails, the AI automatically posts the error log, the specific failed step, a tracking EventID, and a screenshot as a thread reply for immediate debugging (a minimal sketch appears at the end of this post).
* **Human-AI Collaboration:** This structure shifted the QA's role from writing code to discussing failures and policy changes within the messenger threads.

The success of this five-month experiment suggests that for high-complexity environments, the future of QA lies in "AI orchestration." Instead of focusing on writing selectors, QA engineers should focus on defining problems and managing the AI agents that build the architecture.
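The post names the failure-report behavior but not its transport, so here is a minimal TypeScript sketch of the thread reply, assuming a Slack-compatible `chat.postMessage` API. Every identifier below (channel name, token variable, report fields) is hypothetical, reconstructed from the bullets above.

```typescript
// Sketch of the failure report posted as a thread reply, assuming a
// Slack-compatible messenger API. All identifiers are hypothetical.
interface FailureReport {
  testId: string; // which scenario failed
  failedStep: string; // the specific step that broke
  eventId: string; // tracking EventID for log correlation
  errorLog: string; // captured error output
  screenshotUrl?: string; // uploaded screenshot, if any
}

export async function postFailureThreadReply(
  report: FailureReport,
  parentTs: string, // timestamp of the run's summary message
): Promise<void> {
  const text = [
    `❌ ${report.testId} failed at: ${report.failedStep}`,
    `EventID: ${report.eventId}`,
    `Log (truncated): ${report.errorLog.slice(0, 1000)}`,
    report.screenshotUrl ? `Screenshot: ${report.screenshotUrl}` : '',
  ]
    .filter(Boolean)
    .join('\n');

  // thread_ts turns this message into a reply under the run's summary post,
  // keeping each failure's context in one thread for debugging discussion.
  await fetch('https://slack.com/api/chat.postMessage', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.MESSENGER_BOT_TOKEN}`,
      'Content-Type': 'application/json; charset=utf-8',
    },
    body: JSON.stringify({
      channel: '#qa-tax-refund', // hypothetical channel name
      thread_ts: parentTs,
      text,
    }),
  });
}
```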

line

We held AI Campus Day to

LY Corporation recently hosted "AI Campus Day," a large-scale internal event designed to bridge the gap between AI theory and practical workplace application for over 3,000 employees. By transforming their office into a learning campus, the company fostered a culture of "AI transformation" through peer-led mentorship and task-specific experimentation. The event demonstrated that internal context and hands-on participation are far more effective than traditional external lectures for driving meaningful AI literacy and productivity gains.

## Hands-on Experience and Technical Support

* The curriculum featured 10 specialized sessions across three tracks (Common, Creative, and Engineering) to ensure relevance for every job function.
* Sessions ranged from foundational prompt engineering for non-developers to advanced technical topics such as building Model Context Protocol (MCP) servers for engineers.
* To ensure smooth execution, the organizers provided comprehensive "Session Guides" containing pre-configured account settings and specific prompt templates.
* The event maintained a high support ratio, with 26 teaching assistants (TAs) troubleshooting technical hurdles in real time, plus dedicated Slack channels for sharing live AI outputs.

## Peer-Led Mentorship and Internal Context

* Instead of hiring external consultants, the program featured 10 internal "AI Mentors" who shared how they integrated AI into their actual daily workflows at LY Corporation.
* Training focused exclusively on company-approved tools, including ChatGPT Enterprise, Gemini, and Claude Code, ensuring all demonstrations complied with internal security protocols.
* Internal mentors could provide the specific company context that external lecturers lack, such as integrating AI with existing proprietary systems and data.
* A three-stage quality control process (initial flow review, final end-to-end dry run, and technical rehearsal) ensured the educational quality of mentor-led sessions.

## Gamification and Cultural Engagement

* The event was framed as a "festival" rather than mandatory training, using campus-themed motifs like "enrollment" and "school attendance" to reduce psychological barriers.
* A "Stamp Rally" system encouraged participation by offering tiered rewards, including welcome kits, refreshments, and subscriptions to premium AI tools.
* Interactive exhibition booths let employees experience AI utility firsthand, such as an AI photo zone using Gemini to generate "campus-style" portraits and an AI Agent Contest booth.
* Strong executive support played a crucial role, with leadership encouraging staff to pause routine tasks for the day and focus entirely on AI experimentation and "playing" with new technologies.

To scale AI literacy within a large organization, move away from passive, one-size-fits-all lectures. Success lies in leveraging internal experts who understand the specific security and operational constraints of the business, and in creating a low-pressure environment where employees can experiment with hands-on tasks relevant to their specific roles.

kakao

What AI TOP 100

The Kakao AI Native Strategy team developed a complex competition system for the "AI TOP 100" event in just two weeks by replacing traditional waterfall methodologies with an AI-centric approach. Using tools like Cursor and Claude Code, the team shifted the developer's role from manual coding to high-level orchestration and validation. This experiment demonstrates that AI does not replace developers but rather redefines the "standard" of productivity, moving the focus from execution speed to strategic decision-making.

### Rapid Prototyping as the New Specification

* The team eliminated traditional, lengthy planning documents and functional specifications.
* Every team member was tasked with creating a working prototype using AI, based on their own interpretation of the project goals.
* One developer independently produced six different versions of the system, allowing the team to "see" ideas rather than read about them.
* Final requirements were established by reviewing and merging the best features of these functional prototypes, significantly reducing communication overhead.

### AI-Native Development and 99% Delegation

* The majority of the codebase (over 99%) was generated by AI tools like Claude Code and Cursor, with developers focusing on intent and review.
* One developer recorded an extreme usage of 200 million tokens in a single day to accelerate system completion.
* AI's high productivity allowed a single frontend developer to manage the entire UI for both the preliminary and main rounds, a task that typically requires a much larger team.
* The development flow moved away from linear "think-code-test" patterns to a "dialogue-based" implementation where ideas were instantly turned into code.

### PoC-Driven Development (PDD)

* The team adopted a "Proof of Concept (PoC) Driven Development" model to handle high uncertainty and tight deadlines.
* Abstract concepts were immediately fed into AI to generate functional PoC code and architectural drafts.
* The human role shifted from "writing from scratch" to "judging and selecting" the most viable outputs generated by the AI.
* This approach allowed the team to bypass resource limitations by prioritizing speed and functional verification over perfectionist documentation.

### Human Governance and the Role of Experience

* Internal conflicts occasionally arose when different AI models suggested equally "logical" but conflicting architectural solutions.
* Senior developers played a critical role in breaking these deadlocks by applying real-world experience regarding long-term maintainability and system constraints.
* While AI provided the "engine" for speed, human intuition remained the "steering wheel" that kept the system aligned with specific organizational standards.
* The project highlighted that as AI handles more of the implementation, a developer's ability to judge code quality and architectural fit becomes their most valuable asset.

This project serves as a blueprint for the future of software engineering, where AI is treated as a peer programmer rather than a simple tool. To stay competitive, development teams should move away from rigid waterfall processes and embrace a PoC-centric workflow that leverages AI to collapse the distance between ideation and deployment.