api-design

3 posts

woowahan

For a Seamless User Experience: The Journey

To provide a seamless user experience, Baedal Minjok (Baemin) integrated KakaoTalk brand vouchers directly into its ordering system, overcoming significant technical and organizational barriers between platforms. This project was driven by a mission to resolve long-standing customer friction and strategically capture external purchase demand within the Baemin ecosystem. By bridging the gap between Kakao's gifting infrastructure and Baemin's delivery network, the team transformed a fragmented journey into a unified, user-centric service.

### Bridging User Friction and Business Growth

- Addressed persistent Voice of Customer (VOC) complaints from users who found it inconvenient to use KakaoTalk vouchers through separate brand apps or physical store visits.
- Aimed to capture untapped external traffic and convert it into active order volume within the Baemin platform, enhancing customer retention and "lock-in" effects.
- Defined the project's core essence as "connection," which served as a North Star for decision-making when technical constraints or business interests conflicted.

### Navigating Multi-Party Stakeholder Complexity

- Coordinated a massive ecosystem involving Kakao (the platform), F&B brands, third-party voucher issuers, and internal Baemin backend teams.
- Managed conflicting KPIs across organizations, balancing Kakao's requirement for platform stability with voucher issuers' needs for settlement clarity.
- Employed "context-aware communication" to bridge terminology gaps, such as reconciling Baemin's "register and use" logic with the voucher companies' "inquiry and approval" workflows.

### Standardizing External Voucher Integration

- Developed a standardized technical framework to accommodate diverse external voucher issuers while maintaining a consistent and simple interface for the end user.
- Resolved technical trade-offs regarding API response speeds, error-handling policies, and real-time validation across disparate systems.
- Empowered Product Managers to act as "technical translators" and "captains," proactively managing complex dependency chains and prioritizing core features over secondary improvements to meet delivery timelines.

The successful integration of KakaoTalk vouchers demonstrates that overcoming platform silos requires more than technical API mapping; it requires a fundamental shift toward user-centric thinking. By prioritizing the "seamlessness" of the connection over individual platform boundaries, organizations can unlock significant new growth opportunities and deliver a superior digital experience.
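Reconciling "register and use" semantics with the issuers' "inquiry and approval" workflow is essentially an adapter problem. The sketch below illustrates that idea; all names (`VoucherIssuer`, `VoucherService`, the prefix-based routing) are invented for illustration and do not reflect Baemin's actual design.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical issuer-side contract: each external voucher company exposes
 *  its own "inquiry" (is the code valid?) and "approval" (redeem it) calls. */
interface VoucherIssuer {
    boolean inquire(String code);   // is the voucher code valid and unredeemed?
    boolean approve(String code);   // redeem the voucher with the issuer
}

/** Adapter exposing a single "register and use" interface to the ordering flow,
 *  regardless of which external issuer backs a given voucher code. */
class VoucherService {
    private final Map<String, VoucherIssuer> issuerByPrefix = new ConcurrentHashMap<>();
    private final Map<String, String> registered = new ConcurrentHashMap<>();

    void addIssuer(String codePrefix, VoucherIssuer issuer) {
        issuerByPrefix.put(codePrefix, issuer);
    }

    /** "Register": validate via the issuer's inquiry call, then remember the code. */
    boolean register(String userId, String code) {
        Optional<VoucherIssuer> issuer = issuerFor(code);
        if (issuer.isEmpty() || !issuer.get().inquire(code)) return false;
        registered.put(userId, code);
        return true;
    }

    /** "Use": redeem the previously registered code via the issuer's approval call. */
    boolean use(String userId) {
        String code = registered.remove(userId);
        return code != null && issuerFor(code).map(i -> i.approve(code)).orElse(false);
    }

    private Optional<VoucherIssuer> issuerFor(String code) {
        return issuerByPrefix.entrySet().stream()
                .filter(e -> code.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```

The point of the pattern is that each new issuer only has to implement the small `VoucherIssuer` contract, while the ordering flow keeps its single register/use vocabulary.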

toss

Toss Payments' Open API Ecosystem

Toss Payments treats its Open API not just as a communication tool, but as a long-term infrastructure designed to support over 200,000 merchants for decades. By focusing on resource-oriented design and developer experience, the platform ensures that its interfaces remain intuitive, consistent, and easy to maintain. This strategic approach prioritizes structural stability and clear communication over mere functionality, fostering a reliable ecosystem for both developers and businesses.

### Resource-Oriented Interface Design

* The API follows a predictable path structure (e.g., `/v1/payments/{id}`) where the root indicates the version, followed by the domain and a unique identifier.
* Request and response bodies utilize structured JSON with nested objects (like `card` or `cashReceipt`) to modularize data and reduce redundancy.
* Consistency is maintained by reusing the same domain objects across different APIs, such as payment approval, inquiry, and cancellation, which minimizes the learning curve for external developers.
* Data representation shifts from cryptic legacy codes (e.g., SC0010) to human-readable strings, supporting localization into multiple languages via the `Accept-Language` HTTP header.
* Standardized error handling utilizes HTTP status codes paired with a JSON error object containing specific `code` and `message` fields, allowing developers to either display messages directly or implement custom logic.

### Asynchronous Communication via Webhooks

* Webhooks are provided alongside standard APIs to handle asynchronous events where immediate responses are not possible, such as status changes in complex payment flows.
* Event types are clearly categorized (e.g., `PAYMENT_STATUS_CHANGED`), and the payloads mirror the exact resource structures used in the REST APIs to simplify parsing.
* The system ensures reliability by implementing an exponential backoff strategy for retries, preventing network congestion during recipient service outages.
* A dedicated developer center allows merchants to register custom endpoints, monitor transmission history, and perform manual retries if automated attempts fail.

### External Ecosystem and Documentation Automation

* Developer experience (DX) is treated as the core metric for API quality, focusing on how quickly and efficiently a developer can integrate and operate the service.
* To prevent the common issue of outdated manuals, Toss Payments uses a documentation automation system based on the OpenAPI Specification (OAS).
* By utilizing libraries like `springdoc`, the platform automatically syncs the technical documentation with the actual server code, ensuring that parameters, schemas, and endpoints are always up to date and trustworthy.

To ensure the longevity of a high-traffic Open API, organizations should prioritize automated documentation and resource-based consistency. Moving away from cryptic codes toward human-readable, localized data and providing robust asynchronous notification tools like webhooks are essential steps for building a developer-friendly infrastructure.
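The webhook retry behavior described above can be sketched in a few lines of Java. The delay schedule below (1 s base, doubling per attempt, capped at 60 s) is an illustrative assumption, not Toss Payments' actual parameters.

```java
import java.util.function.BooleanSupplier;

public class WebhookRetry {
    // Illustrative parameters: 1s base delay, doubling per attempt, capped at 60s.
    static final long BASE_DELAY_MS = 1_000;
    static final long MAX_DELAY_MS = 60_000;

    /** Returns the exponential-backoff delay before the given retry attempt (0-based). */
    static long delayForAttempt(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, MAX_DELAY_MS);
    }

    /** Tries to deliver a webhook, backing off exponentially between failed attempts. */
    static boolean deliverWithRetry(BooleanSupplier send, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (send.getAsBoolean()) return true;       // delivery acknowledged by recipient
            Thread.sleep(delayForAttempt(attempt));     // 1s, 2s, 4s, ... capped at 60s
        }
        return false; // retries exhausted; a manual retry from the developer center remains
    }
}
```

The widening gaps between attempts give a struggling recipient service time to recover instead of hammering it with a fixed-interval retry storm.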

naver

Beyond the Side Effects of API-

JVM applications often suffer from initial latency spikes because the Just-In-Time (JIT) compiler requires a "warm-up" period to compile frequently executed code into optimized machine code. While traditional strategies rely on simulated API calls to trigger this optimization, these methods often introduce side effects like data pollution, log noise, and increased maintenance overhead. This approach advocates a library-centric warm-up that targets core execution paths and dependencies directly, ensuring high performance from the first real request without the risks of full-scale API simulation.

### Limitations of Traditional API-Based Warm-up

* **Data and State Pollution:** Simulated API calls can inadvertently trigger database writes, send notifications, or pollute analytics data, requiring complex logic to bypass these side effects.
* **Maintenance Burden:** As business logic and API signatures change, developers must constantly update the warm-up scripts or "dummy" requests to match the current application state.
* **Operational Risk:** Relying on external dependencies or complex internal services during the warm-up phase can lead to deployment failures if the mock environment is not perfectly aligned with production.

### The Library-Centric Warm-up Strategy

* **Targeted Optimization:** Instead of hitting the entry-point controllers, the focus shifts to warming up heavy third-party libraries and internal utility classes (e.g., JSON parsers, encryption modules, and DB drivers).
* **Internal Execution Path:** By directly invoking methods within the application's service or infrastructure layer during the startup phase, the JIT compiler can reach "Tier 4" (C2) optimization for critical code blocks.
* **Decoupled Logic:** Because the warm-up targets underlying libraries rather than specific business endpoints, the logic remains stable even when the high-level API changes.

### Implementation and Performance Verification

* **Reflection and Hooks:** The implementation uses application startup hooks to execute intensive code paths, ensuring the JVM is "hot" before the load balancer begins directing traffic to the instance.
* **JIT Compilation Monitoring:** Success is measured by tracking the number of JIT-compiled methods and the time taken to reach a stable state, specifically targeting the reduction of "cold" execution time.
* **Latency Improvements:** Empirical data shows a significant reduction in P99 latency during the first few minutes after deployment, as the most CPU-intensive library functions are already pre-optimized.

### Advantages and Practical Constraints

* **Safer Deployments:** Removing the need for simulated network requests makes the deployment process more robust and prevents accidental side effects in downstream systems.
* **Granular Control:** Developers can selectively warm up only the most performance-sensitive parts of the application, saving startup time compared to a full-system simulation.
* **Incomplete Path Coverage:** A primary limitation is that library-only warming may miss specific branch optimizations that occur only during full end-to-end request processing.

To achieve the best balance between safety and performance, engineering teams should prioritize warming up shared infrastructure libraries and high-overhead utilities. While it may not cover 100% of the application's execution paths, a library-based approach provides a more maintainable and lower-risk foundation for JVM performance tuning than traditional request-based methods.
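A minimal sketch of the library-centric idea, using only JDK built-ins (regex matching and SHA-256 hashing) as stand-ins for the JSON parsers and encryption modules the article mentions. The iteration count is an illustrative assumption; real JIT compilation thresholds depend on JVM flags and tiered-compilation settings.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LibraryWarmup {
    // Illustrative iteration count; actual thresholds for reaching Tier 4 (C2)
    // compilation depend on the JVM's tiered-compilation configuration.
    static final int ITERATIONS = 20_000;

    static volatile long blackhole; // consume results so the loop cannot be dead-code eliminated

    /** Exercises hot library paths (regex matching, hashing) before real traffic arrives. */
    static int warmUp() {
        try {
            Pattern fieldPattern = Pattern.compile("\"(\\w+)\"\\s*:\\s*\"([^\"]*)\"");
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            int matches = 0;
            for (int i = 0; i < ITERATIONS; i++) {
                String json = "{\"orderId\": \"" + i + "\"}";
                Matcher m = fieldPattern.matcher(json);
                if (m.find()) matches++;
                blackhole += sha.digest(json.getBytes(StandardCharsets.UTF_8))[0];
            }
            return matches;
        } catch (Exception e) {
            throw new RuntimeException("warm-up failed", e);
        }
    }

    public static void main(String[] args) {
        // In practice this would run from an application startup hook,
        // before the instance registers with the load balancer.
        warmUp();
    }
}
```

Because the loop touches only library-level code and never a business endpoint, it carries none of the data-pollution or signature-drift problems of simulated API calls.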