Sandboxing AI agents, 100x faster 2026-03-24 Kenton Varda Sunil Pai Ketan Gupta Last September we introduced Code Mode, the idea that agents should perform tasks not by making tool calls, but instead by writing code that calls APIs. We've shown that simply converting an MCP serv…
Introduction Hello, I'm Hyotaek Jang, and I work on Global E-Commerce development at LINE Plus. Migrating an existing system to a new environment, or bringing it in-house, is something of an inevitable fate for developers. The most perplexing moment comes when there is no planning document to explain the existing logic, or when the system is a black box whose source code cannot even be consulted. We ran into exactly this problem while internalizing various modules that had depended on external systems. To answer the question, 'Does the code we just built really behave identically to the old system?'…
This article was migrated to the current blog from a post on our old, pre-merger blog (originally published February 24, 2022); its content reflects the time of original publication. LINE supports not only 1:1 chats but also multi-party chats. LINE, however, had two multi-party chat features built for different purposes: 'chats with multiple people' and 'groups'. A chat with multiple people (Room) was designed for temporary conversations. When creating one, there is no need to give the room a name, and inviting friends to a chat with multiple people…
Even seemingly simple engineering tasks — like updating an API — can become monumental undertakings when you’re dealing with millions of lines of code and thousands of engineers, especially if the changes are security-related. Nowhere is this more apparent than in mobile securit…
We deserve a better streams API for JavaScript 2026-02-27 James M Snell Handling data in streams is fundamental to how we build applications. To make streaming work everywhere, the WHATWG Streams Standard (informally known as "Web streams") was designed to establish a common API…
Hello, I'm Juham Lee, a Frontend Developer at Toss Place. I develop the external integration SDK (Software Development Kit) for Toss Front (hereafter "Front"), a payment terminal that Toss Place developed in-house. With this SDK, you can build the plugin app you want by integrating data from Toss services, and have it run on Front. In other words, it is a structure that can be extended indefinitely through third-party integrations, with development done by external partners rather than in-house. In this article…
Cloudflare outage on February 20, 2026 2026-02-21 David Tuber Dzevad Trumic On February 20, 2026, at 17:48 UTC, Cloudflare experienced a service outage when a subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn vi…
Our Multi-Agent Architecture for Smarter Advertising Introduction When we kicked this off, we weren’t trying to ship an “AI feature.” We were trying to fix a structural problem in how our ads business actually runs in software. On the business side, we have multiple ways of buyi…
To provide a seamless user experience, Baedal Minjok (Baemin) successfully integrated KakaoTalk brand vouchers directly into its ordering system, overcoming significant technical and organizational barriers between platforms. This project was driven by a mission to resolve long-standing customer friction and strategically capture external purchase demand within the Baemin ecosystem. By bridging the gap between Kakao’s gifting infrastructure and Baemin’s delivery network, the team successfully transformed a fragmented journey into a unified, user-centric service.
### Bridging User Friction and Business Growth
- Addressed persistent Voice of Customer (VOC) complaints from users who found it inconvenient to use KakaoTalk vouchers through separate brand apps or physical store visits.
- Aimed to capture untapped external traffic and convert it into active order volume within the Baemin platform, enhancing customer retention and "lock-in" effects.
- Defined the project’s core essence as "connection," which served as a North Star for decision-making when technical constraints or business interests conflicted.
### Navigating Multi-Party Stakeholder Complexity
- Coordinated a massive ecosystem involving Kakao (the platform), F&B brands, third-party voucher issuers, and internal Baemin backend teams.
- Managed conflicting KPIs across organizations, balancing Kakao’s requirement for platform stability with voucher issuers' needs for settlement clarity.
- Employed "context-aware communication" to bridge terminology gaps, such as reconciling Baemin’s "register and use" logic with the voucher companies' "inquiry and approval" workflows.
### Standardizing External Voucher Integration
- Developed a standardized technical framework to accommodate diverse external voucher issuers while maintaining a consistent and simple interface for the end-user.
- Resolved technical trade-offs regarding API response speeds, error-handling policies, and real-time validation across disparate systems.
- Empowered Product Managers to act as "technical translators" and "captains," proactively managing complex dependency chains and prioritizing core features over secondary improvements to meet delivery timelines.
The successful integration of KakaoTalk vouchers demonstrates that overcoming platform silos requires more than just technical API mapping; it requires a fundamental shift toward user-centric thinking. By prioritizing the "seamlessness" of the connection over individual platform boundaries, organizations can unlock significant new growth opportunities and deliver a superior digital experience.
Toss Payments treats its Open API not just as a communication tool, but as a long-term infrastructure designed to support over 200,000 merchants for decades. By focusing on resource-oriented design and developer experience, the platform ensures that its interfaces remain intuitive, consistent, and easy to maintain. This strategic approach prioritizes structural stability and clear communication over mere functionality, fostering a reliable ecosystem for both developers and businesses.
### Resource-Oriented Interface Design
* The API follows a predictable path structure (e.g., `/v1/payments/{id}`) where the root indicates the version, followed by the domain and a unique identifier.
* Request and response bodies utilize structured JSON with nested objects (like `card` or `cashReceipt`) to modularize data and reduce redundancy.
* Consistency is maintained by reusing the same domain objects across different APIs, such as payment approval, inquiry, and cancellation, which minimizes the learning curve for external developers.
* Data representation shifts from cryptic legacy codes (e.g., SC0010) to human-readable strings, supporting localization into multiple languages via the `Accept-Language` HTTP header.
* Standardized error handling utilizes HTTP status codes paired with a JSON error object containing specific `code` and `message` fields, allowing developers to either display messages directly or implement custom logic.
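As a sketch of these conventions, the hypothetical Java snippet below models the resource path shape and the `{code, message}` error object. The path, payment key, and error code values are invented for illustration and are not taken from the real Toss Payments API; only the shape (HTTP status plus a `code`/`message` JSON body) comes from the description above.

```java
// Hypothetical sketch of the resource-path and error-object conventions.
public class ApiErrorExample {

    // Minimal model of the JSON error body: { "code": ..., "message": ... }.
    record ApiError(int httpStatus, String code, String message) {
        // One possible client-side policy: 5xx responses may be retried,
        // while 4xx responses indicate a caller error.
        boolean retriable() {
            return httpStatus >= 500;
        }
    }

    public static void main(String[] args) {
        // Resource-oriented path: version, then domain, then a unique identifier.
        String path = "/v1/payments/" + "pay_abc123"; // identifier is illustrative
        ApiError notFound = new ApiError(404, "NOT_FOUND_PAYMENT", "Payment not found.");
        System.out.println("GET " + path + " -> " + notFound.code()
                + " retriable=" + notFound.retriable());
    }
}
```

A human-readable `message` like this can be shown to end users directly, while the stable `code` string drives custom client logic, as the bullet above describes.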
### Asynchronous Communication via Webhooks
* Webhooks are provided alongside standard APIs to handle asynchronous events where immediate responses are not possible, such as status changes in complex payment flows.
* Event types are clearly categorized (e.g., `PAYMENT_STATUS_CHANGED`), and the payloads mirror the exact resource structures used in the REST APIs to simplify parsing.
* The system ensures reliable delivery by retrying failed webhook transmissions with exponential backoff, which avoids hammering a recipient service while it is experiencing an outage.
* A dedicated developer center allows merchants to register custom endpoints, monitor transmission history, and perform manual retries if automated attempts fail.
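The retry behavior above can be sketched as a backoff schedule. This is a minimal, hypothetical Java example; the base delay, cap, and attempt counts are assumptions for illustration, not Toss Payments' actual retry parameters.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

// Hypothetical exponential-backoff schedule for webhook redelivery attempts.
public class WebhookRetrySchedule {

    // The delay doubles on each attempt and is capped at a maximum,
    // so a long recipient outage does not produce ever-growing waits.
    static Duration delayForAttempt(int attempt, Duration base, Duration cap) {
        long millis = base.toMillis() << Math.min(attempt, 20); // clamp shift to avoid overflow
        return millis >= cap.toMillis() ? cap : Duration.ofMillis(millis);
    }

    // Precompute the full schedule for a bounded number of attempts.
    public static List<Duration> schedule(int maxAttempts, Duration base, Duration cap) {
        List<Duration> delays = new ArrayList<>();
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            delays.add(delayForAttempt(attempt, base, cap));
        }
        return delays;
    }

    public static void main(String[] args) {
        // With a 1 s base and 60 s cap: 1s, 2s, 4s, 8s, 16s, 32s, 60s.
        for (Duration d : schedule(7, Duration.ofSeconds(1), Duration.ofSeconds(60))) {
            System.out.println(d.toMillis() + " ms");
        }
    }
}
```

Once the schedule is exhausted, delivery would fall back to the manual-retry path in the developer center described above.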
### External Ecosystem and Documentation Automation
* Developer Experience (DX) is treated as the core metric for API quality, focusing on how quickly and efficiently a developer can integrate and operate the service.
* To prevent the common issue of outdated manuals, Toss Payments uses a documentation automation system based on the OpenAPI Specification (OAS).
* By utilizing libraries like `springdoc`, the platform automatically syncs the technical documentation with the actual server code, ensuring that parameters, schemas, and endpoints are always up-to-date and trustworthy.
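For illustration, an OAS document produced this way might contain a fragment like the following; the path and fields are invented for this sketch, since the real document is generated by springdoc from the annotated controller code rather than written by hand.

```yaml
# Illustrative OpenAPI fragment only; in practice springdoc generates this from server code.
openapi: "3.0.1"
paths:
  /v1/payments/{paymentKey}:
    get:
      summary: Retrieve a payment resource
      parameters:
        - name: paymentKey
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The payment resource
        "404":
          description: Error object with `code` and `message` fields
```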
To ensure the longevity of a high-traffic Open API, organizations should prioritize automated documentation and resource-based consistency. Moving away from cryptic codes toward human-readable, localized data and providing robust asynchronous notification tools like webhooks are essential steps for building a developer-friendly infrastructure.
JVM applications often suffer from initial latency spikes because the Just-In-Time (JIT) compiler requires a "warm-up" period to compile frequently executed code into optimized machine code. While traditional strategies rely on simulated API calls to trigger this optimization, these methods often introduce side effects like data pollution, log noise, and increased maintenance overhead. This new approach advocates for a library-centric warm-up that targets core execution paths and dependencies directly, ensuring high performance from the first real request without the risks of full-scale API simulation.
### Limitations of Traditional API-Based Warm-up
* **Data and State Pollution:** Simulated API calls can inadvertently trigger database writes, send notifications, or pollute analytics data, requiring complex logic to bypass these side effects.
* **Maintenance Burden:** As business logic and API signatures change, developers must constantly update the warm-up scripts or "dummy" requests to match the current application state.
* **Operational Risk:** Relying on external dependencies or complex internal services during the warm-up phase can lead to deployment failures if the mock environment is not perfectly aligned with production.
### The Library-Centric Warm-up Strategy
* **Targeted Optimization:** Instead of hitting the entry-point controllers, the focus shifts to warming up heavy third-party libraries and internal utility classes (e.g., JSON parsers, encryption modules, and DB drivers).
* **Internal Execution Path:** By directly invoking methods within the application's service or infrastructure layer during the startup phase, the JIT compiler can reach "Tier 4" (C2) optimization for critical code blocks.
* **Decoupled Logic:** Because the warm-up targets underlying libraries rather than specific business endpoints, the logic remains stable even when the high-level API changes.
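A minimal sketch of such a startup warm-up is shown below, using SHA-256 hashing as a stand-in for the hot library paths the article mentions (JSON parsers, encryption modules, DB drivers). The class name and iteration count are illustrative assumptions, not the article's implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical library-centric warm-up hook: instead of replaying fake API
// requests, repeatedly drive a hot library path so the JIT promotes it to
// optimized tiers before real traffic arrives.
public class LibraryWarmup {

    // Run the target code path enough times to cross the JIT's hot-method
    // thresholds. Returns an accumulated value so the loop cannot be
    // eliminated as dead code.
    public static long warmUp(int iterations) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            long sink = 0;
            for (int i = 0; i < iterations; i++) {
                byte[] hash = digest.digest(("payload-" + i).getBytes(StandardCharsets.UTF_8));
                sink += hash[0];
            }
            return sink;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is required on all JVMs", e);
        }
    }

    public static void main(String[] args) {
        // In a real service this would run from a startup hook, before the
        // instance registers with the load balancer.
        warmUp(20_000);
        System.out.println("warm-up complete");
    }
}
```

Because the target is a library call rather than a business endpoint, this hook keeps working unchanged when the service's API surface evolves, which is the decoupling benefit noted above.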
### Implementation and Performance Verification
* **Reflection and Hooks:** The implementation uses application startup hooks to execute intensive code paths, ensuring the JVM is "hot" before the load balancer begins directing traffic to the instance.
* **JIT Compilation Monitoring:** Success is measured by tracking the number of JIT-compiled methods and the time taken to reach a stable state, specifically targeting the reduction of "cold" execution time.
* **Latency Improvements:** Empirical data shows a significant reduction in P99 latency during the first few minutes of deployment, as the most CPU-intensive library functions are already pre-optimized.
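One way to observe JIT activity during such a warm-up phase is via the standard `CompilationMXBean`, sketched below. The workload loop is a stand-in; a real service would invoke its own hot library paths, and the iteration count is an assumption.

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

// Hypothetical verification sketch: total JIT compilation time should grow
// while the warm-up workload runs, indicating methods are being compiled
// before the instance takes real traffic.
public class JitWarmupMetrics {

    // Total milliseconds the JIT has spent compiling, or -1 if the JVM
    // does not support compilation-time monitoring.
    public static long compilationTimeMillis() {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        return (jit != null && jit.isCompilationTimeMonitoringSupported())
                ? jit.getTotalCompilationTime()
                : -1L;
    }

    // Stand-in warm-up workload; returns a value so the loop is not eliminated.
    public static long runWarmupWorkload(int iterations) {
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += Integer.toHexString(i).hashCode();
        }
        return sink;
    }

    public static void main(String[] args) {
        long before = compilationTimeMillis();
        runWarmupWorkload(5_000_000);
        long after = compilationTimeMillis();
        System.out.println("JIT compilation time (ms): before=" + before + ", after=" + after);
    }
}
```

Tracking this number until it plateaus gives a simple "stable state" signal of the kind the bullet above describes.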
### Advantages and Practical Constraints
* **Safer Deployments:** Removing the need for simulated network requests makes the deployment process more robust and prevents accidental side effects in downstream systems.
* **Granular Control:** Developers can selectively warm up only the most performance-sensitive parts of the application, saving startup time compared to a full-system simulation.
* **Incomplete Path Coverage:** A primary limitation is that library-only warming may miss specific branch optimizations that occur only during full end-to-end request processing.
To achieve the best balance between safety and performance, engineering teams should prioritize warming up shared infrastructure libraries and high-overhead utilities. While it may not cover 100% of the application's execution paths, a library-based approach provides a more maintainable and lower-risk foundation for JVM performance tuning than traditional request-based methods.
Viaduct, Five Years On: Modernizing the Data-Oriented Service Mesh A more powerful engine and a simpler API for our data-oriented mesh By: Adam Miskiewicz, Raymie Stata In November 2020 we published a post about Viaduct, our data-oriented service mesh. Today, we’…