Toss / database-design

6 posts


Metric Review, Driving Execution

Hello, I'm Jongik Park, and I lead the Data Platform Team at Toss Place. "The insight is clearly there, so why is execution slow?" If you work in a data organization, you run into this question often. Analyses accumulate and dashboards fill up, yet the speed at which direct change reaches the product or the business often falls short of expectations. We wrestled with the same problem for a long time, and Metric Review is what grew out of it. Today, I'd like to share why we started Metric Review and how…


Automating Service Vulnerability Analysis with LLMs #2

*This post is based on work carried out in an R&D network environment. Hello, I'm Sangyoung Pyo, a Security Researcher at Toss. In the previous post, I briefly introduced the problems we ran into while automating service vulnerability analysis with LLMs and how we solved them. Three months have already passed since that post, and in just those few months AI's vulnerability analysis capability has climbed to a remarkably high level. With technology advancing this quickly, my own attitude toward and thinking about AI have changed a great deal. In this post…


Foreign User Research: Why Did Canadian "B" Fail Toss Identity Verification?

Do you know how Korea's financial system looks to foreigners? Search for "Korean Banking" on Reddit, the popular American community site, and you can see exactly what impression Korea's financial system leaves on them: hard to understand without someone's help, and complicated as an overall experience. Perhaps that's why many foreign users who had signed up for Toss still couldn't use it properly. If "finance for everyone" is Toss's vision, we believed foreigners shouldn't be excluded from it. So that foreigners, too, could comfortably…


Overhauling a Legacy Settlement System: From Rolling Out the New System to Large-Scale Batch Operations Know-How

Toss Payments recently overhauled its 20-year-old legacy settlement system to overcome deep-seated technical debt and prepare for massive transaction growth. By shifting from monolithic SQL queries and aggregated data to a granular, object-oriented architecture, the team significantly improved system maintainability, traceability, and batch processing performance. The transition focused on breaking down complex dependencies and ensuring that every transaction is verifiable and reproducible.

### Replacing Monolithic SQL with Object-Oriented Logic

* The legacy system relied on a "giant common query" filled with nested `DECODE`, `CASE WHEN`, and complex joins, making it nearly impossible to identify the impact of small changes.
* The team applied a "Divide and Conquer" strategy, splitting the massive query into distinct domains and refined sub-functions.
* Business logic was moved from the database layer into Kotlin-based objects (e.g., `SettlementFeeCalculator`), making business rules explicit and easier to test (a minimal sketch follows this summary).
* This modular approach allowed for "Incremental Migration," where specific features (like exchange rate conversions) could be upgraded to the new system independently.

### Improving Traceability through Granular Data Modeling

* The old system stored data in an aggregated state (Sum), which prevented developers from tracing errors back to specific transactions or reusing data for different reporting needs.
* The new architecture manages data at the minimum transaction unit (1:1), ensuring that every settlement result corresponds to a specific transaction.
* "Setting Snapshots" were introduced to store the exact contract conditions (fee rates, VAT status) at the time of calculation, allowing the system to reconstruct the context of past settlements.
* A state-based processing model was implemented to enable selective retries for failed transactions, significantly reducing recovery time compared to the previous "all-or-nothing" transaction approach (see the retry sketch below).

### Optimizing High-Resolution Data and Query Performance

* Managing data at the transaction level led to an explosion in data volume, necessitating specialized database strategies.
* The team implemented date-based Range Partitioning and composite indexing on settlement dates to maintain high query speeds despite the increased scale.
* To balance write performance and read needs, they created "query-specific tables" that offload the processing burden from the main batch system.
* Complex administrative queries were delegated to a separate high-performance data serving platform, maintaining a clean separation between core settlement logic and flexible data analysis.

### Resolving Batch Performance and I/O Bottlenecks

* The legacy batch system struggled with long processing times that scaled poorly with transaction growth due to heavy I/O and single-threaded processing.
* I/O was minimized by caching merchant contract information in memory at the start of a batch step, eliminating millions of redundant database lookups.
* The team optimized the `ItemProcessor` in Spring Batch by implementing bulk lookups (using a wrapper structure) to handle multiple records at once rather than querying the database for every individual item (see the batch sketch below).

This modernization demonstrates that scaling a financial system requires moving beyond "convenient" aggregations toward a granular, state-driven architecture. By decoupling business logic from the database and prioritizing data traceability, Toss Payments has built a foundation capable of handling the next generation of transaction volumes.
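To make the "SQL into Kotlin objects" idea concrete, here is a minimal sketch of what a `SettlementFeeCalculator` working against a setting snapshot could look like. The summary only names the class; the fields, the 10% VAT rate, and the rounding rules below are illustrative assumptions, not the actual Toss Payments code.

```kotlin
import java.math.BigDecimal
import java.math.RoundingMode
import java.time.Instant

// A "setting snapshot": the contract conditions frozen at calculation time,
// so a past settlement can be reconstructed exactly as it was computed.
data class ContractSnapshot(
    val feeRate: BigDecimal,   // e.g. BigDecimal("0.025") for a 2.5% fee
    val vatApplied: Boolean,   // whether VAT applies under this contract
    val capturedAt: Instant,
)

// One settlement input at the minimum (1:1) transaction unit.
data class Transaction(val id: Long, val amount: BigDecimal)

data class SettlementResult(
    val transactionId: Long,
    val fee: BigDecimal,
    val vat: BigDecimal,
    val payout: BigDecimal,
)

// Business rules that used to hide inside nested DECODE/CASE WHEN SQL,
// made explicit and unit-testable as a plain Kotlin object.
class SettlementFeeCalculator(private val snapshot: ContractSnapshot) {
    fun settle(tx: Transaction): SettlementResult {
        val fee = (tx.amount * snapshot.feeRate).setScale(0, RoundingMode.HALF_UP)
        // Assumes Korea's 10% VAT purely for illustration.
        val vat = if (snapshot.vatApplied)
            (fee * BigDecimal("0.1")).setScale(0, RoundingMode.HALF_UP)
        else
            BigDecimal.ZERO
        return SettlementResult(tx.id, fee, vat, tx.amount - fee - vat)
    }
}
```

Because the calculator depends only on the snapshot it was given, re-running it against a historical snapshot reproduces the original settlement, which is exactly the traceability property the post emphasizes.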
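The "selective retries" point can also be shown in a few lines: each item carries a state, and only failed items are reprocessed, so one bad record never forces an all-or-nothing rerun. A hedged sketch with hypothetical names:

```kotlin
// Minimal sketch of state-based selective retry; names are illustrative.
enum class ItemState { PENDING, DONE, FAILED }

data class WorkItem(val id: Long, var state: ItemState)

// Reprocess only the FAILED items; successes flip to DONE, and anything
// that fails again simply stays FAILED for the next retry pass.
fun retryFailed(items: List<WorkItem>, process: (WorkItem) -> Unit) {
    items.filter { it.state == ItemState.FAILED }.forEach { item ->
        runCatching { process(item) }.onSuccess { item.state = ItemState.DONE }
    }
}
```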
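For the batch section, here is a sketch of the two I/O optimizations together, assuming Spring Batch: a contract cache loaded once per step, and a wrapper type that lets a single `process()` call enrich a whole page of records. The type names and the `Map`-based cache are assumptions, not the team's actual structures.

```kotlin
import org.springframework.batch.item.ItemProcessor

data class Settlement(val merchantId: Long, val amount: Long)
data class EnrichedSettlement(val settlement: Settlement, val feeRate: Double)

// Wrapper types: grouping a page of records lets one process() call
// enrich many items at once instead of issuing one query per item.
data class SettlementChunk(val items: List<Settlement>)
data class EnrichedChunk(val items: List<EnrichedSettlement>)

class BulkEnrichProcessor(
    // Merchant contract data loaded into memory once at the start of the
    // step, replacing millions of redundant per-item database lookups.
    private val contractCache: Map<Long, Double>,
) : ItemProcessor<SettlementChunk, EnrichedChunk> {

    override fun process(chunk: SettlementChunk): EnrichedChunk =
        EnrichedChunk(chunk.items.map { s ->
            EnrichedSettlement(s, contractCache.getValue(s.merchantId))
        })
}
```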


Turning a Legacy Payment Ledger into a Scalable System

Toss Payments successfully modernized a 20-year-old legacy payment ledger by transitioning to a decoupled, MySQL-based architecture designed for high scalability and consistency. By implementing strategies like INSERT-only immutability and event-driven domain isolation, they overcame structural limitations such as the inability to handle split payments. Ultimately, the project demonstrates that robust system design must be paired with resilient operational recovery mechanisms to manage the complexities of large-scale financial migrations.

### Legacy Ledger Challenges

* **Inconsistent Schemas:** Different payment methods used entirely different table structures; for instance, a table named `REFUND` unexpectedly contained only account transfer data rather than all refund types.
* **Domain Coupling:** Multiple domains (settlement, accounting, and payments) shared the same tables and columns, meaning a single schema change required impact analysis across several teams.
* **Structural Limits:** A rigid 1:1 relationship between a payment and its method prevented the implementation of modern features like split payments or "Dutch pay" models.

### New Ledger Architecture

* **Data Immutability:** The system shifted from updating existing rows to an **INSERT-only** principle, ensuring a reliable audit trail and preventing database deadlocks (a minimal sketch follows this summary).
* **Event-Driven Decoupling:** Instead of direct database access, the system uses Kafka to publish payment events, allowing independent domains to consume data without tight coupling.
* **Payment-Approval Separation:** By separating the "Payment" (the transaction intent) from the "Approval" (the specific financial method), the system now supports multiple payment methods per transaction.

### Safe Migration and Data Integrity

* **Asynchronous Mirroring:** To maintain zero downtime, data was initially written to the legacy system and then asynchronously loaded into the new MySQL ledger.
* **Resource Tuning:** Developers used dedicated migration servers within the same AWS Availability Zone to minimize latency and implemented **bulk inserts** to handle hundreds of millions of rows efficiently.
* **Verification Batches:** A separate batch process ran every five minutes against a read-only (RO) database to identify and correct any data gaps caused by asynchronous processing failures.

### Operational Resilience and Incident Response

* **Query Optimization:** During a load spike, the MySQL optimizer chose full scans over indexes; the team resolved this with SQL hints and used a five-version Docker image history for rapid rollbacks.
* **Network Cancellation:** To handle timeouts between Toss and external card issuers, the system automatically sends cancellation requests and synchronizes states (see the sketch below).
* **Timeout Standardization:** Discrepancies between microservices were resolved by calculating the maximum processing time of the approval servers and aligning all upstream timeout settings to prevent merchant response mismatches.
* **Reliable Event Delivery:** While using the **Outbox pattern** for events, the team added log-based recovery (Elasticsearch and local disk) and idempotency keys in event headers to handle both missing and duplicate messages (see the sketch below).

For organizations tackling significant technical debt, this transition highlights that initial design is only half the battle. True system reliability comes from building "self-healing" structures, such as automated correction batches and standardized timeout chains, that can survive the unpredictable nature of live production environments.
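To illustrate the INSERT-only principle, here is a minimal sketch: every state change appends a new row, and a payment's current state is simply its latest row. The entry fields and in-memory list are illustrative assumptions standing in for the MySQL tables the post describes.

```kotlin
import java.time.Instant

enum class PaymentStatus { READY, APPROVED, PARTIALLY_CANCELED, CANCELED }

// One immutable ledger row. The full change history is preserved for
// auditing; nothing is ever UPDATEd in place.
data class LedgerEntry(
    val paymentId: String,
    val seq: Long,             // monotonically increasing per payment
    val status: PaymentStatus,
    val amount: Long,
    val recordedAt: Instant,
)

class PaymentLedger {
    private val rows = mutableListOf<LedgerEntry>()

    // Appending instead of updating removes the row-lock contention
    // that can end in the deadlocks the legacy UPDATE model suffered.
    fun append(entry: LedgerEntry) { rows += entry }

    // The current state of a payment is just its most recent row.
    fun currentState(paymentId: String): LedgerEntry? =
        rows.filter { it.paymentId == paymentId }.maxByOrNull { it.seq }
}
```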
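The network-cancellation logic can be sketched as well: after a timeout the approval outcome at the issuer is unknown, so sending a cancellation converges both sides to a known state. The interface and exception type here are assumptions about how such a client might look, not Toss's actual integration code.

```kotlin
import java.util.concurrent.TimeoutException

// Issuer-facing client; cancel() is assumed idempotent on the issuer side.
interface CardIssuerClient {
    fun approve(txId: String): Boolean
    fun cancel(txId: String)
}

class ApprovalService(private val issuer: CardIssuerClient) {
    // On timeout, the issuer may or may not have approved. A follow-up
    // cancellation converges both systems to "not approved" instead of
    // leaving them silently out of sync.
    fun approveOrCancel(txId: String): Boolean =
        try {
            issuer.approve(txId)
        } catch (e: TimeoutException) {
            issuer.cancel(txId)
            false
        }
}
```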
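Finally, a sketch of the duplicate-message half of "reliable event delivery": an idempotency key in the event header lets a consumer drop redelivered events. The map-based event shape and in-memory key set are illustrative; the post implies a persistent store and Kafka headers.

```kotlin
// A payment event whose headers carry an idempotency key, as described
// in the summary; the shape of the event is an assumption.
data class PaymentEvent(
    val headers: Map<String, String>,
    val payload: String,
)

class IdempotentConsumer {
    // In production this would be a persistent store, not an in-memory set.
    private val seen = HashSet<String>()

    fun handle(event: PaymentEvent) {
        val key = event.headers["idempotency-key"] ?: return
        if (!seen.add(key)) return  // duplicate delivery: drop it safely
        // ...apply the event exactly once here...
    }
}
```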


What It Means to Work as a Silo QA at Toss Place

Toss Place implements a dual-role QA structure where managers are embedded directly within product Silos from the initial planning stages to final deployment. This shift moves QA from a final-stage bottleneck to a proactive partner that enhances delivery speed and stability through deep historical context and early risk mitigation. Consequently, the organization has transitioned to a culture where quality is viewed as a shared team responsibility rather than a siloed functional task.

### Integrating QA into Product Silos

* QA managers belong to both a central functional team and specific product units (Silos) to ensure they are involved in the entire product lifecycle.
* Participation begins at the OKR design phase, allowing QA to align testing strategies with specific product intentions and business goals.
* Early involvement enables accurate risk assessment and scope estimation, preventing the "shallow testing" that often occurs when QA only sees the final product.

### Optimizing Spec Reviews and Sanity Testing

* The team introduced a structured flow consisting of spec reviews followed by Q&A sessions to reduce repetitive discussions and information gaps.
* All specification changes are centralized in shared design tools (such as Deus) or messenger threads to ensure transparency across all roles.
* "Sanity Test" criteria were established where developers and QA agree on "Happy Case" validations and minimum spec requirements before development begins, ensuring everyone starts from the same baseline.

### Collaborative Live Monitoring

* Post-release checklists were developed to involve the entire Silo in live monitoring, overcoming the limitations of having a single QA manager per unit.
* This collaborative approach encourages non-technical roles to interact with the live product, reinforcing the culture that quality is a collective team responsibility.

### Streamlining Issue Tracking and Communication

* The team implemented a "Send to Notion" workflow to instantly capture messenger-based feedback and ideas into a structured, prioritized backlog.
* To reduce communication fragmentation, they transitioned from Jira to integrated messenger lists and canvases, which allowed for centralized discussions and faster issue resolution.
* Backlogs are prioritized based on user experience impact and release urgency, ensuring that critical bugs are addressed while minor improvements are tracked for future cycles.

The success of these initiatives demonstrates that QA effectiveness is driven by integration and autonomy rather than rigid adherence to specific tools. To achieve both high velocity and high quality, organizations should empower QA professionals to act as product peers who can flexibly adapt their processes to the unique needs and data-driven goals of their specific product teams.