database-design

4 posts

toss

Legacy Settlement Overhaul: From

Toss Payments recently overhauled its 20-year-old legacy settlement system to overcome deep-seated technical debt and prepare for massive transaction growth. By shifting from monolithic SQL queries and aggregated data to a granular, object-oriented architecture, the team significantly improved system maintainability, traceability, and batch processing performance. The transition focused on breaking down complex dependencies and ensuring that every transaction is verifiable and reproducible.

### Replacing Monolithic SQL with Object-Oriented Logic

* The legacy system relied on a "giant common query" filled with nested `DECODE`, `CASE WHEN`, and complex joins, making it nearly impossible to identify the impact of small changes.
* The team applied a "Divide and Conquer" strategy, splitting the massive query into distinct domains and refined sub-functions.
* Business logic was moved from the database layer into Kotlin-based objects (e.g., `SettlementFeeCalculator`), making business rules explicit and easier to test.
* This modular approach allowed for "Incremental Migration," where specific features (like exchange rate conversions) could be upgraded to the new system independently.

### Improving Traceability through Granular Data Modeling

* The old system stored data in an aggregated state (Sum), which prevented developers from tracing errors back to specific transactions or reusing data for different reporting needs.
* The new architecture manages data at the minimum transaction unit (1:1), ensuring that every settlement result corresponds to a specific transaction.
* "Setting Snapshots" were introduced to store the exact contract conditions (fee rates, VAT status) at the time of calculation, allowing the system to reconstruct the context of past settlements.
* A state-based processing model was implemented to enable selective retries for failed transactions, significantly reducing recovery time compared to the previous "all-or-nothing" transaction approach.

### Optimizing High-Resolution Data and Query Performance

* Managing data at the transaction level led to an explosion in data volume, necessitating specialized database strategies.
* The team implemented date-based Range Partitioning and composite indexing on settlement dates to maintain high query speeds despite the increased scale.
* To balance write performance and read needs, they created "Query-specific tables" that offload the processing burden from the main batch system.
* Complex administrative queries were delegated to a separate high-performance data serving platform, maintaining a clean separation between core settlement logic and flexible data analysis.

### Resolving Batch Performance and I/O Bottlenecks

* The legacy batch system struggled with long processing times that scaled poorly with transaction growth due to heavy I/O and single-threaded processing.
* I/O was minimized by caching merchant contract information in memory at the start of a batch step, eliminating millions of redundant database lookups.
* The team optimized the `ItemProcessor` in Spring Batch by implementing bulk lookups (using a Wrapper structure) to handle multiple records at once rather than querying the database for every individual item.

This modernization demonstrates that scaling a financial system requires moving beyond "convenient" aggregations toward a granular, state-driven architecture.
By decoupling business logic from the database and prioritizing data traceability, Toss Payments has built a foundation capable of handling the next generation of transaction volumes.
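
To make the move from SQL-embedded rules to explicit domain objects concrete, here is a minimal Kotlin sketch of a calculator in the spirit of `SettlementFeeCalculator`, paired with a setting snapshot that freezes the contract conditions used for the calculation. The field names, rates, and rounding rules are illustrative assumptions, not the actual Toss Payments model.

```kotlin
import java.math.BigDecimal
import java.math.RoundingMode

// Contract conditions captured at calculation time (a "setting snapshot"),
// so a past settlement can be reproduced even if the live contract changes later.
// Field names and the VAT handling are assumptions for this sketch.
data class SettlementSnapshot(
    val merchantId: String,
    val feeRate: BigDecimal,          // e.g. 0.025 for a 2.5% fee
    val vatIncluded: Boolean,         // whether VAT is already included in the fee
    val vatRate: BigDecimal = BigDecimal("0.10"),
)

// One input transaction and its 1:1 settlement result.
data class Transaction(val transactionId: String, val amount: BigDecimal)

data class SettlementResult(
    val transactionId: String,
    val fee: BigDecimal,
    val vat: BigDecimal,
    val payout: BigDecimal,
)

// Business rule expressed as a plain Kotlin object instead of nested DECODE/CASE WHEN SQL.
class SettlementFeeCalculator(private val snapshot: SettlementSnapshot) {
    fun calculate(tx: Transaction): SettlementResult {
        val fee = tx.amount.multiply(snapshot.feeRate).setScale(0, RoundingMode.HALF_UP)
        val vat = if (snapshot.vatIncluded) BigDecimal.ZERO
                  else fee.multiply(snapshot.vatRate).setScale(0, RoundingMode.HALF_UP)
        return SettlementResult(tx.transactionId, fee, vat, tx.amount - fee - vat)
    }
}

fun main() {
    val snapshot = SettlementSnapshot("merchant-1", BigDecimal("0.025"), vatIncluded = false)
    val calculator = SettlementFeeCalculator(snapshot)
    println(calculator.calculate(Transaction("tx-1", BigDecimal("10000"))))
}
```

The batch-side I/O optimization can be sketched the same way: building on the `SettlementSnapshot` type above, a hypothetical repository is read once when a batch step starts, and every item processed afterwards resolves its merchant contract from memory instead of issuing another query.

```kotlin
import java.time.LocalDate

// Hypothetical repository interface; in the real system this would query contract tables.
interface MerchantContractRepository {
    fun findAllFor(settlementDate: LocalDate): List<SettlementSnapshot>
}

// Loaded once at the start of a batch step, so per-item processing never hits the database.
class MerchantContractCache(repository: MerchantContractRepository, settlementDate: LocalDate) {
    private val byMerchant: Map<String, SettlementSnapshot> =
        repository.findAllFor(settlementDate).associateBy { it.merchantId }

    fun snapshotFor(merchantId: String): SettlementSnapshot =
        byMerchant[merchantId] ?: error("No contract snapshot cached for merchant $merchantId")
}
```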

toss

From Legacy Payment Ledger to Scalable

Toss Payments successfully modernized a 20-year-old legacy payment ledger by transitioning to a decoupled, MySQL-based architecture designed for high scalability and consistency. By implementing strategies like INSERT-only immutability and event-driven domain isolation, they overcame structural limitations such as the inability to handle split payments. Ultimately, the project demonstrates that robust system design must be paired with resilient operational recovery mechanisms to manage the complexities of large-scale financial migrations.

### Legacy Ledger Challenges

* **Inconsistent Schemas:** Different payment methods used entirely different table structures; for instance, a table named `REFUND` unexpectedly contained only account transfer data rather than all refund types.
* **Domain Coupling:** Multiple domains (settlement, accounting, and payments) shared the same tables and columns, meaning a single schema change required impact analysis across several teams.
* **Structural Limits:** A rigid 1:1 relationship between a payment and its method prevented the implementation of modern features like split payments or "Dutch pay" models.

### New Ledger Architecture

* **Data Immutability:** The system shifted from updating existing rows to an **INSERT-only** principle, ensuring a reliable audit trail and preventing database deadlocks.
* **Event-Driven Decoupling:** Instead of direct database access, the system uses Kafka to publish payment events, allowing independent domains to consume data without tight coupling.
* **Payment-Approval Separation:** By separating the "Payment" (the transaction intent) from the "Approval" (the specific financial method), the system now supports multiple payment methods per transaction.

### Safe Migration and Data Integrity

* **Asynchronous Mirroring:** To maintain zero downtime, data was initially written to the legacy system and then asynchronously loaded into the new MySQL ledger.
* **Resource Tuning:** Developers used dedicated migration servers within the same AWS Availability Zone to minimize latency and implemented **Bulk Inserts** to handle hundreds of millions of rows efficiently.
* **Verification Batches:** A separate batch process ran every five minutes against a Read-Only (RO) database to identify and correct any data gaps caused by asynchronous processing failures.

### Operational Resilience and Incident Response

* **Query Optimization:** During a load spike, the MySQL optimizer chose full table scans over indexes; the team resolved this with SQL hints and kept a five-version Docker image history for rapid rollbacks.
* **Network Cancellation:** To handle timeouts between Toss and external card issuers, the system automatically sends cancellation requests and synchronizes states.
* **Timeout Standardization:** Discrepancies between microservices were resolved by calculating the maximum processing time of the approval servers and aligning all upstream timeout settings to prevent merchant response mismatches.
* **Reliable Event Delivery:** While using the **Outbox pattern** for events, the team added log-based recovery (Elasticsearch and local disk) and idempotency keys in event headers to handle both missing and duplicate messages.

For organizations tackling significant technical debt, this transition highlights that initial design is only half the battle.
True system reliability comes from building "self-healing" structures—such as automated correction batches and standardized timeout chains—that can survive the unpredictable nature of live production environments.
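
As a rough illustration of how payment-approval separation and the INSERT-only principle work together, the Kotlin sketch below models one payment settled by several approvals, with a cancellation recorded as a new row rather than an update of the original one. All type names, fields, and the in-memory "ledger" are assumptions made for the example; the real system persists these rows in MySQL.

```kotlin
import java.math.BigDecimal
import java.time.Instant

// A payment (the transaction intent) no longer owns exactly one method;
// it is settled by one or more approvals, which is what enables split payments.
data class Payment(val paymentKey: String, val orderAmount: BigDecimal)

enum class ApprovalStatus { APPROVED, CANCELLED }

// Ledger rows are never updated in place; every state change is a new INSERT,
// which preserves the audit trail and avoids contention on hot rows.
data class ApprovalRecord(
    val paymentKey: String,
    val approvalId: String,
    val method: String,            // e.g. "CARD", "ACCOUNT_TRANSFER"
    val amount: BigDecimal,
    val status: ApprovalStatus,
    val recordedAt: Instant = Instant.now(),
)

class ApprovalLedger {
    private val rows = mutableListOf<ApprovalRecord>()   // stands in for an INSERT-only table

    fun append(record: ApprovalRecord) { rows += record }

    // Current state is derived from the latest row per approval, not from UPDATEs.
    fun currentAmount(paymentKey: String): BigDecimal =
        rows.filter { it.paymentKey == paymentKey }
            .groupBy { it.approvalId }
            .mapValues { (_, history) -> history.last() }
            .values
            .filter { it.status == ApprovalStatus.APPROVED }
            .sumOf { it.amount }
}

fun main() {
    val ledger = ApprovalLedger()
    val payment = Payment("pay-1", BigDecimal("30000"))
    // Split payment: one payment, two approvals with different methods.
    ledger.append(ApprovalRecord(payment.paymentKey, "appr-1", "CARD", BigDecimal("20000"), ApprovalStatus.APPROVED))
    ledger.append(ApprovalRecord(payment.paymentKey, "appr-2", "ACCOUNT_TRANSFER", BigDecimal("10000"), ApprovalStatus.APPROVED))
    // Cancellation is another INSERT, not an UPDATE of the original row.
    ledger.append(ApprovalRecord(payment.paymentKey, "appr-2", "ACCOUNT_TRANSFER", BigDecimal("10000"), ApprovalStatus.CANCELLED))
    println(ledger.currentAmount(payment.paymentKey))   // 20000
}
```

Deriving the current amount from the row history, rather than mutating a single row, is what keeps the audit trail intact and sidesteps the deadlocks the old update-heavy ledger was prone to.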

toss

Working as a Toss Place Sil

Toss Place implements a dual-role QA structure where managers are embedded directly within product Silos from the initial planning stages to final deployment. This shift moves QA from a final-stage bottleneck to a proactive partner that enhances delivery speed and stability through deep historical context and early risk mitigation. Consequently, the organization has transitioned to a culture where quality is viewed as a shared team responsibility rather than a siloed functional task.

### Integrating QA into Product Silos

* QA managers belong to both a central functional team and specific product units (Silos) to ensure they are involved in the entire product lifecycle.
* Participation begins at the OKR design phase, allowing QA to align testing strategies with specific product intentions and business goals.
* Early involvement enables accurate risk assessment and scope estimation, preventing the "shallow testing" that often occurs when QA only sees the final product.

### Optimizing Spec Reviews and Sanity Testing

* The team introduced a structured flow consisting of Spec Reviews followed by Q&A sessions to reduce repetitive discussions and information gaps.
* All specification changes are centralized in shared design tools (such as Deus) or messenger threads to ensure transparency across all roles.
* "Sanity Test" criteria were established where developers and QA agree on "Happy Case" validations and minimum spec requirements before development begins, ensuring everyone starts from the same baseline.

### Collaborative Live Monitoring

* Post-release checklists were developed to involve the entire Silo in live monitoring, overcoming the limitations of having a single QA manager per unit.
* This collaborative approach encourages non-technical roles to interact with the live product, reinforcing the culture that quality is a collective team responsibility.

### Streamlining Issue Tracking and Communication

* The team implemented a "Send to Notion" workflow to instantly capture messenger-based feedback and ideas into a structured, prioritized backlog.
* To reduce communication fragmentation, they transitioned from Jira to integrated Messenger Lists and Canvases, which allowed for centralized discussions and faster issue resolution.
* Backlogs are prioritized based on user experience impact and release urgency, ensuring that critical bugs are addressed while minor improvements are tracked for future cycles.

The success of these initiatives demonstrates that QA effectiveness is driven by integration and autonomy rather than rigid adherence to specific tools. To achieve both high velocity and high quality, organizations should empower QA professionals to act as product peers who can flexibly adapt their processes to the unique needs and data-driven goals of their specific product teams.

line

Replacing the Payment System DB Handling

The LINE Billing Platform successfully migrated its large-scale payment database from Nbase-T to Vitess to handle high-traffic global transactions. While initially exploring gRPC for its performance reputation, the team transitioned to the MySQL protocol to ensure stability and reduce CPU overhead within their Java-based environment. This implementation demonstrates how Vitess can manage complex sharding requirements while maintaining high availability through automated recovery tools. ### Protocol Selection and Implementation - The team initially attempted to use the gRPC protocol but encountered `http2: frame too large` errors and significant CPU overhead during performance testing. - Manual mapping of query results to Java objects proved cumbersome with the Vitess gRPC client, leading to a shift toward the more mature and recommended MySQL protocol. - Using the MySQL protocol allowed the team to leverage standard database drivers while benefiting from Vitess's routing capabilities via VTGate. ### Keyspace Architecture and Data Routing - The system utilizes a dual-keyspace strategy: a "Global Keyspace" for unsharded metadata and a "Service Keyspace" for sharded transaction data. - The Global Keyspace manages sharding keys using a "sequence" table type to ensure unique, auto-incrementing identifiers across the platform. - The Service Keyspace is partitioned into $N$ shards using a hash-based Vindex, which distributes coin balances and transaction history. - VTGate automatically routes queries to the correct shard by analyzing the sharding key in the `WHERE` clause or `INSERT` statement, minimizing cross-shard overhead. ### MySQL Compatibility and Transaction Logic - Vitess maintains `REPEATABLE READ` isolation for single-shard transactions, while multi-shard transactions default to `READ COMMITTED`. - Advanced features like Two-Phase Commit (2PC) are available for handling distributed transactions across multiple shards. - Query execution plans are analyzed using `VEXPLAIN` and `VTEXPLAIN`, often managed through the VTAdmin web interface for better visibility. - Certain limitations apply, such as temporary tables only being supported in unsharded keyspaces and specific unsupported SQL cases documented in the Vitess core. ### Automated Operations and Monitoring - The team employs VTOrc (based on Orchestrator) to automatically detect and repair database failures, such as unreachable primaries or replication stops. - Monitoring is centralized via Prometheus, which scrapes metrics from VTOrc, VTGate, and VTTablet components at dedicated ports (e.g., 16000). - Real-time alerts are routed through Slack and email, using `tablet_alias` to specifically identify which MySQL node or VTTablet is experiencing issues. - A web-based recovery dashboard provides a history of automated fixes, allowing operators to track the health of the cluster over time. For organizations migrating high-traffic legacy systems to a cloud-native sharding solution, prioritizing the MySQL protocol over gRPC is recommended for better compatibility with existing application frameworks and reduced operational complexity.