
line

Code Quality Improvement Techniques Part 30

Code quality often suffers when functions share implicit dependencies, where the correct behavior of one relies on the state or validation provided by another. This "invisible" connection creates fragile code that is prone to runtime errors and logic mismatches during refactoring or feature expansion. To solve this, developers should consolidate related logic or make dependencies explicit to ensure consistency and safety.

## Problems with Implicit Function Dependencies

When logic is split across separate functions—such as one for validation (`isContentValid`) and another for processing (`getMessageText`)—developers often rely on undocumented preconditions.

* **Fragile Runtime Safety:** In the provided example, `getMessageText` throws a runtime error if called on invalid data, assuming the caller has already checked `isContentValid`.
* **Maintenance Burden:** When new data types (e.g., a new message type) are added, developers must remember to update both functions to keep them in sync, increasing the risk of "forgotten" updates.
* **Hidden Logic Flow:** Callers might not realize the two functions are linked, leading to improper usage where the transformation function is called without the necessary prior validation.

## Consolidating Logic for a Single Source of Truth

The most effective way to eliminate implicit dependencies is to merge filtering and transformation into a single function. This ensures that the code cannot reach a processing state without passing through the necessary logic.

* **Nullable Returns:** By changing the transformation function to return a nullable type (`String?`), the function can signal that a piece of data is "invalid" or "empty" directly through its return value.
* **Simplified Caller Logic:** The UI layer no longer needs to call two separate functions; it simply checks whether the result of the transformation is null to determine visibility.
* **Elimination of Redundant Branches:** This approach reduces the number of `when` or `if-else` blocks that need to be maintained across the codebase.

## Establishing Explicit Consistency

In scenarios where separate functions for validation and transformation are required for clarity or architectural reasons, the validation logic should be defined in terms of the transformation.

* **Dependent Validation:** Instead of writing a separate `when` block for `isContentValid`, the function should simply check whether `getMessageText` returns a non-null value.
* **Guaranteed Synchronization:** This structure makes the relationship between the two functions explicit and guarantees that if a message is deemed "valid," it will always produce a valid text output.
* **Improved Documentation:** Defining functions this way serves as self-documenting code, showing future developers exactly how the two operations are linked.

When functions share a "red thread" of logic, they should either be merged or structured so that one acts as the source of truth for the other. By removing the need for callers to remember implicit preconditions, you reduce the surface area for bugs and make the codebase significantly easier to extend.
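
The following Kotlin sketch illustrates both remedies; `MessageContent` and its subtypes are hypothetical stand-ins for the post's message model:

```kotlin
// Hypothetical message model standing in for the post's example.
sealed class MessageContent {
    data class Text(val text: String) : MessageContent()
    data class Image(val caption: String?) : MessageContent()
}

// Filtering and transformation merged into one function: a null return means
// "invalid or empty", so no caller can process a message that skipped validation.
fun getMessageText(content: MessageContent): String? = when (content) {
    is MessageContent.Text -> content.text.takeIf { it.isNotBlank() }
    is MessageContent.Image -> content.caption?.takeIf { it.isNotBlank() }
}

// If a separate validator must exist, define it in terms of the transformation
// so the two functions can never drift out of sync.
fun isContentValid(content: MessageContent): Boolean = getMessageText(content) != null
```

A UI caller then needs only a single null check, for example `getMessageText(content)?.let { label.text = it }`, instead of remembering to pair two calls.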

line

Code Quality Improvement Techniques Part 29

Complexity in software often arises from "Gordian Variables," where tangled data dependencies make the logic flow difficult to trace and maintain. By identifying and designing an ideal intermediate data structure, developers can decouple these dependencies and simplify complex operations. This approach replaces convoluted conditional checks with a clean, structured data flow that highlights the core business logic.

## The Complexity of Tangled Dependencies

Synchronizing remote data with local storage often leads to fragmented logic when the relationship between data IDs and objects is not properly managed.

* Initial implementations frequently use set operations like `subtract` on ID lists to determine which items to create, update, or delete.
* This approach forces the program to re-access original data sets multiple times, creating a disconnected flow between identifying a change and executing it.
* Dependency entanglements often necessitate "impossible" runtime error handling (e.g., `error("This must not happen")`) because the compiler cannot guarantee data presence within maps during the update phase.
* Inconsistent processing patterns emerge, where "add" and "update" logic might follow one sequence while "delete" logic follows an entirely different one.

## Designing Around Intermediate Data Structures

To untangle complex flows, developers should work backward from an ideal data representation that categorizes all possible states—additions, updates, and deletions.

* The first step involves creating lookup maps for both remote and local entries to provide O(1) access to data objects.
* A unified collection of all unique IDs from both sources serves as the foundation for a single, comprehensive transformation pass.
* A specialized utility function, such as `partitionByNullity`, can transform a sequence of data pairs (`Pair<Remote?, Local?>`) into three distinct, non-nullable lists.
* This transformation results in a `Triple` containing `createdEntries`, `updatedEntries` (as pairs), and `deletedEntries`, effectively separating data preparation from business execution.

## Improved Synchronization Flow

Restructuring the function around categorized lists allows the primary synchronization logic to remain concise and readable.

* The synchronization function becomes a sequence of two phases: data categorization followed by execution loops.
* By using the `partitionByNullity` pattern, the code eliminates the need for manual null checks or "impossible" error branches during the update process.
* The final implementation highlights the most important part of the code—the `forEach` blocks for adding, updating, and deleting—by removing the noise of ID-based lookups and set mathematics.

When faced with complex data dependencies, prioritize the creation of a clean intermediate data structure over optimizing individual logical branches. Designing a data flow that naturally represents the different states of your business logic will result in more robust, self-documenting, and maintainable code.
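
A minimal sketch of this flow, with hypothetical `RemoteEntry`/`LocalEntry` types standing in for the post's data model and placeholder comments where the business execution would go:

```kotlin
data class RemoteEntry(val id: String, val payload: String)
data class LocalEntry(val id: String, val payload: String)

// Splits (Remote?, Local?) pairs into created / updated / deleted buckets, so the
// execution phase needs no null checks and no "impossible" error branches.
fun <R : Any, L : Any> Iterable<Pair<R?, L?>>.partitionByNullity():
    Triple<List<R>, List<Pair<R, L>>, List<L>> {
    val created = mutableListOf<R>()
    val updated = mutableListOf<Pair<R, L>>()
    val deleted = mutableListOf<L>()
    for ((remote, local) in this) {
        when {
            remote != null && local == null -> created += remote
            remote != null && local != null -> updated += remote to local
            remote == null && local != null -> deleted += local
            // (null, null) cannot occur: every ID comes from one of the two sources.
        }
    }
    return Triple(created, updated, deleted)
}

fun synchronize(remote: List<RemoteEntry>, local: List<LocalEntry>) {
    // Phase 1: data categorization via lookup maps and a unified ID set.
    val remoteById = remote.associateBy { it.id }
    val localById = local.associateBy { it.id }
    val allIds = remoteById.keys + localById.keys
    val (createdEntries, updatedEntries, deletedEntries) =
        allIds.map { remoteById[it] to localById[it] }.partitionByNullity()

    // Phase 2: execution loops, now the most visible part of the function.
    createdEntries.forEach { /* insert it into local storage */ }
    updatedEntries.forEach { (remoteEntry, localEntry) -> /* apply remoteEntry over localEntry */ }
    deletedEntries.forEach { /* remove it from local storage */ }
}
```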

line

Code Quality Improvement Techniques Part 28

LY Corporation’s technical review highlights that making a class open for inheritance imposes a "tax" on its internal constraints, particularly immutability. While developers often use inheritance to create specialized versions of a class, doing so with immutable types can allow subclasses to inadvertently or intentionally break the parent class's guarantees. To ensure strict data integrity, the post concludes that classes intended to be immutable should be made final or designed around read-only interfaces rather than open for extension.

### The Risks of Open Immutable Classes

* Kotlin developers often wrap `IntArray` in an `ImmutableIntList` to avoid the overhead of boxed types while ensuring the collection remains unchangeable.
* If `ImmutableIntList` is marked as `open`, a developer might create a `MutableIntList` subclass that adds a `set` method to modify the internal `protected valueArray`, violating the "Immutable" contract of the parent type.
* Even if the internal state is `private`, a subclass can override the `get` method to return dynamic or state-dependent values, effectively breaking the expectation that the data remains constant.
* These issues demonstrate that any class with a "fundamental" name should be carefully guarded against unexpected inheritance in different modules or packages.

### Establishing Safe Inheritance Hierarchies

* Mutable objects should not inherit from immutable objects, as this inherently violates the immutability constraints established by the parent.
* Conversely, immutable objects should not inherit from mutable ones; this often leads to runtime errors (such as `UnsupportedOperationException`) when a user attempts to call modification methods like `add` or `set` on an immutable instance.
* The most effective design pattern is to use a "read-only" (unmodifiable) interface as a common parent, similar to how Kotlin distinguishes between `List` and `MutableList`.
* In this structure, mutable classes can inherit from the read-only parent without issue (adding new methods), and immutable classes can inherit from the read-only parent while adding stricter internal constraints.

To maintain high code quality and prevent logic errors, developers should default to making classes final when immutability is a core requirement. If shared functionality is needed across different types of lists, utilize composition or a shared read-only interface to ensure that the "immutable" label remains a truthful guarantee.
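
As an illustration of the recommended hierarchy, here is a minimal sketch in the spirit of the post's `IntList` example (the interface shape is an assumption):

```kotlin
// Read-only parent: exposes access but no mutation, like Kotlin's List.
interface IntList {
    val size: Int
    operator fun get(index: Int): Int
}

// Final by default (no `open`), so the immutability guarantee cannot be broken
// by a subclass; the defensive copy keeps the internal array unreachable.
class ImmutableIntList(values: IntArray) : IntList {
    private val valueArray = values.copyOf()
    override val size get() = valueArray.size
    override fun get(index: Int) = valueArray[index]
}

// The mutable variant inherits from the read-only parent, never from the
// immutable class, so no "Immutable" contract is violated.
class MutableIntList(values: IntArray) : IntList {
    private val valueArray = values.copyOf()
    override val size get() = valueArray.size
    override fun get(index: Int) = valueArray[index]
    operator fun set(index: Int, value: Int) { valueArray[index] = value }
}
```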

line

Code Quality Improvement Techniques Part 27

Over-engineering through excessive Dependency Injection (DI) can introduce unnecessary complexity and obscure a system's logic. While DI is a powerful tool for modularity, applying it to simple utility functions or data models often creates a maintenance burden without providing tangible benefits. Developers should aim to balance flexibility with simplicity by only injecting dependencies that serve a specific architectural purpose.

### The Risks of Excessive Dependency Injection

Injecting every component, including simple formatters and model factories, can lead to several technical issues that degrade code maintainability:

* **Obscured Logic Flow:** When utilities are hidden behind interfaces and injected via constructors, tracing the actual execution path requires navigating through multiple callers and implementation files, making the code harder to read.
* **Increased Caller Responsibility:** Requiring dependencies for every small component forces the calling class to manage a "bloated" set of objects, often leading to a chain reaction where high-level classes must resolve dozens of unrelated dependencies.
* **Data Inconsistency:** Injecting multiple utilities that rely on a shared state (like a `Locale`) creates a risk where a caller might accidentally pass mismatched configurations to different components, breaking the expected association between values.

### Valid Use Cases for Dependency Injection

DI should be reserved for scenarios where the benefits of abstraction outweigh the cost of complexity. Proper use cases include:

* **Lifecycle and Scope Management:** Sharing objects with specific lifecycles, such as those managing global state or cross-cutting concerns.
* **Dependency Inversion:** Breaking circular dependencies between modules or ensuring the code adheres to specific architectural boundaries (e.g., Clean Architecture).
* **Implementation Switching:** Enabling the replacement of components for different environments, such as swapping a real network repository for a mock implementation during unit testing or debugging.
* **Decoupling for Build Performance:** Separating implementations into different modules to improve incremental build speeds or to isolate proprietary third-party libraries.

### Strategies for Refactoring and Simplification

To improve code quality, developers should identify "transparent" dependencies that can be internalized or simplified:

* **Direct Instantiation:** For simple data models like `NewsSnippet`, replace factory functions with direct constructor calls to clarify the intent and reduce boilerplate.
* **Internalize Simple Utilities:** Classes like `TimeTextFormatter` or `StringTruncator` that perform basic logic can be maintained as private properties within the class or as stateless `object` singletons rather than being injected.
* **Selective Injection:** Reserve constructor parameters for complex objects (e.g., repositories that handle network or database access) and environment-dependent values (e.g., a user's `Locale`).

The core principle for maintaining a clean codebase is to ensure every injected dependency has a clear, documented purpose. By avoiding the trap of "injecting everything by default," developers can create systems that are easier to trace, test, and maintain.
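
A sketch of the "internalize simple utilities" advice; the class shapes here are illustrative, with only the repository and the environment-dependent `Locale` left as constructor parameters:

```kotlin
import java.text.DateFormat
import java.util.Date
import java.util.Locale

interface NewsRepository {
    fun load(id: String): NewsSnippet
}
data class NewsSnippet(val title: String, val publishedAt: Long)

// Simple, stateless utility kept as an object instead of an injected interface:
// the logic is trivial and never needs to be swapped or mocked.
object TimeTextFormatter {
    fun format(epochMillis: Long, locale: Locale): String =
        DateFormat.getTimeInstance(DateFormat.SHORT, locale).format(Date(epochMillis))
}

class NewsItemPresenter(
    private val repository: NewsRepository, // injected: real network/database access
    private val locale: Locale              // injected: environment-dependent value
) {
    fun headlineWithTime(id: String): String {
        val snippet = repository.load(id)
        // The formatter is used directly; the caller no longer has to assemble
        // a bloated set of trivial dependencies.
        return "${snippet.title} (${TimeTextFormatter.format(snippet.publishedAt, locale)})"
    }
}
```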

line

Code Quality Improvement Techniques Part 26

The quality of code documentation depends heavily on the hierarchy of information, specifically prioritizing high-level intent in the very first sentence. By focusing on abstract summaries rather than implementation details at the start, developers can ensure that readers understand a function's purpose instantly without parsing through sequential logic. This principle of "summary-first" communication enhances readability and developer productivity across documentation comments, inline explanations, and TODO tasks.

### Strategies for Effective Documentation

* **Prioritize the first sentence:** Documentation comments should be written so that the overview is understandable from the first sentence alone.
* **Increase abstraction levels:** Avoid simply repeating what the code does (e.g., "split by period and remove empty strings"). Instead, describe the result in domain terms, such as "returns a list of words grouped by sentences."
* **Identify the most important element:** Since the primary goal of most functions is to produce a result, the summary should lead with what is being returned rather than how it is calculated.
* **Layer the details:** Technical specifics—such as specific delimiters like `SENTENCE_SEPARATOR` (`'.'`) or `WORD_SEPARATOR_REGEX` (`[ ,]+`)—and exclusion rules for empty strings should follow the initial summary.
* **Use concrete examples:** For complex transformations or edge cases, include a sample input and output (e.g., showing how `" a bc. .d,,."` maps to `[["a", "bc"], ["d"]]`) to clarify boundary conditions.

### Prioritizing Intent in Non-Documentation Comments

* **Focus on the "Why" for workarounds:** In inline comments, especially for "temporary fixes" or bug workarounds, the reason for the code's existence is more important than the action it performs. For instance, leading with "This is to avoid a bug in Device X" is more helpful than "Resetting the value to its previous state."
* **Lead with the goal in TODOs:** When writing TODO comments, state the ideal state or the required change first. Explanations regarding current limitations or why the change cannot be made immediately should be relegated to the following sentences.
* **Improve scannability:** Structuring comments this way allows developers to scan the codebase and understand the motivation behind complex logic without needing to read the entire comment block.

To maintain a clean and maintainable codebase, always choose the most critical piece of information—whether it is the function's return value, a bug's context, or a future goal—and place it at the very beginning of your comments.
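
A short sketch of a summary-first documentation comment for the word-splitting example above; the function body is an assumed implementation written to match the documented behavior:

```kotlin
/**
 * Returns a list of words grouped by sentences.
 *
 * The receiver is split into sentences by '.', and each sentence into words
 * by the regex [ ,]+. Empty sentences and empty words are excluded.
 *
 * Example: " a bc. .d,,." -> [["a", "bc"], ["d"]]
 */
fun String.toWordsBySentence(): List<List<String>> =
    split('.')
        .map { sentence -> sentence.split(Regex("[ ,]+")).filter { it.isNotEmpty() } }
        .filter { it.isNotEmpty() }
```

The first sentence alone gives the domain-level result; the delimiters, the exclusion rules, and the boundary-condition example are layered beneath it.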

toss

Legacy Settlement Overhaul: From

Toss Payments recently overhauled its 20-year-old legacy settlement system to overcome deep-seated technical debt and prepare for massive transaction growth. By shifting from monolithic SQL queries and aggregated data to a granular, object-oriented architecture, the team significantly improved system maintainability, traceability, and batch processing performance. The transition focused on breaking down complex dependencies and ensuring that every transaction is verifiable and reproducible.

### Replacing Monolithic SQL with Object-Oriented Logic

* The legacy system relied on a "giant common query" filled with nested `DECODE`, `CASE WHEN`, and complex joins, making it nearly impossible to identify the impact of small changes.
* The team applied a "Divide and Conquer" strategy, splitting the massive query into distinct domains and refined sub-functions.
* Business logic was moved from the database layer into Kotlin-based objects (e.g., `SettlementFeeCalculator`), making business rules explicit and easier to test.
* This modular approach allowed for "Incremental Migration," where specific features (like exchange rate conversions) could be upgraded to the new system independently.

### Improving Traceability through Granular Data Modeling

* The old system stored data in an aggregated state (Sum), which prevented developers from tracing errors back to specific transactions or reusing data for different reporting needs.
* The new architecture manages data at the minimum transaction unit (1:1), ensuring that every settlement result corresponds to a specific transaction.
* "Setting Snapshots" were introduced to store the exact contract conditions (fee rates, VAT status) at the time of calculation, allowing the system to reconstruct the context of past settlements.
* A state-based processing model was implemented to enable selective retries for failed transactions, significantly reducing recovery time compared to the previous "all-or-nothing" transaction approach.

### Optimizing High-Resolution Data and Query Performance

* Managing data at the transaction level led to an explosion in data volume, necessitating specialized database strategies.
* The team implemented date-based Range Partitioning and composite indexing on settlement dates to maintain high query speeds despite the increased scale.
* To balance write performance and read needs, they created "Query-specific tables" that offload the processing burden from the main batch system.
* Complex administrative queries were delegated to a separate high-performance data serving platform, maintaining a clean separation between core settlement logic and flexible data analysis.

### Resolving Batch Performance and I/O Bottlenecks

* The legacy batch system struggled with long processing times that scaled poorly with transaction growth due to heavy I/O and single-threaded processing.
* I/O was minimized by caching merchant contract information in memory at the start of a batch step, eliminating millions of redundant database lookups.
* The team optimized the `ItemProcessor` in Spring Batch by implementing bulk lookups (using a Wrapper structure) to handle multiple records at once rather than querying the database for every individual item.

This modernization demonstrates that scaling a financial system requires moving beyond "convenient" aggregations toward a granular, state-driven architecture.
By decoupling business logic from the database and prioritizing data traceability, Toss Payments has built a foundation capable of handling the next generation of transaction volumes.
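
The article stays at the architecture level; as a hedged illustration of two of its ideas (business rules as Kotlin objects and setting snapshots), here is a minimal sketch in which every name, and the 10% VAT figure, is an assumption:

```kotlin
import java.math.BigDecimal

// Contract conditions frozen at calculation time, so a past settlement can be
// reconstructed even after the merchant's contract changes.
data class SettingSnapshot(
    val feeRate: BigDecimal,
    val vatIncluded: Boolean
)

// Business rule made explicit in code rather than buried in DECODE/CASE WHEN SQL.
class SettlementFeeCalculator {
    fun calculate(amount: BigDecimal, snapshot: SettingSnapshot): BigDecimal {
        val fee = amount * snapshot.feeRate
        // The snapshot, not the live contract table, decides the rule.
        return if (snapshot.vatIncluded) fee else fee * BigDecimal("1.10") // assumed 10% VAT
    }
}
```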

woowahan

Test Automation with AI:

This blog post explores how a development team at Woowahan Tech successfully automated the creation of 100 unit tests in just 30 minutes by combining a custom IntelliJ plugin with Amazon Q. The author argues that while full AI automation often fails in complex multi-module environments, a hybrid approach using "compile-guaranteed templates" ensures high success rates and maintains operational stability. This strategy allows developers to bypass repetitive setup tasks while leveraging AI for logic implementation within a strictly defined, valid structure.

### Evaluating AI Assistants for Testing

* The team compared various AI tools including GitHub Copilot, Cursor, and Amazon Q to determine which best fit their existing IntelliJ-based workflow.
* Amazon Q was selected for its superior understanding of the entire project context and its ability to integrate seamlessly as a plugin without requiring a switch to a new IDE.
* Initial manual use of AI assistants highlighted repetitive patterns: developers had to constantly specify team conventions (Kotest FunSpec, MockK) and manually fix build errors in 15% of the generated code.
* On average, it took 10 minutes per class to generate and refine tests manually, prompting the team to seek a more automated solution via a custom plugin.

### The Pitfalls of Full Automation

* The first version of the custom plugin attempted to generate complete test files by gathering class metadata through PSI (Program Structure Interface) and sending it to the Gemini API.
* Pilot tests revealed a 90% compilation failure rate, as the AI frequently generated incorrect imports, hallucinated non-existent fields, or used mismatched data types.
* A critical issue was the "loss of existing tests," where the AI-generated output would completely overwrite previous work rather than appending to it.
* In complex multi-module projects, the AI struggled to identify the correct classes when multiple modules contained identical class names, leading to significant manual correction time.

### Shifting to Compile-Guaranteed Templates

* To overcome the limitations of full automation, the team pivoted to a "template first" approach where the plugin generates a valid, compilable shell for the test.
* The plugin handles the complex infrastructure of the test file, including correct imports, MockK setups, and empty test stubs for every method in the target class.
* This approach reduces the AI's "hallucination surface" by providing it with a predefined structure, allowing tools like Amazon Q to focus solely on filling in the implementation details.
* By automating the 1-minute setup and letting the AI handle the 2-minute implementation phase, the team achieved a 97% success rate across 100 test cases.

### Practical Conclusion

For teams looking to improve test coverage in large-scale repositories, the most effective strategy is to use IDE plugins to automate context gathering and boilerplate generation. By providing the AI with a structurally sound template, developers can eliminate compilation errors and significantly reduce the time spent on manual refinement, ensuring that even complex edge cases are covered with minimal effort.
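
A sketch of what such a compile-guaranteed template could look like under the team's stated conventions (Kotest `FunSpec` and MockK); the `OrderService` classes are purely illustrative:

```kotlin
import io.kotest.core.spec.style.FunSpec
import io.mockk.mockk

// Hypothetical production code that the template targets.
interface OrderRepository {
    fun save(orderId: String)
    fun delete(orderId: String)
}
class OrderService(private val repository: OrderRepository) {
    fun placeOrder(orderId: String) = repository.save(orderId)
    fun cancelOrder(orderId: String) = repository.delete(orderId)
}

// The plugin emits this shell with valid imports, MockK setup, and one empty
// stub per public method; it always compiles, and the AI fills in the bodies.
class OrderServiceTest : FunSpec({
    val repository = mockk<OrderRepository>(relaxed = true)
    val service = OrderService(repository)

    test("placeOrder") {
        // TODO: implementation generated by the AI assistant
    }

    test("cancelOrder") {
        // TODO: implementation generated by the AI assistant
    }
})
```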

line

Code Quality Improvement Techniques Part 25

Effective code review communication relies on a "conclusion-first" approach to minimize cognitive load and ensure clarity for the developer. By stating proposed changes or specific requests before providing the underlying rationale, reviewers help authors understand the primary goal of the feedback immediately. This practice improves development productivity by making review comments easier to parse and act upon without repeated reading.

### Optimizing Review Comment Structure

* Place the core suggestion or requested code change at the very beginning of the comment to establish immediate context.
* Follow the initial request with a structured explanation, utilizing headers or numbered lists to organize multiple supporting arguments.
* Clearly distinguish between the "what" (the requested change) and the "why" (the technical justification) to prevent the intended action from being buried in a long technical discussion.
* Use visual formatting to help the developer quickly validate the logic behind the suggestion once they understand the proposed change.

### Immutability and Data Class Design

* Prefer the use of `val` over `var` in Kotlin `data class` structures to ensure object immutability.
* Using immutable properties prevents bugs associated with unintended side effects that occur when mutable objects are shared across different parts of an application.
* Instead of reassigning values to a mutable property, utilize the `copy()` function to create a new instance with updated state, which results in more robust and predictable code.
* Avoid mixing `var` properties with `data class` features, as this can lead to confusion regarding whether to modify the existing instance or create a copy.

### Property Separation by Lifecycle

* Analyze the update frequency of different properties within a class to identify those with different lifecycles.
* Decouple frequently updated status fields (such as `onlineStatus` or `statusMessage`) from more stable attributes (such as `userId` or `accountName`) by moving them into separate classes.
* Grouping properties by their lifecycle prevents unnecessary updates to stable data and makes the data model easier to maintain as the application scales.

To maintain high development velocity, reviewers should prioritize brevity and structure in their feedback. Leading with a clear recommendation and supporting it with organized technical reasoning ensures that code reviews remain a tool for progress rather than a source of confusion.
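
A compact sketch of the two code-level recommendations; the classes are illustrative, built around the fields named above:

```kotlin
// Stable attributes, rarely updated, kept immutable with `val`.
data class UserAccount(
    val userId: String,
    val accountName: String
)

// Frequently updated status fields, separated into their own class by lifecycle.
data class UserPresence(
    val userId: String,
    val onlineStatus: Boolean,
    val statusMessage: String
)

// No `var` reassignment: copy() produces a new instance, so shared references
// never observe a mutation.
fun goOffline(presence: UserPresence): UserPresence =
    presence.copy(onlineStatus = false, statusMessage = "")
```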

line

Code Quality Improvement Techniques Part 24: The Value of Legacy

The LY Corporation Review Committee advocates for simplifying code by avoiding unnecessary inheritance when differences between classes are limited to static data rather than dynamic logic. By replacing complex interfaces and subclasses with simple data models and specific instances, developers can reduce architectural overhead and improve code readability. This approach ensures that configurations, such as UI themes, remain predictable and easier to maintain without the baggage of a type hierarchy.

### Limitations of Inheritance-Based Configuration

* The initial implementation used a `FooScreenThemeStrategy` interface to define UI elements like background colors, text colors, and icons.
* Specific themes (Light and Dark) were implemented as separate classes that overrode the interface properties.
* This pattern creates an unnecessary proliferation of types when the only difference between the themes is the specific value of the constants being returned.
* Using inheritance for simple value changes makes the code harder to follow and can lead to over-engineering.

### Valid Scenarios for Inheritance

* **Dynamic Logic:** When behavior needs to change dynamically at runtime via dynamic dispatch.
* **Sum Types:** Implementing restricted class hierarchies, such as Kotlin `sealed` classes or Java's equivalent.
* **Decoupling:** Separating interface from implementation to satisfy DI container requirements or to improve build speeds.
* **Dependency Inversion:** Applying architectural patterns to resolve circular dependencies or to enforce one-way dependency flows.

### Transitioning to Data Models and Instantiation

* Instead of an interface, a single "final" class or data class (e.g., `FooScreenThemeModel`) should be defined to hold the required properties.
* Individual themes are created as simple instances of this model rather than unique subclasses.
* In Kotlin, defining a class without the `open` keyword ensures that the properties are not dynamically altered and that no hidden, instance-specific logic is introduced.
* This "instantiation over inheritance" strategy guarantees that properties remain static and the code remains concise.

To maintain a clean codebase, prioritize data-driven instantiation over class-based inheritance whenever logic remains constant. This practice reduces the complexity of the type system and makes the code more resilient to unintended side effects.
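
A sketch of the resulting code, following the post's `FooScreenThemeModel` naming (the property set and concrete values are placeholders):

```kotlin
// A final data class: no `open`, so no subclass can smuggle in dynamic logic.
data class FooScreenThemeModel(
    val backgroundColor: Long,
    val textColor: Long,
    val iconName: String
)

// Themes are plain instances, not subclasses of a strategy interface.
val LightTheme = FooScreenThemeModel(
    backgroundColor = 0xFFFFFFFF,
    textColor = 0xFF000000,
    iconName = "icon_light"
)

val DarkTheme = FooScreenThemeModel(
    backgroundColor = 0xFF000000,
    textColor = 0xFFFFFFFF,
    iconName = "icon_dark"
)
```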

naver

Replacing a DB CDC replication tool that processes

Naver Pay successfully transitioned its core database replication system from a legacy tool to "ergate," a high-performance CDC (Change Data Capture) solution built on Apache Flink and Spring. This strategic overhaul was designed to improve maintainability for backend developers while resolving rigid schema dependencies that previously caused operational bottlenecks. By leveraging a modern stream-processing architecture, the system now manages massive transaction volumes with sub-second latency and enhanced reliability.

### Limitations of the Legacy System

* **Maintenance Barriers:** The previous tool, mig-data, was written in pure Java by database core specialists, making it difficult for standard backend developers to maintain or extend.
* **Strict Schema Dependency:** Developers were forced to follow a rigid DDL execution order (Target DB before Source DB) to avoid replication halts, complicating database operations.
* **Blocking Failures:** Because the legacy system prioritized bi-directional data integrity, a single failed record could stall the entire replication pipeline for a specific shard.
* **Operational Risk:** Recovery procedures were manual and restricted to a small group of specialized personnel, increasing the time-to-recovery during outages.

### Technical Architecture and Stack

* **Apache Flink (LTS 2.0.0):** Selected for its high availability, low latency, and native Kafka integration, allowing the team to focus on replication logic rather than infrastructure.
* **Kubernetes Session Mode:** Used to manage 12 concurrent jobs (6 replication, 6 verification) through a single Job Manager endpoint for streamlined monitoring and deployment.
* **Hybrid Framework Approach:** The team isolated high-speed replication logic within Flink while using Spring (Kotlin) for complex recovery modules to leverage developer familiarity.
* **Data Pipeline:** The system captures MySQL binlogs via `nbase-cdc`, publishes them to Kafka, and uses Flink `jdbc-sink` jobs to apply changes to Target DBs (nBase-T and Oracle).

### Three-Tier Operational Model: Replication, Verification, and Recovery

* **Real-time Replication:** Processes incoming Kafka records and appends custom metadata columns (`ergate_yn`, `rpc_time`) to track the replication source and original commit time.
* **Delayed Verification:** A dedicated "verifier" Flink job consumes the same Kafka topic with a 2-minute delay to check Target DB consistency against the source record.
* **Secondary Logic:** To prevent false positives from rapid updates, the verifier performs a live re-query of the Source DB if a mismatch is initially detected.
* **Multi-Stage Recovery:**
  * **Automatic Short-term:** Retries transient failures after 5 minutes.
  * **Automatic Long-term:** Uses batch processes to resolve persistent discrepancies.
  * **Manual:** Provides an admin interface for developers to trigger targeted reconciliations via API.

### Improvements in Schema Management and Performance

* **DDL Independence:** By implementing query and schema caching, ergate allows Source and Target tables to be updated in any order without halting the pipeline.
* **Performance Scaling:** The new system is designed to handle 10x the current peak QPS, ensuring stability even during high-traffic events like major sales or promotions.
* **Metadata Tracking:** The inclusion of specific replication identifiers allows for clear distinction between automated replication and manual force-sync actions during troubleshooting.
The ergate project demonstrates that a hybrid architecture—combining the high-throughput processing of Apache Flink with the robust logic handling of Spring—is highly effective for mission-critical financial systems. Organizations managing large-scale data replication should consider decoupling complex recovery logic from the main processing stream to ensure both performance and developer productivity.
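
The post describes the pipeline at the system level; the verifier's decision flow, stripped of the Flink and Kafka wiring, can be sketched in plain Kotlin (all types and lookup functions here are illustrative):

```kotlin
data class RecordKey(val table: String, val id: String)
data class Row(val values: Map<String, Any?>)

class DelayedVerifier(
    private val querySource: (RecordKey) -> Row?,  // live Source DB lookup
    private val queryTarget: (RecordKey) -> Row?   // Target DB lookup
) {
    // Runs roughly two minutes after the record was replicated.
    fun isConsistent(key: RecordKey, sourceRecord: Row): Boolean {
        val target = queryTarget(key)
        if (target == sourceRecord) return true
        // Secondary logic: a mismatch may be a false positive caused by a rapid
        // follow-up update, so re-query the live Source DB before flagging it.
        return target == querySource(key)
    }
}
```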

line

Code Quality Improvement Techniques Part 23

While early returns are a popular technique for clarifying code by handling error cases first, they should not be applied indiscriminately. This blog post argues that when error cases and normal cases share the same logic, integrating them into a single flow is often superior to branching. By treating edge cases as part of the standard execution path, developers can simplify their code and reduce unnecessary complexity.

### Unifying Edge Cases with Normal Logic

Rather than treating every special condition as an error to be excluded via an early return, it is often more effective to design logic that naturally accommodates these cases.

* For functions processing lists, standard collection operations like `map` or `filter` already handle empty collections without requiring explicit checks.
* Integrating edge cases can lead to more concise code, though developers should be mindful of minor performance trade-offs, such as the overhead of creating sequence or list instances for empty inputs.
* Unification ensures that the "main purpose" of the function remains the focus, rather than a series of guard clauses.

### Utilizing Language-Specific Safety Features

Modern programming languages provide built-in operators and functions that allow developers to handle potential errors as part of the standard expression flow.

* **Safe Navigation:** Use safe call operators (e.g., `?.`) and null-coalescing operators (e.g., `?:`) to handle null values as normal data flow rather than branching with `if (value == null)`.
* **Collection Access:** Instead of manually checking if an index is within bounds, use functions like `getOrNull` or `getOrElse` to retrieve values safely.
* **Property Dependencies:** In UI logic, instead of early returning when a string is empty, you can directly assign visibility and text values based on the condition (e.g., `isVisible = text.isNotEmpty()`).

### Functional Exception Handling

When a process involves multiple steps that might throw exceptions, traditional early returns can lead to repetitive try-catch blocks and fragmented logic.

* By using the `flatMap` pattern and Result-style types, developers can chain operations together.
* Converting exceptions into specific error types within a wrapper (like a `Success` or `Error` sealed class) allows the entire sequence to be treated as a unified data flow.
* This approach makes the overall business logic much clearer, as the "happy path" is represented by a clean chain of function calls rather than a series of nested or sequential error checks.

Before implementing an early return, evaluate whether the edge case can be gracefully integrated into the main logic flow. If the language features or standard libraries allow the normal processing path to handle the edge case naturally, choosing integration over exclusion will result in more maintainable and readable code.
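
Short Kotlin sketches of the integration techniques listed above; the `Label` type and `parsePort` function are illustrative, and the final chain uses the standard library's `runCatching`/`mapCatching` as a stand-in for the post's `flatMap`-style wrapper:

```kotlin
// Safe navigation + elvis instead of `if (value == null) return ...`.
fun displayName(nickname: String?): String = nickname?.trim() ?: "anonymous"

// Bounds-safe collection access instead of a manual index check.
fun thirdItemOrDefault(items: List<String>): String = items.getOrElse(2) { "n/a" }

// Property dependency instead of early-returning on an empty string.
class Label {
    var isVisible: Boolean = false
    var text: String = ""
}
fun bindLabel(label: Label, value: String) {
    label.isVisible = value.isNotEmpty()
    label.text = value
}

// Exceptions folded into one expression flow: the happy path stays a single chain.
fun parsePort(raw: String): Result<Int> =
    runCatching { raw.trim().toInt() }
        .mapCatching { port ->
            require(port in 1..65535) { "port out of range: $port" }
            port
        }
```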

line

Code Quality Improvement Techniques Part 22

The post argues that developers should avoid overriding the `equals` method to compare only a subset of an object’s properties, as this violates the fundamental principles of identity and structural equivalence. Implementing "partial equality" often leads to subtle, hard-to-trace bugs in reactive programming environments where UI updates depend on detecting changes through equality checks. To ensure system reliability, `equals` must strictly represent either referential identity or total structural equivalence.

### Risks of Partial Equality in Reactive UI

* Reactive frameworks such as Kotlin’s `StateFlow`, `Flow`, and Android’s `LiveData` utilize `distinctUntilChanged` logic to optimize performance.
* These "observable" patterns compare the new object instance with the previous one using `equals`; if the result is `true`, the update is ignored to prevent unnecessary re-rendering.
* If a `UserProfileViewData` object only compares a `userId` field, the UI will fail to reflect changes to a user's nickname or profile image because the framework incorrectly assumes the data has not changed.
* To avoid this, any comparison logic that only checks specific fields should be moved to a uniquely named function, such as `hasSameIdWith()`, instead of hijacking the standard `equals` method.

### Defining Identity vs. Equivalence

* **Identity (Referential Equality):** This indicates that two references point to the exact same object instance, which is the default behavior of `Object.equals()` in Java or `Any.equals()` in Kotlin.
* **Equivalence (Structural Equality):** This indicates that two objects are logically the same because all their properties match. In Kotlin, `data class` implementations provide this by default for all parameters defined in the primary constructor.
* Proper implementation of equivalence requires that all fields within the object also have clearly defined equality logic.

### Nuances and Implementation Exceptions

* **Kotlin Data Class Limitations:** Only properties declared in the primary constructor are included in the compiler-generated `equals` and `hashCode` methods; properties declared in the class body are ignored by default.
* **Calculated Caches:** It is acceptable to exclude certain fields from an equality check if they do not change the logical state of the object, such as a `cachedValue` used to store the results of a heavy mathematical operation.
* **Context-Dependent Equality:** The definition of equality can change based on the model's purpose. For example, a mathematical model might treat 1/2 and 2/4 as equal, whereas a UI display model might treat them as different because they represent different strings of text.

When implementing `equals`, prioritize full structural equivalence to prevent stale-data bugs in reactive systems. If you only need to compare a unique identifier, create a dedicated method instead of repurposing the standard equality check.
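
A sketch following the post's `UserProfileViewData` example; the data class supplies full structural `equals`/`hashCode`, while the ID-only comparison gets its own clearly named method:

```kotlin
data class UserProfileViewData(
    val userId: String,
    val nickname: String,
    val profileImageUrl: String
) {
    // Partial comparison lives under its own name instead of hijacking equals().
    fun hasSameIdWith(other: UserProfileViewData): Boolean = userId == other.userId
}

fun main() {
    val before = UserProfileViewData("42", "alice", "a.png")
    val after = before.copy(nickname = "alicia")
    println(before == after)              // false: a distinctUntilChanged filter re-renders
    println(before.hasSameIdWith(after))  // true: still the same user identity
}
```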

line

Code Quality Improvement Techniques Part 21

Designing objects that require a specific initialization sequence often leads to fragile code and runtime exceptions. When a class demands that a method like `prepare()` be called before its primary functionality becomes available, it places the burden of safety on the consumer rather than the structure of the code itself. To improve reliability, developers should aim to create "unbreakable" interfaces where an instance is either ready for use upon creation or restricted by the type system from being used incorrectly.

### Problems with "Broken" Constructors

* Classes that allow instantiation in an "unprepared" state rely on documentation or developer memory to avoid `IllegalStateException` errors.
* When an object is passed across different layers of an application, it becomes difficult to track whether the required setup logic has been executed.
* Relying on runtime checks to verify internal state increases the surface area for bugs that only appear during specific execution paths.

### Immediate Initialization and Factory Patterns

* The most direct solution is to move initialization logic into the `init` block, allowing properties to be defined as read-only (`val`).
* Because constructors have limitations—such as the inability to use `suspend` functions or handle complex side effects—a private constructor combined with a static factory method (e.g., `companion object` in Kotlin) is often preferred.
* Using a factory method like `createInstance()` ensures that all necessary preparation logic is completed before a user ever receives the object instance.

### Lazy and Internal Preparation

* If the initialization process is computationally expensive and might not be needed for every instance, "lazy" initialization can defer the cost until the first time a functional method is called.
* In Kotlin, the `by lazy` delegate can be used to encapsulate preparation logic, ensuring it only runs once and remains thread-safe.
* Alternatively, the class can handle preparation internally within its main methods, checking the initialization state automatically so the user does not have to manage it manually.

### Type-Safe State Transitions

* For complex lifecycles, the type system can be used to enforce order by splitting the object into two distinct classes: one for the "unprepared" state and one for the "prepared" state.
* The initial class contains only the `prepare()` method, which returns a new instance of the "Prepared" class upon completion.
* This approach makes it a compile-time impossibility to call methods like `play()` on an object that hasn't been prepared, effectively eliminating a whole category of runtime errors.

### Recommendations

When designing classes with internal states, prioritize structural safety by making it impossible to represent an invalid state. Use factory functions for complex setup logic and consider splitting classes into separate types if they have distinct "ready" and "not ready" phases to leverage the compiler for error prevention.
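
Sketches of the three techniques, built around a hypothetical media-player example (all names are illustrative):

```kotlin
// 1) Private constructor + factory: every instance a caller can obtain is ready.
class Decoder private constructor(val codec: String) {
    companion object {
        fun createInstance(codec: String): Decoder {
            // Preparation (which could even be a suspend function here)
            // completes before the instance is handed out.
            return Decoder(codec)
        }
    }
}

// 2) Lazy internal preparation: runs once, thread-safe, invisible to the caller.
class WordIndex(private val words: List<String>) {
    private val index: Map<String, Int> by lazy {
        words.withIndex().associate { (i, word) -> word to i }
    }
    fun positionOf(word: String): Int? = index[word]
}

// 3) Type-safe state transition: play() simply does not exist before prepare().
class UnpreparedPlayer(private val source: String) {
    fun prepare(): PreparedPlayer = PreparedPlayer(source)
}
class PreparedPlayer internal constructor(private val source: String) {
    fun play() = println("playing $source")
}
```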

line

Code Quality Improvement Techniques Part 20

When implementing resource management patterns similar to Kotlin's `use` or Java's try-with-resources, developers often face the challenge of handling exceptions that occur during both primary execution and resource cleanup. Simply wrapping these multiple failures in a custom exception container can inadvertently break the calling code's error-handling logic by masking the original exception type. To maintain code quality, developers should prioritize the primary execution exception and utilize the `addSuppressed` mechanism to preserve secondary errors without disrupting the expected flow.

### The Risks of Custom Exception Wrapping

Creating a new exception class to consolidate multiple errors during resource management can lead to significant issues for the caller.

* Wrapping an expected exception, such as an `IOException`, inside a custom `DisposableException` prevents specific `catch` blocks from identifying and handling the original error.
* This pattern often results in unhandled exceptions or the loss of specific error context, especially when the wrapper is hidden inside utility functions.
* While this approach aims to be "neat" by capturing all possible failures, it forces the caller to understand the internal wrapping logic of the utility rather than the business logic errors.

### Prioritizing Primary Logic over Cleanup

When errors occur in both the main execution block and the cleanup (e.g., `dispose()` or `close()`), it is critical to determine which exception takes precedence.

* The exception from the main execution block is typically the "primary" failure that reflects a business logic or IO error, whereas a cleanup failure is often secondary.
* Throwing a cleanup exception while discarding the primary error makes debugging difficult, as the root cause of the initial failure is lost.
* In a typical `try-finally` block, if the `finally` block throws an exception, it silently discards any exception thrown in the `try` block unless handled manually.

### Implementing Better Suppression Logic

A more robust implementation mimics the behavior of Kotlin’s `Closeable.use` by ensuring the most relevant error is thrown while keeping others accessible for debugging.

* Instead of creating a wrapper class, use `Throwable.addSuppressed()` to attach the cleanup exception to the primary exception.
* If only the primary block fails, throw that exception directly to satisfy the caller's `catch` requirements.
* If both the primary block and the cleanup fail, throw the primary exception and add the cleanup exception as a suppressed error.
* If only the cleanup fails, it is then appropriate to throw the cleanup exception as the standalone failure.

### Considerations for Checked and Unchecked Exceptions

The impact of exception handling varies by language, particularly in Java where checked exceptions are enforced by the compiler.

* Converting a checked exception into an unchecked `RuntimeException` inside a wrapper can cause the compiler to miss necessary error-handling requirements.
* If exceptions have parent-child relationships, such as `IOException` and `Exception`, wrapping can cause a specific handler to be bypassed in favor of a more generic one.
* It is generally recommended to only wrap checked exceptions in `RuntimeException` when the error is truly unrecoverable and the caller is not expected to handle it.

When designing custom resource management utilities, always evaluate which exception is most critical for the caller to see.
Prioritize the primary execution error and use suppression for auxiliary cleanup failures to ensure that your error-handling remains transparent and predictable for the rest of the application.
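
A sketch of this suppression logic, modeled on the structure of Kotlin's `Closeable.use` (the `Disposable` interface is a hypothetical resource type):

```kotlin
interface Disposable { fun dispose() }

inline fun <T : Disposable, R> T.useAndDispose(block: (T) -> R): R {
    var primary: Throwable? = null
    try {
        return block(this)
    } catch (e: Throwable) {
        primary = e
        throw e // the primary failure keeps its original type for the caller's catch
    } finally {
        try {
            dispose()
        } catch (cleanup: Throwable) {
            val first = primary
            if (first != null) {
                // Both failed: the cleanup error rides along as a suppressed exception.
                first.addSuppressed(cleanup)
            } else {
                // Only the cleanup failed: it is the standalone failure.
                throw cleanup
            }
        }
    }
}
```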

line

Code Quality Improvement Techniques Part 19: Child Lock

The "child lock" technique focuses on improving code robustness by restricting the scope of what child classes can override in an inheritance hierarchy. By moving away from broad, overridable functions that rely on manual `super` calls, developers can prevent common implementation errors and ensure that core logic remains intact across all subclasses. This approach shifts the responsibility of maintaining the execution flow to the parent class, making the codebase more predictable and easier to maintain. ## Problems with Open Functions and Manual Super Calls Providing an `open` function in a parent class that requires child classes to call `super` creates several risks: * **Missing `super` calls:** If a developer forgets to call `super.bind()`, the essential logic in the parent class (such as updating headers or footers) is skipped, often leading to silent bugs that are difficult to track. * **Implicit requirements:** Relying on inline comments to tell developers they must override a function is brittle. If the method isn't `abstract`, the compiler cannot enforce that the child class implements necessary logic. * **Mismatched responsibilities:** When a single function handles both shared logic and specific implementations, the responsibility of the code becomes blurred, making it easier for child classes to introduce side effects or incorrect behavior. ## Implementing the "Child Lock" with Template Methods To resolve these issues, the post recommends a pattern often referred to as the Template Method pattern: * **Seal the execution flow:** Remove the `open` modifier from the primary entry point (e.g., the `bind` method). This prevents child classes from changing the overall sequence of operations. * **Separate concerns:** Move the customizable portion of the logic into a new `protected abstract` function. * **Enforced implementation:** Because the new function is `abstract`, the compiler forces every child class to provide an implementation, ensuring that specific logic is never accidentally omitted. * **Guaranteed execution:** The parent class calls the abstract method from within its non-overridable method, ensuring that shared logic (like UI updates) always runs regardless of how the child is implemented. ## Refining Overridability and Language Considerations Designing for inheritance requires careful control over how child classes interact with parent logic: * **Avoid "super" dependency:** Generally, if a child class must explicitly call a parent function to work correctly, the inheritance structure is too loose. Exceptions are usually limited to lifecycle methods like `onCreate` in Android or constructors/destructors. * **C++ Private Virtuals:** In C++, developers can use `private virtual` functions. These allow a parent class to define a rigid flow in a public method while still allowing subclasses to provide specific implementations for the private virtual components, even though the child cannot call those functions directly. To ensure long-term code quality, the range of overridability should be limited as much as possible. By narrowing the interface between parent and child classes, you create a more rigid "contract" that prevents accidental bugs and clarifies the intent of the code.