refactoring

15 posts

line

Code Quality Improvement Techniques Part 30

Code quality often suffers when functions share implicit dependencies, where the correct behavior of one relies on the state or validation provided by another. This "invisible" connection creates fragile code that is prone to runtime errors and logic mismatches during refactoring or feature expansion. To solve this, developers should consolidate related logic or make dependencies explicit to ensure consistency and safety.

## Problems with Implicit Function Dependencies

When logic is split across separate functions—such as one for validation (`isContentValid`) and another for processing (`getMessageText`)—developers often rely on undocumented preconditions.

* **Fragile Runtime Safety:** In the provided example, `getMessageText` throws a runtime error if called on invalid data, assuming the caller has already checked `isContentValid`.
* **Maintenance Burden:** When new data types (e.g., a new message type) are added, developers must remember to update both functions to keep them in sync, increasing the risk of "forgotten" updates.
* **Hidden Logic Flow:** Callers might not realize the two functions are linked, leading to improper usage where the transformation function is called without the necessary prior validation.

## Consolidating Logic for Single-Source Truth

The most effective way to eliminate implicit dependencies is to merge filtering and transformation into a single function. This ensures that the code cannot reach a processing state without passing through the necessary logic.

* **Nullable Returns:** By changing the transformation function to return a nullable type (`String?`), the function can signal that a piece of data is "invalid" or "empty" directly through its return value.
* **Simplified Caller Logic:** The UI layer no longer needs to call two separate functions; it simply checks if the result of the transformation is null to determine visibility.
* **Elimination of Redundant Branches:** This approach reduces the number of `when` or `if-else` blocks that need to be maintained across the codebase.

## Establishing Explicit Consistency

In scenarios where separate functions for validation and transformation are required for clarity or architectural reasons, the validation logic should be defined in terms of the transformation.

* **Dependent Validation:** Instead of writing a separate `when` block for `isContentValid`, the function should simply check if `getMessageText` returns a non-null value.
* **Guaranteed Synchronization:** This structure makes the relationship between the two functions explicit and guarantees that if a message is deemed "valid," it will always produce a valid text output.
* **Improved Documentation:** Defining functions this way serves as self-documenting code, showing future developers exactly how the two operations are linked.

When functions share a "red thread" of logic, they should either be merged or structured so that one acts as the source of truth for the other. By removing the need for callers to remember implicit preconditions, you reduce the surface area for bugs and make the codebase significantly easier to extend.
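A minimal Kotlin sketch of both remedies, assuming a hypothetical `MessageContent` hierarchy (the post names only `isContentValid` and `getMessageText`): the merged transformation signals invalid content with `null`, and any remaining validator is defined in terms of it.

```kotlin
// Hypothetical message types; only the two function names come from the post.
sealed class MessageContent {
    data class Text(val body: String) : MessageContent()
    data class Image(val caption: String?) : MessageContent()
}

// Consolidated: returns null for invalid/empty content instead of relying on a
// separate isContentValid() precondition. Adding a new subtype forces an update
// here, and only here, because the `when` must stay exhaustive.
fun getMessageText(content: MessageContent): String? = when (content) {
    is MessageContent.Text -> content.body.takeIf { it.isNotBlank() }
    is MessageContent.Image -> content.caption?.takeIf { it.isNotBlank() }
}

// If a separate validator must remain, define it via the transformation so the
// two functions can never drift out of sync.
fun isContentValid(content: MessageContent): Boolean = getMessageText(content) != null
```

The caller then reduces to a single null check, e.g. `label.isVisible = getMessageText(content) != null`.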

line

Code Quality Improvement Techniques Part 29

Complexity in software often arises from "Gordian Variables," where tangled data dependencies make the logic flow difficult to trace and maintain. By identifying and designing an ideal intermediate data structure, developers can decouple these dependencies and simplify complex operations. This approach replaces convoluted conditional checks with a clean, structured data flow that highlights the core business logic.

## The Complexity of Tangled Dependencies

Synchronizing remote data with local storage often leads to fragmented logic when the relationship between data IDs and objects is not properly managed.

* Initial implementations frequently use set operations like `subtract` on ID lists to determine which items to create, update, or delete.
* This approach forces the program to re-access original data sets multiple times, creating a disconnected flow between identifying a change and executing it.
* Dependency entanglements often necessitate "impossible" runtime error handling (e.g., `error("This must not happen")`) because the compiler cannot guarantee data presence within maps during the update phase.
* Inconsistent processing patterns emerge, where "add" and "update" logic might follow one sequence while "delete" logic follows an entirely different one.

## Designing Around Intermediate Data Structures

To untangle complex flows, developers should work backward from an ideal data representation that categorizes all possible states—additions, updates, and deletions.

* The first step involves creating lookup maps for both remote and local entries to provide O(1) access to data objects.
* A unified collection of all unique IDs from both sources serves as the foundation for a single, comprehensive transformation pass.
* A specialized utility function, such as `partitionByNullity`, can transform a sequence of data pairs (`Pair<Remote?, Local?>`) into three distinct, non-nullable lists.
* This transformation results in a `Triple` containing `createdEntries`, `updatedEntries` (as pairs), and `deletedEntries`, effectively separating data preparation from business execution.

## Improved Synchronization Flow

Restructuring the function around categorized lists allows the primary synchronization logic to remain concise and readable.

* The synchronization function becomes a sequence of two phases: data categorization followed by execution loops.
* By using the `partitionByNullity` pattern, the code eliminates the need for manual null checks or "impossible" error branches during the update process.
* The final implementation highlights the most important part of the code—the `forEach` blocks for adding, updating, and deleting—by removing the noise of ID-based lookups and set mathematics.

When faced with complex data dependencies, prioritize the creation of a clean intermediate data structure over optimizing individual logical branches. Designing a data flow that naturally represents the different states of your business logic will result in more robust, self-documenting, and maintainable code.
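A sketch of the `partitionByNullity` utility and the two-phase flow under assumed entry types (`RemoteEntry`, `LocalEntry`, and the `sync` signature are illustrative; only `partitionByNullity` and the `Pair<Remote?, Local?>`-to-`Triple` shape come from the post):

```kotlin
data class RemoteEntry(val id: String, val payload: String)
data class LocalEntry(val id: String, val payload: String)

// Splits pairs of nullable remote/local entries into three non-nullable lists:
// remote-only (create), both present (update), local-only (delete).
fun <R : Any, L : Any> Iterable<Pair<R?, L?>>.partitionByNullity():
    Triple<List<R>, List<Pair<R, L>>, List<L>> {
    val created = mutableListOf<R>()
    val updated = mutableListOf<Pair<R, L>>()
    val deleted = mutableListOf<L>()
    for ((remote, local) in this) {
        when {
            remote != null && local != null -> updated += remote to local
            remote != null -> created += remote
            local != null -> deleted += local
        }
    }
    return Triple(created, updated, deleted)
}

fun sync(remote: List<RemoteEntry>, local: List<LocalEntry>) {
    // Phase 1: categorization via O(1) lookup maps and a unified ID set.
    val remoteById = remote.associateBy { it.id }
    val localById = local.associateBy { it.id }
    val allIds = remoteById.keys + localById.keys
    val (createdEntries, updatedEntries, deletedEntries) =
        allIds.map { id -> remoteById[id] to localById[id] }.partitionByNullity()

    // Phase 2: execution loops with no "impossible" error branches.
    createdEntries.forEach { /* insert it locally */ }
    updatedEntries.forEach { (r, l) -> /* apply r onto l */ }
    deletedEntries.forEach { /* remove it locally */ }
}
```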

line

Code Quality Improvement Techniques Part 27

Over-engineering through excessive Dependency Injection (DI) can introduce unnecessary complexity and obscure a system's logic. While DI is a powerful tool for modularity, applying it to simple utility functions or data models often creates a maintenance burden without providing tangible benefits. Developers should aim to balance flexibility with simplicity by only injecting dependencies that serve a specific architectural purpose.

### The Risks of Excessive Dependency Injection

Injecting every component, including simple formatters and model factories, can lead to several technical issues that degrade code maintainability:

* **Obscured Logic Flow:** When utilities are hidden behind interfaces and injected via constructors, tracing the actual execution path requires navigating through multiple callers and implementation files, making the code harder to read.
* **Increased Caller Responsibility:** Requiring dependencies for every small component forces the calling class to manage a "bloated" set of objects, often leading to a chain reaction where high-level classes must resolve dozens of unrelated dependencies.
* **Data Inconsistency:** Injecting multiple utilities that rely on a shared state (like a `Locale`) creates a risk where a caller might accidentally pass mismatched configurations to different components, breaking the expected association between values.

### Valid Use Cases for Dependency Injection

DI should be reserved for scenarios where the benefits of abstraction outweigh the cost of complexity. Proper use cases include:

* **Lifecycle and Scope Management:** Sharing objects with specific lifecycles, such as those managing global state or cross-cutting concerns.
* **Dependency Inversion:** Breaking circular dependencies between modules or ensuring the code adheres to specific architectural boundaries (e.g., Clean Architecture).
* **Implementation Switching:** Enabling the replacement of components for different environments, such as swapping a real network repository for a mock implementation during unit testing or debugging.
* **Decoupling for Build Performance:** Separating implementations into different modules to improve incremental build speeds or to isolate proprietary third-party libraries.

### Strategies for Refactoring and Simplification

To improve code quality, developers should identify "transparent" dependencies that can be internalized or simplified:

* **Direct Instantiation:** For simple data models like `NewsSnippet`, replace factory functions with direct constructor calls to clarify the intent and reduce boilerplate.
* **Internalize Simple Utilities:** Classes like `TimeTextFormatter` or `StringTruncator` that perform basic logic can be maintained as private properties within the class or as stateless `object` singletons rather than being injected.
* **Selective Injection:** Reserve constructor parameters for complex objects (e.g., repositories that handle network or database access) and environment-dependent values (e.g., a user's `Locale`).

The core principle for maintaining a clean codebase is to ensure every injected dependency has a clear, documented purpose. By avoiding the trap of "injecting everything by default," developers can create systems that are easier to trace, test, and maintain.
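A sketch of the "internalize simple utilities, inject the environment value" advice. The names `TimeTextFormatter` and `NewsSnippet` come from the post; the repository interface, the presenter, and all bodies are assumptions for illustration.

```kotlin
import java.text.DateFormat
import java.util.Date
import java.util.Locale

// Simple, deterministic utility: no interface, no injection.
class TimeTextFormatter(private val locale: Locale) {
    fun format(epochMillis: Long): String =
        DateFormat.getTimeInstance(DateFormat.SHORT, locale).format(Date(epochMillis))
}

// Simple data model: created with a direct constructor call, not a factory.
data class NewsSnippet(val headline: String, val publishedAtMillis: Long)

interface NewsRepository {                  // complex dependency: still injected
    fun latestSnippet(): NewsSnippet
}

class NewsPresenter(
    private val repository: NewsRepository, // network/database access: inject
    locale: Locale                          // environment value: inject the value
) {
    // Internalized utility held as a private property, so the formatter and any
    // other locale-dependent logic are guaranteed to share the same Locale.
    private val timeTextFormatter = TimeTextFormatter(locale)

    fun headlineWithTime(): String {
        val snippet = repository.latestSnippet()
        return "${snippet.headline} (${timeTextFormatter.format(snippet.publishedAtMillis)})"
    }
}
```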

toss

Legacy Settlement Overhaul: From

Toss Payments recently overhauled its 20-year-old legacy settlement system to overcome deep-seated technical debt and prepare for massive transaction growth. By shifting from monolithic SQL queries and aggregated data to a granular, object-oriented architecture, the team significantly improved system maintainability, traceability, and batch processing performance. The transition focused on breaking down complex dependencies and ensuring that every transaction is verifiable and reproducible.

### Replacing Monolithic SQL with Object-Oriented Logic

* The legacy system relied on a "giant common query" filled with nested `DECODE`, `CASE WHEN`, and complex joins, making it nearly impossible to identify the impact of small changes.
* The team applied a "Divide and Conquer" strategy, splitting the massive query into distinct domains and refined sub-functions.
* Business logic was moved from the database layer into Kotlin-based objects (e.g., `SettlementFeeCalculator`), making business rules explicit and easier to test.
* This modular approach allowed for "Incremental Migration," where specific features (like exchange rate conversions) could be upgraded to the new system independently.

### Improving Traceability through Granular Data Modeling

* The old system stored data in an aggregated state (Sum), which prevented developers from tracing errors back to specific transactions or reusing data for different reporting needs.
* The new architecture manages data at the minimum transaction unit (1:1), ensuring that every settlement result corresponds to a specific transaction.
* "Setting Snapshots" were introduced to store the exact contract conditions (fee rates, VAT status) at the time of calculation, allowing the system to reconstruct the context of past settlements.
* A state-based processing model was implemented to enable selective retries for failed transactions, significantly reducing recovery time compared to the previous "all-or-nothing" transaction approach.

### Optimizing High-Resolution Data and Query Performance

* Managing data at the transaction level led to an explosion in data volume, necessitating specialized database strategies.
* The team implemented date-based Range Partitioning and composite indexing on settlement dates to maintain high query speeds despite the increased scale.
* To balance write performance and read needs, they created "Query-specific tables" that offload the processing burden from the main batch system.
* Complex administrative queries were delegated to a separate high-performance data serving platform, maintaining a clean separation between core settlement logic and flexible data analysis.

### Resolving Batch Performance and I/O Bottlenecks

* The legacy batch system struggled with long processing times that scaled poorly with transaction growth due to heavy I/O and single-threaded processing.
* I/O was minimized by caching merchant contract information in memory at the start of a batch step, eliminating millions of redundant database lookups.
* The team optimized the `ItemProcessor` in Spring Batch by implementing bulk lookups (using a Wrapper structure) to handle multiple records at once rather than querying the database for every individual item.

This modernization demonstrates that scaling a financial system requires moving beyond "convenient" aggregations toward a granular, state-driven architecture. By decoupling business logic from the database and prioritizing data traceability, Toss Payments has built a foundation capable of handling the next generation of transaction volumes.
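A minimal sketch of what moving business rules out of nested `DECODE`/`CASE WHEN` SQL and into a testable Kotlin object might look like. Only the name `SettlementFeeCalculator` and the snapshot idea come from the article; the fields, VAT rate, and rounding rule are assumptions for illustration.

```kotlin
import java.math.BigDecimal
import java.math.RoundingMode

// "Setting snapshot": the contract conditions captured at calculation time, so a
// past settlement can be reconstructed even after the contract changes.
data class ContractSnapshot(
    val feeRate: BigDecimal,  // e.g. 0.025 for a 2.5% fee
    val vatIncluded: Boolean
)

object SettlementFeeCalculator {
    private val VAT_RATE = BigDecimal("0.10") // assumed rate, illustration only

    // Explicit, unit-testable rule instead of logic buried in a giant query.
    fun calculate(transactionAmount: BigDecimal, contract: ContractSnapshot): BigDecimal {
        val fee = transactionAmount.multiply(contract.feeRate)
        val feeWithVat =
            if (contract.vatIncluded) fee
            else fee.multiply(BigDecimal.ONE.add(VAT_RATE))
        return feeWithVat.setScale(0, RoundingMode.HALF_UP) // whole currency units
    }
}
```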

woowahan

We Refactor Culture Just Like Code

The Commerce Web Frontend Development team at Woowa Brothers recently underwent a significant organizational "refactoring" to manage the increasing complexity of their expanding commerce platform. By moving away from rigid, siloed roles and adopting a flexible "boundary-less" part system, the team successfully synchronized disparate services like B Mart and Baemin Store. This cultural shift demonstrates that treating organizational structure with the same iterative mindset as code can eliminate operational bottlenecks and foster a more resilient engineering environment.

### Transitioning to Boundary-less Parts

* The team abandoned traditional division methods—such as project-based, funnel-based, or service-vs-backoffice splits—because they created resource imbalances and restricted developers' understanding of the overall service flow.
* Traditional project-based splits often led to specific teams being overwhelmed during peak periods while others remained underutilized, creating significant delivery bottlenecks.
* To solve these inefficiencies, the team introduced "boundary-less parts," where developers are not strictly tied to a single domain but are encouraged to work across the entire commerce ecosystem.
* This structure allows the organization to remain agile, moving resources fluidly to address high-priority business needs without being hindered by departmental "walls."

### From R&R to Responsibility and Expandability (R&E)

* The team replaced the traditional R&R (Role & Responsibility) model with "R&E" (Responsibility & Expandability), focusing on the core principle of "owning" a problem until it is fully resolved.
* This shift encourages developers to expand their expertise beyond their immediate tasks, fostering a culture where helping colleagues and understanding neighboring domains is the standard.
* Work is distributed through a strategic sync between team and part leaders, but team members maintain the flexibility to jump into different domains as project requirements evolve.
* Regular "part shuffling" is utilized to ensure that domain knowledge is distributed across the entire 20-person frontend team, preventing the formation of information silos.

### Impact on Technical Integration and Team Resilience

* The flexible structure was instrumental in the "ONE COMMERCE" initiative, which required integrating the technical stacks and user experiences of B Mart and Baemin Store.
* Because developers had broad domain context, they were able to identify redundant logic across different services and abstract them into shared, common modules, ensuring architectural consistency.
* The organization significantly improved its "Bus Factor"—the number of people who can leave before a project stalls—by ensuring multiple engineers understand the context of any given system.
* Developers evolved into "domain-wide engineers" who understand the full lifecycle of a transaction, from the customer-facing UI to the backend administrative and logistics data flows.

To prevent today's organizational solutions from becoming tomorrow's cultural legacy debt, engineering teams should proactively refactor their workflows. Moving from rigid role definitions to a model based on shared responsibility and cross-domain mobility is essential for maintaining velocity and technical excellence in large-scale platform environments.

line

Code Quality Improvement Techniques Part

Effective code review communication relies on a "conclusion-first" approach to minimize cognitive load and ensure clarity for the developer. By stating proposed changes or specific requests before providing the underlying rationale, reviewers help authors understand the primary goal of the feedback immediately. This practice improves development productivity by making review comments easier to parse and act upon without repeated reading.

### Optimizing Review Comment Structure

* Place the core suggestion or requested code change at the very beginning of the comment to establish immediate context.
* Follow the initial request with a structured explanation, utilizing headers or numbered lists to organize multiple supporting arguments.
* Clearly distinguish between the "what" (the requested change) and the "why" (the technical justification) to prevent the intended action from being buried in a long technical discussion.
* Use visual formatting to help the developer quickly validate the logic behind the suggestion once they understand the proposed change.

### Immutability and Data Class Design

* Prefer the use of `val` over `var` in Kotlin `data class` structures to ensure object immutability.
* Using immutable properties prevents bugs associated with unintended side effects that occur when mutable objects are shared across different parts of an application.
* Instead of reassigning values to a mutable property, utilize the `copy()` function to create a new instance with updated state, which results in more robust and predictable code.
* Avoid mixing `var` properties with `data class` features, as this can lead to confusion regarding whether to modify the existing instance or create a copy.

### Property Separation by Lifecycle

* Analyze the update frequency of different properties within a class to identify those with different lifecycles.
* Decouple frequently updated status fields (such as `onlineStatus` or `statusMessage`) from more stable attributes (such as `userId` or `accountName`) by moving them into separate classes.
* Grouping properties by their lifecycle prevents unnecessary updates to stable data and makes the data model easier to maintain as the application scales.

To maintain high development velocity, reviewers should prioritize brevity and structure in their feedback. Leading with a clear recommendation and supporting it with organized technical reasoning ensures that code reviews remain a tool for progress rather than a source of confusion.
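A short Kotlin sketch of the two data-class points, under assumed class names (the post mentions the fields `onlineStatus`, `statusMessage`, `userId`, and `accountName`): immutable `val` properties updated via `copy()`, with volatile status fields split from stable identity fields.

```kotlin
// Stable attributes: rarely change over the object's lifetime.
data class UserIdentity(
    val userId: String,
    val accountName: String
)

// Frequently updated status: separated so stable data is never rewritten.
data class UserPresence(
    val onlineStatus: Boolean,
    val statusMessage: String
)

fun main() {
    val presence = UserPresence(onlineStatus = false, statusMessage = "")
    // No var reassignment: copy() yields a new, consistent instance.
    val updated = presence.copy(onlineStatus = true, statusMessage = "Back online")
    println(updated)
}
```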

line

Code Quality Improvement Techniques Part 24: The Value of Legacy

The LY Corporation Review Committee advocates for simplifying code by avoiding unnecessary inheritance when differences between classes are limited to static data rather than dynamic logic. By replacing complex interfaces and subclasses with simple data models and specific instances, developers can reduce architectural overhead and improve code readability. This approach ensures that configurations, such as UI themes, remain predictable and easier to maintain without the baggage of a type hierarchy.

### Limitations of Inheritance-Based Configuration

* The initial implementation used a `FooScreenThemeStrategy` interface to define UI elements like background colors, text colors, and icons.
* Specific themes (Light and Dark) were implemented as separate classes that overrode the interface properties.
* This pattern creates an unnecessary proliferation of types when the only difference between the themes is the specific value of the constants being returned.
* Using inheritance for simple value changes makes the code harder to follow and can lead to over-engineering.

### Valid Scenarios for Inheritance

* **Dynamic Logic:** When behavior needs to change at runtime via dynamic dispatch.
* **Sum Types:** Implementing restricted class hierarchies, such as Kotlin `sealed` classes or Java's equivalent.
* **Decoupling:** Separating interface from implementation to satisfy DI container requirements or to improve build speeds.
* **Dependency Inversion:** Applying architectural patterns to resolve circular dependencies or to enforce one-way dependency flows.

### Transitioning to Data Models and Instantiation

* Instead of an interface, a single "final" class or data class (e.g., `FooScreenThemeModel`) should be defined to hold the required properties.
* Individual themes are created as simple instances of this model rather than unique subclasses.
* In Kotlin, defining a class without the `open` keyword ensures that the properties are not dynamically altered and that no hidden, instance-specific logic is introduced.
* This "instantiation over inheritance" strategy guarantees that properties remain static and the code remains concise.

To maintain a clean codebase, prioritize data-driven instantiation over class-based inheritance whenever logic remains constant. This practice reduces the complexity of the type system and makes the code more resilient to unintended side effects.
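A sketch of "instantiation over inheritance," using the post's `FooScreenThemeModel` name; the exact property set and the placeholder color values are assumptions.

```kotlin
// One final data class replaces the FooScreenThemeStrategy interface and its
// Light/Dark subclasses. No `open` keyword: nothing can be overridden.
data class FooScreenThemeModel(
    val backgroundColor: Long, // ARGB placeholder values
    val textColor: Long,
    val iconName: String
)

// Themes become plain instances instead of unique subclasses.
val lightTheme = FooScreenThemeModel(
    backgroundColor = 0xFFFFFFFF,
    textColor = 0xFF000000,
    iconName = "ic_sun"
)

val darkTheme = FooScreenThemeModel(
    backgroundColor = 0xFF1C1C1E,
    textColor = 0xFFFFFFFF,
    iconName = "ic_moon"
)
```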

line

Code Quality Improvement Techniques Part 23

While early returns are a popular technique for clarifying code by handling error cases first, they should not be applied indiscriminately. This blog post argues that when error cases and normal cases share the same logic, integrating them into a single flow is often superior to branching. By treating edge cases as part of the standard execution path, developers can simplify their code and reduce unnecessary complexity.

### Unifying Edge Cases with Normal Logic

Rather than treating every special condition as an error to be excluded via an early return, it is often more effective to design logic that naturally accommodates these cases.

* For functions processing lists, standard collection operations like `map` or `filter` already handle empty collections without requiring explicit checks.
* Integrating edge cases can lead to more concise code, though developers should be mindful of minor performance trade-offs, such as the overhead of creating sequence or list instances for empty inputs.
* Unification ensures that the "main purpose" of the function remains the focus, rather than a series of guard clauses.

### Utilizing Language-Specific Safety Features

Modern programming languages provide built-in operators and functions that allow developers to handle potential errors as part of the standard expression flow.

* **Safe Navigation:** Use safe call operators (e.g., `?.`) and null-coalescing operators (e.g., `?:`) to handle null values as normal data flow rather than branching with `if (value == null)`.
* **Collection Access:** Instead of manually checking if an index is within bounds, use functions like `getOrNull` or `getOrElse` to retrieve values safely.
* **Property Dependencies:** In UI logic, instead of early returning when a string is empty, you can directly assign visibility and text values based on the condition (e.g., `isVisible = text.isNotEmpty()`).

### Functional Exception Handling

When a process involves multiple steps that might throw exceptions, traditional early returns can lead to repetitive try-catch blocks and fragmented logic.

* By using the `flatMap` pattern and Result-style types, developers can chain operations together.
* Converting exceptions into specific error types within a wrapper (like a `Success` or `Error` sealed class) allows the entire sequence to be treated as a unified data flow.
* This approach makes the overall business logic much clearer, as the "happy path" is represented by a clean chain of function calls rather than a series of nested or sequential error checks.

Before implementing an early return, evaluate whether the edge case can be gracefully integrated into the main logic flow. If the language features or standard libraries allow the normal processing path to handle the edge case naturally, choosing integration over exclusion will result in more maintainable and readable code.
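A few Kotlin sketches of the cited language features, with assumed data shapes (the `Label` stand-in and function names are hypothetical): null handling and collection access as normal data flow, and condition-driven UI assignment instead of an early return.

```kotlin
// Assumed minimal stand-in for a UI label.
class Label {
    var isVisible: Boolean = false
    var text: String = ""
}

fun bindLabel(label: Label, input: String?) {
    // Instead of: if (input.isNullOrEmpty()) return
    val text = input.orEmpty()
    label.isVisible = text.isNotEmpty() // the empty case is just normal data
    label.text = text
}

// Instead of: if (2 >= items.size) return 0
fun thirdItemLength(items: List<String>): Int =
    items.getOrNull(2)?.length ?: 0 // safe call + elvis, no branching ceremony
```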

line

Code Quality Improvement Techniques Part

Effective refactoring often fails when developers focus on the physical structure of code rather than its conceptual meaning. When nested loops for paged data are extracted into separate functions based solely on their technical boundaries, the resulting code can remain difficult to read and maintain. The article argues that true code quality is achieved by aligning function boundaries with logical units, such as abstracting data retrieval into sequences to flatten complex structures.

## Limitations of Naive Extraction

- Traditional paged data processing often results in nested loops, where an outer `while` loop manages page indices and an inner `for` loop iterates through items in a chunk.
- Simply extracting the inner loop into a private method like `saveMetadataInPage(page)` frequently fails to improve readability because it splits the conceptual task of "fetching all items" into two disconnected locations.
- This "mechanical extraction" preserves the underlying implementation complexity, forcing the reader to track the state of pagination and loop conditions across multiple function calls.

## Refactoring Based on Conceptual Boundaries

- A more effective approach identifies the high-level semantic units: "retrieving all items" and "processing each item."
- In Kotlin, the pagination logic can be encapsulated within a `Sequence<Item>` using the `sequence` builder and `yieldAll` keywords.
- By transforming the data source into a sequence, the consumer function can replace a nested loop with a single, clean `for` loop.
- This abstraction allows the main business logic to focus on "what" is being done (saving metadata) while hiding the "how" (managing page indices and `hasNext` flags).

## Forest over Trees

- When refactoring, developers should prioritize the "forest" (the relationship between operations) over the "trees" (individual functions).
- This methodology is not limited to loops; it applies equally to nested conditional branches and complex data structures.
- The goal should always be to ensure that the code reflects the meaning of the task, which often requires restructuring the data flow rather than just splitting existing blocks of code.
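A Kotlin sketch of the sequence-based refactoring, under an assumed `Page`/`Item` shape and `fetchPage` callback (the post describes the `sequence` builder and `yieldAll`, but not these exact types):

```kotlin
data class Item(val id: String)
data class Page(val items: List<Item>, val nextPageIndex: Int?) // null == last page

// Pagination state (page indices, "has next" checks) is hidden in the sequence.
fun fetchAllItems(fetchPage: (pageIndex: Int) -> Page): Sequence<Item> = sequence {
    var pageIndex: Int? = 0
    while (pageIndex != null) {
        val page = fetchPage(pageIndex)
        yieldAll(page.items)
        pageIndex = page.nextPageIndex
    }
}

// The consumer sees one flat loop: "what" is done, not "how" pages are walked.
fun saveAllMetadata(fetchPage: (Int) -> Page, saveMetadata: (Item) -> Unit) {
    for (item in fetchAllItems(fetchPage)) {
        saveMetadata(item)
    }
}
```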

line

Code Quality Improvement Techniques Part

The builder pattern is frequently overused in modern development, often leading to code that is less robust than it appears. While it provides a fluent API, it frequently moves the detection of missing mandatory fields from compile-time to runtime, creating a "house of sand" that can collapse unexpectedly. By prioritizing constructors and factory functions, developers can leverage the compiler to ensure data integrity and build more stable applications.

### Limitations of the Standard Builder Pattern

* In a typical builder implementation, mandatory fields are often initialized as nullable types and checked for nullity only when the `.build()` method is called.
* This reliance on runtime checks like `checkNotNull` means that a developer might forget to set a required property, leading to an `IllegalStateException` during execution rather than a compiler error.
* Unless the platform or a specific library (like an ORM) requires it, the boilerplate of a builder often hides these structural weaknesses without providing significant benefits.

### Strengthening Foundations with Constructors and Defaults

* Using a class constructor or a factory function is often the simplest and most effective way to prevent bugs related to missing data.
* In languages like Kotlin, the need for builders is further reduced by the availability of default parameters and named arguments, allowing for concise instantiation even with many optional fields.
* If a builder must be used, mandatory arguments should be required in the builder's own constructor (e.g., `Builder(userName, emailAddress)`) to ensure the object is never in an invalid state.

### Managing Creation State and Pipelines

* Developers sometimes pass a builder as an "out parameter" to other functions to populate data, which can obscure the flow of data and reduce readability.
* A better approach is to use functions that return specific values, which are then passed into a final constructor, keeping the logic functional and transparent.
* For complex, multi-stage creation logic, defining distinct types for each stage—such as moving from a `UserAccountModel` to a `UserProfileViewComponent`—can ensure that only valid, fully-formed data moves through the pipeline.

### Appropriate Use of Terminal Operations

* The builder-like syntax is highly effective when implementing "terminal operations," where various transformations are applied in an arbitrary order before a final execution.
* This pattern is particularly useful in image processing or UI styling (e.g., `.crop().fitIn().colorFilter()`), where it serves as a more readable alternative to deeply nested decorator patterns.
* In these specific cases, the pattern facilitates a clear sequence of operations while maintaining a "last step" (like `.createBitmap()`) that signals the end of the configuration phase.

Prioritize the use of constructors and factory functions to catch as many errors as possible during compilation. Reserve the builder pattern for scenarios involving complex terminal operations or when dealing with restrictive library requirements that demand a specific instantiation style.
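A sketch of the compile-time alternative, borrowing the post's `UserAccountModel` name with illustrative fields: default parameters and named arguments replace the builder, so a missing mandatory field is a compiler error rather than a runtime `IllegalStateException`.

```kotlin
data class UserAccountModel(
    val userName: String,            // mandatory: enforced by the compiler
    val emailAddress: String,        // mandatory
    val nickname: String? = null,    // optional fields get defaults
    val newsletterOptIn: Boolean = false
)

fun main() {
    // Named arguments keep instantiation readable even with many optionals;
    // omitting userName or emailAddress simply does not compile.
    val account = UserAccountModel(
        userName = "alice",
        emailAddress = "alice@example.com",
        newsletterOptIn = true
    )
    println(account)
}
```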

line

Code Quality Improvement Techniques Part 1

Effective naming in software development should prioritize the perspective of the code's consumer over the visual consistency of class declarations. By following natural grammatical structures, developers can reduce ambiguity and ensure that the purpose of a class or variable is immediately clear regardless of context. Ultimately, clear communication through grammar is more valuable for long-term maintenance than aesthetic symmetry in the codebase.

### Prefixing vs. Postfixing for Class Names

When splitting a large class like `SettingRepository` into specific modules (e.g., Account, Security, or Language), the choice of where to place the modifier significantly impacts readability.

* Postfixing modifiers (e.g., `SettingRepositorySecurity`) might look organized in a file directory, but it creates grammatical confusion when the class is used in isolation.
* A developer encountering `SettingRepositorySecurity` in a constructor might misinterpret it as a "security module belonging to the SettingRepository" rather than a repository specifically for security settings.
* Prefixing the modifier (e.g., `SecuritySettingRepository`) follows standard English grammar, clearly identifying the object as a specific type of repository and reducing the cognitive load for the reader.

### Handling Multiple Modifiers and the "Sandwich" Effect

In cases where a single prefix is insufficient, such as defining the "height of a send button in portrait mode," naming becomes more complex.

* Using only prefixes (e.g., `portraitSendButtonHeight`) can be ambiguous, potentially being read as the "height of a button used to send a portrait."
* To resolve this, developers can use a "modifier sandwich" by moving some details to the end using prepositions like "for," "of," or "in" (e.g., `sendButtonHeightForPortrait`).
* While prepositions are helpful for variables, they should generally be avoided in class or struct names to ensure that instance names derived from the type remain concise.
* Developers should also defer to platform-specific conventions; for example, Java and Kotlin often omit prepositions in standard APIs, such as using `currentTimeMillis` instead of `currentTimeInMillis`.

When naming any component, favor the clarity of the person reading the implementation over the convenience of the person writing the definition. Prioritizing grammatical correctness ensures that the intent of the code remains obvious even when a developer is looking at a single line of code.
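The post's naming contrasts, condensed into Kotlin declarations (bodies omitted; the height value is a placeholder):

```kotlin
// Prefix the modifier: reads naturally as "a repository for security settings".
class SecuritySettingRepository

// Postfixed form the post warns about: easily misread at a call site as
// "the security of the SettingRepository".
// class SettingRepositorySecurity

// "Modifier sandwich" for variables: a preposition moves one detail to the end,
// avoiding the ambiguous all-prefix form portraitSendButtonHeight.
val sendButtonHeightForPortrait = 48 // placeholder value
```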

line

Code Quality Improvement Techniques Part 14

Applying the Single Responsibility Principle is a fundamental practice for maintaining high code quality, but over-fragmenting logic can inadvertently lead to architectural complexity. While splitting classes aims to increase cohesion, it can also scatter business constraints and force callers to manage an overwhelming number of dependencies. This post explores the "responsibility of assigning responsibility," arguing that sometimes maintaining a slightly larger, consolidated class is preferable to creating fragmented "Ravioli code."

### Initial Implementation and the Refactoring Drive

The scenario involves a dynamic "Launch Button" that can fire rockets, fireworks, or products depending on its mode.

* The initial design used a single `LaunchButtonBinder` that held references to all possible `Launcher` types and an internal enum to select the active one.
* To strictly follow the Single Responsibility Principle, developers often attempt to split this into two parts: a binder for the button logic and a selector for choosing the mode.
* The refactored approach utilized a `LaunchBinderSelector` to manage multiple `LaunchButtonBinder` instances, using an `isEnabled` flag to toggle which logic was active.

### The Problem of Scattered Constraints and State

While the refactored classes are individually simpler, the overall system becomes harder to reason about due to fragmented logic.

* **Verification Difficulty:** In the original code, the constraint that "only one thing launches at a time" was obvious in a single file; in the refactored version, a developer must trace multiple classes and loops to verify this behavior.
* **State Redundancy:** Adding an `isEnabled` property to binders creates a risk of state synchronization issues between the selector’s current mode and the binders' internal flags.
* **Information Hiding Trade-offs:** Attempting to hide implementation details often forces the caller to resolve all dependencies (binders, buttons, and launchers) manually, which can turn the caller into a bloated "God class."

### Avoiding "Ravioli Code" Through Balanced Design

The pursuit of granular responsibilities can lead to "Ravioli code," where the system consists of many small, independent components but lacks a clear, cohesive structure.

* The original implementation’s advantage was that it encapsulated all logic related to the launch button's constraints in one place.
* When deciding to split a class, developers must evaluate if the move improves the overall system or simply shifts the burden of complexity to the caller.
* Effective design requires balancing individual class cohesion with the overhead of inter-module coupling and dependency management.

When refactoring for code quality, prioritize the clarity of the overall system over the dogmatic pursuit of small classes. If splitting a class makes it harder to verify business constraints or complicates the caller's logic significantly, it may be better to keep those related responsibilities together.
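A condensed sketch of the consolidated design the post defends, reconstructed from its description (the `Launcher` interface and enum shape are assumptions): the one-launcher-at-a-time constraint is visible in a single `when`.

```kotlin
interface Launcher {
    fun launch()
}

enum class LaunchMode { ROCKET, FIREWORK, PRODUCT }

class LaunchButtonBinder(
    private val rocketLauncher: Launcher,
    private val fireworkLauncher: Launcher,
    private val productLauncher: Launcher
) {
    var mode: LaunchMode = LaunchMode.ROCKET

    // The "only one thing launches at a time" constraint is verifiable in this
    // one place, with no isEnabled flags to keep in sync across binder instances.
    fun onLaunchButtonClicked() = when (mode) {
        LaunchMode.ROCKET -> rocketLauncher.launch()
        LaunchMode.FIREWORK -> fireworkLauncher.launch()
        LaunchMode.PRODUCT -> productLauncher.launch()
    }
}
```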

line

Code Quality Improvement Techniques Part 1

The "Clone Family" anti-pattern occurs when two parallel inheritance hierarchies—such as a data model tree and a provider tree—share an implicit relationship that is not enforced by the type system. This structure often leads to type-safety issues and requires risky downcasting to access specific data types, increasing the likelihood of runtime errors during code modifications. To resolve this, developers should replace rigid inheritance with composition or utilize parametric polymorphism to explicitly link related types. ## The Risks of Implicit Correspondence Maintaining two separate inheritance trees where individual subclasses are meant to correspond to one another creates several technical hurdles. * **Downcasting Requirements:** Because a base provider typically returns a base data model type, developers must manually cast the result to a specific subclass (e.g., `as FooDataModel`), which bypasses compiler safety. * **Lack of Type Enforcement:** The constraint that a specific provider always returns a specific model is purely implicit; the compiler cannot prevent a provider from returning the wrong model type. * **Fragile Architecture:** As the system grows, ensuring that "Provider A" always maps to "Model A" becomes difficult to audit, leading to potential bugs when new developers join the project or when the hierarchy is extended. ## Substituting Inheritance with Composition When the primary goal of inheritance is simply to share common logic, such as fetching raw data, using composition or aggregation is often a superior alternative. * **Logic Extraction:** Shared functionality can be moved into a standalone class, such as an `OriginalDataProvider`, which is then held as a private property within specific provider classes. * **Direct Type Returns:** By removing the shared parent class, each provider can explicitly return its specific data model type without needing a common interface. * **Decoupling:** This approach eliminates the "Clone Family" entirely by removing the need for parallel trees, resulting in cleaner and more modular code. ## Leveraging Parametric Polymorphism In scenarios where a common parent class is necessary—for example, to manage a collection of providers within a shared lifecycle—generics can be used to bridge the two hierarchies safely. * **Generic Type Parameters:** By defining the parent as `ParentProvider<T>`, the base class can use a type parameter for its return values rather than a generic base model. * **Subclass Specification:** Each implementation (e.g., `FooProvider : ParentProvider<FooDataModel>`) explicitly defines its return type, allowing the compiler to enforce the relationship. * **Flexible Constraints:** Developers can still utilize type bounds, such as `ParentProvider<T : CommonDataModel>`, to ensure that the generics adhere to a specific interface while maintaining type safety for callers. When designing data providers and models, avoid creating parallel structures that rely on implicit assumptions. Prioritize composition to simplify the architecture, or use generics if inheritance is required, ensuring that the relationships between classes remain explicit and verifiable by the compiler.

line

Code Quality Improvement Techniques Part 1

The "Set Discount" technique improves code quality by grouping related mutable properties into a single state object rather than allowing them to be updated individually. By restricting state changes through a controlled interface, developers can prevent inconsistent configurations and simplify the lifecycle management of complex classes. This approach ensures that dependent values are updated atomically, significantly reducing bugs caused by race conditions or stale data. ### The Risks of Fragmented Mutability When a class exposes multiple independent mutable properties, such as `isActive`, `minImportanceToRecord`, and `dataCountPerSampling`, it creates several maintenance challenges: * **Order Dependency:** Developers might accidentally set `isActive` to true before updating the configuration properties, causing the system to briefly run with stale or incorrect settings. * **Inconsistent Logic:** Internal state resets (like clearing a counter) may be tied to one property but forgotten when another related property changes, leading to unpredictable behavior. * **Concurrency Issues:** Even in single-threaded environments, asynchronous updates to individual properties can create race conditions that are difficult to debug. ### Consolidating State with SamplingPolicy To resolve these issues, the post recommends refactoring individual properties into a dedicated configuration class and using a single reference to manage the state: * **Atomic Updates:** By wrapping configuration values into a `SamplingPolicy` class, the system ensures that the minimum importance level and sampling interval are always updated together. * **Representing "Inactive" with Nulls:** Instead of a separate boolean flag, the `policy` property can be made nullable. An `inactive` state is naturally represented by `null`, making it impossible to "activate" the recorder without providing a valid policy. * **Explicit Lifecycle Methods:** Replacing property setters with methods like `startRecording()` and `finishRecording()` forces a clear transition of state and ensures that counters are reset consistently every time a new session begins. ### Advantages of Restricting State Transitions Moving from individual property mutation to a consolidated interface offers several technical benefits: * **Guaranteed Consistency:** It eliminates the possibility of "half-configured" states because the policy is replaced as a whole. * **Simplified Thread Safety:** If the class needs to be thread-safe, developers only need to synchronize a single reference update rather than coordinating multiple volatile variables. * **Improved Readability:** The intent of the code becomes clearer to future maintainers because the valid combinations of state are explicitly defined by the API. When designing components where properties are interdependent or must change simultaneously, you should avoid providing public setters for every field. Instead, provide a focused interface that limits updates to valid combinations, ensuring the object remains in a predictable state throughout its lifecycle.

line

Code Quality Improvement Techniques Part 1

Effective code design often involves shifting the responsibility of state verification from the caller to the receiving object. By internalizing "if-checks" within the function that performs the action, developers can reduce boilerplate, prevent bugs caused by missing preconditions, and simplify state transitions. This encapsulation ensures that objects maintain their own integrity while providing a cleaner, more intuitive API for the rest of the system.

### Internalizing State Verification

* Instead of the caller using a pattern like `if (!receiver.isState()) { receiver.doAction() }`, the check should be moved inside the `doAction` method.
* Moving the check inside the function prevents bugs that occur when a caller forgets to verify the state, which could otherwise lead to crashes or invalid data transitions.
* This approach hides internal state details from the caller, simplifying the object's interface and focusing on the desired outcome rather than the prerequisite checks.
* If "doing nothing" when a condition isn't met is non-obvious, developers should use descriptive naming (e.g., `markAsFriendIfNotYet`) or clear documentation to signal this behavior.

### Leveraging Return Values for Conditional Logic

* When a caller needs to trigger a secondary effect—such as showing a UI popup—only if an action was successful, it is better to return a status value (like a `Boolean`) rather than using higher-order functions.
* Passing callbacks like `onSucceeded` into a use case can create unnecessary dependency cycles and makes it difficult for the caller to discern if the execution is synchronous or asynchronous.
* Returning a `Boolean` to indicate if a state change actually occurred allows the caller to handle side effects cleanly and sequentially.
* To ensure the caller doesn't ignore these results, developers can use documentation or specific compiler annotations to force the verification of the returned value.

To improve overall code quality, prioritize "telling" an object what to do rather than "asking" about its state and then acting. Centralizing state logic within the receiver not only makes the code more robust against future changes but also makes the intent of the calling code much easier to follow.
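A sketch combining both points, built around the post's `markAsFriendIfNotYet` name (the `ContactList` class is an assumption): the state check lives inside the receiver, and the `Boolean` return drives the caller's side effect.

```kotlin
class ContactList {
    private val friendIds = mutableSetOf<String>()

    /** Marks [userId] as a friend if not yet. Returns true only if the state changed. */
    fun markAsFriendIfNotYet(userId: String): Boolean =
        friendIds.add(userId) // MutableSet.add already reports whether anything changed
}

fun main() {
    val contacts = ContactList()
    // No "ask then act" at the call site; the side effect keys off the result.
    if (contacts.markAsFriendIfNotYet("user-42")) {
        println("show 'friend added' popup") // runs only on an actual state change
    }
}
```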