caching

3 posts

naver

@RequestCache: Developing a Custom

The development of `@RequestCache` addresses the performance degradation and network overhead caused by redundant external API calls or repetitive computations within a single HTTP request. By implementing a custom Spring-based annotation, developers can ensure that specific data is fetched only once per request and shared across different service layers. This approach provides a more elegant and maintainable solution than manual parameter passing or struggling with the limitations of global caching strategies.

### Addressing Redundant Operations in Web Services

* Modern web architectures often involve multiple internal services (e.g., Order, Payment, and Notification) that independently request the same data, such as a user profile.
* These redundant calls increase response times, put unnecessary load on external servers, and waste system resources.
* `@RequestCache` provides a declarative way to cache method results within the scope of a single HTTP request, ensuring the actual logic or API call is executed only once.

### Limitations of Manual Data Passing

* The common alternative of passing response objects as method parameters leads to "parameter drilling," where intermediate service layers must accept data they do not use just to pass it to a deeper layer.
* When the Strategy pattern is in play, adding a new data dependency to an interface forces every implementation to change, even those that have no use for the new parameter, which violates clean architecture principles.
* Manual passing makes method signatures brittle and increases the complexity of refactoring as the call stack grows.

### The TTL Dilemma in Traditional Caching

* Using Redis or a local cache with Time-To-Live (TTL) settings is often insufficient for request-level isolation.
* If the TTL is set too short, the cache might expire before a long-running request finishes, leading to the very redundant calls the system was trying to avoid.
* If the TTL is too long, the cache persists across different HTTP requests, which is logically incorrect for data that should be fresh for every new user interaction.

### Leveraging Spring’s Request Scope and Proxy Mechanism

* The implementation utilizes Spring’s `@RequestScope` to manage the cache lifecycle, ensuring that data is automatically cleared when the request ends.
* Under the hood, `@RequestScope` uses a singleton proxy that delegates calls to a specific instance stored in the `RequestContextHolder` for the current thread.
* The cache relies on `RequestAttributes`, which are backed by `ThreadLocal` storage to guarantee isolation between different concurrent requests.
* Lifecycle management is handled by Spring’s `FrameworkServlet`, which prevents memory leaks by automatically cleaning up request attributes after the response is sent.

For applications dealing with deep call stacks or complex service interactions, a request-scoped caching annotation provides a robust way to optimize performance without sacrificing code readability. This mechanism is particularly recommended when the same data is needed across unrelated service boundaries within a single transaction, ensuring consistency and efficiency throughout the request lifecycle.
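A minimal sketch of this mechanism, assuming a Spring Boot application with AOP enabled (the annotation, store, and aspect names below are illustrative, not the post's actual code), could combine a `@RequestScope` cache bean with an `@Around` advice:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;

// Marker annotation: results of annotated methods are cached for one HTTP request.
@java.lang.annotation.Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
@java.lang.annotation.Target(java.lang.annotation.ElementType.METHOD)
@interface RequestCache {}

// One instance per HTTP request. Spring injects a singleton proxy into callers and
// resolves the real instance through RequestContextHolder's ThreadLocal, so entries
// are isolated per request and discarded when FrameworkServlet completes it.
@Component
@RequestScope
class RequestCacheStore {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    Object computeIfAbsent(String key, Function<String, Object> loader) {
        return cache.computeIfAbsent(key, loader); // assumes non-null results
    }
}

@Aspect
@Component
class RequestCacheAspect {
    private final RequestCacheStore store;

    RequestCacheAspect(RequestCacheStore store) {
        this.store = store;
    }

    @Around("@annotation(RequestCache)")
    public Object cache(ProceedingJoinPoint pjp) {
        // Key on method signature + arguments so distinct calls don't collide.
        String key = pjp.getSignature().toLongString() + Arrays.toString(pjp.getArgs());
        return store.computeIfAbsent(key, k -> {
            try {
                return pjp.proceed(); // the real method runs only on the first call
            } catch (Throwable t) {
                throw new IllegalStateException(t);
            }
        });
    }
}
```

With something like this in place, Order, Payment, and Notification services can each call a `@RequestCache`-annotated `getUserProfile(userId)` method, and only the first call within a given request hits the external API.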

netflix

Behind the Streams: Real-Time Recommendations for Live Events Part 3 | Netflix TechBlog

Netflix manages the massive surge of concurrent users during live events by utilizing a hybrid strategy of prefetching and real-time broadcasting to deliver synchronized recommendations. By decoupling data delivery from the live trigger, the system avoids the "thundering herd" effect that would otherwise overwhelm cloud infrastructure during record-breaking broadcasts. This architecture ensures that millions of global devices receive timely updates and visual cues without requiring linear, inefficient scaling of compute resources.

### The Constraint Optimization Problem

To maintain a seamless experience, Netflix engineers balance three primary technical constraints: time to update, request throughput, and compute cardinality.

* **Time:** The specific duration required to coordinate and push a recommendation update to the entire global fleet.
* **Throughput:** The maximum capacity of cloud services to handle incoming requests without service degradation.
* **Cardinality:** The variety and complexity of unique requests necessary to serve personalized updates to different user segments.

### Two-Phase Recommendation Delivery

The system splits the delivery process into two distinct stages to smooth out traffic spikes and ensure high availability.

* **Prefetching Phase:** While members browse the app normally before an event, the system downloads materialized recommendations, metadata, and artwork into the device's local cache.
* **Broadcasting Phase:** When the event begins, a low-cardinality "at least once" message is broadcast to all connected devices, triggering them to display the already-cached content instantaneously.
* **Traffic Smoothing:** This approach eliminates the need for massive, real-time data fetches at the moment of kickoff, distributing the heavy lifting of data transfer over a longer period.

### Live State Management and UI Synchronization

A dedicated Live State Management (LSM) system tracks event schedules in real time to ensure the user interface stays perfectly in sync with the production.

* **Dynamic Adjustments:** If a live event is delayed or ends early, the LSM adjusts the broadcast triggers to preserve accuracy and prevent "spoilers" or dead links.
* **Visual Cues:** The UI utilizes "Live" badging and dynamic artwork transitions to signal urgency and guide users toward the stream.
* **Frictionless Playback:** For members already on a title’s detail page, the system can trigger an automatic transition into the live player the moment the broadcast begins, reducing navigation latency.

To support global-scale live events, technical teams should prioritize edge-heavy strategies that pre-position assets on client devices. By shifting from a reactive request-response model to a proactive prefetch-and-trigger model, platforms can maintain high performance and reliability even during the most significant traffic peaks.
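As a rough illustration of this prefetch-and-trigger model (all class and method names below are hypothetical; Netflix has not published client code in this form), the client does the expensive data transfer ahead of time, and the live trigger merely flips to already-local data:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the two-phase delivery described in the post.
class LiveEventClient {

    record Recommendation(String titleId, String artworkUrl, boolean liveBadge) {}

    // Phase 1 target: materialized recommendations downloaded during normal browsing.
    private final Map<String, Recommendation> localCache = new ConcurrentHashMap<>();

    // Prefetching phase: runs opportunistically before the event, so the heavy
    // data transfer is spread over time instead of spiking at kickoff.
    void prefetch(String eventId, Recommendation rec) {
        localCache.put(eventId, rec);
    }

    // Broadcasting phase: the server pushes a tiny, low-cardinality trigger
    // ("event X is live") at least once; the device renders from local cache,
    // so no bulk fetch happens at the moment of kickoff.
    void onBroadcastTrigger(String eventId) {
        Optional.ofNullable(localCache.get(eventId))
                .ifPresentOrElse(this::render, () -> fetchAndRender(eventId));
    }

    private void render(Recommendation rec) {
        System.out.println("Showing live row: " + rec.titleId()
                + (rec.liveBadge() ? " [LIVE]" : ""));
    }

    private void fetchAndRender(String eventId) {
        // Only the minority of devices without prefetched data hit the backend
        // here, which keeps request throughput manageable at kickoff.
        System.out.println("Cache miss, fetching " + eventId + " on demand");
    }
}
```

The design point is that the broadcast message carries no personalized payload, so its cardinality stays low no matter how many user segments exist; personalization was already resolved during the prefetch window.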

line

Getting 200%

Riverpod is a powerful state management library for Flutter designed to overcome the limitations of its predecessor, Provider, by offering a more flexible and robust framework. By decoupling state from the widget tree and providing built-in support for asynchronous data, it significantly reduces boilerplate code and improves application reliability. Ultimately, it allows developers to focus on logic rather than the complexities of manual state synchronization and resource management.

### Modern State Management Architecture

Riverpod introduces a streamlined approach to state by separating the logic into Models, Providers, and Views. Unlike the standard `setState` approach, Riverpod manages the lifecycle of state automatically, ensuring resources are allocated and disposed of efficiently.

* **Providers as Logic Hubs:** Providers define how state is built and updated, supporting synchronous data, Futures, and Streams.
* **Consumer Widgets:** Views use `ref.watch` to subscribe to data and `ref.read` to trigger actions, creating a clear reactive loop.
* **Global Access:** Because providers are not tied to the widget hierarchy, they can be accessed from anywhere in the app without passing context through multiple layers.

### Optimization for Server Data and Asynchronous Logic

One of Riverpod's strongest advantages is its native handling of server-side data, which typically requires manual logic in other libraries. It simplifies the user experience during network requests by providing built-in states for loading and error handling.

* **Resource Cleanup:** Using `ref.onDispose`, developers can automatically cancel active API calls when a provider is no longer needed, preventing memory leaks and unnecessary network usage.
* **State Management Utilities:** It natively supports "pull-to-refresh" functionality through `ref.refresh` and allows for custom data expiration settings.
* **AsyncValue Integration:** Riverpod wraps asynchronous data in an `AsyncValue` object, making it easy to check if a provider `hasValue`, `hasError`, or `isLoading` directly within the UI.

### Advanced State Interactions and Caching

Beyond basic data fetching, Riverpod allows providers to interact with each other to create complex, reactive workflows. This is particularly useful for features like search filters or multi-layered data displays; see the sketch after this section.

* **Cross-Provider Subscriptions:** A provider can "watch" another provider; for example, a `PostList` provider can automatically rebuild itself whenever a `Filter` provider's state changes.
* **Strategic Caching:** Developers can implement "instant" page transitions by yielding cached data from a list provider to a detail provider immediately, then updating the UI once the full network request completes.
* **Offline-First Capabilities:** By combining local database streams with server-side Futures, Riverpod can display local data first to ensure a seamless user experience regardless of network connectivity.

### Seamless Data Synchronization

Maintaining consistency across different screens is simplified through Riverpod's centralized state. When a user interacts with a data point on one screen, such as "starring" a post on a detail page, the change can be propagated globally so that the main list view updates instantly without additional manual refreshes. This synchronization ensures the UI remains a "single source of truth" across the entire application.

For developers building data-intensive Flutter applications, Riverpod is a highly recommended choice. Its ability to handle complex asynchronous states and inter-provider dependencies with minimal code makes it an essential tool for creating scalable, maintainable, and high-performance mobile apps.
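Since Riverpod is a Dart library, a Dart sketch fits best here. This minimal example (the provider names and `fetchPosts` call are illustrative, not taken from the article) shows the cross-provider subscription described above, where a post list automatically re-executes whenever a filter changes:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Illustrative filter state; a real app would expose richer filter options.
final filterProvider = StateProvider<String>((ref) => 'all');

// Cross-provider subscription: because this FutureProvider watches
// filterProvider, it re-runs (and dependents rebuild) on every filter change.
final postListProvider = FutureProvider<List<String>>((ref) async {
  final filter = ref.watch(filterProvider);
  ref.onDispose(() {
    // Hook for cancelling an in-flight request when the provider is disposed.
  });
  return fetchPosts(filter); // hypothetical API call
});

Future<List<String>> fetchPosts(String filter) async =>
    ['post-1 ($filter)', 'post-2 ($filter)'];

// In a ConsumerWidget, the wrapped AsyncValue exposes the built-in states:
//   ref.watch(postListProvider).when(
//     loading: () => const CircularProgressIndicator(),
//     error: (e, st) => Text('Error: $e'),
//     data: (posts) => ListView(children: [for (final p in posts) Text(p)]),
//   );
```

Updating the filter with `ref.read(filterProvider.notifier).state = 'starred'` is then enough to refresh every screen that watches the post list, with no manual synchronization.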