
@RequestCache: Developing a Custom Annotation

The development of @RequestCache addresses the performance degradation and network overhead caused by redundant external API calls or repetitive computations within a single HTTP request. By implementing a custom Spring-based annotation, developers can ensure that specific data is fetched only once per request and shared across different service layers. This approach provides a more elegant and maintainable solution than manual parameter passing or struggling with the limitations of global caching strategies.

Addressing Redundant Operations in Web Services

  • Modern web architectures often involve multiple internal services (e.g., Order, Payment, and Notification) that independently request the same data, such as a user profile.
  • These redundant calls increase response times, put unnecessary load on external servers, and waste system resources.
  • @RequestCache provides a declarative way to cache method results within the scope of a single HTTP request, ensuring the actual logic or API call is executed only once.
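As a minimal sketch of that once-per-request behavior: the annotation name @RequestCache comes from the article, but the aspect wiring is omitted here and replaced by a hand-rolled per-request map so the example stays self-contained. In the real implementation an AOP aspect would perform the interception that the `getUserProfile` helper simulates.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

public class RequestCacheSketch {

    // The annotation itself: a runtime-retained marker that an AOP
    // aspect would intercept in the real implementation.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface RequestCache {}

    static int externalCalls = 0;

    // Per-request store; in Spring this would be a @RequestScope bean.
    static final Map<String, Object> requestStore = new HashMap<>();

    // Simulates what the aspect does around an @RequestCache method:
    // run the real fetch only when the key is absent for this request.
    static String getUserProfile(String userId) {
        return (String) requestStore.computeIfAbsent("profile:" + userId,
                key -> fetchFromRemote(userId));
    }

    @RequestCache // marker only in this sketch; interception is simulated above
    static String fetchFromRemote(String userId) {
        externalCalls++; // stands in for the expensive external API call
        return "profile-of-" + userId;
    }

    public static void main(String[] args) {
        // Order, Payment, and Notification each ask for the same profile...
        getUserProfile("42");
        getUserProfile("42");
        getUserProfile("42");
        // ...but the remote call happened exactly once in this "request".
        System.out.println("external calls: " + externalCalls); // prints 1
    }
}
```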

Limitations of Manual Data Passing

  • The common alternative of passing response objects as method parameters leads to "parameter drilling," where intermediate service layers must accept data they do not use just to pass it to a deeper layer.
  • In the "Strategy Pattern," adding a new data dependency to an interface forces every implementation to change, even those that have no use for the new parameter, which violates clean architecture principles.
  • Manual passing makes method signatures brittle and increases the complexity of refactoring as the call stack grows.
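The drilling problem is easiest to see in a small sketch (the service and method names below are illustrative, not from the article): the intermediate layer accepts a profile object it never reads, solely to hand it to the layer beneath it.

```java
public class ParameterDrilling {

    record UserProfile(String name) {}

    // Deepest layer: the only one that actually uses the profile.
    static String notify(UserProfile profile) {
        return "notified " + profile.name();
    }

    // Intermediate layer: takes `profile` solely to pass it deeper.
    // Any new dependency below forces this signature to change too.
    static String processPayment(UserProfile profile, int amount) {
        // ... payment logic that never reads `profile` ...
        return notify(profile);
    }

    // Entry point: fetches once, then must thread the value
    // through every signature on the way down.
    static String placeOrder(String userId) {
        UserProfile profile = new UserProfile("user-" + userId);
        return processPayment(profile, 100);
    }

    public static void main(String[] args) {
        System.out.println(placeOrder("42")); // prints "notified user-42"
    }
}
```

With a request-scoped cache, `processPayment` could drop the `profile` parameter entirely and the deepest layer would fetch (or reuse) the value itself.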

The TTL Dilemma in Traditional Caching

  • Using Redis or a local cache with Time-To-Live (TTL) settings is often insufficient for request-level isolation.
  • If the TTL is set too short, the cache might expire before a long-running request finishes, leading to the very redundant calls the system was trying to avoid.
  • If the TTL is too long, the cache persists across different HTTP requests, which is logically incorrect for data that should be fresh for every new user interaction.
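The short-TTL failure mode can be reproduced with a toy TTL cache (all names and the 50 ms TTL below are illustrative): when the request outlives the entry, the second lookup inside the same request triggers exactly the redundant call the cache was meant to prevent.

```java
import java.util.HashMap;
import java.util.Map;

public class TtlDilemma {

    static int remoteCalls = 0;

    record Entry(String value, long expiresAtMillis) {}

    static final Map<String, Entry> cache = new HashMap<>();

    // A plain TTL cache: entries become invisible once expired.
    static String get(String key, long ttlMillis) {
        Entry e = cache.get(key);
        long now = System.currentTimeMillis();
        if (e == null || e.expiresAtMillis() <= now) {
            remoteCalls++; // the redundant call the TTL was meant to prevent
            e = new Entry("profile", now + ttlMillis);
            cache.put(key, e);
        }
        return e.value();
    }

    public static void main(String[] args) throws InterruptedException {
        long ttl = 50;           // deliberately shorter than the request takes
        get("user:42", ttl);     // first service fetches
        Thread.sleep(80);        // long-running work inside the same request
        get("user:42", ttl);     // second service: entry already expired
        System.out.println("remote calls: " + remoteCalls); // prints 2
    }
}
```

Stretching the TTL fixes this case but creates the opposite bug: the entry now survives into the next request, serving stale data. Scoping the cache to the request sidesteps the trade-off entirely.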

Leveraging Spring’s Request Scope and Proxy Mechanism

  • The implementation utilizes Spring’s @RequestScope to manage the cache lifecycle, ensuring that data is automatically cleared when the request ends.
  • Under the hood, @RequestScope injects a singleton scoped proxy that delegates each call to the request-bound instance resolved through RequestContextHolder for the current thread.
  • Request isolation comes from Spring’s RequestAttributes, which RequestContextHolder keeps in a ThreadLocal, guaranteeing that concurrent requests never see each other’s cached data.
  • Lifecycle management is handled by Spring’s FrameworkServlet, which prevents memory leaks by automatically cleaning up request attributes after the response is sent.
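A minimal model of that mechanism, using a plain ThreadLocal to stand in for RequestContextHolder (class and method names here are illustrative, not Spring's API): each request thread sees its own attribute map, and the end-of-request cleanup mirrors what FrameworkServlet does after the response is sent.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RequestScopeModel {

    // Stands in for RequestContextHolder: one attribute map per thread.
    static final ThreadLocal<Map<String, Object>> requestAttributes =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, Object value) {
        requestAttributes.get().put(key, value);
    }

    static Object get(String key) {
        return requestAttributes.get().get(key);
    }

    // Mirrors the end-of-request cleanup that prevents memory leaks.
    static void requestCompleted() {
        requestAttributes.remove();
    }

    // Runs two concurrent "requests" and records what each one saw.
    static Map<String, Object> runTwoRequests() throws InterruptedException {
        Map<String, Object> observed = new ConcurrentHashMap<>();

        Runnable request = () -> {
            String name = Thread.currentThread().getName();
            put("profile", "cached-for-" + name); // each request caches its own value
            observed.put(name, get("profile"));   // reads back only its own entry
            requestCompleted();                   // attributes die with the request
        };

        Thread a = new Thread(request, "req-A");
        Thread b = new Thread(request, "req-B");
        a.start(); b.start();
        a.join(); b.join();
        return observed;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Object> observed = runTwoRequests();
        System.out.println(observed.get("req-A")); // prints cached-for-req-A
        System.out.println(observed.get("req-B")); // prints cached-for-req-B
    }
}
```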

For applications dealing with deep call stacks or complex service interactions, a request-scoped caching annotation provides a robust way to optimize performance without sacrificing code readability. This mechanism is particularly recommended when the same data is needed across unrelated service boundaries within a single transaction, ensuring consistency and efficiency throughout the request lifecycle.