data-node-warmup

1 post

daangn

Easily Operating Karrot

This blog post by the Daangn (Karrot) search platform team details their journey in optimizing Elasticsearch operations on Kubernetes (ECK). While the initial migration to ECK reduced deployment times, the team faced critical latency spikes during rolling restarts caused by cold caches under high traffic. To achieve a "deploy anytime" environment, they built a data node warm-up system that ensures nodes are performance-ready before they begin handling live search requests.

## Scaling Challenges and Operational Constraints

- Over two years, Daangn's search infrastructure expanded from a single cluster to four specialized clusters, with peak traffic jumping from 1,000 to over 10,000 QPS.
- The initial strategy of "avoiding peak hours" for deployments became a bottleneck: the window for safe updates narrowed while total deployment time across all clusters exceeded six hours.
- Manual monitoring became a necessity rather than an option, as engineers had to verify traffic conditions and latency graphs before and during every ArgoCD sync.

## The Hazards of Rolling Restarts in Elasticsearch

- Standard Kubernetes rolling restarts are problematic for stateful systems because a "Ready" Pod is not the same as a "Performant" Pod; Elasticsearch relies heavily on memory-resident caches (page cache, query cache, field data cache).
- A version update of the Elastic Operator once triggered an unintended rolling restart that caused a 60% error rate and 3-second latency spikes because the new nodes had to fetch all data from disk.
- When a node restarts, the cluster enters a "Yellow" state in which the remaining replicas must absorb 100% of the traffic, leaving shards with only one surviving copy as a point of failure and increasing the load on the surviving nodes.

## Strategy for Reliable Node Warm-up

- The primary goal was a state where p99 latency stays stable during restarts, regardless of whether the deployment happens during peak traffic hours.
- The solution is a "Warm-up System" that pre-loads frequently accessed data into the filesystem and Elasticsearch-internal caches before the node is allowed to join the load balancer.
- By executing representative search queries against a newly started node, the system ensures that the necessary segments are already in the page cache, preventing the disk I/O thrashing that typically follows a cold start (see the warm-up sketch after this summary).

## Implementation Goals

- Automate the validation of node readiness beyond simple health checks to include performance readiness (see the readiness-gate sketch after this summary).
- Eliminate the need for human "eyes-on-glass" monitoring during the 90-minute deployment cycles.
- Maintain high availability and a consistent user experience even while shards are being reallocated and replicas are temporarily unassigned.

To maintain a truly resilient search platform on Kubernetes, it is critical to recognize that for stateful applications, "available" is not the same as "ready." Implementing a customized warm-up controller or warm-up logic is a recommended practice for any high-traffic Elasticsearch environment, because it decouples deployment schedules from traffic patterns.
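The post describes the warm-up idea but this summary does not include its implementation. The following Python sketch illustrates the general technique of replaying representative queries directly against a freshly restarted data node before it is placed behind the load balancer. The node address, index field, and query file are hypothetical; `preference=_local` and `request_cache` are standard Elasticsearch search parameters, used here to keep the warm-up work on the new node's own shard copies and to populate its caches.

```python
"""Warm-up sketch: replay representative queries against a new data node.

Assumptions (not taken from the original post): the node is reachable
directly on port 9200, representative queries are stored as JSON lines,
and this script runs before the Pod joins the load balancer.
"""
import json
import time

import requests

NODE_URL = "http://es-data-new.internal:9200"   # hypothetical direct node address
QUERY_FILE = "representative_queries.jsonl"     # hypothetical captured-query sample
WARMUP_ROUNDS = 3                               # repeat to pull hot segments into the page cache


def load_queries(path):
    """Each line: {"index": "...", "body": {...}} captured from production traffic."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]


def warm_up(node_url, queries, rounds=WARMUP_ROUNDS):
    for round_no in range(rounds):
        for q in queries:
            # preference=_local keeps the search on shard copies held by this node;
            # request_cache=true also populates the shard request cache.
            resp = requests.post(
                f"{node_url}/{q['index']}/_search",
                params={"preference": "_local", "request_cache": "true"},
                json=q["body"],
                timeout=10,
            )
            resp.raise_for_status()
        print(f"warm-up round {round_no + 1}/{rounds} finished")
        time.sleep(1)


if __name__ == "__main__":
    warm_up(NODE_URL, load_queries(QUERY_FILE))
```

Replaying a sample of real production queries, rather than synthetic ones, is what makes the page cache contents representative of live traffic.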
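The post also calls for readiness validation that goes beyond a health check to include performance readiness. One way to express that, sketched below under assumed numbers (a 100 ms p99 target, a 200-query sample, an illustrative index and query), is to measure the latency distribution of sample searches on the new node and gate its admission on the p99 value; the original post does not spell out its exact criteria.

```python
"""Readiness-gate sketch: treat a node as "performance ready" only when the
p99 latency of sample queries falls below a target.

The target, sample size, node address, index, and query are illustrative
assumptions, not the original team's actual thresholds.
"""
import statistics
import time

import requests

NODE_URL = "http://es-data-new.internal:9200"  # hypothetical direct node address
P99_TARGET_MS = 100.0                          # assumed latency budget
SAMPLE_SIZE = 200


def sample_latencies(node_url, index, body, n=SAMPLE_SIZE):
    """Time n searches against the node and return latencies in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        requests.post(
            f"{node_url}/{index}/_search",
            params={"preference": "_local"},
            json=body,
            timeout=10,
        ).raise_for_status()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies


def is_performance_ready(latencies, target_ms=P99_TARGET_MS):
    # statistics.quantiles(n=100) yields the 1st..99th percentiles; the last is p99.
    p99 = statistics.quantiles(latencies, n=100)[-1]
    print(f"p99 = {p99:.1f} ms (target {target_ms} ms)")
    return p99 <= target_ms


if __name__ == "__main__":
    query = {"query": {"match_all": {}}, "size": 10}   # placeholder sample query
    if is_performance_ready(sample_latencies(NODE_URL, "articles", query)):
        print("node can be added to the load balancer")
    else:
        print("keep warming up; node is not yet performance ready")
```

A gate like this can be wired into whatever controller or script drives the rollout, so that deployment no longer depends on a human watching latency graphs.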