SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing

Amazon SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing for large language model (LLM) inference, enabling customers to optimize inference performance for long-context prompts and multi-turn conversations. Customers deploying production LLM applications need fast response times while processing lengthy documents or maintaining conversation context, but traditional inference recomputes attention over all previous tokens for each newly generated token, creating computational overhead and escalating costs. Managed Tiered KV Cache addresses this challenge by intelligently caching and reusing previously computed key-value pairs, while Intelligent Routing directs each request to the instance best positioned to serve it.
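To make the caching intuition concrete, here is a minimal, framework-free sketch of a KV cache during autoregressive decoding. It illustrates the general technique only, not HyperPod's implementation; all names and shapes are invented for demonstration.

```python
# Minimal sketch (not HyperPod's implementation): why a KV cache helps.
# Attention at each decode step needs keys/values for every previous token.
# Without a cache they are recomputed from scratch each step; with one,
# only the new token's key/value pair is computed and appended.
import numpy as np

d_model = 64
W_k = np.random.randn(d_model, d_model)  # toy key projection
W_v = np.random.randn(d_model, d_model)  # toy value projection

def decode_step(new_token_emb, kv_cache):
    """Compute K/V for the new token only and reuse everything cached."""
    k_new = new_token_emb @ W_k          # one matmul per step ...
    v_new = new_token_emb @ W_v          # ... instead of one per prior token
    kv_cache["k"].append(k_new)
    kv_cache["v"].append(v_new)
    # Attention would then run against the full cached history.
    return np.stack(kv_cache["k"]), np.stack(kv_cache["v"])

cache = {"k": [], "v": []}
for _ in range(5):                       # five decode steps
    keys, values = decode_step(np.random.randn(d_model), cache)
print(keys.shape)                        # (5, 64): history grew with no recompute
```

Tiering extends the same idea across memory levels: hot entries stay in CPU memory (L1), while colder entries spill to cluster-wide storage (L2) so they can still be reused rather than recomputed.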

These capabilities deliver up to 40% lower latency, 25% higher throughput, and 25% cost savings compared to baseline configurations.

Managed Tiered KV Cache uses a two-tier architecture that combines local CPU memory (L1) with disaggregated, cluster-wide storage (L2). AWS-native disaggregated tiered storage is the recommended L2 backend, providing terabyte-scale capacity and automatic tiering from CPU memory to local SSD for optimal memory and storage utilization; Redis is also offered as an alternative L2 cache. The architecture enables efficient reuse of previously computed key-value pairs across requests.

The newly introduced Intelligent Routing maximizes cache utilization through three configurable strategies: prefix-aware routing for common prompt patterns, KV-aware routing for maximum cache efficiency with real-time cache tracking, and round-robin for stateless workloads. The two features work together: Intelligent Routing directs requests to instances that already hold relevant cached data, reducing time to first token in document analysis and maintaining natural conversation flow in multi-turn dialogues.

Built-in observability integration with Amazon Managed Grafana provides metrics for monitoring performance. You can enable these features through InferenceEndpointConfig or SageMaker JumpStart when deploying models via the HyperPod Inference Operator on EKS-orchestrated clusters.
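As an illustration of the prefix-aware strategy described above, the toy sketch below hashes a fixed prompt-prefix window to pick an instance, so requests sharing a prefix land where that prefix is likely already cached. The instance names, prefix window, and hashing scheme are all assumptions for demonstration, not the actual router.

```python
# Hypothetical sketch of prefix-aware routing (not the HyperPod router).
# Requests that share a prompt prefix hash to the same instance, so that
# instance's KV cache is likely to already hold the shared prefix.
import hashlib

INSTANCES = ["instance-a", "instance-b", "instance-c"]  # placeholder names
PREFIX_CHARS = 512  # assumed prefix window; a real policy may differ

def route(prompt: str) -> str:
    prefix = prompt[:PREFIX_CHARS]
    digest = hashlib.sha256(prefix.encode()).digest()
    return INSTANCES[int.from_bytes(digest[:8], "big") % len(INSTANCES)]

doc = "You are a contract-review assistant.\nDocument:\n" + "lorem ipsum " * 200
print(route(doc + "\nQuestion 1: ..."))  # both requests share the first 512 chars,
print(route(doc + "\nQuestion 2: ..."))  # so they land on the same instance
```

A round-robin strategy would ignore the prompt entirely, which is why it suits stateless workloads, while KV-aware routing goes further than this sketch by tracking what each instance actually holds in cache.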
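For a sense of what enabling this through the inference operator might look like, the sketch below applies a hypothetical InferenceEndpointConfig manifest with the kubernetes Python client. The API group, version, plural, and the kvCache/intelligentRouting field names are assumptions, not the documented schema; consult the SageMaker HyperPod user guide for the actual CRD fields.

```python
# Illustrative only: applies a hypothetical InferenceEndpointConfig manifest.
# Every field below marked "assumed" is a guess at the shape of the config,
# not the documented schema.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl already points at the EKS cluster

manifest = {
    "apiVersion": "inference.sagemaker.aws.amazon.com/v1alpha1",  # assumed group/version
    "kind": "InferenceEndpointConfig",
    "metadata": {"name": "llm-endpoint", "namespace": "default"},
    "spec": {
        # Hypothetical knobs mirroring the features described above:
        "kvCache": {"enabled": True, "l2Backend": "tiered-storage"},      # assumed
        "intelligentRouting": {"enabled": True, "strategy": "prefix-aware"},  # assumed
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="inference.sagemaker.aws.amazon.com",  # assumed
    version="v1alpha1",                          # assumed
    namespace="default",
    plural="inferenceendpointconfigs",           # assumed
    body=manifest,
)
```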

These features are available in all regions where SageMaker HyperPod is available. To learn more, see the user guide.


Source: Amazon Web Services


