LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention

Extensive Reading Author Info MIT HAN Lab Background Long-context LLM serving is bottlenecked by attention and KV caches. Prefilling has quadratic attention cost in sequence length, while decoding is memory-bound due to ever-growing KV caches; this makes 128k–512k contexts and long reasoning traces (e.g., 20k-token CoT) slow and expensive in practice. Existing KV cache optimizations are incomplete. Quantization and compression methods (e.g., KV quantization, paged KV cache) reduce memory and bandwidth but do not change the asymptotic attention complexity, so latency still grows linearly (decoding) or quadratically (prefilling) with context length. ...
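To make the decoding-side bottleneck concrete, here is a back-of-the-envelope sketch of KV cache growth, assuming a Llama-2-7B-like configuration (32 layers, 32 KV heads, head dim 128, FP16); the numbers are illustrative, not from the paper:

```python
# Rough KV-cache sizing, assuming a Llama-2-7B-like config (not from the paper):
# bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_element
layers, kv_heads, head_dim, bytes_fp16 = 32, 32, 128, 2

for seq_len in (4_096, 131_072, 524_288):  # 4k, 128k, 512k contexts
    kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_fp16
    print(f"{seq_len:>7} tokens -> {kv_bytes / 2**30:6.1f} GiB of KV cache per sequence")

# Roughly 2 GiB at 4k, 64 GiB at 128k, 256 GiB at 512k. Decoding has to stream
# all of this every step, which is why it is memory-bandwidth-bound.
```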

November 15, 2025 · Last updated on November 17, 2025 · 3 min · KKKZOZ

Efficient Streaming Language Models with Attention Sinks

Extensive Reading Author Info MIT HAN Lab Background When applying LLMs to infinite input streams, two main challenges arise: the KV cache grows without bound, leading to excessive memory usage and decode latency, and the LLM’s performance degrades once the sequence length exceeds the attention window size set during pre-training. Window Attention: only keep the $L$ most recent tokens in the KV cache; the model degrades dramatically once the sequence length exceeds the cache size (even just evicting the first token). Sliding Window with Re-computation: do not reuse the KV cache; at every step, rebuild the window of the last $L$ tokens and run the Transformer on that small segment from scratch (a sketch follows the example below). Sliding Window Example t = 1: Window: [x₁] Run the model on this length-1 sequence, use the output of x₁. t = 2: Window: [x₁, x₂] Run the model on [x₁, x₂] (full 2×2 self-attention), use the output of x₂. t = 3: Window: [x₁, x₂, x₃] Run the model on these 3 tokens (3×3 attention), use x₃. t = 4: Window slides: [x₂, x₃, x₄] Run the model again on this 3-token segment (3×3 attention), use x₄. t = 5: window [x₃, x₄, x₅], full 3×3 attention, use x₅. t = 6: window [x₄, x₅, x₆], full 3×3 attention, use x₆. Observations A surprisingly large amount of attention score is allocated to the initial tokens, irrespective of their relevance to the language modeling task. ...
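A minimal sketch of the re-computation baseline above, assuming a hypothetical `model` callable that maps a token window to per-position outputs (illustrative only, not the paper's code):

```python
# Sliding window with re-computation: no KV reuse; at each step the last
# `window` tokens are re-encoded from scratch and only the newest output is kept.
def sliding_window_recompute(model, tokens, window=3):
    outputs = []
    for t in range(1, len(tokens) + 1):
        segment = tokens[max(0, t - window):t]  # window of the last L tokens
        hidden = model(segment)                 # full L x L attention on the segment
        outputs.append(hidden[-1])              # use only the output of the newest token
    return outputs
```

This keeps quality stable inside the window but re-encodes up to $L$ tokens per generated token, so the cost over a stream of length $T$ is $O(T L^2)$ attention, which is exactly the overhead StreamingLLM's KV-reusing attention sinks avoid.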

November 13, 2025 · Last updated on November 17, 2025 · 3 min · KKKZOZ

Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference

Extensive Reading Author Info MIT HAN Lab Background In long-context inference: The KV cache grows linearly with context length ($L$). At each decoding step, the model must read the entire KV cache to compute attention. Existing works recognize that a small subset of tokens dominates the accuracy of token generation, and they choose to evict the unimportant ones: StreamingLLM keeps a sliding window plus a few “anchor” tokens. H2O, TOVA, etc., use heuristics or statistics to permanently drop “less important” tokens. Once a token is evicted, it’s gone. BUT the set of important tokens is query-dependent. ...
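A tiny illustration (my own, not Quest's algorithm) of why permanent eviction clashes with query-dependent importance: the top-k keys by attention score differ from one query to the next, so a token that looks unimportant now may be critical for a later query:

```python
import numpy as np

# Which cached tokens matter most depends on the current query.
def topk_tokens(query, keys, k=8):
    scores = keys @ query                    # dot-product attention logits, shape (L,)
    return set(np.argsort(scores)[-k:])      # indices of the k highest-scoring keys

rng = np.random.default_rng(0)
keys = rng.normal(size=(1024, 64))           # cached keys for 1024 past tokens
q1, q2 = rng.normal(size=(2, 64))            # queries from two different decoding steps

print(topk_tokens(q1, keys) & topk_tokens(q2, keys))  # overlap is typically small
```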

November 13, 2025 · Last updated on November 17, 2025 · 2 min · KKKZOZ

H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models

Skimming Author Info Zhenyu “Allen” Zhang: a final-year Ph.D. student in the Electrical and Computer Engineering Department at UT Austin. Ying Sheng Insights Inherent Sparsity of Attention During inference, the attention matrices are highly sparse: more than 95% of the attention values are very small. This means that when generating the next token, the model actually attends to only a small fraction of all past tokens, which creates room to shrink the KV cache, since most cached key-value pairs are rarely used. Existence of “Heavy Hitters” By analyzing tokens’ accumulated attention scores, the authors find that these scores follow a power-law distribution, meaning that only a small subset of tokens (Heavy Hitters) contributes the vast majority of the attention value. These H₂ tokens are critical for maintaining model performance; removing them from the cache causes accuracy to drop sharply. Effectiveness of Local Statistics In theory, identifying the true Heavy Hitters requires attention information from all future tokens, which is unrealistic in autoregressive generation. The paper shows empirically that determining H₂ dynamically from local information alone, i.e., accumulating attention scores over the tokens generated so far at each decoding step, works almost as well as using global information. Note Since not all historical information is equally important, one can design a smarter cache-management policy that keeps only the most critical entries, enabling efficient inference within a limited GPU memory budget. ...
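A minimal sketch of the heavy-hitter idea under these insights (my own simplification, not the paper's implementation, and it ignores the recent-token window H2O also retains): accumulate each cached token's attention mass from the decoding steps seen so far, and evict the lowest-scoring entry once the budget is exceeded:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class HeavyHitterCache:
    """Fixed-budget KV cache that evicts the token with the lowest accumulated attention."""

    def __init__(self, budget):
        self.budget = budget
        self.keys, self.values = [], []
        self.acc_scores = []                      # local statistics: attention mass so far

    def attend(self, query):
        probs = softmax(np.stack(self.keys) @ query)
        for i, p in enumerate(probs):
            self.acc_scores[i] += float(p)        # update each token's accumulated score
        return probs @ np.stack(self.values)      # attention output for this step

    def append(self, key, value):
        self.keys.append(key)
        self.values.append(value)
        self.acc_scores.append(0.0)
        if len(self.keys) > self.budget:          # over budget: drop the weakest entry
            i = int(np.argmin(self.acc_scores))
            del self.keys[i], self.values[i], self.acc_scores[i]
```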

August 21, 2025 · Last updated on November 17, 2025 · 1 min · KKKZOZ