Striped Attention Faster Ring Attention for Causal Transformers

Skimming Author Info Implementation and Benchmark zhuzilin/ring-flash-attention: Ring attention implementation with flash attention Corresponding visualization is here Background Challenges Insights Ring Attention suffers from workload imbalance Due to the causal mask, some devices perform meaningless computation in some iterations while other devices stay busy all the time. Striped Attention proposes another way to distribute workloads across devices to eliminate the imbalance. Approaches Striped Attention Each device holds non-contiguous tokens that are evenly distributed across the original sequence Example Important The most important point for understanding this example: neither Ring Attention nor Striped Attention uses naive attention computation ...
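A minimal sketch of the two token layouts (helper names are mine, not from the paper or zhuzilin/ring-flash-attention): Ring Attention gives each device a contiguous block of the sequence, so under a causal mask the low-index devices run out of useful work in later iterations, while Striped Attention interleaves tokens so every device keeps a comparable amount of unmasked work in each iteration.

```python
# Sketch: how Ring vs. Striped Attention could assign token indices to devices.
# Hypothetical illustration only, not the paper's implementation.
def ring_partition(seq_len: int, num_devices: int) -> list[list[int]]:
    """Contiguous blocks: device d holds tokens [d*B, (d+1)*B)."""
    block = seq_len // num_devices
    return [list(range(d * block, (d + 1) * block)) for d in range(num_devices)]

def striped_partition(seq_len: int, num_devices: int) -> list[list[int]]:
    """Striped layout: device d holds tokens d, d+P, d+2P, ... (evenly spread)."""
    return [list(range(d, seq_len, num_devices)) for d in range(num_devices)]

# With 8 tokens on 4 devices:
print(ring_partition(8, 4))     # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(striped_partition(8, 4))  # [[0, 4], [1, 5], [2, 6], [3, 7]]
```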

August 17, 2025 · Last updated on October 4, 2025 · 3 min · KKKZOZ

TPI-LLM Serving 70B-scale LLMs Efficiently on Low-resource Mobile Devices

Extensive Reading A similar paper: arxiv.org/pdf/2504.08791? Author Info Zonghang Li - Google Scholar Background LLM serving is shifting from the cloud to edge devices like smartphones and laptops. This trend is driven by growing privacy concerns, as users want to avoid sending their sensitive interaction data to cloud providers. The goal is to process user requests locally on their own devices. Preliminaries Maintaining the KV cache under tensor parallelism Challenges Hardware Limitations: Mobile devices have very limited memory (typically 4-16 GiB) and computing power, often lacking GPUs. Running a 70B-scale model can require over 40 GiB of memory, which far exceeds the capacity of a single device. Inefficient Parallelism: The standard solution for distributed systems, pipeline parallelism, is inefficient for home scenarios where only one request is processed at a time. This leaves many devices idle most of the time, wasting resources. Slow Memory Offloading: Existing on-device solutions like llama.cpp and Accelerate offload model data to disk to save RAM. However, their blocking disk I/O operations significantly slow down inference. Insights In a setting where low-resource devices cooperate, Tensor Parallelism is the right choice: there is only one user request at a time, so the goal of parallelism should be reducing latency rather than increasing throughput. Tensor Parallelism relies on allreduce operations to synchronize and aggregate partial results, and in this setting the communication bottleneck is link latency rather than network bandwidth, so a star-based allreduce is used to reduce the number of network hops and hence the latency. A sliding-window memory scheduler loads and unloads weights asynchronously on a separate background thread, hiding weight loading behind computation and synchronization. Approaches ...
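A rough sketch of the sliding-window idea above (class and method names are my own, not TPI-LLM's actual code): a background thread prefetches the next few layers' weights from disk while the current layer computes, and evicts layers that have fallen behind the window, so blocking disk I/O is hidden behind computation and synchronization.

```python
# Hypothetical sliding-window weight scheduler: prefetch upcoming layer
# weights on a background thread so disk I/O overlaps with compute.
import threading
from collections import OrderedDict

class SlidingWindowScheduler:
    def __init__(self, load_layer, num_layers, window=4):
        self.load_layer = load_layer   # e.g. reads one layer's weights from disk
        self.num_layers = num_layers
        self.window = window
        self.cache = OrderedDict()     # layer id -> weights currently held in RAM
        self.cond = threading.Condition()

    def _prefetch(self, start):
        for i in range(start, min(start + self.window, self.num_layers)):
            with self.cond:
                if i in self.cache:
                    continue
            weights = self.load_layer(i)     # blocking I/O, off the compute thread
            with self.cond:
                self.cache[i] = weights
                while len(self.cache) > self.window:   # drop layers behind the window
                    self.cache.popitem(last=False)
                self.cond.notify_all()

    def get(self, i):
        """Called by the compute thread; blocks only if layer i is not loaded yet."""
        threading.Thread(target=self._prefetch, args=(i,), daemon=True).start()
        with self.cond:
            while i not in self.cache:
                self.cond.wait()
            return self.cache[i]
```

In a sequential inference loop, `get(i)` for consecutive layers should usually return immediately, because layer i was already prefetched while layer i-1 was computing.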

August 17, 2025 · Last updated on August 26, 2025 · 2 min · KKKZOZ

LLM.int8() 8-bit Matrix Multiplication for Transformers at Scale

Extensive Reading Author Info About Me — Tim Dettmers: A research scientist at the Allen Institute for Artificial Intelligence (Ai2) and an incoming Assistant Professor at Carnegie Mellon University (CMU). Mike Lewis - Google Scholar Related Blogs LLM.int8() and Emergent Features — Tim Dettmers A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using transformers, accelerate and bitsandbytes Background There are two common 8-bit quantization schemes: Absmax quantization: find the maximum absolute value in the data (call it abs_max), compute a global scaling factor from it, multiply every value by this scaling factor, and round to the nearest integer to finish the quantization. Zeropoint quantization: find the minimum and maximum of the data, compute a scaling factor, and also introduce a zeropoint offset so that the full mapped integer range is used; higher precision, but more expensive. Challenges How to preserve high quantization precision at scales beyond 1B parameters? How to deal with the systematic outliers that emerge in all transformer layers starting at scales of 6.7B parameters? Insights Regular quantization methods introduce larger quantization errors for outliers. The number of outliers can be small, yet they contribute the majority of the LLM's quality. Isolate the outlier feature dimensions into a 16-bit matrix multiplication while all other values are multiplied in 8-bit. Approaches The approach consists of two main parts: ...
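A minimal numpy sketch of the absmax scheme described above (illustrative only; the actual LLM.int8() kernels add vector-wise scaling and outlier decomposition on top of this): scale by 127 / abs_max, round to the nearest integer, and divide by the same scale to dequantize.

```python
# Absmax INT8 quantization as described above (minimal sketch, not the
# bitsandbytes implementation).
import numpy as np

def absmax_quantize(x: np.ndarray):
    scale = 127.0 / np.max(np.abs(x))          # global scaling factor from abs_max
    x_int8 = np.round(x * scale).astype(np.int8)
    return x_int8, scale

def absmax_dequantize(x_int8: np.ndarray, scale: float) -> np.ndarray:
    return x_int8.astype(np.float32) / scale

x = np.array([-0.8, 0.1, 2.5, -1.2], dtype=np.float32)
q, scale = absmax_quantize(x)
print(q)                            # [-41   5 127 -61]
print(absmax_dequantize(q, scale))  # close to x, up to rounding error
```

A single large outlier (say, replacing 2.5 with 60.0) would shrink the scale and crush the small values toward zero, which is exactly why the paper isolates outlier dimensions into 16-bit.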

August 12, 2025 · Last updated on August 25, 2025 · 2 min · KKKZOZ

LLM Generated Content

Q: Can you explain git worktree with a more detailed example? Sure, no problem. Let's walk through a complete, detailed real-world scenario using git worktree. Scenario Your project: a website project named my-website. Your main directory: ~/dev/my-website Your main task: you are building a complex new feature, a "Photo Gallery". For this you created a branch named feature/gallery and have already written some code, but the work is far from finished. The interruption: the product manager comes over and says the copyright year in the site footer is still last year's; this is an urgent production bug that must be fixed right away! At this point the working tree of your feature/gallery branch is "messy", full of newly created and modified files that are not ready to commit. Step-by-step walkthrough Step 1: Check the current state First, let's look at the current working directory. # You are in your main project directory cd ~/dev/my-website # You are working on the gallery feature git status The output of git status might look like this: On branch feature/gallery Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git restore <file>..." to discard changes in working directory) modified: src/app.js Untracked files: (use "git add <file>..." to include in what will be committed) src/components/PhotoGallery.vue no changes added to commit (use "git add" and/or "git commit -a") At this point you cannot simply switch to the main branch, because that would force you to deal with these uncommitted changes, which is exactly what we want to avoid. ...

August 8, 2025 · Last updated on August 12, 2025 · 4 min · KKKZOZ

Deja Vu Contextual Sparsity for Efficient LLMs at Inference Time

Intensive Reading Author Info Zichang Liu: Research Scientist at Meta. Jue Wang, Ph.D: Founder & President of Stylar AI (stylar.ai). Tri Dao: Assistant Professor of Computer Science at Princeton University. Chief Scientist at Together AI. Background LLM Inference Latency Breakdown Challenges Speeding up inference-time sparse LLMs in wall-clock time while maintaining quality and in-context learning abilities remains a challenging problem. While sparsity and pruning have been well-studied, they have not seen wide adoption for LLMs due to the poor quality and efficiency trade-offs on modern hardware such as GPUs: ...

August 4, 2025 · Last updated on September 1, 2025 · 3 min · KKKZOZ

Fast On-device LLM Inference with NPUs

Intensive Reading Author Info Daliang Xu (徐大亮) - Daliang Xu's Website: An incoming Assistant Professor at BUPT. Hao Zhang - Google Scholar: Author of EdgeLLM. Mengwei Xu: An associate professor at BUPT. Professor Xuanzhe Liu @ Peking University: An Endowed Boya Distinguished Professor at the School of Computer Science in Peking University. Background The prefill stage is often the bottleneck in typical mobile applications. This is the setting the paper assumes; in most cases, though, shouldn't the decoding stage still be the bottleneck? Modern mobile SoCs ubiquitously include mobile neural processing units (NPUs) that are well-suited for integer operations, such as INT8-based matrix multiplication. ...

August 4, 2025 · Last updated on August 19, 2025 · 3 min · KKKZOZ

LLM Preliminaries

Math Vector-Matrix Multiplication Let's analyze the vector-matrix product $xW$ from three different perspectives. Suppose the vector $x$ has shape $(1, 3)$ and the matrix $W$ has shape $(3, 6)$. $$x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}$$ $$W = \begin{bmatrix} w_{11} & w_{12} & w_{13} & w_{14} & w_{15} & w_{16} \\ w_{21} & w_{22} & w_{23} & w_{24} & w_{25} & w_{26} \\ w_{31} & w_{32} & w_{33} & w_{34} & w_{35} & w_{36} \end{bmatrix}$$ By the rules of matrix multiplication, the result $y = xW$ has shape $(1, 6)$. Perspective 1: viewing W as a 2-D grid of elements This is the most basic, most fine-grained view. We treat the matrix $W$ as a $3 \times 6$ grid of numbers. Each element $y_j$ of the result vector $y$ is obtained by multiplying every element of $x$ with the corresponding element in column $j$ of $W$ and summing the products, i.e. $y_j = \sum_i x_i w_{ij}$. ...
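A quick numpy check of the element-wise view (my own illustration, not from the post): each output element $y_j$ is the dot product of $x$ with column $j$ of $W$, which matches the built-in matrix product.

```python
# Verify the element-wise view of y = xW: y_j = sum_i x_i * W[i, j].
import numpy as np

x = np.arange(1.0, 4.0).reshape(1, 3)     # shape (1, 3)
W = np.arange(1.0, 19.0).reshape(3, 6)    # shape (3, 6)

y_builtin = x @ W                         # shape (1, 6)
y_manual = np.array([[sum(x[0, i] * W[i, j] for i in range(3))
                      for j in range(6)]])

assert np.allclose(y_builtin, y_manual)
print(y_builtin)
```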

August 4, 2025 · Last updated on September 1, 2025 · 13 min · KKKZOZ

LLM in a flash Efficient Large Language Model Inference with Limited Memory

Intensive Reading Author Info Keivan Alizadeh-Vahid - Google Scholar Iman Mirzadeh: An ML Research Engineer at Apple. Background LLMs are hard to load on personal devices. The standard approach is to load the entire model into DRAM (Dynamic Random Access Memory) for inference. However, this severely limits the maximum model size that can be run. Challenges The primary challenge is that the memory footprint of large language models (LLMs) often exceeds the limited DRAM capacity of personal devices. While storing models on high-capacity flash memory is a potential solution, it introduces two new major challenges: ...

July 30, 2025 · Last updated on September 1, 2025 · 3 min · KKKZOZ