LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding

Extensive Reading · Author Info

Background

Early Exit (Dynamic Halting): these techniques attempt to stop the forward pass at an intermediate layer once the model is sufficiently confident in its prediction. Problems: in standard LLMs, early layers are “lazy” (they are never trained to produce final tokens), so exiting early causes severe accuracy drops; moreover, these methods typically require adding and training auxiliary “exit heads,” which increases parameter overhead.

Layer Pruning and Dropout: existing research has explored skipping layers (dropout) during training to make sub-networks robust, or pruning layers post-training for speed. Problems: standard uniform layer dropout does not specifically incentivize early layers to be accurate, and post-training pruning often degrades performance enough to require complex fine-tuning to recover.

Insights

Accelerate Large Language Model (LLM) inference by enabling the model to generate tokens using fewer layers when possible, while maintaining accuracy. ...
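A minimal sketch of confidence-based early exit, assuming a toy stack of layers with one shared output head; all names and shapes here are hypothetical illustrations, not LayerSkip's actual implementation:

```python
import torch
import torch.nn as nn

# Toy stand-ins for a transformer: a stack of layers plus a shared LM head.
hidden, vocab, n_layers = 64, 100, 8
layers = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers))
lm_head = nn.Linear(hidden, vocab)  # one head reused at every exit point

def early_exit_forward(x: torch.Tensor, threshold: float = 0.9):
    """Stop at the first layer whose prediction is confident enough."""
    h = x
    for i, layer in enumerate(layers):
        h = torch.relu(layer(h))
        probs = torch.softmax(lm_head(h), dim=-1)
        conf, token = probs.max(dim=-1)
        if conf.item() >= threshold:   # confident enough: exit early
            return token, i + 1        # predicted token, layers actually used
    return token, n_layers             # fell through: used the full stack

token, used = early_exit_forward(torch.randn(1, hidden))
print(f"predicted token {token.item()} using {used}/{n_layers} layers")
```

Note that this loop only pays off if early layers produce usable predictions; per the excerpt above, that is exactly what LayerSkip's training recipe is meant to achieve, since a vanilla model's early layers would rarely clear the threshold.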

February 9, 2026 · Last updated on February 9, 2026 · 3 min · KKKZOZ

Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding

Extensive Reading · Author Info

Prerequisite: Bayesian Optimization

Bayesian optimization is a strategy for global optimization, designed for finding extrema of black-box functions. It is especially suited to complex functions that are computationally expensive to evaluate, non-differentiable, or have no analytic expression. The core idea: rather than searching blindly, build a probabilistic model from the data gathered so far and use it to infer intelligently where to try next, so the global optimum is found in as few evaluations as possible.

Bayesian optimization consists of two key components:

Surrogate Model: a probabilistic approximation of the objective function, most commonly a Gaussian Process (GP). Unlike an ordinary regression model, the surrogate predicts not only the function value at an input point (the mean) but also an uncertainty range (the variance). Role: it tells us what the objective looks like given the points observed so far, and where we are fairly confident versus where we know nothing.

Acquisition Function: a function that uses the surrogate to guide the next decision; common choices are Expected Improvement (EI) and Upper Confidence Bound (UCB). It resolves the trade-off between exploration and exploitation: Exploitation: sample where the surrogate's predicted value is best, to refine the current local optimum. Exploration: sample where the surrogate's uncertainty (variance) is highest, to discover unknown, potentially better optima. Role: it scores the "potential value" of every point in the search space; the highest-scoring point is the parameter setting for the next experiment.

Optimization loop (an iterative cycle):
1. Observe: fit the surrogate model (Gaussian Process) to the current data points.
2. Decide: maximize the acquisition function to find the next most promising candidate $x$.
3. Evaluate: run the real system (the objective function) with parameter $x$ to obtain the true result $y$.
4. Update: add the new pair $(x, y)$ to the history and update the surrogate's posterior distribution.
5. Repeat until a preset iteration budget or a convergence criterion is met.

Any problem where the input dimensionality is low (typically < 20) and a single evaluation is slow or expensive is a natural fit for Bayesian optimization. ...
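A minimal sketch of this loop in Python, assuming a made-up 1-D toy objective and using scikit-learn's Gaussian process as the surrogate with Expected Improvement as the acquisition function (all hyperparameters are illustrative):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                    # pretend this is slow and expensive
    return -(x - 2.0) ** 2 + 1.0     # true optimum at x = 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(3, 1))   # a few initial observations
y = objective(X).ravel()

candidates = np.linspace(0, 5, 500).reshape(-1, 1)

for _ in range(10):
    # 1. Observe: fit the surrogate (GP) to the history so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    # 2. Decide: maximize Expected Improvement over a candidate grid.
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    # 3. Evaluate the real objective; 4. Update the history.
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x found:", X[np.argmax(y)].item(), "value:", y.max())
```

Maximizing EI over a fixed candidate grid keeps the sketch simple; a real optimizer would maximize the acquisition function with a continuous method instead.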

February 8, 2026 · Last updated on February 9, 2026 · 2 min · KKKZOZ

LLM in a flash: Efficient Large Language Model Inference with Limited Memory

Intensive Reading · Author Info

Keivan Alizadeh-Vahid - Google Scholar; Iman Mirzadeh: an ML Research Engineer at Apple.

Background

LLMs are hard to load on personal devices. The standard approach loads the entire model into DRAM (Dynamic Random Access Memory) for inference, but this severely limits the maximum model size that can be run.

Challenges

The primary challenge is that the memory footprint of large language models (LLMs) often exceeds the limited DRAM capacity of personal devices. While storing models on high-capacity flash memory is a potential solution, it introduces two new major challenges: ...
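To make the footprint concrete, a back-of-the-envelope calculation; the DRAM figure is an illustrative assumption for a typical personal device, not a number from the paper:

```python
# Rough memory footprint of dense fp16 inference vs. typical device DRAM.
params_7b = 7e9                     # parameter count of a 7B model
bytes_fp16 = 2                      # 2 bytes per parameter at half precision
weights_gb = params_7b * bytes_fp16 / 1e9
dram_gb = 8                         # assumed DRAM of a phone or small laptop

print(f"7B model weights: {weights_gb:.0f} GB")          # ~14 GB
print(f"fits in {dram_gb} GB of DRAM: {weights_gb < dram_gb}")
```

Even before activations and the KV cache, the weights alone overflow such a device, which is what motivates serving them from flash.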

July 30, 2025 · Last updated on February 9, 2026 · 3 min · KKKZOZ