DDIA: Chapter 6 Partitioning

The main reason for wanting to partition data is scalability. Normally, partitions are defined in such a way that each piece of data (each record, row, or document) belongs to exactly one partition. Partitioning and Replication: Partitioning is usually combined with replication so that copies of each partition are stored on multiple nodes. This means that, even though each record belongs to exactly one partition, it may still be stored on several different nodes for fault tolerance. ...
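
A minimal sketch of how these two mechanisms combine, assuming hypothetical node names and a simple hash-mod scheme (the chapter itself covers more robust approaches):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical nodes
NUM_PARTITIONS = 8
REPLICATION_FACTOR = 3

def partition_of(key: str) -> int:
    """Each record belongs to exactly one partition, chosen by key hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def replicas_of(partition: int) -> list[str]:
    """Each partition is still stored on several nodes for fault tolerance."""
    start = partition % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

key = "user:42"
p = partition_of(key)
print(f"{key} -> partition {p} on nodes {replicas_of(p)}")
```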

October 24, 2023 · Last updated on August 1, 2025 · 7 min · KKKZOZ

DDIA: Chapter 5 Replication

Replication Versus Partitioning: There are two common ways data is distributed across multiple nodes. Replication: Keeping a copy of the same data on several different nodes, potentially in different locations. Replication provides redundancy and can also help improve performance. Partitioning: Splitting a big database into smaller subsets called partitions so that different partitions can be assigned to different nodes (also known as sharding). These are separate mechanisms, but they often go hand in hand: ...
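
A minimal sketch of the replication idea on its own, assuming a toy in-memory store (hypothetical code, not the chapter's): every write is copied to all replicas, and a read can be served by any one of them.

```python
class ReplicatedKV:
    def __init__(self, num_replicas: int = 3):
        # redundancy: the same data lives on several "nodes"
        self.replicas = [dict() for _ in range(num_replicas)]

    def write(self, key, value):
        for replica in self.replicas:  # copy to every node
            replica[key] = value

    def read(self, key, replica_id: int = 0):
        # performance: reads can be spread across replicas
        return self.replicas[replica_id].get(key)

store = ReplicatedKV()
store.write("user:1", "Alice")
print(store.read("user:1", replica_id=2))  # any replica can answer
```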

October 23, 2023 · Last updated on August 1, 2025 · 14 min · KKKZOZ

DDIA: Chapter 4 Encoding and Evolution

Formats for Encoding Data: Two kinds of compatibility are introduced here; both come up later when the data encoding formats are analyzed: In order for the system to continue running smoothly, we need to maintain compatibility in both directions: Backward compatibility: Newer code can read data that was written by older code. Forward compatibility: Older code can read data that was written by newer code. A literal Chinese translation is problematic: English "forward/backward" is consistent across time and space, while Chinese is the opposite. For example, "forward" means ahead in space and the future in time, but the Chinese "前" means ahead in space yet the past in time. Backward compatibility is easy to understand: newer versions of software/hardware can use data produced by older versions. Translating forward compatibility as 向前兼容 is very easy to confuse; think of it instead as compatibility toward the future: older versions of software/hardware can use data produced by newer versions. A few examples: Intel's x86 CPUs are backward compatible, because a new CPU can still run old software. Intel guarantees that every instruction an older CPU had is retained in newer ones; this add-only, never-remove policy ensures that upgrading a CPU does not force us to replace much of our software. ...
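
A minimal sketch of both directions, assuming a hypothetical JSON-encoded user record where newer code added a nickname field:

```python
import json

def new_reader(encoded: str) -> dict:
    record = json.loads(encoded)
    # Backward compatibility: newer code reads data written by older
    # code by supplying a default for the field it added later.
    record.setdefault("nickname", "")
    return record

def old_reader(encoded: str) -> dict:
    record = json.loads(encoded)
    # Forward compatibility: older code reads data written by newer
    # code by ignoring fields it does not know about.
    known_fields = {"id", "name"}
    return {k: v for k, v in record.items() if k in known_fields}

old_data = json.dumps({"id": 1, "name": "Ada"})                   # written by old code
new_data = json.dumps({"id": 2, "name": "Bob", "nickname": "b"})  # written by new code

print(new_reader(old_data))  # newer code reading older data
print(old_reader(new_data))  # older code reading newer data
```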

October 21, 2023 · Last updated on August 1, 2025 · 9 min · KKKZOZ

DDIA: Chapter 2 Data Models and Query Languages

Relational Model Versus Document Model: The chapter begins with the birth of NoSQL: There are several driving forces behind the adoption of NoSQL databases, including: A need for greater scalability than relational databases can easily achieve, including very large datasets or very high write throughput; A widespread preference for free and open source software over commercial database products; Specialized query operations that are not well supported by the relational model; Frustration with the restrictiveness of relational schemas, and a desire for a more dynamic and expressive data model. It then uses the résumé shown in the figure below to illustrate the one-to-many relationship ...
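
A rough sketch of the kind of self-contained document such a résumé becomes, with one record holding many positions and many education entries (field names and data are illustrative, not copied from the book):

```python
# One user (the "one" side) owns lists of positions and education
# entries (the "many" side), all nested inside a single document.
resume = {
    "user_id": 251,
    "first_name": "Bill",
    "last_name": "Gates",
    "positions": [
        {"job_title": "Co-chair", "organization": "Gates Foundation"},
        {"job_title": "Co-founder", "organization": "Microsoft"},
    ],
    "education": [
        {"school_name": "Harvard University", "start": 1973, "end": 1975},
    ],
}
```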

October 20, 2023 · Last updated on August 1, 2025 · 7 min · KKKZOZ

DDIA: Chapter 3 Storage and Retrieval

This chapter is mainly about the lower-level internals of databases. In order to tune a storage engine to perform well on your kind of workload, you need to have a rough idea of what the storage engine is doing under the hood. Data Structures That Power Your Database: Index: Any kind of index usually slows down writes, because the index also needs to be updated every time data is written. This is an important trade-off in storage systems: well-chosen indexes speed up read queries, but every index slows down writes. ...
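
A minimal sketch of that trade-off, assuming a toy append-only log with an in-memory hash index (hypothetical code, not the book's): every write pays the extra cost of updating the index, and every read gets a single seek in return.

```python
class LogWithHashIndex:
    def __init__(self, path: str):
        self.path = path
        self.index: dict[str, int] = {}  # key -> byte offset in the log
        open(path, "ab").close()         # create the log file if missing

    def set(self, key: str, value: str) -> None:
        with open(self.path, "ab") as f:
            self.index[key] = f.tell()   # extra work on every write
            f.write(f"{key},{value}\n".encode())

    def get(self, key: str) -> str | None:
        offset = self.index.get(key)
        if offset is None:
            return None
        with open(self.path, "rb") as f:
            f.seek(offset)               # jump straight to the record
            line = f.readline().decode().rstrip("\n")
            _, value = line.split(",", 1)
            return value

db = LogWithHashIndex("/tmp/kvlog")
db.set("42", "hello")
print(db.get("42"))  # -> hello (one seek, thanks to the index)
```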

October 20, 2023 · Last updated on August 1, 2025 · 9 min · KKKZOZ

Estimating LLM Uncertainty with Evidence

Extensive Reading Author Info Background: Hallucinations exist in Large Language Models (LLMs), where models generate unreliable responses due to a lack of knowledge. Existing methods for estimating uncertainty to detect hallucinations are flawed. Failure of Probability-Based Methods: Traditional methods rely on softmax probabilities. The normalization process (softmax) causes a loss of "evidence strength" information. A high probability does not always mean the model is knowledgeable; it might simply mean one token is slightly better than others in a low-knowledge scenario. Conversely, a low probability might not mean ignorance; it could mean the model knows multiple valid answers (e.g., synonyms). Limitations of Sampling-Based Methods: Methods like Semantic Entropy require multiple sampling iterations, which is computationally expensive and fails to capture the model's inherent epistemic uncertainty (e.g., consistently producing the same incorrect answer due to a lack of training data). Insights: The reason probability-based methods fail to identify reliability is that probability is normalized. ...
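
A quick worked example of that normalization problem: logits of very different magnitude collapse to identical probabilities, so probability alone cannot distinguish strong evidence from weak.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

weak_evidence   = [2.0, 1.0]   # low-magnitude logits
strong_evidence = [10.0, 9.0]  # high-magnitude logits, same gap

print(softmax(weak_evidence))    # ~[0.731, 0.269]
print(softmax(strong_evidence))  # ~[0.731, 0.269], identical output
```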

February 2, 2026 · Last updated on February 2, 2026 · 4 min · KKKZOZ

R-Stitch: Dynamic Trajectory Stitching for Efficient Reasoning

Extensive Reading Author Info R-Stitch: Dynamic Trajectory Stitching for Efficient Reasoning. Background: Existing acceleration methods like Speculative Decoding have limitations: Rigid Consistency: They require the Small Language Model (SLM) to match the LLM's tokens exactly. If the SLM phrases a correct reasoning step differently, speculative decoding rejects it, wasting computation. Low Agreement: In complex reasoning tasks, token-level agreement between SLMs and LLMs is often low, leading to frequent rollbacks and minimal speed gains. ...
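
A minimal sketch of why token-exact verification is rigid, using hypothetical token sequences: the draft is accepted only up to the first mismatch, so a correct step phrased differently is discarded.

```python
def accept_prefix(draft_tokens: list[str], target_tokens: list[str]) -> list[str]:
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            break  # rollback: everything from this point on is discarded
        accepted.append(d)
    return accepted

slm_draft  = ["Thus", "x", "equals", "4"]  # correct, but phrased differently
llm_tokens = ["Therefore", "x", "=", "4"]  # what the LLM would emit

print(accept_prefix(slm_draft, llm_tokens))  # [] -- all draft work wasted
```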

February 2, 2026 · Last updated on February 2, 2026 · 3 min · KKKZOZ

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Extensive Reading Author Info: About me - Xunhao Lai. The author is good at writing Triton; here is another of his repos: XunhaoLai/native-sparse-attention-triton: Efficient triton implementation of Native Sparse Attention. Background: As LLM context windows expand (up to 1M+ tokens), the pre-filling phase (processing the input prompt) becomes prohibitively expensive due to the quadratic complexity of full attention ($O(n^2)$). Why prior sparse attention is insufficient: Many approaches use fixed sparse patterns (e.g., sliding window) or offline-discovered patterns/ratios. These often fail because: ...
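
A back-of-the-envelope sketch of the gap, assuming a causal mask and a hypothetical window size: full attention scores $O(n^2)$ query-key pairs, while a fixed sliding window scores only $O(nw)$.

```python
def attended_pairs_full(n: int) -> int:
    # causal full attention: token i attends to tokens 0..i
    return n * (n + 1) // 2

def attended_pairs_window(n: int, w: int) -> int:
    # each token attends to at most the w most recent tokens
    return sum(min(i + 1, w) for i in range(n))

n, w = 1_000_000, 4096  # hypothetical context length and window size
print(attended_pairs_full(n))       # ~5.0e11 pairs
print(attended_pairs_window(n, w))  # ~4.1e9 pairs, >100x fewer
```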

January 29, 2026 · Last updated on February 2, 2026 · 5 min · KKKZOZ