4 changes: 2 additions & 2 deletions docs/source/user-guide/sparse-attention/cacheblend.md
@@ -3,7 +3,7 @@

![blend_scheme.jpg](../../_static/images/blend_scheme.jpg)

**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper **
**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper**

[![License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
[![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)](https://python.org)
@@ -31,7 +31,7 @@ CacheBlend reduces TTFT by 2.2 ~ 3.3× and increases throughput by 2.8 ~ 5× und
1. **🔐 Chunk Hash Encoding**: Similar to the prefix hash encoder, hash all blocks in each chunk starting from the same initial hash meta.
2. **⚡ Combine Prefix Cache and Chunk Cache**: Since the chunk cache and the native prefix cache share the same hash space, ucm first performs a prefix cache lookup to fetch the fully reused cache and then conducts a chunk cache lookup to fetch the candidate cache for blending.
3. **🎯 Delta-Rope PostProcess**: Rectify the loaded chunk cache according to its position in the new request (see the sketch after this list).
3. **🔍 Integrate Cache Blend and First Token Generation**: Construct compute mask and attention meta according to HKVD tokens, cache miss tokens and suffix tokens, then compute their kv cache in a single model forward stage.
3. **🔍 Integrate Cache Blend and First Token Generation**: Construct the compute mask and attention meta according to the HKVD tokens, cache-miss tokens, and suffix tokens, then compute their KV cache in a single model forward pass.
4. **🚀 Comprehensive Hook for LLM Forward Pipeline**: Based on the ucm sparse module, the blend module sparsifies the prefill tokens not only in the attention stage but also in the FFN and layer stages.
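
For the Delta-Rope step, here is a minimal sketch of the underlying idea, assuming the rotate-half RoPE layout; `delta_rope` and its tensor shapes are illustrative only, not the ucm API:

```python
import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    half = x.shape[-1] // 2
    return torch.cat((-x[..., half:], x[..., :half]), dim=-1)


def delta_rope(k_cached: torch.Tensor, old_pos: torch.Tensor,
               new_pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # k_cached: [num_tokens, head_dim] keys already rotated at old_pos.
    # RoPE rotations compose, so rotating by (new_pos - old_pos) moves each
    # cached key to its position in the new request without a full recompute.
    dim = k_cached.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    delta = (new_pos - old_pos).to(torch.float32)           # [num_tokens]
    angles = delta[:, None] * inv_freq[None, :]             # [num_tokens, dim/2]
    cos = torch.cat((angles.cos(), angles.cos()), dim=-1)   # [num_tokens, dim]
    sin = torch.cat((angles.sin(), angles.sin()), dim=-1)
    return k_cached * cos + rotate_half(k_cached) * sin
```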

## 🚀 Quick Start
1 change: 1 addition & 0 deletions docs/source/user-guide/sparse-attention/index.md
@@ -41,4 +41,5 @@ esa
gsa
kvcomp
kvstar
cacheblend
:::
2 changes: 1 addition & 1 deletion examples/offline_inference_blend.py
@@ -186,7 +186,7 @@ def main():
# choose one data row in LongBenchV1 (wikimqa)
assert os.path.isfile(
path_to_dataset
), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/path/to/longbench/multifieldqa_zh.jsonl`"
), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/home/data/Longbench/data/2wikimqa.jsonl`"
with open(path_to_dataset, "r") as f:
lines = f.readlines()
dataset_row = json.loads(lines[0])
2 changes: 2 additions & 0 deletions ucm/sparse/blend/blend.py
@@ -189,6 +189,8 @@ def build_sparse_meta(

def _update_attn_metadata(self):
    # update attn_metadata, because we sparsify the prefill tokens
    # golden kv caches are available in the current blend layer, so we may want to cache all of them
    # and modify slot_mapping at the beginning of the next layer/attn
self.attn_metadata.slot_mapping = self.attn_metadata.slot_mapping[
self.blend_req_metas.compute_mask
]
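
As a toy illustration of the masking applied above (hypothetical names and indices, not the ucm API), a compute mask could be assembled from the HKVD, cache-miss, and suffix token indices and then used to shrink `slot_mapping`:

```python
import torch

num_tokens = 12
hkvd_idx = torch.tensor([2, 5])      # high-deviation tokens to recompute
miss_idx = torch.tensor([8, 9])      # tokens whose chunk cache was not found
suffix_idx = torch.tensor([10, 11])  # new tokens after the cached chunks

# mark every token that must actually be computed in the forward pass
compute_mask = torch.zeros(num_tokens, dtype=torch.bool)
compute_mask[torch.cat((hkvd_idx, miss_idx, suffix_idx))] = True

slot_mapping = torch.arange(num_tokens)    # stand-in for real KV slot ids
slot_mapping = slot_mapping[compute_mask]  # only masked tokens keep their slots
```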