
Conversation


@Kotomi-Du Kotomi-Du commented Dec 3, 2025

Description

This PR adds a small subgraph (Gather + ScatterElementsUpdate) around the KV cache to allow OpenVINO to reorder the KV cache during model inference. This pattern is optimized out by the OV GPU plugin if no related information is provided (done in OV 33114).

The graph below shows how this PR changes an ONNX model when the makeStateful() path is triggered.

[image: model graph before/after the makeStateful() path]

Motivation and Context

The Microsoft Phi-Silica application leverages tree-based speculative decoding to accelerate LLM inference. This technique requires frequent manipulation of past KV cache states (e.g., trimming and reordering), because only a single branch of the speculative draft tree is accepted after verification.

On the other hand, the KV cache API currently available in OV is very slow and cannot meet MSFT's requirements. Details are in CVS-174809. As the OV team suggested, the only way to support the reorder feature is to add specific nodes to the original graph. This PR serves that purpose.

Open

  • If NPU does not want this path, a device-specific flag has to be added.

Does this feature go into the new ABI?

Yes

Jira Ticket :

CVS-176367

@Kotomi-Du Kotomi-Du marked this pull request as draft December 3, 2025 20:15
@Kotomi-Du Kotomi-Du force-pushed the update_kvcache_node branch from 2a0d722 to 899feb5 Compare December 6, 2025 01:31
@Kotomi-Du Kotomi-Du force-pushed the update_kvcache_node branch from 899feb5 to 5432bd4 Compare December 6, 2025 01:32
@Kotomi-Du Kotomi-Du marked this pull request as ready for review December 9, 2025 05:03
Copilot AI left a comment


Pull request overview

This PR adds support for KV cache reordering in the OpenVINO stateful LLM model to enable tree-based speculative decoding. It introduces a new subgraph pattern (Gather + ScatterElementsUpdate) that allows OpenVINO to perform KV cache reordering during inference, which can be optimized out by the GPU if not needed.

Key changes:

  • Adds new graph nodes (src_idx, dst_idx parameters and Gather/ScatterElementsUpdate operations) to enable KV cache manipulation
  • Implements ReorderKVCache API across the backend stack with parsing logic for comma-separated index pairs
  • Stores reorder indices in StatefulOVInferRequest for processing during inference

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 7 comments.

| File | Description |
| --- | --- |
| ov_stateful_patch_utils.h | Adds opset12 include for the ScatterElementsUpdate operation |
| ov_stateful_patch_utils.cc | Implements the new KV cache reorder subgraph with src_idx/dst_idx parameters and Gather/ScatterElementsUpdate nodes |
| ov_interface.h | Declares the ReorderKVCache method and adds member variables for storing reorder indices |
| ov_interface.cc | Implements ReorderKVCache with index validation and tensor population logic using hardcoded shape values |
| openvino_execution_provider.cc | Adds kvcache_reorder option parsing to convert the semicolon-delimited string format into index vectors |
| ibackend.h | Adds a virtual ReorderKVCache method to the IBackend interface |
| basic_backend.h/.cc | Implements ReorderKVCache to propagate calls to the inference request pool |
| backend_manager.h/.cc | Implements ReorderKVCache as a pass-through to the concrete backend |


```cpp
}
ovInfReq.set_tensor("src_idx", src_idx_tensor);
ov::Tensor dst_idx_tensor = ov::Tensor(ov::element::i32, {1, 32, kv_dst_indices.size(), 96});
for (int i = 0; i < kv_dst_indices.size(); ++i) {
```
Copilot AI Dec 9, 2025


Loop variable i should be size_t instead of int to match the type of kv_dst_indices.size() and avoid signed/unsigned comparison warnings.

Suggested change:

```diff
-for (int i = 0; i < kv_dst_indices.size(); ++i) {
+for (size_t i = 0; i < kv_dst_indices.size(); ++i) {
```


```cpp
kv_src_indices.clear();
kv_dst_indices.clear();
for (int i = 0; i < src_indices.size(); ++i) {
```
Copilot AI Dec 9, 2025


Loop variable i should be size_t instead of int to match the type of src_indices.size() and avoid signed/unsigned comparison warnings.

Suggested change:

```diff
-for (int i = 0; i < src_indices.size(); ++i) {
+for (size_t i = 0; i < src_indices.size(); ++i) {
```


```cpp
// Flag to add a Gather+ScatterElementsUpdate subgraph to reorder the KV cache for LLM speculative decoding
// TODO: extend to the NPU device when OpenVINO NPU has the related optimization
bool is_support_kvcache_reorder = device.find("GPU") != std::string::npos;
```


There are two places that decide whether kv_cache_reorder is supported: here, and within StatefulOVInferRequest::StatefulOVInferRequest. Can we tie these together by having both call a helper function? It probably makes sense to define a new function exposed through ov_stateful_patch_utils.



4 participants