forked from vllm-project/vllm
Add Flashinfer cudnn Backend for ViT #30
Open

maxyanghu wants to merge 34 commits into mlperf-inf-mm-q3vl-v6.0 from vit-attn-cudnn-backend
Changes from all commits (34 commits):
- `d4597f3` add implementation (maxyanghu)
- `7cbf291` add impl (maxyanghu)
- `8713291` add flashinfer (maxyanghu)
- `f9362fb` fix tp (maxyanghu)
- `d48087f` Temporary change for ViT (Anerudhan)
- `71eeda2` fix workspace_buffer device. (b-mu)
- `392b3ac` change max_seqlen to 128k. (b-mu)
- `772a17b` remove duplicate multiplier. (b-mu)
- `c38e8c4` fix accuracy and refactor (maxyanghu)
- `19d5ffa` more fix (maxyanghu)
- `47af3e1` change dockerfile (maxyanghu)
- `a09a785` format (maxyanghu)
- `bfd41ec` fix version (maxyanghu)
- `5599eb4` change python version (maxyanghu)
- `76b1482` remove qwen25 transformer support (maxyanghu)
- `fec4833` change dockerfile (maxyanghu)
- `9a8c2d5` add build versions (maxyanghu)
- `f6a2ee7` chagne version (maxyanghu)
- `4b9aa2a` change version (maxyanghu)
- `f782e97` change (maxyanghu)
- `56868a9` change (maxyanghu)
- `c2ca450` change (maxyanghu)
- `1d8b7ec` change (maxyanghu)
- `413260e` change (maxyanghu)
- `7a2ac66` build image (maxyanghu)
- `e8d34b7` change back (maxyanghu)
- `5adb294` change to 10.0f (maxyanghu)
- `bc90e8f` fix fi import (maxyanghu)
- `2d1286d` change to build in dev image (maxyanghu)
- `42858c6` change location (maxyanghu)
- `c9a8f9b` change location (maxyanghu)
- `89703a4` change (maxyanghu)
- `9431a61` change cubin and jitcache to wheels (maxyanghu)
- `0e0f19e` change (maxyanghu)
Diff (unified format; lines prefixed with + were added, lines prefixed with - were removed):

@@ -13,6 +13,7 @@
 from vllm.v1.attention.ops.vit_attn_wrappers import (
     vit_fa4_flash_attn_wrapper,
     vit_flash_attn_wrapper,
+    vit_flashinfer_wrapper,
     vit_torch_sdpa_wrapper,
 )

@@ -34,6 +35,7 @@ def __init__(
         num_kv_heads: int | None = None,
         prefix: str = "",
         multimodal_config: MultiModalConfig | None = None,
+        workspace_buffer: torch.Tensor | None = None,  # Only used for FlashInfer
     ) -> None:
         """
         Args:
@@ -49,10 +51,10 @@ def __init__(
         self.num_heads = num_heads
         self.head_size = head_size
-        self.scale = scale
+        self.scale = 1.0 / (head_size**0.5) if scale is None else scale

[Review comment on the new default] What is this default scale factor based on?

         self.num_kv_heads = num_heads if num_kv_heads is None else num_kv_heads
         self.layer_name = prefix

+        self.workspace_buffer = workspace_buffer
         assert self.num_heads % self.num_kv_heads == 0, (
             f"num_heads ({self.num_heads}) is not "
             f"divisible by num_kv_heads ({self.num_kv_heads})"
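On the review question above: `1.0 / (head_size ** 0.5)` is presumably the conventional scaled dot-product attention factor 1/sqrt(d_k), which is also what PyTorch's `scaled_dot_product_attention` applies when no explicit `scale` is given. A minimal standalone sketch of that equivalence (illustrative only, not the vLLM code path; requires PyTorch >= 2.1 for the `scale` argument):

```python
import math

import torch
import torch.nn.functional as F

head_size = 64
# (batch, num_heads, seq_len, head_size) layout expected by F.scaled_dot_product_attention
q = torch.randn(1, 8, 16, head_size)
k = torch.randn(1, 8, 16, head_size)
v = torch.randn(1, 8, 16, head_size)

# Same default as the diff above: 1 / sqrt(head_size)
default_scale = 1.0 / math.sqrt(head_size)

# Passing that scale explicitly matches SDPA's own behaviour with scale=None.
out_explicit = F.scaled_dot_product_attention(q, k, v, scale=default_scale)
out_default = F.scaled_dot_product_attention(q, k, v)
assert torch.allclose(out_explicit, out_default, atol=1e-6)
```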
@@ -185,6 +187,27 @@ def _forward_fa(
         output = output.reshape(bsz, q_len, -1)
         return output

+    def _forward_flashinfer(
+        self,
+        query: torch.Tensor,
+        key: torch.Tensor,
+        value: torch.Tensor,
+        cu_seqlens: torch.Tensor | None = None,
+        max_seqlen: torch.Tensor | None = None,
+        sequence_lengths: torch.Tensor
+        | None = None,  # Only used for FlashInfer CuDNN backend
+    ) -> torch.Tensor:
+        return vit_flashinfer_wrapper(
+            q=query,
+            k=key,
+            v=value,
+            scale=self.scale,
+            workspace_buffer=self.workspace_buffer,
+            cu_seqlens=cu_seqlens,
+            max_seqlen=max_seqlen,
+            sequence_lengths=sequence_lengths,
+        )
+
     def _forward_fa4(
         self,
         query: torch.Tensor,
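For readers unfamiliar with the two varlen descriptors above, a hedged sketch of how they presumably relate (the callers that actually build `sequence_lengths` for the cuDNN path are not shown in this hunk): with packed variable-length sequences, the per-image lengths are just the consecutive differences of the cumulative offsets.

```python
import torch

# Cumulative sequence boundaries for three packed images, as used by the
# flash-attention style varlen APIs (example values, chosen arbitrarily).
cu_seqlens = torch.tensor([0, 1024, 1536, 4096], dtype=torch.int32)

# Per-sequence lengths, which appears to be what the FlashInfer cuDNN backend
# consumes via `sequence_lengths`: here 1024, 512 and 2560.
sequence_lengths = cu_seqlens[1:] - cu_seqlens[:-1]

# A matching `max_seqlen` upper bound for the flash-attention paths.
max_seqlen = int(sequence_lengths.max())
```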
@@ -226,6 +249,8 @@ def forward_native(
         value: torch.Tensor,
         cu_seqlens: torch.Tensor | None = None,
         max_seqlen: torch.Tensor | None = None,  # Only used for Flash Attention
+        sequence_lengths: torch.Tensor
+        | None = None,  # Only used for FlashInfer CuDNN backend
     ) -> torch.Tensor:
         return self._forward_sdpa(query, key, value, cu_seqlens)
@@ -236,11 +261,17 @@ def forward_cuda(
         value: torch.Tensor,
         cu_seqlens: torch.Tensor | None = None,
         max_seqlen: torch.Tensor | None = None,  # Only used for Flash Attention
+        sequence_lengths: torch.Tensor
+        | None = None,  # Only used for FlashInfer CuDNN backend
     ) -> torch.Tensor:
         if self.is_fa4_backend:
             return self._forward_fa4(query, key, value, cu_seqlens, max_seqlen)
         elif self.is_flash_attn_backend:
             return self._forward_fa(query, key, value, cu_seqlens, max_seqlen)
+        elif self.attn_backend == AttentionBackendEnum.FLASHINFER:
+            return self._forward_flashinfer(
+                query, key, value, cu_seqlens, max_seqlen, sequence_lengths
+            )
         elif self.attn_backend == AttentionBackendEnum.TORCH_SDPA:
             return self._forward_sdpa(query, key, value, cu_seqlens)
         else:
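One practical note on the new FLASHINFER branch above, offered as a hedged sketch rather than something taken from this diff: FlashInfer-style kernels typically want a persistent byte workspace allocated once on the device the attention runs on, which appears to be what the new `workspace_buffer` constructor argument carries (and what the "fix workspace_buffer device." commit in this PR touches). The helper name and the 128 MiB size below are assumptions, not values from this PR.

```python
import torch

# Assumed size: 128 MiB is a common FlashInfer workspace choice.
WORKSPACE_BYTES = 128 * 1024 * 1024


def make_vit_attn_workspace(device: torch.device | str) -> torch.Tensor:
    """Allocate a reusable byte workspace on the device the ViT attention runs on."""
    return torch.empty(WORKSPACE_BYTES, dtype=torch.uint8, device=device)


# Usage sketch: allocate once, then pass it as `workspace_buffer=` when the
# attention layer is constructed so every call reuses the same buffer.
# workspace = make_vit_attn_workspace("cuda:0")
```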
@@ -256,6 +287,8 @@ def forward_cpu(
         value: torch.Tensor,
         cu_seqlens: torch.Tensor | None = None,
         max_seqlen: torch.Tensor | None = None,  # Only used for Flash Attention
+        sequence_lengths: torch.Tensor
+        | None = None,  # Only used for FlashInfer CuDNN backend
     ) -> torch.Tensor:
         return self._forward_sdpa(query, key, value, cu_seqlens)
@@ -266,6 +299,8 @@ def forward_xpu(
         value: torch.Tensor,
         cu_seqlens: torch.Tensor | None = None,
         max_seqlen: torch.Tensor | None = None,  # Only used for Flash Attention
+        sequence_lengths: torch.Tensor
+        | None = None,  # Only used for FlashInfer CuDNN backend
     ) -> torch.Tensor:
         assert self.is_flash_attn_backend, (
             "XPU only supports FLASH_ATTN for vision attention."
Review comments:

I am not quite familiar with the FlashInfer build, so please correct me if I'm wrong. Regarding `tool/flashinfer-build.sh`: while our usage may be different, I feel we should also set `FI_TORCH_CUDA_ARCH_LIST` in our case, which could reduce the image build time a lot.

`python3` is used throughout the other places in this Dockerfile. Let's switch to `python3` here as well for consistency; also, plain `python` may not work.
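To make the first suggestion concrete, here is a hedged Python sketch of the idea. `FI_TORCH_CUDA_ARCH_LIST` and `tool/flashinfer-build.sh` come from the comment above; the `"10.0f"` value is only an assumption (hinted at by this PR's "change to 10.0f" commit), and in practice this would be wired into the Dockerfile rather than run from a helper script like this.

```python
import os
import subprocess

# Narrow the set of CUDA architectures FlashInfer compiles for, so the image
# build does not spend time on targets the deployment never uses. The value
# here is an assumed example, not taken from the Dockerfile in this PR.
env = os.environ.copy()
env["FI_TORCH_CUDA_ARCH_LIST"] = "10.0f"

# Run the existing build script referenced in the review comment with the
# narrowed architecture list.
subprocess.run(["bash", "tool/flashinfer-build.sh"], env=env, check=True)
```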