Point flash attention reference to kernelize repo #1

Merged
sjw36 merged 1 commit into kernelize-ai:main from leonematt:update/cmake on Sep 5, 2025

Conversation

@leonematt (Member) commented Sep 4, 2025

This commit updates the flash-attention repository reference to point to the kernelize flash-attention repository.

Signed-off-by: Matthew Leon <matthew.leon.tech@gmail.com>
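
For context, the change presumably amounts to swapping a repository URL in the build configuration (the branch is named update/cmake). Below is a minimal, hypothetical sketch assuming the dependency is pulled in via CMake's FetchContent; the file, target name, fork URL, and pinned ref are illustrative assumptions, not taken from this PR's diff:

```cmake
include(FetchContent)

# Before (assumed): reference to the upstream flash-attention repository.
# FetchContent_Declare(
#   flash_attention
#   GIT_REPOSITORY https://github.com/Dao-AILab/flash-attention.git
#   GIT_TAG        main)

# After (assumed): point the reference at the kernelize fork instead.
FetchContent_Declare(
  flash_attention
  GIT_REPOSITORY https://github.com/kernelize-ai/flash-attention.git  # assumed fork URL
  GIT_TAG        main  # pinning a commit SHA would be more reproducible
)
FetchContent_MakeAvailable(flash_attention)
```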
@leonematt requested a review from sjw36 September 4, 2025 21:52
@github-actions bot commented Sep 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@sjw36 merged this pull request into kernelize-ai:main on Sep 5, 2025
3 checks passed
@leonematt deleted the update/cmake branch September 10, 2025 15:46