[WIP] implement build matrix for release workflow #559
base: main
Conversation
Note: Gemini is unable to generate a summary for this pull request because the file types involved are not currently supported.
needs: test
if: ${{ inputs.buildMusaCann }}
runs-on: ubuntu-latest
strategy:
  fail-fast: false
  matrix:
    include:
      - name: musa
        target: final-llamacpp
        platforms: "linux/amd64"
        tag_suffix: "-musa"
        variant: "musa"
        base_image: "mthreads/musa:rc4.3.0-runtime-ubuntu22.04-amd64"
- name: Build SGLang CUDA image
  uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
  with:
    file: Dockerfile
    target: final-sglang
    platforms: linux/amd64
    build-args: |
      "LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}"
      "LLAMA_SERVER_VARIANT=cuda"
      "BASE_IMAGE=nvidia/cuda:12.9.0-runtime-ubuntu24.04"
      "SGLANG_VERSION=${{ inputs.sglangVersion }}"
    push: true
    sbom: true
    provenance: mode=max
    tags: ${{ steps.tags.outputs.sglang-cuda }}
- name: cann
  target: final-llamacpp
  platforms: "linux/arm64, linux/amd64"
  tag_suffix: "-cann"
  variant: "cann"
  base_image: "ascendai/cann:8.2.rc2-910b-ubuntu22.04-py3.11"
- name: Build ROCm image
  uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
steps:
  - name: Checkout repo
    uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- name: Format tags
  id: tags
  shell: bash
  run: |
    echo "tags<<EOF" >> "$GITHUB_OUTPUT"
    echo "docker/model-runner:${{ inputs.releaseTag }}${{ matrix.tag_suffix }}" >> "$GITHUB_OUTPUT"
    if [ "${{ inputs.pushLatest }}" == "true" ]; then
      echo "docker/model-runner:latest${{ matrix.tag_suffix }}" >> "$GITHUB_OUTPUT"
    fi
    echo 'EOF' >> "$GITHUB_OUTPUT"
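The tags<<EOF … EOF pattern is the standard GITHUB_OUTPUT heredoc syntax for multi-line step outputs: everything between the two markers becomes the step's tags output. As a rough illustration, with hypothetical inputs releaseTag=v1.2.3 and pushLatest=true on the musa matrix entry, the step would append something like this to $GITHUB_OUTPUT:

    tags<<EOF
    docker/model-runner:v1.2.3-musa
    docker/model-runner:latest-musa
    EOF

so ${{ steps.tags.outputs.tags }} resolves to a newline-separated tag list, which is a format docker/build-push-action accepts for its tags input.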
- name: Log in to DockerHub
  uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef
  with:
    file: Dockerfile
    target: final-llamacpp
    platforms: linux/amd64
    build-args: |
      "LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}"
      "LLAMA_SERVER_VARIANT=rocm"
      "BASE_IMAGE=rocm/dev-ubuntu-22.04"
    push: true
    sbom: true
    provenance: mode=max
    tags: ${{ steps.tags.outputs.rocm }}
    username: "docker"
    password: ${{ secrets.ORG_ACCESS_TOKEN }}
- name: Build MUSA image
  if: ${{ inputs.buildMusaCann }}
  uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
- name: Set up Buildx
  uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
  with:
    file: Dockerfile
    target: final-llamacpp
    platforms: linux/amd64
    build-args: |
      "LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}"
      "LLAMA_SERVER_VARIANT=musa"
      "BASE_IMAGE=mthreads/musa:rc4.3.0-runtime-ubuntu22.04-amd64"
    push: true
    sbom: true
    provenance: mode=max
    tags: ${{ steps.tags.outputs.musa }}
    version: "lab:latest"
    driver: cloud
    endpoint: "docker/make-product-smarter"
    install: true
- name: Build CANN image
  if: ${{ inputs.buildMusaCann }}
- name: Build and push ${{ matrix.name }} image
  uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
  with:
    file: Dockerfile
    target: final-llamacpp
    platforms: linux/arm64, linux/amd64
    target: ${{ matrix.target }}
    platforms: ${{ matrix.platforms }}
    build-args: |
      "LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}"
      "LLAMA_SERVER_VARIANT=cann"
      "BASE_IMAGE=ascendai/cann:8.2.rc2-910b-ubuntu22.04-py3.11"
      LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}
      LLAMA_SERVER_VARIANT=${{ matrix.variant }}
      BASE_IMAGE=${{ matrix.base_image }}
    push: true
    sbom: true
    provenance: mode=max
    tags: ${{ steps.tags.outputs.cann }}
    tags: ${{ steps.tags.outputs.tags }}
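The old per-variant lines and the new matrix-driven lines are interleaved in the hunk above; reading only the matrix-parameterised side, the replacement step presumably ends up roughly like this (a sketch reconstructed from the visible added lines, with indentation assumed):

    # One run of this step per matrix include entry (musa, cann).
    - name: Build and push ${{ matrix.name }} image
      uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
      with:
        file: Dockerfile
        target: ${{ matrix.target }}
        platforms: ${{ matrix.platforms }}
        build-args: |
          LLAMA_SERVER_VERSION=${{ inputs.llamaServerVersion }}
          LLAMA_SERVER_VARIANT=${{ matrix.variant }}
          BASE_IMAGE=${{ matrix.base_image }}
        push: true
        sbom: true
        provenance: mode=max
        tags: ${{ steps.tags.outputs.tags }}

Each matrix entry then supplies its own target, platforms, base image, and tag suffix, replacing the hard-coded per-variant build steps.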
Check warning
Code scanning / CodeQL
Workflow does not contain permissions (Medium)
Copilot Autofix
AI about 13 hours ago
In general, the fix is to explicitly declare permissions for the workflow or for specific jobs so that the default, potentially broad, permissions of GITHUB_TOKEN are not used. The minimal needed permission here is read access to repository contents, since all jobs use actions/checkout and do not appear to modify GitHub resources (no pushes, releases, or PR/issue operations).
The best fix with no behavior change is to add a single permissions block at the top level of .github/workflows/release.yml, alongside name / run-name / on, so that all jobs inherit these restricted permissions. We set contents: read, which is sufficient for actions/checkout to read the repository and does not interfere with Docker Hub login or image pushes (those use secrets.ORG_ACCESS_TOKEN, not GITHUB_TOKEN). No additional methods, imports, or definitions are needed; this is purely a YAML configuration change in .github/workflows/release.yml. Concretely, insert:
permissions:
  contents: read

right after the run-name or before the on: block.
@@ -1,5 +1,7 @@
 name: Release model-runner images for CE
 run-name: Release model-runner images for CE, version ${{ inputs.releaseTag }}
+permissions:
+  contents: read

 on:
   workflow_dispatch:
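The autofix note also mentions that permissions can be scoped per job instead of for the whole workflow; a minimal sketch of that alternative (not part of the suggested patch, and the job name here is hypothetical):

    jobs:
      release:              # hypothetical job name, for illustration only
        runs-on: ubuntu-latest
        permissions:
          contents: read    # enough for actions/checkout; grants GITHUB_TOKEN nothing else
        steps:
          - uses: actions/checkout@v4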
No description provided.