From 990586528fb8a5383e3d8a427823bbe3b4ec2669 Mon Sep 17 00:00:00 2001 From: Kunal Vaishnavi Date: Mon, 9 Feb 2026 20:20:18 +0000 Subject: [PATCH 1/3] Add recipes for Qwen-3 0.6B only --- Qwen-Qwen3-0.6B/LICENSE | 197 ++++++++++++++++++ .../cpu/Qwen-Qwen3-0.6B_cpu_fp32.json | 23 ++ Qwen-Qwen3-0.6B/cpu/README.md | 25 +++ Qwen-Qwen3-0.6B/cpu/info.yaml | 6 + .../cuda/Qwen-Qwen3-0.6B_cuda_fp16.json | 23 ++ Qwen-Qwen3-0.6B/cuda/README.md | 25 +++ Qwen-Qwen3-0.6B/cuda/info.yaml | 6 + .../webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json | 23 ++ Qwen-Qwen3-0.6B/webgpu/README.md | 25 +++ Qwen-Qwen3-0.6B/webgpu/info.yaml | 6 + 10 files changed, 359 insertions(+) create mode 100644 Qwen-Qwen3-0.6B/LICENSE create mode 100644 Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json create mode 100644 Qwen-Qwen3-0.6B/cpu/README.md create mode 100644 Qwen-Qwen3-0.6B/cpu/info.yaml create mode 100644 Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json create mode 100644 Qwen-Qwen3-0.6B/cuda/README.md create mode 100644 Qwen-Qwen3-0.6B/cuda/info.yaml create mode 100644 Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json create mode 100644 Qwen-Qwen3-0.6B/webgpu/README.md create mode 100644 Qwen-Qwen3-0.6B/webgpu/info.yaml diff --git a/Qwen-Qwen3-0.6B/LICENSE b/Qwen-Qwen3-0.6B/LICENSE new file mode 100644 index 00000000..d03a1412 --- /dev/null +++ b/Qwen-Qwen3-0.6B/LICENSE @@ -0,0 +1,197 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf of + any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + Copyright 2024 Alibaba Cloud + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json b/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json new file mode 100644 index 00000000..1feacdbf --- /dev/null +++ b/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json @@ -0,0 +1,23 @@ +{ + "input_model": { + "type": "HfModel", + "model_path": "Qwen/Qwen3-0.6B", + "task": "text-generation" + }, + "systems": { + "local_system": { + "type": "LocalSystem", + "accelerators": [ + { "device": "cpu", "execution_providers": ["CPUExecutionProvider"] } + ] + } + }, + "engine": { "target": "local_system" }, + "passes": { + "builder": { + "type": "ModelBuilder", + "precision": "fp32", + "extra_options": {} + } + } +} diff --git a/Qwen-Qwen3-0.6B/cpu/README.md b/Qwen-Qwen3-0.6B/cpu/README.md new file mode 100644 index 00000000..ca4e355f --- /dev/null +++ b/Qwen-Qwen3-0.6B/cpu/README.md @@ -0,0 +1,25 @@ +# Qwen-Qwen3-0.6B — CPU optimization + +This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the CPU EP. 
+ +## What this folder is for + +- Execution Provider: CPU EP +- Typical precision: FP32 precision by default +- Example recipe filename: Qwen-Qwen3-0.6B_cpu_fp32.json + +## Setup + +1) Install Olive (version compatible with your repo). +2) Install the appropriate runtime package for this backend: + - onnxruntime-genai (CPU build) +3) Run Olive to build/optimize the model + - olive run --config Qwen-Qwen3-0.6B_cpu_fp32.json + +Additional notes: +- Optional: Use INT4 or INT4 + INT8 quantization recipes to improve throughput on CPU. +- Runs purely on CPU; no GPU required. + +--- + +This README was auto-generated for the CPU EP of Qwen-Qwen3-0.6B. diff --git a/Qwen-Qwen3-0.6B/cpu/info.yaml b/Qwen-Qwen3-0.6B/cpu/info.yaml new file mode 100644 index 00000000..35c8934b --- /dev/null +++ b/Qwen-Qwen3-0.6B/cpu/info.yaml @@ -0,0 +1,6 @@ +arch: qwen3 +recipes: + - name: Qwen-Qwen3-0.6B_cpu_fp32 + file: Qwen-Qwen3-0.6B_cpu_fp32.json + devices: cpu + eps: CPUExecutionProvider diff --git a/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json b/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json new file mode 100644 index 00000000..a0edd57a --- /dev/null +++ b/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json @@ -0,0 +1,23 @@ +{ + "input_model": { + "type": "HfModel", + "model_path": "Qwen/Qwen3-0.6B", + "task": "text-generation" + }, + "systems": { + "local_system": { + "type": "LocalSystem", + "accelerators": [ + { "device": "gpu", "execution_providers": ["CUDAExecutionProvider"] } + ] + } + }, + "engine": { "target": "local_system" }, + "passes": { + "builder": { + "type": "ModelBuilder", + "precision": "fp16", + "extra_options": {} + } + } +} diff --git a/Qwen-Qwen3-0.6B/cuda/README.md b/Qwen-Qwen3-0.6B/cuda/README.md new file mode 100644 index 00000000..7531b653 --- /dev/null +++ b/Qwen-Qwen3-0.6B/cuda/README.md @@ -0,0 +1,25 @@ +# Qwen-Qwen3-0.6B — CUDA optimization + +This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the CUDA EP. + +## What this folder is for + +- Execution Provider: CUDA EP +- Typical precision: FP16 precision by default +- Example recipe filename: Qwen-Qwen3-0.6B_cuda_fp16.json + +## Setup + +1) Install Olive (version compatible with your repo). +2) Install the appropriate runtime package for this backend: + - onnxruntime-genai-cuda (CUDA build) with compatible CUDA/cuDNN versions +3) Run Olive to build/optimize the model + - olive run --config Qwen-Qwen3-0.6B_cuda_fp16.json + +Additional notes: +- Ensure CUDA and cuDNN versions are compatible with your onnxruntime-genai package. +- Requires an NVIDIA GPU and matching CUDA drivers/toolkit. + +--- + +This README was auto-generated for the CUDA EP of Qwen-Qwen3-0.6B. 
diff --git a/Qwen-Qwen3-0.6B/cuda/info.yaml b/Qwen-Qwen3-0.6B/cuda/info.yaml new file mode 100644 index 00000000..b7f61922 --- /dev/null +++ b/Qwen-Qwen3-0.6B/cuda/info.yaml @@ -0,0 +1,6 @@ +arch: qwen3 +recipes: + - name: Qwen-Qwen3-0.6B_cuda_fp16 + file: Qwen-Qwen3-0.6B_cuda_fp16.json + devices: gpu + eps: CUDAExecutionProvider diff --git a/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json b/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json new file mode 100644 index 00000000..8bc77e0d --- /dev/null +++ b/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json @@ -0,0 +1,23 @@ +{ + "input_model": { + "type": "HfModel", + "model_path": "Qwen/Qwen3-0.6B", + "task": "text-generation" + }, + "systems": { + "local_system": { + "type": "LocalSystem", + "accelerators": [ + { "device": "gpu", "execution_providers": ["WebGpuExecutionProvider"] } + ] + } + }, + "engine": { "target": "local_system" }, + "passes": { + "builder": { + "type": "ModelBuilder", + "precision": "fp16", + "extra_options": {} + } + } +} diff --git a/Qwen-Qwen3-0.6B/webgpu/README.md b/Qwen-Qwen3-0.6B/webgpu/README.md new file mode 100644 index 00000000..547d8b14 --- /dev/null +++ b/Qwen-Qwen3-0.6B/webgpu/README.md @@ -0,0 +1,25 @@ +# Qwen-Qwen3-0.6B — WebGPU optimization + +This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the WebGPU EP. + +## What this folder is for + +- Execution Provider: WebGPU EP +- Typical precision: FP16 precision by default +- Example recipe filename: Qwen-Qwen3-0.6B_webgpu_fp16.json + +## Setup + +1) Install Olive (version compatible with your repo). +2) Install the appropriate runtime package for this backend: + - onnxruntime-webgpu and onnxruntime-genai +3) Run Olive to build/optimize the model + - olive run --config Qwen-Qwen3-0.6B_webgpu_fp16.json + +Additional notes: +- Ensure onnxruntime-genai is installed with the --no-deps flag. Otherwise, it will install the CPU build of ONNX Runtime and override your WebGPU build. +- Runs in a WebGPU-capable environment. + +--- + +This README was auto-generated for the WebGPU EP of Qwen-Qwen3-0.6B. 
diff --git a/Qwen-Qwen3-0.6B/webgpu/info.yaml b/Qwen-Qwen3-0.6B/webgpu/info.yaml new file mode 100644 index 00000000..205aa499 --- /dev/null +++ b/Qwen-Qwen3-0.6B/webgpu/info.yaml @@ -0,0 +1,6 @@ +arch: qwen3 +recipes: + - name: Qwen-Qwen3-0.6B_webgpu_fp16 + file: Qwen-Qwen3-0.6B_webgpu_fp16.json + devices: gpu + eps: WebGpuExecutionProvider From ceb5c1ea1504817a4b204115d8b189486b0cac47 Mon Sep 17 00:00:00 2001 From: Kunal Vaishnavi Date: Tue, 10 Feb 2026 00:32:49 +0000 Subject: [PATCH 2/3] Only keep Qwen-3 0.6B CPU recipe --- .../cpu/Qwen-Qwen3-0.6B_cpu_fp32.json | 23 ---------- ...Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json | 42 +++++++++++++++++++ Qwen-Qwen3-0.6B/cpu/README.md | 8 ++-- Qwen-Qwen3-0.6B/cpu/info.yaml | 4 +- .../cuda/Qwen-Qwen3-0.6B_cuda_fp16.json | 23 ---------- Qwen-Qwen3-0.6B/cuda/README.md | 25 ----------- Qwen-Qwen3-0.6B/cuda/info.yaml | 6 --- .../webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json | 23 ---------- Qwen-Qwen3-0.6B/webgpu/README.md | 25 ----------- Qwen-Qwen3-0.6B/webgpu/info.yaml | 6 --- 10 files changed, 48 insertions(+), 137 deletions(-) delete mode 100644 Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json create mode 100644 Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json delete mode 100644 Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json delete mode 100644 Qwen-Qwen3-0.6B/cuda/README.md delete mode 100644 Qwen-Qwen3-0.6B/cuda/info.yaml delete mode 100644 Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json delete mode 100644 Qwen-Qwen3-0.6B/webgpu/README.md delete mode 100644 Qwen-Qwen3-0.6B/webgpu/info.yaml diff --git a/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json b/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json deleted file mode 100644 index 1feacdbf..00000000 --- a/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_fp32.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "input_model": { - "type": "HfModel", - "model_path": "Qwen/Qwen3-0.6B", - "task": "text-generation" - }, - "systems": { - "local_system": { - "type": "LocalSystem", - "accelerators": [ - { "device": "cpu", "execution_providers": ["CPUExecutionProvider"] } - ] - } - }, - "engine": { "target": "local_system" }, - "passes": { - "builder": { - "type": "ModelBuilder", - "precision": "fp32", - "extra_options": {} - } - } -} diff --git a/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json b/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json new file mode 100644 index 00000000..d331c2c8 --- /dev/null +++ b/Qwen-Qwen3-0.6B/cpu/Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json @@ -0,0 +1,42 @@ +{ + "input_model": { + "type": "HfModel", + "model_path": "Qwen/Qwen3-0.6B", + "load_kwargs": { + "torch_dtype": "float16" + } + }, + "passes": { + "s": { + "type": "SelectiveMixedPrecision", + "algorithm": "kld_gradient", + "bits": 4, + "high_bits": 8, + "ratio": 0.65, + "sym": false, + "group_size": 32 + }, + "g": { + "type": "gptq", + "bits": 4, + "sym": false, + "group_size": 32 + }, + "r": { + "type": "rtn", + "bits": 4, + "sym": false, + "group_size": 32, + "lm_head": true, + "embeds": true + }, + "m": { + "type": "ModelBuilder", + "precision": "int4" + } + }, + "log_severity_level": 0, + "output_dir": "model", + "cache_dir": "cache", + "no_artifacts": true +} diff --git a/Qwen-Qwen3-0.6B/cpu/README.md b/Qwen-Qwen3-0.6B/cpu/README.md index ca4e355f..7af2629d 100644 --- a/Qwen-Qwen3-0.6B/cpu/README.md +++ b/Qwen-Qwen3-0.6B/cpu/README.md @@ -5,8 +5,8 @@ This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the ## What this folder is for - Execution Provider: CPU 
EP -- Typical precision: FP32 precision by default -- Example recipe filename: Qwen-Qwen3-0.6B_cpu_fp32.json +- Typical precision: INT4 precision by default +- Example recipe filename: Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json ## Setup @@ -14,10 +14,10 @@ This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the 2) Install the appropriate runtime package for this backend: - onnxruntime-genai (CPU build) 3) Run Olive to build/optimize the model - - olive run --config Qwen-Qwen3-0.6B_cpu_fp32.json + - olive run --config Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json Additional notes: -- Optional: Use INT4 or INT4 + INT8 quantization recipes to improve throughput on CPU. +- Optional: Use best practices when considering accuracy vs. memory to improve throughput on CPU. - Runs purely on CPU; no GPU required. --- diff --git a/Qwen-Qwen3-0.6B/cpu/info.yaml b/Qwen-Qwen3-0.6B/cpu/info.yaml index 35c8934b..9eec9650 100644 --- a/Qwen-Qwen3-0.6B/cpu/info.yaml +++ b/Qwen-Qwen3-0.6B/cpu/info.yaml @@ -1,6 +1,6 @@ arch: qwen3 recipes: - - name: Qwen-Qwen3-0.6B_cpu_fp32 - file: Qwen-Qwen3-0.6B_cpu_fp32.json + - name: Qwen-Qwen3-0.6B_cpu_int4_kld_gradient + file: Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json devices: cpu eps: CPUExecutionProvider diff --git a/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json b/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json deleted file mode 100644 index a0edd57a..00000000 --- a/Qwen-Qwen3-0.6B/cuda/Qwen-Qwen3-0.6B_cuda_fp16.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "input_model": { - "type": "HfModel", - "model_path": "Qwen/Qwen3-0.6B", - "task": "text-generation" - }, - "systems": { - "local_system": { - "type": "LocalSystem", - "accelerators": [ - { "device": "gpu", "execution_providers": ["CUDAExecutionProvider"] } - ] - } - }, - "engine": { "target": "local_system" }, - "passes": { - "builder": { - "type": "ModelBuilder", - "precision": "fp16", - "extra_options": {} - } - } -} diff --git a/Qwen-Qwen3-0.6B/cuda/README.md b/Qwen-Qwen3-0.6B/cuda/README.md deleted file mode 100644 index 7531b653..00000000 --- a/Qwen-Qwen3-0.6B/cuda/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# Qwen-Qwen3-0.6B — CUDA optimization - -This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the CUDA EP. - -## What this folder is for - -- Execution Provider: CUDA EP -- Typical precision: FP16 precision by default -- Example recipe filename: Qwen-Qwen3-0.6B_cuda_fp16.json - -## Setup - -1) Install Olive (version compatible with your repo). -2) Install the appropriate runtime package for this backend: - - onnxruntime-genai-cuda (CUDA build) with compatible CUDA/cuDNN versions -3) Run Olive to build/optimize the model - - olive run --config Qwen-Qwen3-0.6B_cuda_fp16.json - -Additional notes: -- Ensure CUDA and cuDNN versions are compatible with your onnxruntime-genai package. -- Requires an NVIDIA GPU and matching CUDA drivers/toolkit. - ---- - -This README was auto-generated for the CUDA EP of Qwen-Qwen3-0.6B. 
diff --git a/Qwen-Qwen3-0.6B/cuda/info.yaml b/Qwen-Qwen3-0.6B/cuda/info.yaml deleted file mode 100644 index b7f61922..00000000 --- a/Qwen-Qwen3-0.6B/cuda/info.yaml +++ /dev/null @@ -1,6 +0,0 @@ -arch: qwen3 -recipes: - - name: Qwen-Qwen3-0.6B_cuda_fp16 - file: Qwen-Qwen3-0.6B_cuda_fp16.json - devices: gpu - eps: CUDAExecutionProvider diff --git a/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json b/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json deleted file mode 100644 index 8bc77e0d..00000000 --- a/Qwen-Qwen3-0.6B/webgpu/Qwen-Qwen3-0.6B_webgpu_fp16.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "input_model": { - "type": "HfModel", - "model_path": "Qwen/Qwen3-0.6B", - "task": "text-generation" - }, - "systems": { - "local_system": { - "type": "LocalSystem", - "accelerators": [ - { "device": "gpu", "execution_providers": ["WebGpuExecutionProvider"] } - ] - } - }, - "engine": { "target": "local_system" }, - "passes": { - "builder": { - "type": "ModelBuilder", - "precision": "fp16", - "extra_options": {} - } - } -} diff --git a/Qwen-Qwen3-0.6B/webgpu/README.md b/Qwen-Qwen3-0.6B/webgpu/README.md deleted file mode 100644 index 547d8b14..00000000 --- a/Qwen-Qwen3-0.6B/webgpu/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# Qwen-Qwen3-0.6B — WebGPU optimization - -This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the WebGPU EP. - -## What this folder is for - -- Execution Provider: WebGPU EP -- Typical precision: FP16 precision by default -- Example recipe filename: Qwen-Qwen3-0.6B_webgpu_fp16.json - -## Setup - -1) Install Olive (version compatible with your repo). -2) Install the appropriate runtime package for this backend: - - onnxruntime-webgpu and onnxruntime-genai -3) Run Olive to build/optimize the model - - olive run --config Qwen-Qwen3-0.6B_webgpu_fp16.json - -Additional notes: -- Ensure onnxruntime-genai is installed with the --no-deps flag. Otherwise, it will install the CPU build of ONNX Runtime and override your WebGPU build. -- Runs in a WebGPU-capable environment. - ---- - -This README was auto-generated for the WebGPU EP of Qwen-Qwen3-0.6B. diff --git a/Qwen-Qwen3-0.6B/webgpu/info.yaml b/Qwen-Qwen3-0.6B/webgpu/info.yaml deleted file mode 100644 index 205aa499..00000000 --- a/Qwen-Qwen3-0.6B/webgpu/info.yaml +++ /dev/null @@ -1,6 +0,0 @@ -arch: qwen3 -recipes: - - name: Qwen-Qwen3-0.6B_webgpu_fp16 - file: Qwen-Qwen3-0.6B_webgpu_fp16.json - devices: gpu - eps: WebGpuExecutionProvider From 559afc4d443e1efeb77eee63ecc3a8dc00945576 Mon Sep 17 00:00:00 2001 From: Kunal Vaishnavi Date: Tue, 10 Feb 2026 00:38:56 +0000 Subject: [PATCH 3/3] Show installing Olive from main branch in README --- Qwen-Qwen3-0.6B/cpu/README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/Qwen-Qwen3-0.6B/cpu/README.md b/Qwen-Qwen3-0.6B/cpu/README.md index 7af2629d..374d456e 100644 --- a/Qwen-Qwen3-0.6B/cpu/README.md +++ b/Qwen-Qwen3-0.6B/cpu/README.md @@ -10,7 +10,8 @@ This folder contains Olive recipes for optimizing Qwen-Qwen3-0.6B targeting the ## Setup -1) Install Olive (version compatible with your repo). +1) Install the main branch of Olive: + - pip install git+https://github.com/microsoft/olive.git 2) Install the appropriate runtime package for this backend: - onnxruntime-genai (CPU build) 3) Run Olive to build/optimize the model
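
Usage sketch (supplementary, not part of the patch itself): after the final commit the only remaining recipe is the CPU INT4 one, which chains a KLD-gradient mixed-precision selection pass with GPTQ and RTN quantization and then exports through ModelBuilder. The Python below is a minimal, hypothetical smoke test of the resulting model with onnxruntime-genai. It assumes the recipe has already been run with `olive run --config Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json`, that the optimized model lands under the recipe's output_dir of `model/` (the exact subfolder can vary by Olive version), that an onnxruntime-genai release exposing `Generator.append_tokens` is installed, and that Qwen3 follows a ChatML-style prompt template; none of these details come from the patch itself.

```python
# Hypothetical smoke test for the optimized Qwen3-0.6B CPU INT4 model.
# Assumes `olive run --config Qwen-Qwen3-0.6B_cpu_int4_kld_gradient.json`
# has finished and placed the ONNX model plus genai_config.json under ./model
# (adjust MODEL_DIR if your Olive version nests the output differently).
import onnxruntime_genai as og

MODEL_DIR = "model"  # the recipe's output_dir; may be a subfolder such as model/model

model = og.Model(MODEL_DIR)
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# ChatML-style template assumed for Qwen3; verify against the model's tokenizer config.
prompt = "<|im_start|>user\nWhat is ONNX Runtime?<|im_end|>\n<|im_start|>assistant\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

while not generator.is_done():
    generator.generate_next_token()
    # get_next_tokens() returns the newly generated token id(s); decode and stream them
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```

If the CUDA or WebGPU recipes from the first commit are ever restored, the same script should work against their outputs by pointing MODEL_DIR at the corresponding folder, provided the matching onnxruntime-genai build is installed.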