docs: add tutorial series #484

base: master

Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing a comment.
Summary of Changes

Hello @yoursanonymous, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the project's documentation by introducing a new, comprehensive tutorial series. These tutorials are designed to guide users through real-world production scenarios for batch workloads on Kubernetes using Volcano, bridging the gap between basic examples and complex end-to-end configurations. The series covers critical aspects like distributed machine learning, big data processing, GPU optimization, multi-tenancy, and workflow orchestration.
Activity
Code Review
This pull request introduces a valuable and comprehensive tutorial series covering several real-world production scenarios with Volcano. The documentation is well-structured and addresses important use cases. I've identified a few issues, including a critical error in one of the YAML manifests that would prevent it from working, an incorrect field in another example, and a broken link on the main tutorial page. My review includes specific suggestions to correct these points. With these fixes, this will be an excellent addition to the project's documentation.
```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: gpu-sharing-tutorial
spec:
  minAvailable: 1
  schedulerName: volcano
  template:
    spec:
      containers:
      - name: gpu-container
        image: nvidia/cuda:11.0-base
        command: ["sh", "-c", "nvidia-smi && sleep 3600"]
        resources:
          limits:
            volcano.sh/vgpu-number: 1    # Request 1 virtual GPU
            volcano.sh/vgpu-memory: 2000 # Limit to 2000MiB of GPU memory
            volcano.sh/vgpu-cores: 20    # Limit to 20% of GPU compute
      restartPolicy: Never
```
The provided Volcano Job manifest is invalid. The `spec.template` field is not a valid field for a `batch.volcano.sh/v1alpha1` Job. You should use `spec.tasks`, a list of tasks in which each task contains a pod template. The current manifest will fail to apply. I've suggested a corrected manifest that wraps the pod template inside a task.
```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: gpu-sharing-tutorial
spec:
  minAvailable: 1
  schedulerName: volcano
  tasks:
  - replicas: 1
    name: gpu-task
    template:
      spec:
        containers:
        - name: gpu-container
          image: nvidia/cuda:11.0-base
          command: ["sh", "-c", "nvidia-smi && sleep 3600"]
          resources:
            limits:
              volcano.sh/vgpu-number: 1    # Request 1 virtual GPU
              volcano.sh/vgpu-memory: 2000 # Limit to 2000MiB of GPU memory
              volcano.sh/vgpu-cores: 20    # Limit to 20% of GPU compute
        restartPolicy: Never
```
> [!TIP]
> Each tutorial is designed to be self-contained. However, if you are new to Volcano, we recommend starting with the [Quick Start: Deploy a Volcano Job](https://volcano.sh/en/docs/tutorials/#quick-start-deploy-a-volcano-job) guide.
```yaml
- name: volcano-job-step
  resource:
    action: create
    successCondition: status.state == Completed # Wait for the Job to finish
```
The `successCondition` for the resource template is incorrect. The `status.state` field of a Volcano Job is an object, so checking it for equality with a string will fail. To check for completion, you should evaluate the `phase` subfield: the condition should be `status.state.phase == 'Completed'`. Note that `Completed` must be a string literal.
```diff
-    successCondition: status.state == Completed # Wait for the Job to finish
+    successCondition: status.state.phase == 'Completed' # Wait for the Job to finish
```
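For context, here is a minimal sketch of where this condition sits inside an Argo Workflows `resource` template. The workflow and template names are illustrative (not from the PR), and the `failureCondition` line is an assumption about how one would typically pair the two conditions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volcano-job-workflow-   # illustrative name, not from the PR
spec:
  entrypoint: volcano-job-step
  templates:
  - name: volcano-job-step
    resource:
      action: create
      # Argo polls the created resource until one of these conditions holds
      successCondition: status.state.phase == 'Completed'
      failureCondition: status.state.phase == 'Failed'   # assumed pairing, not in the PR
      manifest: |
        # The Volcano Job manifest goes here
```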
Check the logs to verify that the container correctly detects the GPU environment via `nvidia-smi`:

```bash
kubectl logs gpu-sharing-tutorial-completion-task-0
```
Following the correction to the Job manifest, the pod name used in this `kubectl logs` command will be incorrect. A Volcano pod's name is constructed as `<job-name>-<task-name>-<index>`. With the suggested manifest fix (which names the task `gpu-task`), the correct pod name would be `gpu-sharing-tutorial-gpu-task-0`.
```diff
-kubectl logs gpu-sharing-tutorial-completion-task-0
+kubectl logs gpu-sharing-tutorial-gpu-task-0
```
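The `<job-name>-<task-name>-<index>` convention can be sketched in shell. This is just an illustration of how the pod name is assembled, using the job and task names from the suggested manifest:

```shell
#!/bin/sh
# Volcano names a task's pods as <job-name>-<task-name>-<index>
JOB_NAME="gpu-sharing-tutorial"
TASK_NAME="gpu-task"   # task name from the suggested manifest
INDEX=0                # index of the first (and only) replica
POD_NAME="${JOB_NAME}-${TASK_NAME}-${INDEX}"
echo "${POD_NAME}"     # gpu-sharing-tutorial-gpu-task-0
```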
Signed-off-by: vinayak sharma <vinayaks0111@gmail.com>
Force-pushed: 648ee4a to ab33c7e (Compare)
Please fix the CI failures, thanks.
/kind documentation
This pull request introduces a new, comprehensive tutorial series focused on real-world production scenarios. It addresses the gap between basic examples and the complex end-to-end configurations required for production environments.