
Conversation

@yao531441
Collaborator

Description

Support PolyLingua on Gaudi Platform

Issues

n/a.

Type of change

  • New feature (non-breaking change which adds new functionality)

Dependencies

List any newly introduced third-party dependencies, if applicable.

Tests

Added PolyLingua/tests/test_compose_on_gaudi.sh, a new end-to-end test script that validates PolyLingua on the Gaudi platform.

Copilot AI review requested due to automatic review settings December 17, 2025 09:30
@github-actions

github-actions bot commented Dec 17, 2025

Dependency Review

✅ No vulnerabilities or license issues found.

Scanned Files

None

Contributor

Copilot AI left a comment


Pull request overview

This PR adds support for running PolyLingua on the Intel Gaudi (HPU) platform by introducing Gaudi-specific configuration and test infrastructure.

Key Changes

  • Added a comprehensive end-to-end test script for Gaudi platform deployment
  • Created a Docker Compose configuration for Gaudi/HPU deployment using vLLM with the Habana runtime
  • Added a NUM_CARDS environment variable to support multi-card Gaudi configurations (a sketch of the resulting service definition follows below)
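
The compose file itself is not reproduced in this conversation, so the following is only a minimal sketch of what a Gaudi vLLM service along these lines typically looks like. The image tag, the NUM_CARDS variable, and the Qwen2.5-7B-Instruct default come from this PR's summary; the service name, the remaining environment variables, the port mapping, and the command flags are assumptions, not the PR's exact contents.

```yaml
# Hypothetical sketch of a Gaudi vLLM service; everything except the image,
# NUM_CARDS, and the default model is an assumption, not this PR's compose.yaml.
services:
  vllm-service:
    image: opea/vllm-gaudi:1.4
    runtime: habana                        # run the container with the Habana runtime on Gaudi hosts
    environment:
      HABANA_VISIBLE_DEVICES: all          # expose the HPU cards to the container (assumed)
      HF_TOKEN: ${HF_TOKEN}                # Hugging Face token for pulling the model (assumed)
      NUM_CARDS: ${NUM_CARDS:-1}           # new variable from set_env.sh; number of Gaudi cards to use
    command: >
      --model ${LLM_MODEL_ID:-Qwen/Qwen2.5-7B-Instruct}
      --tensor-parallel-size ${NUM_CARDS:-1}
    ports:
      - "8008:80"                          # host port is an assumption
```

Mapping NUM_CARDS to --tensor-parallel-size is the usual way a card count is consumed by vLLM, but how this PR actually wires the variable in is not visible in this conversation.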

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 7 comments.

Summary per file

| File | Description |
| --- | --- |
| PolyLingua/tests/test_compose_on_gaudi.sh | New end-to-end test script for validating PolyLingua on the Gaudi platform |
| PolyLingua/docker_compose/intel/hpu/gaudi/compose.yaml | Docker Compose configuration for the Gaudi platform, running vLLM with the Habana runtime |
| PolyLingua/set_env.sh | Added the NUM_CARDS environment variable for Gaudi multi-card support |
| PolyLingua/.env.example | Documented the NUM_CARDS configuration option |
| PolyLingua/README.md | Added Qwen2.5-7B-Instruct as the Gaudi default model |
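
Taken together, these files suggest the usual bring-up flow on a Gaudi host: export the new variable, source the environment script, start the compose stack, and run the end-to-end test. The commands below are only a sketch based on the paths listed above; the variables that set_env.sh actually expects, and whether the test script performs its own deployment, are not shown in this conversation.

```bash
# Hypothetical usage flow; paths come from the file list above, everything else is assumed.
cd PolyLingua

export HF_TOKEN=<your_huggingface_token>   # assumed prerequisite for pulling the model
export NUM_CARDS=1                         # new variable introduced by this PR (e.g. 2, 4, or 8 for multi-card)
source set_env.sh

# Start the Gaudi deployment defined by the new compose file.
docker compose -f docker_compose/intel/hpu/gaudi/compose.yaml up -d

# Exercise the running stack with the new end-to-end test.
bash tests/test_compose_on_gaudi.sh
```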


Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: Yao, Qing <qing.yao@intel.com>

services:
  vllm-service:
    image: opea/vllm-gaudi:1.4
Collaborator


Please align the vllm-gaudi version with the other Gaudi examples.
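
Assuming the other Gaudi examples pin their vllm-gaudi image through a shared tag or variable, one way to act on this suggestion is to parameterize the tag rather than hard-code 1.4. The variable name below is hypothetical and used only to illustrate the idea:

```yaml
services:
  vllm-service:
    # VLLM_GAUDI_TAG is a hypothetical variable; the point is to reference one
    # tag that all Gaudi examples share instead of a hard-coded 1.4.
    image: opea/vllm-gaudi:${VLLM_GAUDI_TAG:-1.4}
```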

Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: Yao, Qing <qing.yao@intel.com>