
Update README with token match rate on text backbone #52

Open

sdeeptan-aws wants to merge 1 commit into aws-neuron:main from sdeeptan-aws:qwenvl32b

Conversation

@sdeeptan-aws
Contributor

Description

Updated Qwen2.5-VL-32B-Instruct contrib model README with 100% token match accuracy on text backbone. Qwen2.5-VL is a vision-language model with 64 decoder layers. Two key validation findings: (1) AutoModelForCausalLM does not work for VLMs — must use Qwen2ForCausalLM to load the HF reference, and (2) compiled model must use the full 64 layers, not a reduced test build. With the correct text backbone extraction and full layer count, the model achieves 100% token match.

Model Information

Model Name: Qwen2.5-VL-32B-Instruct
Model Architecture: Multimodal vision-language model (Qwen2-based decoder-only transformer, 64 layers)
Purpose: Vision-language understanding and text generation / instruction following

Checklist

Required Components

  • Accuracy Test (test/integration/test_model.py)
    • Validates model generation and coherence
    • Performance benchmarks (TTFT, throughput)
    • Test can compile and run the model on Neuron
  • README.md with the following sections:
    • Usage Example: Clear code example showing how to use the model
    • Compatibility Matrix: Table showing tested Neuron SDK versions and instance types
    • Example Checkpoints: Links to compatible model checkpoints
    • Testing Instructions: Command to run the test suite for the model
  • Source Code (src/)
    • Modeling code following NxD Inference patterns (unchanged in this PR)

Optional Components

  • Unit Tests (CPU or Neuron-based)

Folder Structure

/contrib/models/Qwen2.5-VL-32B-Instruct/
  README.md
  /src
    modeling_qwen2_5_vl.py
  /test
    /integration
      test_model.py

Testing

The model was compiled and tested with TP=2, batch_size=1, seq_len=128, bfloat16. Only the text backbone was validated; the vision components have not yet been verified.

  1. Text backbone extraction: AutoModelForCausalLM fails for VLMs — must use Qwen2ForCausalLM to load HF reference
  2. Layer count verification: Compiled model must have full 64 layers — test builds with reduced layers (e.g., 4) produce poor accuracy
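The text-backbone extraction described above can be sketched as follows (a minimal illustration; the helper name is an assumption and not part of the test suite):

```python
def load_text_backbone(model_id: str = "Qwen/Qwen2.5-VL-32B-Instruct"):
    """Load only the language-model backbone of a Qwen2.5-VL checkpoint.

    AutoModelForCausalLM fails to resolve VLM checkpoints, so the text
    backbone must be loaded explicitly with Qwen2ForCausalLM.
    """
    import torch
    from transformers import Qwen2ForCausalLM  # not AutoModelForCausalLM

    return Qwen2ForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the tested configuration
    )
```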

Test Results:

Test            Status   Result
Smoke Test      ✅ PASS  Model loads successfully
Token Matching  ✅ PASS  100% match (text backbone, 64 layers)
TTFT (P50)      ✅ PASS  7.98 ms
Throughput      ✅ PASS  120.65 tok/s

Compatibility

Tested with:

  • Instance Type(s): Trn1
  • Configuration: TP=2, batch_size=1, seq_len=128, bfloat16

Additional Information

  • AutoModelForCausalLM doesn't work: VLMs register with AutoModelForVision2Seq or similar. Use Qwen2ForCausalLM directly for the text backbone.
  • Layer count matters: Test builds may compile with reduced layers (e.g., 4 instead of 64) for faster iteration. Always verify num_hidden_layers in compiled config.json before validation.
  • Text-only validation: The LLM backbone can be validated independently of vision components.
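The num_hidden_layers check suggested above can be a small pre-validation guard (a hypothetical helper, not part of this PR's code):

```python
import json


def assert_full_layer_count(config_path: str, expected: int = 64) -> int:
    """Fail fast if the compiled model was built with reduced layers.

    Reduced test builds (e.g. 4 layers instead of 64) load and run, but
    produce poor accuracy, so verify num_hidden_layers in the compiled
    config.json before running token-match validation.
    """
    with open(config_path) as f:
        config = json.load(f)
    actual = config["num_hidden_layers"]
    if actual != expected:
        raise ValueError(
            f"Compiled model has {actual} layers, expected {expected}; "
            "this looks like a reduced test build."
        )
    return actual
```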

Related Issues

N/A

vLLM Integration

  • This model/feature is intended for use with vLLM
  • Documentation includes vLLM registration instructions

By submitting this PR, I confirm that:

  • I have read and followed the contributing guidelines
  • This is a community contribution and may have limited testing compared to officially-supported models
  • The code follows best practices and is well-documented
  • All required components listed above are included


@aws-yishanm aws-yishanm left a comment


Approved because Readme and test were present.

@petesraj-aws petesraj-aws self-requested a review February 23, 2026 21:08
