
Conversation

@asamal4
Collaborator

@asamal4 asamal4 commented Jan 20, 2026

Description

Use max_completion_tokens instead of deprecated max_tokens
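
For illustration, a minimal sketch of the rename as it might appear in a litellm.completion call; the model name, message, and parameter values below are hypothetical, not taken from the repository:

import litellm

llm_params = {"temperature": 0.0, "max_completion_tokens": 512}  # illustrative values

response = litellm.completion(
    model="openai/gpt-4o-mini",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Hello"}],
    temperature=llm_params.get("temperature"),
    # Previously passed as max_tokens; newer OpenAI-style APIs deprecate that key.
    max_completion_tokens=llm_params.get("max_completion_tokens"),
)
print(response.choices[0].message.content)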

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Unit tests improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: N/A

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Refactor
    • Updated LLM parameter naming from "max_tokens" to "max_completion_tokens" across the system for improved API alignment.


@coderabbitai
Contributor

coderabbitai bot commented Jan 20, 2026

Walkthrough

The parameter key for LLM maximum completion tokens is renamed from "max_tokens" to "max_completion_tokens" consistently across the custom LLM handler, DeepEval manager, base LLM manager, and all corresponding test files. No logic or control flow changes are introduced.

Changes

Core LLM Implementation: src/lightspeed_evaluation/core/llm/custom.py, src/lightspeed_evaluation/core/llm/deepeval.py, src/lightspeed_evaluation/core/llm/manager.py
Parameter key renamed from max_tokens to max_completion_tokens in the litellm.completion payload construction, the LiteLLMModel initialization, and the public get_llm_params()/get_model_info() return dictionaries.

Test Files: tests/unit/core/llm/test_deepeval_manager.py, tests/unit/core/llm/test_llm_manager.py, tests/unit/core/llm/test_manager.py
Test fixtures and assertions updated to validate the max_completion_tokens key instead of max_tokens in llm_params dictionaries and model info outputs.
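
As a rough sketch of what the renamed key means for the public accessors (the class and attribute names below are assumptions for illustration, not copied from manager.py):

class LLMManagerSketch:
    """Illustrative only: shows the renamed key surfaced by the accessors."""

    def __init__(self, model: str, temperature: float, max_completion_tokens: int):
        self.model = model
        self.temperature = temperature
        self.max_completion_tokens = max_completion_tokens

    def get_llm_params(self) -> dict:
        # Tests now assert on "max_completion_tokens" rather than "max_tokens".
        return {
            "temperature": self.temperature,
            "max_completion_tokens": self.max_completion_tokens,
        }

    def get_model_info(self) -> dict:
        return {"model": self.model, **self.get_llm_params()}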

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • add common custom llm #70 — prior work that introduced BaseCustomLLM and the standardized LLM parameters; this PR completes that effort by renaming the "max_tokens" key to "max_completion_tokens" across all LLM implementations and tests.

Suggested reviewers

  • VladimirKadlec
  • tisnik
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: replacing the deprecated max_tokens parameter with max_completion_tokens across the LLM configuration code, which is the primary objective of the PR.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/lightspeed_evaluation/core/llm/deepeval.py (1)

19-26: Fix DeepEval LiteLLMModel parameter: use max_tokens instead of max_completion_tokens.

DeepEval's LiteLLMModel constructor accepts max_tokens, not max_completion_tokens. The current parameter is silently ignored and won't configure the token limit as intended.

Update line 23 to:

max_tokens=llm_params.get("max_completion_tokens"),

Also, replace the print() statement on line 28 with structured logging (required by coding guidelines for src/lightspeed_evaluation/**/*.py files).
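
A possible shape of that fix, assuming (per the comment above) that deepeval.models.LiteLLMModel accepts max_tokens; the helper name and surrounding structure are illustrative, not the actual deepeval.py code:

import logging

from deepeval.models import LiteLLMModel

logger = logging.getLogger(__name__)


def build_deepeval_model(model_name: str, llm_params: dict) -> LiteLLMModel:
    """Illustrative helper mapping the renamed config key back to DeepEval's name."""
    model = LiteLLMModel(
        model=model_name,
        temperature=llm_params.get("temperature", 0.0),
        # DeepEval expects max_tokens, so feed it the renamed config value.
        max_tokens=llm_params.get("max_completion_tokens"),
    )
    # Structured logging instead of print(), per the project coding guidelines.
    logger.info("Initialized DeepEval LiteLLMModel for %s", model_name)
    return model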

🧹 Nitpick comments (1)
src/lightspeed_evaluation/core/llm/custom.py (1)

119-128: Parameter name is correct; consider optional backward compatibility fallback.
LiteLLM's completion API supports max_completion_tokens, so the current implementation is valid. However, if callers might still pass max_tokens (the older convention), the fallback pattern suggested below provides safer compatibility:

💡 Optional backward compatibility improvement
-        "max_completion_tokens": self.llm_params.get("max_completion_tokens"),
+        "max_completion_tokens": self.llm_params.get(
+            "max_completion_tokens", self.llm_params.get("max_tokens")
+        ),
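
In context, that fallback might read roughly like this; the function name and the extra keys are illustrative, not the actual custom.py code:

def build_completion_kwargs(llm_params: dict) -> dict:
    """Illustrative helper: prefer the new key, fall back to the old one."""
    return {
        "temperature": llm_params.get("temperature"),
        "max_completion_tokens": llm_params.get(
            "max_completion_tokens", llm_params.get("max_tokens")
        ),
    }


# Older callers that still pass max_tokens keep working:
assert build_completion_kwargs({"max_tokens": 256})["max_completion_tokens"] == 256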

Contributor

@VladimirKadlec VladimirKadlec left a comment


LGTM
