Fix QAEvaluator to pass through is_reasoning_model parameter #44342
base: main
Conversation
Co-authored-by: nagkumar91 <4727422+nagkumar91@users.noreply.github.com>
Pull request overview
This PR fixes a bug where QAEvaluator was not forwarding the is_reasoning_model parameter to its underlying model-based evaluators, causing failures when used with reasoning models (o1/o3) that don't support the max_tokens parameter.
Key changes:
- Added an `is_reasoning_model` parameter to `QAEvaluator.__init__()` with default value `False`
- Forwarded the parameter to all five model-based sub-evaluators (GroundednessEvaluator, RelevanceEvaluator, CoherenceEvaluator, FluencyEvaluator, SimilarityEvaluator)
- Added unit tests verifying proper parameter propagation for both default and explicit values
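The forwarding described above can be sketched as follows. This is an illustrative stand-in, not the SDK's actual code: the sub-evaluator classes here are minimal stubs that only record whether the flag reached them, and the real constructor signatures in `azure-ai-evaluation` differ.

```python
# Illustrative sketch of the fix: stub sub-evaluators record whether the
# is_reasoning_model flag is forwarded to them. Not the real SDK classes.
class _StubEvaluator:
    def __init__(self, model_config, is_reasoning_model=False):
        self.is_reasoning_model = is_reasoning_model

class GroundednessEvaluator(_StubEvaluator): pass
class RelevanceEvaluator(_StubEvaluator): pass
class CoherenceEvaluator(_StubEvaluator): pass
class FluencyEvaluator(_StubEvaluator): pass
class SimilarityEvaluator(_StubEvaluator): pass

class F1ScoreEvaluator:
    """Performs local computation without LLM calls, so no flag is needed."""

class QAEvaluator:
    def __init__(self, model_config, is_reasoning_model=False):
        # The fix: pass the flag through to every model-based sub-evaluator.
        self._evaluators = [
            GroundednessEvaluator(model_config, is_reasoning_model=is_reasoning_model),
            RelevanceEvaluator(model_config, is_reasoning_model=is_reasoning_model),
            CoherenceEvaluator(model_config, is_reasoning_model=is_reasoning_model),
            FluencyEvaluator(model_config, is_reasoning_model=is_reasoning_model),
            SimilarityEvaluator(model_config, is_reasoning_model=is_reasoning_model),
            F1ScoreEvaluator(),  # local metric; intentionally left unchanged
        ]

qa = QAEvaluator(model_config={}, is_reasoning_model=True)
print(all(e.is_reasoning_model for e in qa._evaluators[:5]))  # → True
```

Before the fix, `QAEvaluator.__init__()` constructed the sub-evaluators without the keyword, so they fell back to the default `False` regardless of what the caller passed.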
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluators/_qa/_qa.py | Added is_reasoning_model parameter and forwarded it to all model-based sub-evaluators, excluding F1ScoreEvaluator which performs local computation |
| sdk/evaluation/azure-ai-evaluation/tests/unittests/test_built_in_evaluator.py | Added two test cases verifying proper parameter propagation with default (False) and explicit (True) values |
API Change Check: APIView identified API-level changes in this PR and created the following API reviews.
…in_evaluator.py Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: nagkumar91 <4727422+nagkumar91@users.noreply.github.com>
Fixed in commit 363bec4. Applied black formatting to the long lines in the evaluators list by splitting them across multiple lines.
Description
QAEvaluator was not forwarding the `is_reasoning_model` parameter to its underlying evaluators, causing failures with reasoning models (o1/o3) that don't support `max_tokens`.
Changes
- Added an `is_reasoning_model: bool = False` parameter and passed it through to all model-based evaluators (GroundednessEvaluator, RelevanceEvaluator, CoherenceEvaluator, FluencyEvaluator, SimilarityEvaluator)
- Left F1ScoreEvaluator unchanged, as it performs local computation without LLM calls.
Usage
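A minimal sketch of why the flag matters: reasoning models such as o1/o3 reject the `max_tokens` request parameter and expect `max_completion_tokens` instead. The helper below is hypothetical (its name and signature are not from the SDK), but it shows the kind of branching the `is_reasoning_model` flag enables inside each model-based evaluator.

```python
def build_completion_kwargs(prompt: str, limit: int, is_reasoning_model: bool = False) -> dict:
    """Pick the token-limit parameter name based on the model family.

    Hypothetical helper for illustration: reasoning models (o1/o3)
    reject `max_tokens` and take `max_completion_tokens` instead.
    """
    kwargs = {"messages": [{"role": "user", "content": prompt}]}
    if is_reasoning_model:
        kwargs["max_completion_tokens"] = limit
    else:
        kwargs["max_tokens"] = limit
    return kwargs

print("max_tokens" in build_completion_kwargs("hi", 256))                   # → True
print("max_completion_tokens" in build_completion_kwargs("hi", 256, True))  # → True
```

With the fix, a caller can construct `QAEvaluator(model_config=..., is_reasoning_model=True)` and the flag reaches every model-based sub-evaluator, so none of them sends `max_tokens` to a reasoning model.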