Conversation

@Ceceliachenen (Collaborator)

  1. Fix the error that occurs when uploading a new dataset in eval
  2. run_config supports concurrent evaluation of eval data, exposed in the frontend
  3. Support customizing and saving the eval LLM judge prompt, also exposed in the frontend (a hedged sketch of items 2 and 3 follows this list)
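
The thread itself contains no code, so the following is only a minimal sketch of what items 2 and 3 might look like on the backend, assuming a run config carries a concurrency limit and a user-editable judge prompt. Every name here (`EvalRunConfig`, `concurrency`, `judge_prompt`, `evaluate_concurrently`) is a hypothetical illustration, not the actual API of this repository:

```python
import asyncio
from dataclasses import dataclass

DEFAULT_JUDGE_PROMPT = (
    "You are an impartial judge. Given a question, a reference answer, and a "
    "candidate answer, rate the candidate from 1 to 5 and explain briefly."
)

@dataclass
class EvalRunConfig:
    """Hypothetical stand-in for the run_config model touched by this PR."""
    concurrency: int = 4                      # max judge calls in flight at once
    judge_prompt: str = DEFAULT_JUDGE_PROMPT  # user-editable, saved with the config

async def _judge_one(item: dict, cfg: EvalRunConfig, sem: asyncio.Semaphore) -> dict:
    async with sem:  # respect the concurrency limit from the run config
        # Placeholder for the real LLM-judge call; here we just echo a result.
        await asyncio.sleep(0)
        return {"id": item["id"], "score": 5, "prompt_used": cfg.judge_prompt}

async def evaluate_concurrently(items: list[dict], cfg: EvalRunConfig) -> list[dict]:
    sem = asyncio.Semaphore(cfg.concurrency)
    return list(await asyncio.gather(*(_judge_one(i, cfg, sem) for i in items)))

if __name__ == "__main__":
    cfg = EvalRunConfig(concurrency=8, judge_prompt="Rate answer relevance 1-5.")
    print(asyncio.run(evaluate_concurrently([{"id": 1}, {"id": 2}], cfg)))
```

Bounding parallel judge calls with a semaphore is one common way to expose a single concurrency knob from a run config; the real implementation in `backend/db/models/evaluation/run_config.py` and `backend/evaluation/evaluator/llm_judge_evaluator.py` may well differ.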

@github-actions

☂️ Python Coverage

current status: ✅

Overall Coverage

| Lines | Covered | Coverage | Threshold | Status |
| ----- | ------- | -------- | --------- | ------ |
| 13875 | 5523    | 40%      | 0%        | 🟢     |

New Files

| File | Coverage | Status |
| ---- | -------- | ------ |
| backend/evaluation/evaluator/prompts/eval_prompts.py | 53% | 🟢 |
| TOTAL | 53% | 🟢 |

Modified Files

| File | Coverage | Status |
| ---- | -------- | ------ |
| backend/api/v1/config_apis/evaluation.py | 19% | 🟢 |
| backend/db/models/evaluation/evaluator_config.py | 89% | 🟢 |
| backend/db/models/evaluation/run_config.py | 91% | 🟢 |
| backend/evaluation/evaluator/llm_judge_evaluator.py | 22% | 🟢 |
| backend/evaluation/evaluator/utils.py | 59% | 🟢 |
| backend/rag/evaluation_tool.py | 23% | 🟢 |
| backend/service/tool/evaluation_service.py | 15% | 🟢 |
| TOTAL | 45% | 🟢 |

updated for commit: 1105dfa by action🐍

@moria97 (Collaborator) commented Jan 29, 2026

Could this PR be split into two? One to fix the bug, and one to add the custom prompt logic. Before the New Year I plan to release a version identical to the online 0128 build, with no database changes. The custom prompt work can land afterwards.
