
Conversation

@konard (Contributor) commented Dec 9, 2025

Summary

This PR implements automated model availability testing for the Deep.Assistant API using GitHub Actions, addressing issue #3.

What's Included

Model Checker Scripts

  • check_models.py - Python-based async model checker using aiohttp
  • check_models.js - JavaScript-based model checker using OpenAI SDK
  • Both support testing all models or specific subsets via command-line arguments
  • Comprehensive error handling and detailed reporting
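The async checking approach can be sketched roughly as follows. The actual check_models.py uses aiohttp; this illustration uses only the Python standard library so it runs anywhere, and every name in it (check_model, check_all, summarize) is hypothetical rather than the script's real API:

```python
# Illustrative sketch only -- NOT the PR's actual implementation.
# check_models.py uses aiohttp; this version uses urllib + asyncio.to_thread
# so it has no third-party dependencies.
import asyncio
import json
import os
import urllib.error
import urllib.request

DEFAULT_BASE_URL = "https://api.deep.assistant.run.place/v1"

def check_model(model, base_url=DEFAULT_BASE_URL):
    """Send a minimal chat completion; return (model, ok, error_message)."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }).encode()
    request = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )
    try:
        with urllib.request.urlopen(request, timeout=30) as resp:
            return model, resp.status == 200, None
    except (urllib.error.URLError, OSError) as exc:
        return model, False, str(exc)

async def check_all(models):
    """Check every model concurrently, mirroring the PR's async design."""
    return await asyncio.gather(
        *(asyncio.to_thread(check_model, m) for m in models)
    )

def summarize(results):
    """Split (model, ok, error) tuples into available and failed model names."""
    available = [m for m, ok, _ in results if ok]
    failed = [m for m, ok, _ in results if not ok]
    return available, failed
```

Running `asyncio.run(check_all([...]))` with a valid key would produce the tuples that `summarize` aggregates for the report.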

GitHub Actions Workflows

  1. Python Model Testing (.github/workflows/check-models-python.yml)

    • Tests models using Python
    • Daily scheduled runs at 6:00 UTC
    • Manual trigger with optional model selection
  2. JavaScript Model Testing (.github/workflows/check-models-javascript.yml)

    • Tests models using JavaScript
    • Daily scheduled runs at 6:00 UTC
    • Manual trigger with optional model selection
  3. Combined Testing (.github/workflows/check-all-models.yml)

    • Runs both Python and JavaScript tests in parallel
    • Provides combined summary of results
    • Daily scheduled runs at 6:00 UTC
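All three workflows share the same daily schedule, which in GitHub Actions cron syntax is `0 6 * * *` (minute 0, hour 6 UTC, every day). As a quick illustration (not part of the PR), a five-field cron expression decomposes like this:

```python
# Tiny decoder for five-field cron expressions, for illustration only.
def describe_cron(expr: str) -> dict:
    """Map the five cron fields to their conventional names."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("cron expressions have exactly five fields")
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, fields))
```

For the workflows above, `describe_cron("0 6 * * *")` yields minute `0` and hour `6`, with wildcards everywhere else.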

Models Tested

  • OpenAI: o3-mini, o1-preview, o1-mini, gpt-4o, gpt-4o-mini, gpt-3.5-turbo, gpt-auto
  • Claude: claude-3-opus, claude-3-5-sonnet, claude-3-5-haiku, claude-3-7-sonnet
  • DeepSeek: deepseek-chat, deepseek-reasoner

Features

  • ✅ Separate workflows for Python and JavaScript testing
  • ✅ Combined workflow for comprehensive testing
  • ✅ Manual workflow triggers with model selection
  • ✅ Scheduled daily runs
  • ✅ Detailed output with model status and errors
  • ✅ GitHub Actions output variables for CI integration
  • ✅ Comprehensive documentation in README
  • ✅ Local testing support
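The "output variables" feature above presumably appends key=value pairs to the file GitHub Actions exposes via the `GITHUB_OUTPUT` environment variable, so later workflow steps can read the results. A minimal sketch, assuming that mechanism; the key names `available_models` and `failed_count` are invented for illustration:

```python
import os

def write_github_outputs(available, failed):
    """Append key=value result lines to the GitHub Actions output file."""
    output_path = os.environ.get("GITHUB_OUTPUT")
    if not output_path:
        return False  # not running inside GitHub Actions
    with open(output_path, "a", encoding="utf-8") as fh:
        fh.write(f"available_models={','.join(available)}\n")
        fh.write(f"failed_count={len(failed)}\n")
    return True
```

A subsequent step could then reference e.g. `steps.<id>.outputs.failed_count` in its conditions or summary.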

Configuration Required

To enable the workflows, add the following repository secret:

  • OPENAI_API_KEY - Your Deep.Assistant API key (obtain it from @DeepGPTBot using the /api command)

Optional repository variable:

  • OPENAI_API_BASE - Custom API base URL (defaults to https://api.deep.assistant.run.place/v1)
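A minimal sketch of how a checker script might resolve this configuration, assuming the documented default; `resolve_base_url` is an illustrative name, not the scripts' actual API:

```python
import os

DEFAULT_BASE_URL = "https://api.deep.assistant.run.place/v1"

def resolve_base_url(env=None):
    """Return OPENAI_API_BASE if set and non-empty, else the default."""
    env = os.environ if env is None else env
    return env.get("OPENAI_API_BASE") or DEFAULT_BASE_URL
```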

Testing

The scripts can be tested locally before running in CI:

```bash
# Python
pip install -r requirements.txt
export OPENAI_API_KEY="your-key"
python check_models.py

# JavaScript
npm install
export OPENAI_API_KEY="your-key"
node check_models.js
```
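Since both scripts accept an optional model subset on the command line, the selection logic plausibly looks something like the sketch below; the argument shape and model list are hypothetical, not the real CLI of check_models.py:

```python
# Hypothetical CLI handling for selecting a model subset.
import argparse

# Illustrative subset of the tested models, not the scripts' full list.
ALL_MODELS = [
    "gpt-4o", "gpt-4o-mini", "o1-mini",
    "claude-3-5-sonnet", "deepseek-chat",
]

def parse_models(argv=None):
    """Return the models to test: positional names, or all by default."""
    parser = argparse.ArgumentParser(description="Check model availability")
    parser.add_argument("models", nargs="*", default=ALL_MODELS,
                        help="optional subset of model names to test")
    return parser.parse_args(argv).models
```

With no arguments the full list is tested; `python check_models.py gpt-4o o1-mini` (or the JavaScript equivalent) would then restrict the run to those two models.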

Files Changed

  • .github/workflows/check-all-models.yml - Combined testing workflow
  • .github/workflows/check-models-javascript.yml - JavaScript testing workflow
  • .github/workflows/check-models-python.yml - Python testing workflow
  • check_models.py - Python model checker
  • check_models.js - JavaScript model checker
  • package.json - JavaScript dependencies
  • requirements.txt - Python dependencies
  • README.md - Documentation updates

Fixes

Closes #3

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>

Adding CLAUDE.md with task information for AI processing.
This file will be removed when the task is complete.

Issue: #3
@konard konard self-assigned this Dec 9, 2025
Implement automated model availability checking for Deep.Assistant API:
- Add Python model checker (check_models.py) with async support
- Add JavaScript model checker (check_models.js) using OpenAI SDK
- Create three GitHub Actions workflows:
  1. Python-only model testing
  2. JavaScript-only model testing
  3. Combined testing with summary
- Support for testing all models or specific subsets
- Daily scheduled runs at 6:00 UTC
- Manual trigger capability with model selection
- Comprehensive documentation in README

Tested models include: GPT, Claude, DeepSeek, and O-series models

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@konard konard changed the title [WIP] Make it possible to check models availability via GitHub Actions Add GitHub Actions workflows for model availability testing Dec 9, 2025
@konard konard marked this pull request as ready for review December 9, 2025 05:31
@konard (Contributor, Author) commented Dec 9, 2025

🤖 Solution Draft Log

This log file contains the complete execution trace of the AI solution draft process.

💰 Cost estimation:

  • Public pricing estimate: $1.110901 USD
  • Calculated by Anthropic: $0.682000 USD
  • Difference: -$0.428902 (-38.61%)

📎 Log file uploaded as GitHub Gist (301KB)
🔗 View complete solution draft log

The working session has now ended; feel free to review and add any feedback on the solution draft.

Development

Successfully merging this pull request may close these issues.

Make it possible to check models availability via GitHub Actions
