
Conversation

@LittleCoinCoin

This pull request introduces several updates focused on improving test coverage, refining semantic-release configuration validation, and ensuring proper handling of provider settings and API keys. The changes enhance regression and integration tests, update dependency versions, and add or adjust mock implementations of provider methods.

Semantic-release configuration and regression testing:

  • Major refactor of tests/regression_test_versioning.py to focus on validating semantic-release configuration, required plugins, branch setup, and semantic versioning format in pyproject.toml and package.json. Previous unit tests for VersionManager were removed in favor of regression tests that verify project-level release configuration.

Provider and API key handling in tests:

  • Integration and feature tests now load API keys from .env files using dotenv, and skip tests if the required environment variable is not set. This ensures reliability of tests that require external API access.
  • Added @requires_api_key decorator to relevant integration tests in tests/integration_test_openai.py to enforce API key presence.

Mock implementations and test coverage:

  • Added mock methods hatchling_to_llm_tool_call and hatchling_to_provider_tool_result to provider test classes in both feature and registry tests, improving test coverage for provider tool call/result conversion logic.
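A mock provider satisfying such an interface check might look like the following. The method names come from the PR; the payload shapes are illustrative assumptions, not Hatchling's actual provider contract.

```python
# Hypothetical mock of the two new provider interface methods.
class MockProvider:
    def hatchling_to_llm_tool_call(self, tool_call):
        # Translate Hatchling's internal tool-call record into a
        # provider-specific request payload (shape assumed here).
        return {
            "name": tool_call["name"],
            "arguments": tool_call.get("arguments", {}),
        }

    def hatchling_to_provider_tool_result(self, result):
        # Wrap a tool's execution result in the provider's message format.
        return {"role": "tool", "content": str(result)}
```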

Dependency and workflow updates:

  • Updated the hatch dependency in pyproject.toml to use a stable release version (v0.5.1) instead of a feature branch.
  • Modified the GitHub Actions test workflow to skip tests requiring an API key when not available, improving CI reliability.

Minor test and import cleanups:

  • Removed unused imports and commented out certain commands in integration tests to streamline test execution and avoid failures for unsupported commands.

LittleCoinCoin and others added 12 commits August 26, 2025 10:13
Commands `mcp:tool:execute`, `mcp:tool:schema`, `mcp:citations`, and
`mcp:reset` were commented out of release v0.4.2 of Hatchling due to
instability.
This makes sure that the regression test doesn't look for them.

The integration tests for OpenAI all require the API key at test setup,
when the LLM provider instance is created.
Hence we must add the appropriate tag so they are skipped when no API
keys are available (e.g., during automation).

The methods `hatchling_to_llm_tool_call` and
`hatchling_to_provider_tool_result` were added in the previous update,
so the tests checking the integrity of this interface's implementation
were failing.

We add the expected implementations.

We previously implemented a custom system to manage versions
automatically. We have moved to `semantic-release`, so we must adapt
the regression tests associated with the versioning system.

Testing the way an OpenAI provider handles events required an API key.

Minor: deleted useless import statements.

`tomli` is the library actually referenced in the dependencies of
`Hatchling`, not `toml`.
Hence, running the tests after simply installing `Hatchling` from the
`pyproject.toml` definition would fail when trying to use `toml` to
load other settings files.

test: some forgotten `toml` usage

We cannot expect the running CI to have access to LLM API keys for all
the tests. There are some protections by default in how the tests are
written but, to be sure, we explicitly indicate that these are skipped.

The expected IO mode is binary.

test: fixing expected IO mode for `tomli.load`
@LittleCoinCoin LittleCoinCoin merged commit 52906b2 into CrackingShells:dev Aug 26, 2025
1 check passed
