
Conversation

@yatharth (Contributor) commented on Dec 23, 2025

This PR was automatically created by metta/setup/tools/publish.py.

Asana Task

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 44 to 45
class JobRequestUpdate(SQLModel):
status: JobStatus = Field(default=JobStatus.pending)

P1: Make JobRequestUpdate.status optional

Because JobRequestUpdate.status defaults to pending, FastAPI will populate request.status even when the client does not send a status field. The update endpoint in job_routes.py treats any non-None status as a requested transition and enforces VALID_TRANSITIONS, so a request that only wants to set result or error will be forced into a pending transition and can return 409 for jobs that are already running/completed. This makes partial updates fail unless clients always include a legal status. Consider changing status to JobStatus | None = None (or using Field(default=None) with exclude_unset) so updates without a status don’t trigger transition validation.
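A minimal sketch of the suggested fix, assuming JobStatus is the existing str-valued enum and that the route applies updates via model_dump(exclude_unset=True); the enum values and the result/error fields below are illustrative assumptions, not taken from the diff:

from enum import Enum
from sqlmodel import SQLModel


class JobStatus(str, Enum):  # assumed shape of the existing enum
    pending = "pending"
    running = "running"
    completed = "completed"


class JobRequestUpdate(SQLModel):
    # None means "no transition requested": partial updates that only
    # set result or error skip VALID_TRANSITIONS enforcement entirely.
    status: JobStatus | None = None
    result: str | None = None  # hypothetical payload fields, for illustration
    error: str | None = None


# In the route handler, act only on fields the client actually sent:
# updates = payload.model_dump(exclude_unset=True)
# if "status" in updates: validate the transition before applying it.

With exclude_unset=True, a request body of {"result": "..."} leaves status out of the update set, so already-running or completed jobs are no longer forced through a pending transition.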

Comment on lines 159 to 161
self._out: nn.Module = nn.Sequential(*layers) if layers else nn.Identity()
if self._storage_dtype is not None and self._storage_dtype is not torch.float32:
self._out = self._out.to(dtype=self._storage_dtype)

P2: Avoid forcing CortexTD output head to storage dtype

The new cast of self._out to self._storage_dtype means the output head’s weights are always in the storage dtype (e.g., bf16). If a caller sets compute_dtype to float32 (supported by the config) while dtype is bf16, the stack produces float32 activations and then self._out(y) either errors on mixed dtypes or silently downcasts, defeating the higher-precision compute path. This regression only appears when compute_dtype != dtype, but that is a documented option here. Consider keeping _out in compute dtype or casting y to the storage dtype before the head when compute_dtype differs.
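A sketch of the alternative the review suggests: keep the head's weights in compute dtype and downcast once after it. The wrapper class and the _compute_dtype attribute are assumptions for illustration; the quoted diff only shows _storage_dtype:

import torch
import torch.nn as nn


class HeadSketch(nn.Module):  # hypothetical stand-in for the CortexTD output head
    def __init__(self, layers, storage_dtype=None, compute_dtype=torch.float32):
        super().__init__()
        # Leave the head in compute dtype instead of casting its
        # weights to the storage dtype at construction time.
        self._out: nn.Module = nn.Sequential(*layers) if layers else nn.Identity()
        self._storage_dtype = storage_dtype
        self._compute_dtype = compute_dtype

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        y = self._out(y.to(self._compute_dtype))  # head runs at compute precision
        if self._storage_dtype is not None and y.dtype != self._storage_dtype:
            y = y.to(self._storage_dtype)  # single downcast after the head
        return y

This keeps the float32 compute path intact when compute_dtype != dtype, while callers still receive activations in the storage dtype.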

@yatharth force-pushed the yatharth/metta-publish-cogames-version-pinning branch 2 times, most recently from 60f83dd to 13fa617 on December 23, 2025 at 23:19.