Description
The code for persisting the response cache fails in two places:
- Line 403 in 3362f00: `json.dump(config.codex_query_response_log, f)`
- Line 441 in 3362f00: `json.dump(config.codex_query_response_log, f)`
This happens because the response object returned by the OpenAI client is a pydantic `BaseModel`, which is not JSON serializable by `json.dump`.
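For reference, here is a minimal standalone repro of the serialization failure using a toy pydantic model (not TiCoder code):

```python
import json
from pydantic import BaseModel

class Toy(BaseModel):
    content: str

resp = Toy(content="hello")

try:
    json.dumps(resp)  # fails: pydantic models are not JSON serializable as-is
except TypeError as err:
    print(err)  # Object of type Toy is not JSON serializable

json.dumps(resp.model_dump())  # works: model_dump() returns a plain dict
```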
It's unclear whether a specific version of the OpenAI client library is expected beyond what's pinned in requirements.txt. I tested with openai==1.78.1, but I believe this failure will occur with all recent versions of the openai client, since they return responses as pydantic models.
Possible Fix
- Update `response` => `response.model_dump()` if it is a pydantic object in `TiCoder/src/query_chat_model.py`, line 370 in 3362f00: `v = (k, response, current_time)`
- Parse the response into `ChatCompletion` in `TiCoder/src/query_chat_model.py`, line 318 in 3362f00: `resp = config.codex_query_response_log[str(k)][1]`
The second change is needed because call sites expect the response to be a structured object; a sketch of both changes follows below.
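As a rough sketch of what the fix could look like (the `store`/`load` helpers and the single-file cache here are illustrative, not TiCoder's actual code):

```python
import json
from openai.types.chat import ChatCompletion

# Illustrative in-memory cache shaped like config.codex_query_response_log:
# {str(key): (key, response_dict, timestamp)}
cache: dict = {}

def store(key, response, current_time, path="response_log.json"):
    # Dump the pydantic model to a plain dict before caching, so that
    # json.dump on the whole cache succeeds.
    payload = response.model_dump() if isinstance(response, ChatCompletion) else response
    cache[str(key)] = (key, payload, current_time)
    with open(path, "w") as f:
        json.dump(cache, f)

def load(key) -> ChatCompletion:
    raw = cache[str(key)][1]
    # Re-validate the stored dict into a ChatCompletion so call sites
    # still receive a structured object.
    return ChatCompletion.model_validate(raw) if isinstance(raw, dict) else raw
```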
I have a PR implementing this fix and will attach it here.