Feat bugfixes #167
Conversation
In headless/API mode, where no renderer activates, the startup buffer could grow without limit. It is now capped at maxsize (default 1000), keeping the most recent messages.
Previously, if subprocess.Popen() raised an exception, the temp file created for logging would remain on disk with no process writing to it. Now the file is cleaned up in the error path.
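A minimal sketch of this cleanup pattern (function and file names are illustrative, not the actual code_puppy API):

```python
import os
import subprocess
import tempfile

def spawn_with_log(cmd: list[str]) -> tuple[subprocess.Popen, str]:
    """Launch cmd with output appended to a temp log file; if Popen raises,
    remove the temp file so it doesn't linger on disk with no writer."""
    fd, log_path = tempfile.mkstemp(suffix=".log")
    log_file = os.fdopen(fd, "ab")
    try:
        proc = subprocess.Popen(cmd, stdout=log_file, stderr=log_file)
    except Exception:
        log_file.close()
        os.unlink(log_path)  # error path: clean up the orphaned temp file
        raise
    return proc, log_path
```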
Prevents potential encoding issues on Windows where the default encoding is the system locale rather than UTF-8.
All call sites are within async functions where a running loop is guaranteed. get_event_loop() is deprecated since Python 3.10.
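A sketch of the replacement pattern, assuming call sites that roughly look like this:

```python
import asyncio

async def schedule_listener(coro) -> asyncio.Task:
    # Safe because we're inside a coroutine, so a running loop is guaranteed;
    # asyncio.get_event_loop() is deprecated in this situation since Python 3.10.
    loop = asyncio.get_running_loop()
    return loop.create_task(coro)
```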
… errors Changed `except Exception: pass` to log a warning message when plugin model provider loading fails, aiding in debugging plugin issues. Closes code_puppy-w1i
Add threading.Lock to protect CURRENT_TOKEN_RATE global from torn reads/writes when accessed from multiple threads. Wraps all reads and writes in _calculate_rate(), update_rate_from_sse(), stop(), and get_current_rate() with _TOKEN_RATE_LOCK. Closes code_puppy-kal
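A simplified sketch of the locking scheme; the real module guards more call sites than the two shown here:

```python
import threading

_TOKEN_RATE_LOCK = threading.Lock()
CURRENT_TOKEN_RATE: float = 0.0

def update_rate_from_sse(new_rate: float) -> None:
    global CURRENT_TOKEN_RATE
    with _TOKEN_RATE_LOCK:  # writers take the lock...
        CURRENT_TOKEN_RATE = new_rate

def get_current_rate() -> float:
    with _TOKEN_RATE_LOCK:  # ...and so do readers, preventing torn values
        return CURRENT_TOKEN_RATE
```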
- Import deque from collections
- Replace unbounded list with deque(maxlen=1000) to prevent memory leak
- Update get_captured_lines() to return list() instead of .copy()
- O(n) membership check in stop() now bounded to max 1000 items

Closes code_puppy-9kq
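The bounded-buffer behavior, illustrated standalone:

```python
from collections import deque

# deque(maxlen=N) evicts the oldest entry on append once full,
# so membership checks and copies are bounded at N items.
captured = deque(maxlen=1000)
for i in range(1500):
    captured.append(f"line {i}")

assert len(captured) == 1000
assert captured[0] == "line 500"  # oldest surviving entry
snapshot = list(captured)         # what get_captured_lines() would return
```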
Replace threading.Lock with asyncio.Lock in the async call(), _on_success(), and _on_failure() methods to avoid blocking the event loop. Sync methods retain threading.Lock as intended. Closes code_puppy-861
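A sketch of the split, using a simplified circuit-breaker-style class rather than the real one:

```python
import asyncio
import threading

class Breaker:
    def __init__(self) -> None:
        self._async_lock = asyncio.Lock()   # async paths: suspends the coroutine
        self._sync_lock = threading.Lock()  # sync paths: blocks the calling thread

    async def call(self, func):
        async with self._async_lock:  # never blocks the event loop thread
            return await func()

    def call_sync(self, func):
        with self._sync_lock:
            return func()
```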
Replace bare 'except Exception: pass' blocks in _process_messages() with debug-level logging for visibility when messages are dropped or listeners fail. Closes code_puppy-ie9
…n() inside try block
…eep in event_stream_handler
…oid shadowing builtin
…eep in async menu pickers
…eep in core_commands
…and add safety comments
…using temp client
- write_pid_file() now writes to a temp file and uses os.replace() for atomic swap, preventing empty/corrupt PID files on crash
- get_daemon_pid() handles empty PID file content gracefully
- Added TOCTOU race condition comment in start_daemon_background()

Closes code_puppy-2tj
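A sketch of the atomic-write idea (paths and error handling simplified, not the actual daemon code):

```python
import os
import tempfile

def write_pid_file(pid_path: str) -> None:
    """Write our PID via a unique temp file + os.replace, so readers never
    observe a partially written file even if we crash mid-write."""
    directory = os.path.dirname(pid_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(str(os.getpid()))
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, pid_path)  # atomic swap into place
    except BaseException:
        try:
            os.unlink(tmp_path)
        except OSError:
            pass
        raise
```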
- Bump pydantic-ai-slim 1.26.0 → 1.56.0, anthropic 0.76 → 0.79, dbos ≥ 2.11.0
- Add ToolbarSpinner: lightweight VT100 scroll-region based bottom bar replacing the Rich Live-based ConsoleSpinner for streaming output (see the sketch below)
- Enable DBOS by default (config key 'enable_dbos' now defaults to True)
- Update models.json: add claude-4-6-opus, rename gpt-5.1 → gpt-5.2, remove deprecated zai-glm-4.6 entries
- Patch pydantic _call_tool for the new 'metadata' kwarg in pydantic-ai 1.56
- Simplify session_storage: drop _RestrictedUnpickler in favor of plain pickle.loads (modules shift too often across pydantic-ai versions)
- Fix agent_tools: unique subagent names via uuid, use cancel_workflow_async
- Remove redundant 'import asyncio' statements in cli_runner
- Add token stats (ctx/out/rate) to the bottom toolbar display
- Update tests to match new defaults and spinner class
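For reference, a minimal sketch of the VT100 scroll-region trick a bottom bar like ToolbarSpinner can use. The escape sequences are standard (DECSTBM, DECSC/DECRC); this is not the actual implementation:

```python
import sys

def setup_scroll_region(rows: int) -> None:
    # DECSTBM: confine scrolling to rows 1..rows-1 so the bottom row
    # becomes a fixed status bar that streaming output never scrolls away.
    sys.stdout.write(f"\x1b[1;{rows - 1}r")
    sys.stdout.flush()

def draw_toolbar(rows: int, text: str) -> None:
    sys.stdout.write("\x1b7")            # DECSC: save cursor position
    sys.stdout.write(f"\x1b[{rows};1H")  # move to the reserved bottom row
    sys.stdout.write("\x1b[2K")          # erase the old toolbar contents
    sys.stdout.write(text)
    sys.stdout.write("\x1b8")            # DECRC: restore cursor position
    sys.stdout.flush()

def reset_scroll_region() -> None:
    sys.stdout.write("\x1b[r")           # restore full-screen scrolling
    sys.stdout.flush()
```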
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
code_puppy/config.py (3)
247-249: ⚠️ Potential issue | 🟡 Minor

Misleading comment contradicts the code.

The comment on line 249 says `# Default to False for safety`, but the code returns `True`. This was likely left over from a previous default.

Proposed fix
```diff
- return True  # Default to False for safety
+ return True  # Default to True
```
1120-1131: ⚠️ Potential issue | 🟡 Minor

Docstring says the default is `'summarization'` but the code returns `'truncation'`.

The docstring on line 1124 and the comment on line 1130 both reference `'summarization'` as the default, but the actual return on line 1131 is `'truncation'`.

Proposed fix
```diff
 """
 Returns the user-configured compaction strategy.
 Options are 'summarization' or 'truncation'.
-Defaults to 'summarization' if not set or misconfigured.
+Defaults to 'truncation' if not set or misconfigured.
 Configurable by 'compaction_strategy' key.
 """
 val = get_value("compaction_strategy")
 if val and val.lower() in ["summarization", "truncation"]:
     return val.lower()
-# Default to summarization
+# Default to truncation
 return "truncation"
```
1155-1157: ⚠️ Potential issue | 🟡 Minor

Docstring inconsistency: says "default false" but `get_use_dbos` now defaults to `True`.

The docstring "Enable DBOS via config (true enables, default false)." is stale after the default was changed to `True`.

Proposed fix
```diff
 def set_enable_dbos(enabled: bool) -> None:
-    """Enable DBOS via config (true enables, default false)."""
+    """Enable or disable DBOS via config (default: enabled)."""
     set_config_value("enable_dbos", "true" if enabled else "false")
```

code_puppy/plugins/claude_code_oauth/token_refresh_heartbeat.py (1)
191-201: ⚠️ Potential issue | 🟡 Minor

Nested context use will clobber the outer heartbeat reference.

If `token_refresh_heartbeat_context` is entered while another is already active, the inner `finally` block sets `_current_heartbeat = None`, orphaning the outer heartbeat from `is_heartbeat_running()` and `get_current_heartbeat()`. Both heartbeat tasks would still run, but the global reference would be lost. If nesting is intentional, consider a stack or ref-count. If not, consider guarding against it:

Suggested guard
```diff
 global _current_heartbeat
+if _current_heartbeat is not None:
+    logger.debug("Heartbeat already active, skipping nested heartbeat")
+    yield _current_heartbeat
+    return
+
 heartbeat = TokenRefreshHeartbeat(interval=interval)
```

code_puppy/api/pty_manager.py (1)
401-411: ⚠️ Potential issue | 🟠 Major

`WNOHANG` after `SIGTERM` may leave zombie processes.

If the child hasn't exited by the time `waitpid(pid, WNOHANG)` is called (likely, since `SIGTERM` was just sent), the call returns `(0, 0)` without reaping. The child becomes a zombie once it does exit, and nothing comes back to collect it. For a long-running server this can accumulate zombies over the lifetime of the process. Consider awaiting the child in a non-blocking fashion via the executor:

Suggested approach: blocking waitpid off the event loop
```diff
 if session.pid is not None:
     try:
         os.kill(session.pid, signal.SIGTERM)
-        # Use WNOHANG to avoid blocking the event loop
-        try:
-            os.waitpid(session.pid, os.WNOHANG)
-        except ChildProcessError:
-            pass
+        # Reap child off the event loop thread to avoid blocking
+        loop = asyncio.get_running_loop()
+        try:
+            await loop.run_in_executor(
+                None, os.waitpid, session.pid, 0
+            )
+        except ChildProcessError:
+            pass
     except (OSError, ChildProcessError):
         pass
```

This ensures the child is reaped without blocking the event loop. If you're concerned about a child that ignores `SIGTERM`, you could add a timeout and escalate to `SIGKILL`.

code_puppy/mcp_/blocking_startup.py (1)
448-456: ⚠️ Potential issue | 🟡 Minor

Unawaited tasks will silently drop exceptions.

`_startup_tasks` are created but never awaited or gathered. If any `start_server` coroutine raises, Python will emit a "Task exception was never retrieved" warning at GC time. Consider gathering them (with `return_exceptions=True`) after `wait_all_ready` to capture and handle errors:

Proposed fix
```diff
 # Wait for all to be ready
 results = await monitor.wait_all_ready(timeout)

+# Collect startup task results to surface any exceptions
+await asyncio.gather(*_startup_tasks, return_exceptions=True)
+
 # Get the report
```

code_puppy/cli_runner.py (1)
990-998: ⚠️ Potential issue | 🔴 Critical

Pre-existing bug: `response` is a tuple, not a single result.

`run_prompt_with_attachments` returns `(result, agent_task)` (see the line 897 docstring). Line 990 assigns the tuple to `response`, then line 995 checks `response is None` (always `False` for a tuple), and line 998 accesses `response.output`, which will raise `AttributeError`.

This appears to be a pre-existing issue, but it's in the code path for the `-p` flag (`execute_single_prompt`), so it will crash every time that flag is used.

🐛 Proposed fix: unpack the tuple
```diff
-response = await run_prompt_with_attachments(
+response, _task = await run_prompt_with_attachments(
     agent,
     prompt,
     spinner_console=message_renderer.console,
 )
```

code_puppy/tools/command_runner.py (3)
967-1056: ⚠️ Potential issue | 🟡 Minor

Empty/blank command validation is bypassed when `background=True`.

The `background` path (line 967) runs before the empty-command guard at line 1056. A caller passing `background=True` with `command=""` will attempt to launch a subprocess with an empty shell command, which will either error cryptically or spawn a shell with no work.

🛡️ Proposed fix: move validation before the background branch
```diff
+if not command or not command.strip():
+    emit_error("Command cannot be empty", message_group=group_id)
+    return ShellCommandOutput(
+        **{"success": False, "error": "Command cannot be empty"}
+    )
+
 # Handle background execution - runs command detached and returns immediately
 # This happens BEFORE user confirmation since we don't wait for the command
 if background:
```

And remove the duplicate check at lines 1056–1060 (or keep it as a belt-and-suspenders guard).
1135: ⚠️ Potential issue | 🟡 Minor

Orphaned `time.time()` call — result is discarded.

`time.time()` is called but its return value is not assigned. This appears to be a leftover from when a `start_time` was captured here. It should either be removed or assigned.

🧹 Proposed fix
```diff
 else:
-    time.time()
+    pass
```
608-612: ⚠️ Potential issue | 🟡 Minor

`group_id` type annotation is incorrect.

`group_id: str = None` says the parameter is `str` but defaults to `None`. This should be `str | None = None` (or `Optional[str] = None`) for correctness with type checkers.

Proposed fix
```diff
 def run_shell_command_streaming(
     process: subprocess.Popen,
     timeout: int = 60,
     command: str = "",
-    group_id: str = None,
+    group_id: str | None = None,
     silent: bool = False,
 ):
```
🤖 Fix all issues with AI agents
In `@code_puppy/agents/agent_manager.py`:
- Around line 422-425: The snapshot/persist pattern using _SESSION_LOCK and
_SESSION_AGENTS_CACHE is good, but _save_session_data writes to a shared .tmp
path and can be clobbered by concurrent callers; update _save_session_data to
write to a unique temporary file (use tempfile.NamedTemporaryFile or mkstemp to
create a per-call temp file) and then atomically rename it to the target session
file, or alternatively hold _SESSION_LOCK across the call to _save_session_data;
implement the unique-temp-file approach inside _save_session_data (referencing
_save_session_data, _SESSION_LOCK, and _SESSION_AGENTS_CACHE) to avoid global
.tmp collisions.
In `@code_puppy/mcp_/blocking_startup.py`:
- Around line 110-116: Remove the finalizer in the monitor thread that closes
self.log_file: the monitor should not close or set self.log_file to None because
start() hands the append-mode handle to the subprocess and stop() is responsible
for flushing/closing it; locate the finally block that calls
self.log_file.close() and sets self.log_file = None and delete that block so the
monitor only reads from the rotated file (it already uses its own with open(...)
reader) and leave lifecycle/cleanup solely to the stop() method which flushes
and closes self.log_file.
In `@code_puppy/mcp_/health_monitor.py`:
- Around line 561-568: The finalizer HealthMonitor.__del__ accesses
self.monitoring_tasks and can raise AttributeError if __init__ failed before
initializing monitoring_tasks; modify __del__ to defensively check for the
attribute (e.g., using hasattr or getattr with a default) or wrap the access in
try/except AttributeError so it only logs when monitoring_tasks exists and is
truthy; update references to __del__ and monitoring_tasks accordingly to avoid
masking the original constructor error.
In `@code_puppy/models.json`:
- Around line 54-60: The model entry keyed as "gpt-5.2" still has an inner
"name" field set to "gpt-5.1"; update the "name" value in that object to
"gpt-5.2" so the inner name matches the top-level key (look for the object with
key "gpt-5.2" and the "name" property and change it accordingly).
- Around line 96-101: The models.json entry for "claude-4-6-opus" mistakenly
sets name to "claude-opus-4-5" (duplicating the 4-5 entry); update the name
field in the "claude-4-6-opus" object to "claude-opus-4-6" so it correctly
points to Anthropic's Claude Opus 4.6 (check the "claude-4-6-opus" object and
replace the value of the name property currently set to "claude-opus-4-5" with
"claude-opus-4-6").
In `@code_puppy/session_storage.py`:
- Around line 130-133: The metadata atomic write (tmp_metadata ->
paths.metadata_path) currently lacks flush/fsync and temp-file cleanup on
failure; modify the tmp write in session_storage.py so after json.dump to
metadata_file you call metadata_file.flush() and
os.fsync(metadata_file.fileno()), then atomically replace tmp_metadata with
paths.metadata_path, and ensure any exception removes the tmp_metadata file
(e.g., try/except/finally around the write/replace to unlink tmp_metadata on
error); reference symbols: tmp_metadata, paths.metadata_path,
metadata.as_serialisable(), metadata_file.
- Around line 112-117: The temporary pickle write in save_session currently
writes tmp_pickle without fsync or cleanup on failure; update the save_session
implementation that creates pickle_data and signature via _sign_data and writes
to paths.pickle_path.with_suffix(".tmp") so that the write is wrapped in a
try/except BaseException: open the temp file as f, write _HEADER_MAGIC +
signature + pickle_data, then call f.flush() and os.fsync(f.fileno()), and only
after that perform tmp_pickle.replace(paths.pickle_path); in the except block
remove the tmp file (tmp_pickle.unlink(missing_ok=True) or similar) and
re-raise, and ensure os is imported if not already.
🧹 Nitpick comments (12)
code_puppy/tools/agent_tools.py (1)
497-499: Consider moving the `uuid` import to module level.

`uuid` is a stdlib module with no circular dependency risk, unlike the other lazy imports in this function. Importing it at the top of the file would be more consistent with the other stdlib imports (lines 2-12).

code_puppy/api/pty_manager.py (1)
53-66: `is_alive()` reaps the child as a side effect of `os.waitpid`.

`os.waitpid` with `WNOHANG` will reap a zombie child when it returns a non-zero `pid_result`. This means calling `is_alive()` has the side effect of consuming the child's exit status. After it returns `False`, the PID is released and may be recycled by the OS — a subsequent `os.kill(session.pid, SIGTERM)` in `_close_session_internal` could theoretically target an unrelated process. In practice the window is narrow and the kill is guarded by `try/except`, so this isn't a blocker, but it's worth being aware of. If you want a side-effect-free liveness check, you could use `os.kill(self.pid, 0)` instead (which checks process existence without reaping):

Alternative using signal 0
```diff
 def is_alive(self) -> bool:
     """Check if the PTY session is still active."""
     if IS_WINDOWS:
         return self.winpty_process is not None and self.winpty_process.isalive()
     else:
         if self.pid is None:
             return False
         try:
-            pid_result, status = os.waitpid(self.pid, os.WNOHANG)
-            if pid_result == 0:
-                return True  # Still running
-            return False  # Exited
-        except ChildProcessError:
-            return False  # Already reaped
+            os.kill(self.pid, 0)
+            return True
+        except OSError:
+            return False
```

Note: `os.kill(pid, 0)` doesn't reap zombies, so the child will remain a zombie until explicitly waited on (in `_close_session_internal`). That's typically the desired behavior — liveness checks shouldn't have reaping side effects.

code_puppy/mcp_/retry_manager.py (1)
316-337: Clean convenience wrapper — consider tightening the `func` type hint.

The function is a well-structured delegation to `retry_with_backoff` with production-friendly defaults.

Minor nit: `func: Callable` is quite broad — since it must be a zero-arg async callable, a more precise type like `Callable[[], Awaitable[Any]]` would catch misuse at type-check time. This applies equally to `retry_with_backoff`, so it could be addressed holistically.

♻️ Optional: tighten the type hint
```diff
-from typing import Any, Callable, Dict, Optional
+from typing import Any, Awaitable, Callable, Dict, Optional
```

Then in both `retry_with_backoff` and `retry_mcp_call`:
```diff
-    func: Callable,
+    func: Callable[[], Awaitable[Any]],
```

code_puppy/scheduler/daemon.py (1)
124-142: Atomic PID write is good; consider adding `flush`/`fsync` for consistency.

The atomic write pattern matches the one in `storage.py`, but omits `f.flush()` and `os.fsync(f.fileno())` before the `os.replace`. For a PID file this is unlikely to matter in practice, but for consistency with the other atomic-write sites in this PR (e.g., `save_accounts`), you may want to add them.

Optional: add flush/fsync for consistency
```diff
 try:
     with os.fdopen(fd, "w") as f:
         f.write(str(os.getpid()))
+        f.flush()
+        os.fsync(f.fileno())
     os.replace(tmp_path, SCHEDULER_PID_FILE)
```

code_puppy/session_storage.py (1)
24-26: `_safe_loads` is misleadingly named — it provides no safety over `pickle.loads`.

The function name implies restricted/safe deserialization, but it's just a direct pass-through to `pickle.loads`. This is deceptive to future readers and auditors who may assume pickle deserialization has been hardened. Either implement an actual `RestrictedUnpickler` or rename to something neutral like `_deserialize`.

Option A: rename to be honest about what it does
```diff
-def _safe_loads(data: bytes) -> Any:
-    """Deserialize pickle data."""
+def _deserialize(data: bytes) -> Any:
+    """Deserialize pickle data (no sandboxing — caller must trust the source)."""
     return pickle.loads(data)  # noqa: S301
```

code_puppy/tools/file_operations.py (1)
469-484: Validation runs inside the open file handle — minor inefficiency; also, the `itertools` import should be at module level.

Two observations:

1. The `start_line`/`num_lines` validation at lines 469–474 occurs after the file is already opened at line 468. Moving it before the `open()` call would avoid the unnecessary file I/O on invalid input.
2. `import itertools` at line 478 is inside the function body. This is a stdlib module and is typically imported at the top of the file for clarity and to avoid repeated import-lookup overhead on every call with a line range.

♻️ Suggested: move validation before open, and move import to module level

At the top of the file (with other stdlib imports):
```diff
+import itertools
```

In `_read_file`, restructure so validation precedes the file open:
```diff
 def _read_file(
     context: RunContext,
     file_path: str,
     start_line: int | None = None,
     num_lines: int | None = None,
 ) -> ReadFileOutput:
     file_path = os.path.abspath(os.path.expanduser(file_path))
     if not os.path.exists(file_path):
         error_msg = f"File {file_path} does not exist"
         return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
     if not os.path.isfile(file_path):
         error_msg = f"{file_path} is not a file"
         return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
+    if start_line is not None and start_line < 1:
+        error_msg = "start_line must be >= 1 (1-based indexing)"
+        return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
+    if num_lines is not None and num_lines < 1:
+        error_msg = "num_lines must be >= 1"
+        return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
     try:
         with open(file_path, "r", encoding="utf-8", errors="surrogateescape") as f:
-            if start_line is not None and start_line < 1:
-                error_msg = "start_line must be >= 1 (1-based indexing)"
-                return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
-            if num_lines is not None and num_lines < 1:
-                error_msg = "num_lines must be >= 1"
-                return ReadFileOutput(content=error_msg, num_tokens=0, error=error_msg)
             if start_line is not None and num_lines is not None:
-                import itertools
                 start_idx = start_line - 1
```

code_puppy/model_factory.py (1)
304-317: Specify `encoding="utf-8"` on file operations.

Lines 308–312 open files without an explicit encoding. On some platforms (notably Windows with older Python), the default encoding may not be UTF-8, which could corrupt the config if non-ASCII characters are ever introduced.

Proposed fix
```diff
 with open(
-    pathlib.Path(__file__).parent / "models.json", "r"
+    pathlib.Path(__file__).parent / "models.json", "r", encoding="utf-8"
 ) as src:
-    with open(models_path, "w") as target:
+    with open(models_path, "w", encoding="utf-8") as target:
         target.write(src.read())
```

tests/test_reopenable_async_client.py (1)
387-398: Consider asserting that `temp_client.close()` is called.

The production code's `finally` block (line 189 of `reopenable_async_client.py`) ensures the temporary client is closed. Adding `mock_temp_client.close.assert_called_once()` would verify this cleanup isn't accidentally removed.

♻️ Suggested assertion
```diff
 result = client.build_request("GET", "https://example.com")
 assert result is mock_request
 mock_temp_client.build_request.assert_called_once()
+mock_temp_client.close.assert_called_once()
```

code_puppy/mcp_/circuit_breaker.py (1)
215-216: Docstring says "no lock" — consider clarifying to "caller must hold `_sync_lock`".

The current docstring "synchronous, no lock" could be misread as "no synchronization needed." Since these helpers rely on the caller holding `_sync_lock`, a clearer phrasing would prevent misuse.

✏️ Suggested wording
```diff
 def _on_success_sync(self, checked_state: CircuitState | None = None) -> None:
-    """Handle successful operation (synchronous, no lock)."""
+    """Handle successful operation (synchronous, caller must hold _sync_lock)."""
```
```diff
 def _on_failure_sync(self, checked_state: CircuitState | None = None) -> None:
-    """Handle failed operation (synchronous, no lock)."""
+    """Handle failed operation (synchronous, caller must hold _sync_lock)."""
```

code_puppy/version_checker.py (1)
16-21: Consider using `packaging.version` for more robust version parsing.

`_version_tuple` will return `None` for valid PEP 440 versions with pre-release/post-release/dev suffixes (e.g., "1.2.3rc1", "1.2.3.post1"). The `version_is_newer` function silently returns `False` in these cases without fallback, potentially missing legitimate version updates. The `versions_are_equal` function is safer, falling back to string comparison.

The `packaging` library (available as a transitive dependency) handles all PEP 440 version forms correctly:

♻️ Optional refactor using packaging.version
```diff
+from packaging.version import Version, InvalidVersion
+
 -def _version_tuple(version_str):
-    """Convert version string to tuple of ints for proper comparison."""
-    try:
-        return tuple(int(x) for x in version_str.split("."))
-    except (ValueError, AttributeError):
-        return None
+def _parse_version(version_str):
+    """Parse version string into a comparable Version object."""
+    try:
+        return Version(version_str)
+    except (InvalidVersion, TypeError):
+        return None

 def version_is_newer(latest, current):
     """Return True if latest version is strictly newer than current."""
-    latest_tuple = _version_tuple(normalize_version(latest))
-    current_tuple = _version_tuple(normalize_version(current))
-    if latest_tuple is None or current_tuple is None:
+    latest_v = _parse_version(normalize_version(latest))
+    current_v = _parse_version(normalize_version(current))
+    if latest_v is None or current_v is None:
         return False
-    return latest_tuple > current_tuple
+    return latest_v > current_v
```

code_puppy/tools/command_runner.py (1)
1255-1286: Dead code in the `_run_command_inner` exception handler.

`stdout` and `stderr` can never exist in `locals()` here — this function only calls `run_in_executor`, which returns a `ShellCommandOutput`, not raw strings. The `if "stdout" not in locals()` checks always evaluate to `True`, and the subsequent truncation blocks (lines 1264–1276) are unreachable because `stdout`/`stderr` will always be `None`.

♻️ Simplified exception handler
```diff
 except Exception as e:
     if not silent:
         emit_error(traceback.format_exc(), message_group=group_id)
-    if "stdout" not in locals():
-        stdout = None
-    if "stderr" not in locals():
-        stderr = None
-
-    # Apply line length limits to stdout/stderr if they exist
-    truncated_stdout = None
-    if stdout:
-        stdout_lines = stdout.split("\n")
-        truncated_stdout = "\n".join(
-            [_truncate_line(line) for line in stdout_lines[-256:]]
-        )
-
-    truncated_stderr = None
-    if stderr:
-        stderr_lines = stderr.split("\n")
-        truncated_stderr = "\n".join(
-            [_truncate_line(line) for line in stderr_lines[-256:]]
-        )
     return ShellCommandOutput(
         success=False,
         command=command,
         error=f"Error executing command {str(e)}",
-        stdout=truncated_stdout,
-        stderr=truncated_stderr,
+        stdout=None,
+        stderr=None,
         exit_code=-1,
         timeout=False,
     )
```

code_puppy/mcp_/health_monitor.py (1)
144-146: `last_check_time.pop` is outside the lock — acceptable here but inconsistent.

`consecutive_failures.pop` is inside `self._lock` (line 145) but `last_check_time.pop` (line 146) is outside. This is safe because `stop_monitoring` awaits task cancellation first (lines 137–141), so no concurrent writer exists. But for consistency and future-proofing, consider moving it inside the lock.
```python
with _SESSION_LOCK:
    _SESSION_AGENTS_CACHE[session_id] = agent_name
    cache_snapshot = dict(_SESSION_AGENTS_CACHE)
_save_session_data(cache_snapshot)
```
Good pattern: snapshot under lock, persist outside.
Taking cache_snapshot inside the lock and calling _save_session_data outside minimizes lock hold time — well done.
Minor concern: concurrent saves can clobber the temp file. Since persistence now happens outside the lock, two threads calling set_current_agent concurrently could race in _save_session_data on the shared .tmp temp file path (line 170). Consider either holding the lock through the save, or using a unique temp file name (e.g., via tempfile.NamedTemporaryFile).
Proposed fix using a unique temp file
```diff
 def _save_session_data(sessions: dict[str, str]) -> None:
     session_file = _get_session_file_path()
     try:
         session_file.parent.mkdir(parents=True, exist_ok=True)
         cleaned_sessions = _cleanup_dead_sessions(sessions)
-        temp_file = session_file.with_suffix(".tmp")
-        with open(temp_file, "w", encoding="utf-8") as f:
-            json.dump(cleaned_sessions, f, indent=2)
-        temp_file.replace(session_file)
+        import tempfile
+        fd, tmp_path = tempfile.mkstemp(
+            dir=session_file.parent, suffix=".tmp"
+        )
+        try:
+            with os.fdopen(fd, "w", encoding="utf-8") as f:
+                json.dump(cleaned_sessions, f, indent=2)
+            Path(tmp_path).replace(session_file)
+        except BaseException:
+            Path(tmp_path).unlink(missing_ok=True)
+            raise
     except (IOError, OSError):
         pass
```
```python
finally:
    if self.log_file is not None:
        try:
            self.log_file.close()
        except Exception:
            pass
        self.log_file = None
```
Monitor thread should not close log_file — risk of premature close and double-close.
self.log_file is the append-mode handle returned by start() and passed to stdio_client as errlog (line 201). The subprocess writes its stderr to this handle. When the monitor thread exits (normally or via exception), this finally block closes the handle while the subprocess may still be writing to it.
Additionally, stop() (lines 124–129) also flushes and closes self.log_file, creating a double-close race with this finally block.
The monitor thread only reads from the log file (via its own with open(...) at line 89). It should not own the lifecycle of the write handle. Remove the finally block here and let stop() be the sole owner of log_file cleanup.
Proposed fix
```diff
-finally:
-    if self.log_file is not None:
-        try:
-            self.log_file.close()
-        except Exception:
-            pass
-        self.log_file = None
```
```python
def __del__(self) -> None:
    """Warn if there are still running monitoring tasks on garbage collection."""
    if self.monitoring_tasks:
        logger.warning(
            f"HealthMonitor garbage collected with {len(self.monitoring_tasks)} "
            f"active monitoring tasks. Use 'async with' or call close() to "
            f"prevent orphaned tasks."
        )
```
__del__ may raise AttributeError if __init__ failed before assigning monitoring_tasks.
If the constructor raises before line 74 (self.monitoring_tasks = {}), the finalizer will attempt to access self.monitoring_tasks and raise an AttributeError, masking the original error.
🛡️ Proposed defensive fix
```diff
 def __del__(self) -> None:
     """Warn if there are still running monitoring tasks on garbage collection."""
-    if self.monitoring_tasks:
+    if getattr(self, "monitoring_tasks", None):
         logger.warning(
             f"HealthMonitor garbage collected with {len(self.monitoring_tasks)} "
             f"active monitoring tasks. Use 'async with' or call close() to "
             f"prevent orphaned tasks."
         )
```
| "gpt-5.2": { | ||
| "type": "openai", | ||
| "name": "gpt-5.1", | ||
| "context_length": 272000, | ||
| "supported_settings": ["temperature", "top_p", "reasoning_effort", "verbosity"], | ||
| "supports_xhigh_reasoning": false | ||
| }, |
name field still says "gpt-5.1" — likely copy-paste oversight.
The top-level key was updated to "gpt-5.2" but the inner name on line 56 still reads "gpt-5.1". The codex variant on line 63 was correctly updated to "gpt-5.2-codex". If this is meant to point to a new model version, the name should be updated too.
Proposed fix
```diff
 "gpt-5.2": {
     "type": "openai",
-    "name": "gpt-5.1",
+    "name": "gpt-5.2",
     "context_length": 272000,
```
| "claude-4-6-opus": { | ||
| "type": "anthropic", | ||
| "name": "claude-opus-4-5", | ||
| "context_length": 200000, | ||
| "supported_settings": ["temperature", "top_p"] | ||
| "supported_settings": ["temperature", "extended_thinking", "budget_tokens", "interleaved_thinking"] | ||
| }, |
🧩 Analysis chain
🌐 Web query:
Anthropic Claude models versions list claude-opus-4-6
💡 Result:
Claude Opus model versions (API model IDs)
| Marketing name | Claude API model ID (exact) | Notes |
|---|---|---|
| Claude Opus 4.6 | claude-opus-4-6 | Current Opus "latest" alias (no snapshot suffix shown). [1][2] |
| Claude Opus 4.5 | claude-opus-4-5-20251101 | Snapshot-dated Opus 4.5. [3] |
| Claude Opus 4.1 | claude-opus-4-1-20250805 | Snapshot-dated Opus 4.1. [3] |
| Claude Opus 4.0 ("Claude Opus 4") | claude-opus-4-20250514 | Snapshot-dated Opus 4.0. [3] |
Opus 4.6 announcement date: Feb 5, 2026. [2]
Sources: Anthropic Claude API docs (models overview + deprecations) and Anthropic announcement. [1][2][3]
claude-4-6-opus has name: "claude-opus-4-5" — duplicate of the 4-5 entry.
This entry is identical to claude-4-5-opus (lines 90–95). The name field should be "claude-opus-4-6" to point to Anthropic's Claude Opus 4.6 model.
Proposed fix
```diff
 "claude-4-6-opus": {
     "type": "anthropic",
-    "name": "claude-opus-4-5",
+    "name": "claude-opus-4-6",
     "context_length": 200000,
```
```python
pickle_data = pickle.dumps(history)
signature = _sign_data(pickle_data)
tmp_pickle = paths.pickle_path.with_suffix(".tmp")
with tmp_pickle.open("wb") as pickle_file:
    pickle_file.write(_HEADER_MAGIC + signature + pickle_data)
tmp_pickle.replace(paths.pickle_path)
```
Atomic pickle write lacks cleanup and fsync, unlike the pattern in storage.py.
save_accounts in storage.py (lines 264-278) wraps the temp-file write in a try/except BaseException to clean up on failure and calls f.flush() + os.fsync() before os.replace. This save_session implementation does neither — a crash mid-write leaves an orphan .tmp file, and without fsync the data may not be durable before the rename.
Proposed fix: add fsync and cleanup
```diff
 pickle_data = pickle.dumps(history)
 signature = _sign_data(pickle_data)
 tmp_pickle = paths.pickle_path.with_suffix(".tmp")
-with tmp_pickle.open("wb") as pickle_file:
-    pickle_file.write(_HEADER_MAGIC + signature + pickle_data)
-tmp_pickle.replace(paths.pickle_path)
+try:
+    with tmp_pickle.open("wb") as pickle_file:
+        pickle_file.write(_HEADER_MAGIC + signature + pickle_data)
+        pickle_file.flush()
+        os.fsync(pickle_file.fileno())
+    tmp_pickle.replace(paths.pickle_path)
+except BaseException:
+    try:
+        tmp_pickle.unlink(missing_ok=True)
+    except OSError:
+        pass
+    raise
```
```python
tmp_metadata = paths.metadata_path.with_suffix(".tmp")
with tmp_metadata.open("w", encoding="utf-8") as metadata_file:
    json.dump(metadata.as_serialisable(), metadata_file, indent=2)
tmp_metadata.replace(paths.metadata_path)
```
Same missing cleanup/fsync issue for the metadata temp file.
Consistent with the pickle write above, the metadata atomic write should also flush, fsync, and clean up on failure.
Proposed fix
```diff
 tmp_metadata = paths.metadata_path.with_suffix(".tmp")
-with tmp_metadata.open("w", encoding="utf-8") as metadata_file:
-    json.dump(metadata.as_serialisable(), metadata_file, indent=2)
-tmp_metadata.replace(paths.metadata_path)
+try:
+    with tmp_metadata.open("w", encoding="utf-8") as metadata_file:
+        json.dump(metadata.as_serialisable(), metadata_file, indent=2)
+        metadata_file.flush()
+        os.fsync(metadata_file.fileno())
+    tmp_metadata.replace(paths.metadata_path)
+except BaseException:
+    try:
+        tmp_metadata.unlink(missing_ok=True)
+    except OSError:
+        pass
+    raise
```
…OS default assertions

- Implement abstract provider_url property on AntigravityStreamingResponse (returns None)
- Fix test_config_extended_part1/part2 to expect True as default for get_use_dbos()
terminal_screenshot() returns a ToolReturn object on success (for multimodal model consumption), not a plain dict. The callers in terminal_command_tools.py were subscripting it like a dict, causing TypeError: 'ToolReturn' object is not subscriptable.

- Use isinstance checks to handle both ToolReturn and dict returns
- Extract screenshot_path from ToolReturn.metadata on success
- Remove stale base64_image references (binary content lives in ToolReturn.content, not as raw base64 in a dict)
- Update test mock to use ToolReturn matching real behavior
pydantic-ai 1.56.0 added provider_url as an abstract property on StreamedResponse. GeminiStreamingResponse was missing it, causing "Can't instantiate abstract class" errors at runtime.

- Add _provider_url_str field defaulting to None
- Implement provider_url property returning _provider_url_str
- Pass _base_url when constructing the streaming response
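A sketch of the shape of the fix; in the real code the class subclasses pydantic-ai's StreamedResponse rather than standing alone, and the field is populated from _base_url:

```python
class GeminiStreamingResponse:  # really: GeminiStreamingResponse(StreamedResponse)
    def __init__(self, base_url: str | None = None) -> None:
        # Field added by the fix, defaulting to None when no base URL is known.
        self._provider_url_str: str | None = base_url

    @property
    def provider_url(self) -> str | None:
        # Satisfies the abstract property added in pydantic-ai 1.56.0.
        return self._provider_url_str
```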
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@code_puppy/tools/browser/terminal_command_tools.py`:
- Around line 213-223: The code assumes ToolReturn.metadata is a dict and calls
.get on it which can raise AttributeError if metadata is None; update the
handling of screenshot_result (the ToolReturn case in the block that sets
result["screenshot_path"] and result["media_type"]) to safely access metadata by
using a safe default (e.g., metadata = screenshot_result.metadata or {}; then
metadata.get("screenshot_path")) or use getattr(screenshot_result, "metadata")
or None and guard before calling .get; ensure result["screenshot_path"] is set
to None or a sensible default when metadata is absent and keep the existing
dict-handling branch unchanged.
- Around line 413-423: The screenshot-handling block duplicates logic from
run_terminal_command and misses the None guard for ToolReturn.metadata; update
the branch handling screenshot_result (and the one in run_terminal_command) to
treat screenshot_result.metadata as optional (e.g., metadata =
screenshot_result.metadata or {}) before accessing "screenshot_path", and
refactor by extracting a helper function (e.g.,
_extract_screenshot_info(screenshot_result)) that returns {"screenshot_path":
..., "media_type": "image/png"} for successful ToolReturn or dict results and {}
otherwise, then call that helper in both places to remove duplication.
🧹 Nitpick comments (3)
code_puppy/agents/event_stream_handler.py (1)
42-48: Consider narrowing the exception handler around `create_task`.

`asyncio.create_task` will raise `RuntimeError` if there's no running event loop. The broad `except Exception` on line 47 silently swallows this (and any other failure) at `debug` level, which could mask real issues. This is pre-existing, but since you're touching related async code, worth considering logging at `warning` level or catching `RuntimeError` specifically.
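A hedged sketch of the narrower handler (the helper name and shape are hypothetical, not the actual event_stream_handler code):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

def fire_and_forget(coro):
    """Schedule coro on the running loop; warn instead of silently swallowing."""
    try:
        return asyncio.create_task(coro)
    except RuntimeError:  # no running event loop
        logger.warning("create_task failed: no running event loop")
        coro.close()  # avoid a 'coroutine was never awaited' warning
        return None
```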
code_puppy/gemini_model.py (1)

648-668: SSE stream parsing silently swallows `JSONDecodeError`.

If the API returns malformed JSON in a chunk, the `except json.JSONDecodeError: continue` on line 668 silently drops it. This is reasonable for SSE resilience, but consider logging at `debug` level so streaming parse issues aren't completely invisible during troubleshooting.

Optional: log malformed SSE chunks
```diff
 try:
     yield json.loads(json_str)
 except json.JSONDecodeError:
+    logger.debug("Skipping malformed SSE chunk: %s", json_str[:200])
     continue
```

tests/tools/browser/test_terminal_command_tools.py (1)
607-633: Missing test for `wait_for_terminal_output` with a `ToolReturn` screenshot result.

There is a test for `run_terminal_command` with a `ToolReturn`-based screenshot (line 61), but no equivalent test for `wait_for_terminal_output` with `capture_screenshot=True` returning a `ToolReturn`. The new ToolReturn branch at lines 413–418 of the source file is untested. Consider adding a test similar to `test_run_command_success_with_screenshot` that mocks `terminal_screenshot` to return a `ToolReturn` and asserts `screenshot_path` and `media_type` on the `wait_for_terminal_output` result.
```python
if isinstance(screenshot_result, ToolReturn):
    # Success: ToolReturn with metadata
    result["screenshot_path"] = screenshot_result.metadata.get(
        "screenshot_path"
    )
    result["media_type"] = "image/png"
elif isinstance(screenshot_result, dict) and screenshot_result.get(
    "success"
):
    result["screenshot_path"] = screenshot_result.get("screenshot_path")
    result["media_type"] = "image/png"
```
Potential AttributeError if ToolReturn.metadata is None.
If terminal_screenshot() returns a ToolReturn whose metadata field is None (e.g., constructed without metadata), calling .get("screenshot_path") on None will raise AttributeError. Add a guard or default.
Proposed fix
```diff
 if isinstance(screenshot_result, ToolReturn):
     # Success: ToolReturn with metadata
-    result["screenshot_path"] = screenshot_result.metadata.get(
-        "screenshot_path"
-    )
-    result["media_type"] = "image/png"
+    metadata = screenshot_result.metadata or {}
+    result["screenshot_path"] = metadata.get("screenshot_path")
+    result["media_type"] = "image/png"
```
```python
if isinstance(screenshot_result, ToolReturn):
    # Success: ToolReturn with metadata
    result["screenshot_path"] = screenshot_result.metadata.get(
        "screenshot_path"
    )
    result["media_type"] = "image/png"
elif isinstance(screenshot_result, dict) and screenshot_result.get(
    "success"
):
    result["screenshot_path"] = screenshot_result.get("screenshot_path")
    result["media_type"] = "image/png"
```
Same metadata guard needed here; consider extracting shared screenshot-handling logic.
This block duplicates the ToolReturn handling from run_terminal_command (lines 213–223). Apply the same None guard for metadata, and consider extracting a small helper to avoid the duplication.
Proposed helper extraction
```python
def _extract_screenshot_info(screenshot_result) -> Dict[str, Any]:
    """Extract screenshot info from either ToolReturn or dict result."""
    if isinstance(screenshot_result, ToolReturn):
        metadata = screenshot_result.metadata or {}
        return {"screenshot_path": metadata.get("screenshot_path"), "media_type": "image/png"}
    elif isinstance(screenshot_result, dict) and screenshot_result.get("success"):
        return {"screenshot_path": screenshot_result.get("screenshot_path"), "media_type": "image/png"}
    return {}
```

Then in both call sites:
```diff
-if isinstance(screenshot_result, ToolReturn):
-    ...
-elif isinstance(screenshot_result, dict) and screenshot_result.get(...):
-    ...
+result.update(_extract_screenshot_info(screenshot_result))
```
…ing single return Update handle_thinking_delta, handle_text_delta, and handle_tool_call_delta call sites in gemini_model.py and antigravity_model.py to iterate over returned events, matching the updated API that yields multiple events.
Fixed TypeError in both gemini_model.py and antigravity_model.py where handle_tool_call_delta() was incorrectly iterated over with a for loop. The method returns ModelResponseStreamEvent | None, not an iterator.
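A sketch of the corrected call-site shape; the handler type and helper name are illustrative, not the actual gemini_model.py code:

```python
from typing import Any, Iterable, Optional

def drain_tool_call_deltas(handler: Any, deltas: Iterable[Any]):
    """handle_tool_call_delta returns ModelResponseStreamEvent | None,
    so test its result for None instead of iterating over it."""
    for delta in deltas:
        event: Optional[Any] = handler.handle_tool_call_delta(delta)
        if event is not None:
            yield event
```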