Conversation


@nm-openai nm-openai commented Dec 4, 2025

Summary

Briefly describe the changes and the goal of this PR. Make sure the PR title summarizes the changes effectively.

Motivation

Why are these changes necessary? How do they improve the cookbook?


For new content

When contributing new content, read through our contribution guidelines, and mark the following action items as completed:

  • I have added a new entry in registry.yaml (and, optionally, in authors.yaml) so that my content renders on the cookbook website.
  • I have conducted a self-review of my content based on the contribution guidelines:
    • Relevance: This content is related to building with OpenAI technologies and is useful to others.
    • Uniqueness: I have searched for related examples in the OpenAI Cookbook, and verified that my content offers new insights or unique information compared to existing documentation.
    • Spelling and Grammar: I have checked for spelling or grammatical mistakes.
    • Clarity: I have done a final read-through and verified that my submission is well-organized and easy to understand.
    • Correctness: The information I include is correct and all of my code executes successfully.
    • Completeness: I have explained everything fully, including all necessary references and citations.

We will rate each of these areas on a scale from 1 to 4, and will only accept contributions that score 3 or higher on all areas. Refer to our contribution guidelines for more details.


@Lupie Lupie left a comment

can we approve these changes real quick

"\n",
"Key improvements\n",
"\n",
"* Faster and more token efficient: Matches GPT-5.1-Codex performance on SWE-Bench Verified with \\~30% fewer thinking tokens. We recommend “medium” reasoning effort as a good all-around daily driver model. \n",

don't love "daily driver model" for anyone whose first language isn't English, but it's not a big dealio


are you cool with this instead, @nm ?

Suggested change
"* Faster and more token efficient: Matches GPT-5.1-Codex performance on SWE-Bench Verified with \\~30% fewer thinking tokens. We recommend medium reasoning effort as a good all-around daily driver model. \n",
"* Faster and more token efficient: Matches GPT-5.1-Codex performance on SWE-Bench Verified with \\~30% fewer thinking tokens. We recommend `medium` reasoning effort as a good all-around, regular-use model. \n",

"Key improvements\n",
"\n",
"* Faster and more token efficient: Matches GPT-5.1-Codex performance on SWE-Bench Verified with \\~30% fewer thinking tokens. We recommend “medium” reasoning effort as a good all-around daily driver model. \n",
"* Higher intelligence and long-running autonomy: Codex is very capable and will work autonomously for hours to complete your hardest tasks. You can use “high” or “xhigh” reasoning effort for your hardest tasks. \n",

Suggested change
"* Higher intelligence and long-running autonomy: Codex is very capable and will work autonomously for hours to complete your hardest tasks. You can use high or xhigh reasoning effort for your hardest tasks. \n",
"* Higher intelligence and long-running autonomy: Codex is very capable and will work autonomously for hours to complete your hardest tasks. You can use `high` or `xhigh` reasoning effort for your hardest tasks. \n",

note that we need to change to `xhigh` in the other doc too; I put "extra high"
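For anyone wiring these effort levels into a harness: the `medium` / `high` / `xhigh` values discussed in this thread correspond to the request's reasoning-effort setting. A minimal sketch of building such a request payload (the model name and accepted values are taken from the guide text under review; verify against the current API reference):

```python
def build_codex_request(task: str, effort: str = "medium") -> dict:
    """Assemble a request payload using the effort levels from the guide.

    "medium" is the recommended all-around default; use "high" or "xhigh"
    for the hardest tasks, as described in the guide.
    """
    if effort not in {"medium", "high", "xhigh"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "gpt-5.1-codex-max",  # model name as referenced in the guide
        "reasoning": {"effort": effort},
        "input": task,
    }
```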

"\n",
"* Faster and more token efficient: Matches GPT-5.1-Codex performance on SWE-Bench Verified with \\~30% fewer thinking tokens. We recommend “medium” reasoning effort as a good all-around daily driver model. \n",
"* Higher intelligence and long-running autonomy: Codex is very capable and will work autonomously for hours to complete your hardest tasks. You can use “high” or “xhigh” reasoning effort for your hardest tasks. \n",
"* First-class compaction support: Compaction enables multi-hour reasoning without hitting context limits, and longer continuous user conversations without needing to start new chat sessions. \n",

Suggested change
"* First-class compaction support: Compaction enables multi-hour reasoning without hitting context limits, and longer continuous user conversations without needing to start new chat sessions. \n",
"* First-class compaction support: Compaction enables multi-hour reasoning without hitting context limits and longer continuous user conversations without needing to start new chat sessions. \n",
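The compaction described in this bullet is handled by the model and harness; purely as a generic illustration of the idea (not the actual mechanism), client-side compaction typically means folding older turns into a short summary entry so the recent context stays small:

```python
def compact_history(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    """Illustrative sketch only: fold older turns into one placeholder entry.

    Real compaction (as described in the guide) is done by the model/harness;
    this just shows the general shape of the technique.
    """
    if len(messages) <= keep_recent:
        return messages
    older = messages[:-keep_recent]
    summary = {
        "role": "system",
        "content": f"[compacted {len(older)} earlier turns]",
    }
    return [summary] + messages[-keep_recent:]
```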

"\n",
"# Getting Started\n",
"\n",
"If you already have a working Codex implementation, this model should work well with relatively minimal updates, but if you’re starting with a prompt and set of tools that’s optimized for GPT-5-series models, or a third-party model, we recommend making more significant changes. The best reference implementation is our fully open source codex-cli agent, available on [GitHub](https://github.com/openai/codex); we recommend cloning this repo and using codex (or any coding agent) to ask questions about how things are implemented. However, by working with customers we’ve also learned how to customize agent harnesses beyond this particular implementation.\n",

Suggested change
"If you already have a working Codex implementation, this model should work well with relatively minimal updates, but if you’re starting with a prompt and set of tools that’s optimized for GPT-5-series models, or a third-party model, we recommend making more significant changes. The best reference implementation is our fully open source codex-cli agent, available on [GitHub](https://github.com/openai/codex); we recommend cloning this repo and using codex (or any coding agent) to ask questions about how things are implemented. However, by working with customers we’ve also learned how to customize agent harnesses beyond this particular implementation.\n",
"If you already have a working Codex implementation, this model should work well with relatively minimal updates, but if you’re starting with a prompt and set of tools optimized for GPT-5-series models, or a third-party model, we recommend making more significant changes. The best reference implementation is our fully open-source codex-cli agent, available on [GitHub](https://github.com/openai/codex). Clone this repo and use Codex (or any coding agent) to ask questions about how things are implemented. From working with customers, we’ve also learned how to customize agent harnesses beyond this particular implementation.\n",

"## General\n",
"\n",
"- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)\n",
"- If a tool exists for an action, prefer to use the tool instead of shell commands (e.g read_file over cat). Strictly avoid raw `cmd`/terminal when a dedicated tool exists. Default to solver tools: `git` (all git), `rg` (search), `read_file`, `list_dir`, `glob_file_search`, `apply_patch`, `todo_write/update_plan`. Use `cmd`/`run_terminal_cmd` only when no listed tool can perform the action.\n",

Suggested change
"- If a tool exists for an action, prefer to use the tool instead of shell commands (e.g read_file over cat). Strictly avoid raw `cmd`/terminal when a dedicated tool exists. Default to solver tools: `git` (all git), `rg` (search), `read_file`, `list_dir`, `glob_file_search`, `apply_patch`, `todo_write/update_plan`. Use `cmd`/`run_terminal_cmd` only when no listed tool can perform the action.\n",
"- If a tool exists for an action, prefer to use the tool instead of shell commands (e.g., `read_file` over `cat`). Strictly avoid raw `cmd`/terminal when a dedicated tool exists. Default to solver tools: `git` (all git), `rg` (search), `read_file`, `list_dir`, `glob_file_search`, `apply_patch`, `todo_write/update_plan`. Use `cmd`/`run_terminal_cmd` only when no listed tool can perform the action.\n",
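The search guideline quoted above (prefer `rg`, fall back when it is missing) can be sketched as a small helper; `rg --files-with-matches` and `grep -rl` both print only the paths of matching files:

```python
import shutil
import subprocess

def search_files(pattern: str, path: str = ".") -> list[str]:
    """List files containing `pattern`, preferring the much faster rg."""
    if shutil.which("rg"):
        # --fixed-strings treats the pattern literally rather than as a regex
        cmd = ["rg", "--files-with-matches", "--fixed-strings", pattern, path]
    else:
        # fallback when rg is not installed, as the guideline suggests
        cmd = ["grep", "-rlF", pattern, path]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.splitlines()
```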

@dkundel-openai dkundel-openai merged commit fc6e8c9 into main Dec 4, 2025
1 check passed
@dkundel-openai dkundel-openai deleted the nm/add-5-1-codex-max-prompting-guide branch December 4, 2025 17:59