Explore creating a CLI helper that can parse the OpenAI Codex token-usage output format and compute detailed cost breakdowns. The primary use case is taking a typical Codex usage string such as:
```
Token usage: total=112,632 input=67,873 (+ 1,148,032 cached) output=44,759 (reasoning 37,312)
```
extracting the relevant token counts (input, cached, output, reasoning, etc.), looking up the model's pricing, and calculating and displaying the cost for each category as well as the total.
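A minimal parsing sketch in Python, assuming the field layout shown in the example above (the exact Codex output format isn't documented, so the regex may need adjusting as the format evolves):

```python
import re

# Regex for the Codex usage line; the cached and reasoning segments are
# treated as optional since not every run reports them. Field names and
# ordering are assumptions based on the example string.
USAGE_RE = re.compile(
    r"total=(?P<total>[\d,]+)\s+"
    r"input=(?P<input>[\d,]+)"
    r"(?:\s+\(\+\s*(?P<cached>[\d,]+)\s+cached\))?\s+"
    r"output=(?P<output>[\d,]+)"
    r"(?:\s+\(reasoning\s+(?P<reasoning>[\d,]+)\))?"
)

def parse_usage(line: str) -> dict:
    """Extract token counts from a 'Token usage: ...' line as plain ints."""
    m = USAGE_RE.search(line)
    if not m:
        raise ValueError(f"unrecognised usage line: {line!r}")
    return {k: int(v.replace(",", "")) for k, v in m.groupdict().items() if v}
```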
The tool should also support being invoked with individual token parameters (e.g. --input=… --output=… --reasoning=…) for situations where the usage string isn’t available or when doing custom cost estimation.
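The flag-based mode could be sketched with `argparse`; the flag names come from the issue text, while the types and zero defaults are assumptions:

```python
import argparse

# Minimal sketch of the flag-based invocation mode. Each flag defaults to 0
# so partial data (e.g. only --input and --output) still produces a breakdown.
parser = argparse.ArgumentParser(prog="codex-token-cost")
parser.add_argument("--input", type=int, default=0, help="input tokens")
parser.add_argument("--cached", type=int, default=0, help="cached input tokens")
parser.add_argument("--output", type=int, default=0, help="output tokens")
parser.add_argument("--reasoning", type=int, default=0, help="reasoning tokens")

# Example invocation with an explicit argv list (as the CLI would receive it):
args = parser.parse_args(["--input=67873", "--output=44759", "--reasoning=37312"])
```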
Model & Pricing Sources / Constraints
We need solid sources to get model/pricing info. Some options & caveats:
- We can look up pricing on the OpenAI API pricing page.
- Codex-specific pricing also appears on the OpenAI Codex pricing page.
- The 'Using Codex with your ChatGPT plan' doc helps clarify which subscription tiers include Codex and what usage limits apply.
- Currently there is no public API endpoint from OpenAI that returns model pricing programmatically.
  - Developers have confirmed this in forums.
  - There's a feature request in openai-python asking for this.
- Third-party/community maintained sources may have helpful data, but may lag or be inaccurate; these should be used with caution.
Proposal: Implementation Notes
- Maintain a pricing config (JSON/YAML) mapping relevant models (Codex, codex-mini, etc.) → cost per token category (input, output, cached, reasoning). Include metadata like “last verified” date.
- Support two input modes:
  - Full usage string in Codex format (as above)
  - Individual token flags (e.g. `--input`, `--cached`, `--output`, `--reasoning`) for custom or partial data.
- Handle unknown/new models gracefully (warn user, allow manual override of pricing, fallback to a default model or default rates).
- Optionally, a command or flag like `--update-pricing` that attempts to fetch and refresh pricing from official sources (scraping, or via an OpenAI API if/when one exists), with robust handling for errors and stale data.
- If relevant, support currency options or conversion (e.g. default USD, maybe allow AUD etc.) to make it usable locally.
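A rough sketch of the pricing config and cost computation described above. The rates, the `last_verified` date, and the assumption that reasoning tokens bill at the output rate are all illustrative placeholders, not verified OpenAI prices:

```python
import warnings

# Hypothetical pricing table: USD per 1M tokens per category, plus a
# "last verified" date as metadata. All numbers below are placeholders.
PRICING = {
    "codex-mini": {
        "last_verified": "2024-01-01",  # placeholder date
        "input": 1.50,
        "cached": 0.75,
        "output": 6.00,
        "reasoning": 6.00,  # assumption: reasoning billed at the output rate
    },
}
DEFAULT_MODEL = "codex-mini"

def cost_breakdown(model: str, tokens: dict) -> dict:
    """Return per-category USD costs plus a 'total' entry.

    Unknown models fall back to DEFAULT_MODEL with a warning, per the
    graceful-handling note above.
    """
    if model not in PRICING:
        warnings.warn(f"unknown model {model!r}; falling back to {DEFAULT_MODEL}")
        model = DEFAULT_MODEL
    rates = PRICING[model]
    costs = {
        cat: tokens.get(cat, 0) * rate / 1_000_000
        for cat, rate in rates.items()
        if cat != "last_verified"
    }
    costs["total"] = sum(costs.values())
    return costs
```

Keeping the table in a standalone JSON/YAML file (loaded at startup) rather than hard-coded would make `--update-pricing` and manual overrides straightforward.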
Example CLI Output Mockup
```
$ bin/codex-token-cost codex-mini "Token usage: total=112,632 input=67,873 (+ 1,148,032 cached) output=44,759 (reasoning 37,312)"
Model: codex-mini
Pricing: input $1.50 / 1M tokens, output $6.00 / 1M tokens, cached $0.75 / 1M tokens, reasoning $X / 1M tokens
• Input tokens: 67,873 → $0.102
• Cached tokens: 1,148,032 → $0.861
• Output tokens: 44,759 → $0.269
• Reasoning tokens: 37,312 → $0.XXX
─────────────────────────────────────────────
```