GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks, and supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5.1, Codex is more steerable, adheres more closely to developer instructions, and produces cleaner, higher-quality code.

Reasoning effort can be adjusted with the `reasoning.effort` parameter; see the [docs here](https://openrouter.ai/docs/use-cases/reasoning-tokens#reasoning-effort-level).

Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also accepts multimodal inputs such as images or screenshots for UI development, and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.
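As a minimal sketch of how that parameter is passed, assuming the OpenRouter chat completions endpoint and an `openai/gpt-5.1-codex` model slug (check the linked docs for the exact slug and the supported effort levels):

```python
# Hedged sketch: request with an explicit reasoning effort level via OpenRouter.
# The model slug below is an assumption; confirm it on the model page.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "openai/gpt-5.1-codex",  # assumed slug
        "messages": [
            {"role": "user", "content": "Find the bug in this function: ..."}
        ],
        # reasoning.effort trades latency for deeper reasoning on hard tasks
        "reasoning": {"effort": "high"},
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```

Lower effort levels make sense for quick interactive edits; higher levels suit long refactors where latency matters less.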
| Metric | Value | Rank | Percentile |
| --- | --- | --- | --- |
| Quality Index | 43.1 | 26th of 442 | Top 6% |
| Coding Index | 36.6 | 40th of 352 | Top 11% |
| Math Index | 95.7 | 7th of 268 | Top 3% |
| Price/1M (blended) | $3.44 | 576th cheapest (1009% above median) | Top 87% |
| Speed | 127 tok/s | | Top 20% |
| TTFT | 4.02s | | |
| Context Window | 400K | 41st largest | Top 16% |
| Pricing | Per 1M tokens |
| --- | --- |
| Input | $1.25 |
| Output | $10.00 |
| Blended | $3.44 |

Cheaper than 13% of models; the median price is $0.31/1M tokens. Daily: $3.44 · Monthly: $103.14.
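The blended figure can be reproduced from the input and output rates under the common 3:1 input-to-output token weighting; that weighting is an assumption here, not something stated on this page:

```python
# Sketch: reproduce the blended price assuming a 3:1 input:output token mix.
input_price = 1.25    # $ per 1M input tokens
output_price = 10.00  # $ per 1M output tokens

blended = (3 * input_price + output_price) / 4
print(f"${blended:.2f} per 1M tokens")  # -> $3.44, matching the table above
```

The monthly estimate is likewise roughly 30x the daily one ($3.44 × 30 ≈ $103).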
Speed comparison:

| Speed metric | Value | Comparison |
| --- | --- | --- |
| Throughput | 127 tok/s | Faster than 80% of models; market median is 46 tok/s (179% faster) |
| TTFT | 4.02s | Faster than 10% of models; median TTFT is 0.42s (862% slower) |
| Throughput/Dollar | 37 tok/s per $/1M | |
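As a quick check, the Throughput/Dollar number follows directly from the throughput and blended price above:

```python
# Sketch: throughput per dollar from the stats above.
throughput = 127      # tokens/sec
blended_price = 3.44  # $ per 1M tokens

print(round(throughput / blended_price))  # -> 37 tok/s per $/1M
```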
| Context | Value | Note |
| --- | --- | --- |
| Context Window | 400K tokens | Larger than 84% of models |
| Max Output | 128K tokens | 32% of context |
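The max-output share is simply the ratio of the two limits:

```python
# Sketch: max output as a share of the context window.
context_window = 400_000  # tokens
max_output = 128_000      # tokens

print(f"{max_output / context_window:.0%}")  # -> 32%
```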