GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5.1, Codex is more steerable, adheres closely to developer instructions, and produces cleaner, higher-quality code. Reasoning effort can be adjusted with the `reasoning.effort` parameter; read the [docs here](https://openrouter.ai/docs/use-cases/reasoning-tokens#reasoning-effort-level).

Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also supports multimodal inputs such as images or screenshots for UI development, and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.
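As a minimal sketch of adjusting reasoning effort, the request below builds an OpenRouter chat-completion payload with the `reasoning.effort` parameter set. The model slug `openai/gpt-5.1-codex` and the exact payload shape are assumptions based on OpenRouter's documented conventions; check the docs linked above before relying on them.

```python
# Sketch: calling GPT-5.1-Codex through the OpenRouter chat completions
# endpoint with an explicit reasoning effort level (assumed conventions).
import json
import urllib.request


def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a chat-completion payload with a reasoning effort level."""
    return {
        "model": "openai/gpt-5.1-codex",  # assumed model slug
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},  # e.g. "low" | "medium" | "high"
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload to OpenRouter and return the parsed JSON response."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_request("Refactor this function to remove global state.", effort="medium")
```

Lower effort levels trade reasoning depth for latency, which fits the fast interactive responses described above; higher levels suit long autonomous runs.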
Quality Index
43.1
26th of 442
Top 6%
Code Index
36.6
40th of 352
Top 11%
Math Index
95.7
7th of 268
Top 3%
Price/1M
$3.44
576th cheapest
1009% above median
Top 87%
Speed
127 tok/s
Top 20%
TTFT
4.02s
Context Window
400K
41st largest
Top 16%
Input
$1.25
per 1M tokens
Output
$10.00
per 1M tokens
Blended
$3.44
per 1M tokens
Cheaper than 13% of models. Median price is $0.31/1M tokens.
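The blended $3.44/1M figure is consistent with the listed input and output prices under a 3:1 input-to-output token weighting (an assumption about how this page blends prices, but it reproduces the number shown):

```python
# Blended price per 1M tokens, assuming a 3:1 input:output token mix.
input_price = 1.25    # $ per 1M input tokens (listed)
output_price = 10.00  # $ per 1M output tokens (listed)

blended = (3 * input_price + 1 * output_price) / 4
print(round(blended, 2))  # → 3.44

# Throughput per dollar: the listed 127 tok/s divided by the blended price,
# matching the ~37 tok/s per $/1M shown further down after rounding.
print(round(127 / blended, 1))  # → 36.9
```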
Daily
$3.44
Monthly
$103.14
127
tokens/sec
Faster than 80% of models
4.02
seconds
Faster than 10% of models
4.02
seconds
Faster than 26% of models
Market Median
46 tok/s
179% faster
Median TTFT
0.42s
862% slower
Throughput/Dollar
37
tok/s per $/1M
Speed Comparison
Context Window
400K
tokens
Larger than 84% of models
Max Output
128K
tokens
32% of context