OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay—often in under a minute.
Quality Index: 33.1 (80th of 442, Top 18%)
Coding Index: 25.6 (103rd of 352, Top 29%)
Math Index: 90.7 (22nd of 268, Top 9%)
Price (blended): $1.93 per 1M tokens (537th cheapest, 521% above median, Top 79%)
Speed: 148 tok/s (Top 15%)
TTFT: 17.25s
Context Window: 200K (108th largest, Top 37%)
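The Top-N% labels follow directly from rank over pool size. A quick recomputation from the ranks shown above (the one-decimal percentages are mine; the page's own labels appear to round slightly differently per metric):

```python
# Percentile position implied by each rank: rank / pool size.
for name, rank, pool in [("Quality", 80, 442),
                         ("Coding", 103, 352),
                         ("Math", 22, 268)]:
    pct = rank / pool * 100
    print(f"{name}: top {pct:.1f}%")
# Quality: top 18.1%, Coding: top 29.3%, Math: top 8.2%,
# matching the page's Top 18% / 29% / 9% labels up to rounding.
```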
Input: $1.10 per 1M tokens
Output: $4.40 per 1M tokens
Blended: $1.93 per 1M tokens
Cheaper than 21% of models. Median price is $0.31/1M tokens.
Daily: $1.93
Monthly: $57.75
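The blended figure is consistent with a common 3:1 input:output token weighting, and the monthly figure is the daily figure over a 30-day month. A quick check (the 3:1 ratio and the 1M-blended-tokens-per-day volume behind the daily cost are assumptions, not stated on this page):

```python
# Blended price under an assumed 3:1 input:output token mix.
input_price = 1.10   # $ per 1M input tokens
output_price = 4.40  # $ per 1M output tokens
blended = (3 * input_price + output_price) / 4
print(round(blended, 3))   # 1.925, shown on the page as $1.93

# Monthly cost over a 30-day month, assuming the daily figure
# corresponds to 1M blended tokens per day.
monthly = blended * 30
print(round(monthly, 2))   # 57.75
```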
Output speed: 148 tokens/sec (faster than 85% of models)
TTFT: 17.25 seconds (faster than 4% of models)
TTFT: 17.25 seconds (faster than 15% of models)
Market median speed: 46 tok/s (224% faster than median)
Median TTFT: 0.42s (4026% slower than median)
Throughput/Dollar: 77 tok/s per $/1M
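The comparison figures above are simple ratios of the displayed numbers. Recomputing them from the rounded values shown on this page (the small gaps from the page's 224% and 4026% suggest it divides by unrounded medians):

```python
# Output speed vs the market-median model (figures from this page).
speed, median_speed = 148.0, 46.0                # tok/s
print(round((speed / median_speed - 1) * 100))   # 222; page shows 224%

# Time to first token vs the median.
ttft, median_ttft = 17.25, 0.42                  # seconds
print(round((ttft / median_ttft - 1) * 100))     # 4007; page shows 4026%

# Throughput per dollar of blended price.
blended_price = 1.93                             # $ per 1M tokens
print(round(speed / blended_price))              # 77 tok/s per $/1M
```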
Speed Comparison
Context Window: 200K tokens (larger than 63% of models)
Max Output: 100K tokens (50% of the context window)