gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for reasoning-heavy, agentic, and general-purpose production use cases. It activates 5.1B parameters per token and is optimized to run on a single H100 GPU thanks to native MXFP4 quantization. The model supports configurable reasoning effort, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
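MXFP4 packs weights into 4-bit values in blocks of 32 that share an 8-bit scale, so each parameter costs roughly 4.25 bits. A back-of-the-envelope sketch (assuming, for simplicity, that all 117B parameters are quantized; in practice MXFP4 applies mainly to the MoE expert weights, so the real footprint is somewhat higher) shows why the model fits on a single 80 GB H100:

```python
# Rough memory estimate for MXFP4-quantized weights.
# MXFP4: blocks of 32 four-bit values share one 8-bit scale,
# so the effective cost is 4 + 8/32 = 4.25 bits per parameter.
# Assumption: every one of the 117B parameters is quantized.

total_params = 117e9
bits_per_param = 4 + 8 / 32            # 4-bit value + amortized shared scale
weight_gb = total_params * bits_per_param / 8 / 1e9

print(f"~{weight_gb:.1f} GB of weights")  # ~62.2 GB, under an H100's 80 GB
```

The ~18 GB of headroom on an 80 GB card is what leaves room for the KV cache and activations at inference time.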
Quality Index: 33.3 (79th of 442; Top 18%)
Coding Index: 28.6 (88th of 352; Top 25%)
Math Index: 93.4 (14th of 268; Top 5%)
Price: $0.26 per 1M tokens (316th cheapest; 15% below median; Top 47%)
Speed: 291 tok/s (Top 3%)
TTFT: 0.48s
Context Window: 131K tokens (145th largest; Top 63%)
Input: $0.15 per 1M tokens
Output: $0.60 per 1M tokens
Blended: $0.26 per 1M tokens
Cheaper than 53% of models; median price is $0.31/1M tokens.
Daily: $0.26
Monthly: $7.89
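The blended figure can be reproduced from the input and output prices. Assuming the common 3:1 input-to-output token ratio (an assumption here; the page does not state the mix it uses), the listed $0.26 falls out directly:

```python
# Blended price per 1M tokens, assuming a 3:1 input:output token mix.
input_price = 0.15    # $ per 1M input tokens
output_price = 0.60   # $ per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.2f} per 1M blended tokens")  # $0.26
```

Any weighted average in the 2:1 to 4:1 range lands near the same number, which is why blended prices are fairly insensitive to the exact ratio when output costs only a few times more than input.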
Throughput: 291 tokens/sec (faster than 97% of models)
TTFT: 0.48 seconds (faster than 45% of models)
End-to-end response time: 7.36 seconds (faster than 22% of models)
Market median throughput: 46 tok/s (538% faster than median)
Median TTFT: 0.42s (14% slower than median)
Throughput/Dollar: 1106 tok/s per $/1M
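The derived speed figures are consistent with the base numbers above. The 7.36 s end-to-end time matches TTFT plus generation time for roughly a 2,000-token response (the response length is an assumption here, inferred from the listed values), and throughput-per-dollar follows from the blended price; small discrepancies come from rounding in the displayed figures:

```python
# Reconstructing the derived speed metrics from the base numbers.
ttft = 0.48           # seconds to first token
throughput = 291.0    # output tokens per second
blended_price = 0.26  # $ per 1M blended tokens

# End-to-end time = TTFT + generation time.
# An assumed ~2,000-token response reproduces the listed 7.36 s.
tokens = 2000
end_to_end = ttft + tokens / throughput
print(f"{end_to_end:.2f} s end-to-end")        # ~7.35 s

# Throughput per dollar of blended price (listed as 1106;
# the gap suggests the page divides by an unrounded blended price).
tok_per_dollar = throughput / blended_price
print(f"{tok_per_dollar:.0f} tok/s per $/1M")  # ~1119
```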
Context Window: 131K tokens (larger than 37% of models)
Multi-GPU: 8x A100 / H100