gpt-oss-120b
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, built for high-reasoning, agentic, and general-purpose production workloads. It activates 5.1B parameters per forward pass and, thanks to native MXFP4 quantization, runs on a single 80 GB H100 GPU. The model supports configurable reasoning effort (low, medium, or high), full access to its chain of thought, and native tool use, including function calling, web browsing, and structured outputs.
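The model is typically served behind an OpenAI-compatible API (runtimes such as vLLM and Ollama expose one). As a minimal sketch, assuming a local server at http://localhost:8000/v1 and the served model name openai/gpt-oss-120b (both placeholders, not taken from this page), a call that sets the reasoning level via the system prompt could look like this:

```python
# Minimal sketch: query gpt-oss-120b through an OpenAI-compatible endpoint.
# The base URL, API key, and model name below are assumptions for a local
# deployment, not values taken from this listing.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[
        # gpt-oss reads its reasoning effort from the system prompt;
        # "Reasoning: high" follows the model's documented convention.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Summarize the trade-offs of MoE models."},
    ],
)
print(response.choices[0].message.content)
```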
Pricing

    Input:   $0.00 per 1M tokens
    Output:  $0.00 per 1M tokens
    Blended: $0.00 per 1M tokens

At $0.00 per 1M tokens this is the 1st cheapest model listed (top 27%): 100% below the median price of $0.31 per 1M tokens and cheaper than 73% of models. Estimated spend: $0.00 daily, $0.00 monthly.
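The blended figure is trivially $0.00 here, but as a quick illustration of how such a number is usually derived, the sketch below computes a weighted average of the input and output prices; the 3:1 input-to-output token ratio is an assumed convention, not something stated on this page.

```python
# Sketch: derive a "blended" per-1M-token price as a weighted average of
# input and output prices. The 3:1 input:output ratio is an assumption.

def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per 1M tokens."""
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

print(blended_price(0.00, 0.00))  # gpt-oss-120b as listed here -> 0.0
print(blended_price(0.20, 0.60))  # hypothetical paid model -> 0.3
```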
Context Window

    Context window: 131K tokens (145th largest, top 63%; larger than 37% of models)
    Max output:     131K tokens (100% of the context window)

[Figure: Context Window Comparison]
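Because the maximum output spans 100% of the context window, prompt and completion tokens draw from the same 131K budget. Below is a minimal sketch of checking that budget before a call; it assumes tiktoken's o200k_base encoding approximates the model's tokenizer and that 131K means 131,072 tokens, both assumptions rather than facts from this page.

```python
# Sketch: budget a prompt against the shared 131K-token window.
# o200k_base is used as an approximation of the model's tokenizer.
import tiktoken

CONTEXT_WINDOW = 131_072  # "131K" as listed; exact figure is an assumption

enc = tiktoken.get_encoding("o200k_base")

def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    """True if the prompt plus the requested completion fits the window."""
    return len(enc.encode(prompt)) + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Explain MXFP4 quantization.", max_output_tokens=4_096))
```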