NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model that activates just 12B parameters per token for maximum compute efficiency and accuracy in complex multi-agent applications. Built on a hybrid Mamba-Transformer Mixture-of-Experts architecture with multi-token prediction (MTP), it delivers over 50% higher token-generation throughput than leading open models. The model features a 1M-token context window for long-term agent coherence, cross-document reasoning, and multi-step task planning. Latent MoE enables calling 4 experts for the inference cost of only one, improving intelligence and generalization. Multi-environment RL training across 10+ environments delivers leading accuracy on benchmarks including AIME 2025, TerminalBench, and SWE-Bench Verified. Fully open, with weights, datasets, and recipes released under the NVIDIA Open License, Nemotron 3 Super allows easy customization and secure deployment anywhere, from workstation to cloud.
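For readers who want to try the model, here is a minimal inference sketch using Hugging Face Transformers. The repository ID `nvidia/Nemotron-3-Super` is an assumption for illustration only (the listing does not give a checkpoint path), and the exact loading flags may differ for this hybrid architecture.

```python
# Minimal inference sketch. The model ID below is hypothetical; substitute
# the real repository path once the checkpoint is published.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Nemotron-3-Super"  # assumed ID, not confirmed by the listing

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",      # let the checkpoint choose its native precision
    device_map="auto",       # shard across available GPUs (requires accelerate)
    trust_remote_code=True,  # hybrid Mamba-Transformer blocks may ship custom code
)

prompt = "Explain multi-token prediction in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```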
Price/1M: $0.00 (1st cheapest; 100% below median; top 27%)
Context window: 262K tokens (61st largest; top 25%)
Input: $0.00 per 1M tokens
Output: $0.00 per 1M tokens
Blended: $0.00 per 1M tokens
Cheaper than 73% of models; the median price is $0.31 per 1M tokens.
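As a worked example of how a blended figure like this is typically derived: the page does not state its weighting, so the common 3:1 input-to-output token mix used below is an assumption.

```python
# Blended price per 1M tokens, assuming a 3:1 input:output token mix.
# The weighting is an assumption; the listing does not state it.
input_price = 0.00   # $ per 1M input tokens
output_price = 0.00  # $ per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"Blended: ${blended:.2f} per 1M tokens")  # $0.00 here either way
```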
Estimated cost: $0.00 daily, $0.00 monthly.
Context window: 262K tokens (larger than 75% of models)
Max output: 262K tokens (100% of the context window)
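Because the maximum output equals the full context window, the generation budget for a request is simply the context size minus the prompt length. A minimal sketch follows; the 262,144-token figure assumes "262K" is the usual 2^18 rounding, which the page does not confirm.

```python
# Output-token budget: prompt and generation share one 262K context window.
# 262,144 (2**18) is assumed from the "262K" rounding on the page.
CONTEXT_WINDOW = 262_144

def output_budget(prompt_tokens: int) -> int:
    """Tokens left for generation after the prompt is counted."""
    return max(CONTEXT_WINDOW - prompt_tokens, 0)

print(output_budget(10_000))  # 252144 tokens available for output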
[Context Window Comparison chart not reproduced]