MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency. Compared to its predecessor, M2.1 delivers cleaner, more concise outputs and faster perceived response times. It shows leading multilingual coding performance across major systems and application languages, achieving 49.4% on Multi-SWE-Bench and 72.5% on SWE-Bench Multilingual, and serves as a versatile agent “brain” for IDEs, coding tools, and general-purpose assistance. To avoid degrading this model's performance, MiniMax highly recommends preserving reasoning between turns. Learn more about using reasoning_details to pass back reasoning in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks).
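A minimal sketch of what "preserving reasoning between turns" looks like in practice: keep the `reasoning_details` field from the assistant's previous response intact when you send the conversation back. Field names follow OpenRouter's chat-completions schema as described in the linked docs; the exact payload shapes here are illustrative, not authoritative.

```python
# Sketch: carry reasoning_details back on the next turn so the model's
# reasoning context is not lost between requests.

def build_followup_messages(history, assistant_msg, user_reply):
    """Append the prior assistant turn (with reasoning_details intact)
    and the new user turn to the running message list."""
    assistant_turn = {
        "role": "assistant",
        "content": assistant_msg["content"],
    }
    # Pass reasoning back verbatim -- stripping it can degrade M2.1.
    if "reasoning_details" in assistant_msg:
        assistant_turn["reasoning_details"] = assistant_msg["reasoning_details"]
    return history + [assistant_turn, {"role": "user", "content": user_reply}]

# Example with a mocked assistant response from a previous call.
prev = {
    "content": "Done.",
    "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
}
msgs = build_followup_messages(
    [{"role": "user", "content": "Refactor this function."}],
    prev,
    "Now add tests.",
)
```

The key point is that `reasoning_details` is copied through unmodified; the follow-up request then contains the full three-message history.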
| Metric | Value | Rank | Percentile |
|---|---|---|---|
| Quality Index | 39.4 | 46th of 442 | Top 10% |
| Coding Index | 32.8 | 62nd of 352 | Top 18% |
| Math Index | 82.7 | 55th of 268 | Top 21% |
| Price (blended) | $0.53/1M | 392nd cheapest (69% above median) | Top 58% |
| Speed | 48 tok/s | – | Top 49% |
| TTFT | 1.87s | – | – |
| Context Window | 197K | 131st largest | Top 38% |
| Pricing | per 1M tokens |
|---|---|
| Input | $0.30 |
| Output | $1.20 |
| Blended | $0.53 |

Cheaper than 42% of models. Median price is $0.31/1M tokens.

Estimated cost: $0.53 daily · $15.75 monthly.
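The blended and monthly figures are consistent with the usual conventions for these stats pages; a quick arithmetic check (the 3:1 input:output token mix and the 30-day month are assumptions on my part, not stated on the page):

```python
input_price = 0.30   # $ per 1M input tokens
output_price = 1.20  # $ per 1M output tokens

# Assumed 3:1 input:output token mix, a common blended-price convention.
blended = (3 * input_price + 1 * output_price) / 4
# 0.525 -> displayed as $0.53

# Assumed 30-day month at one blended-priced unit per day.
monthly = blended * 30
# 15.75 -> matches the $15.75 shown
```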
| Speed metric | Value | vs. market |
|---|---|---|
| Throughput | 48 tok/s | Faster than 51% of models (median 46 tok/s; 5% faster) |
| TTFT | 1.87 s | Faster than 14% of models (median 0.42 s; 348% slower) |
| Total response time | 43.80 s | Faster than 5% of models |
| Throughput/Dollar | 91 tok/s per $/1M | – |
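The throughput-per-dollar figure follows directly from the two numbers above: throughput divided by the blended price. A one-line check:

```python
throughput = 48        # tokens/sec
blended_price = 0.53   # $ per 1M tokens (blended)

tok_per_sec_per_dollar = throughput / blended_price
# 48 / 0.53 is about 90.6, displayed as 91
```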
Context Window: 197K tokens (larger than 62% of models).