Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and released under the Apache 2.0 license, it features a 128k token context window and supports both Mistral-style function calling and XML output formats. Designed for agentic coding workflows, Devstral Small 1.1 is optimized for tasks such as codebase exploration, multi-file edits, and integration into autonomous development agents like OpenHands and Cline. It achieves 53.6% on SWE-Bench Verified, surpassing all other open models on this benchmark, while remaining lightweight enough to run on a single RTX 4090 GPU or an Apple silicon machine. The model uses a Tekken tokenizer with a 131k vocabulary and is deployable via vLLM, Transformers, Ollama, LM Studio, and other OpenAI-compatible runtimes.
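Since the model is served through OpenAI-compatible runtimes, a minimal sketch of a chat-completions request payload looks like the following. The model identifier matches the Hugging Face release name, but the endpoint URL, system prompt, and temperature are assumptions — adjust them for your own vLLM or Ollama deployment.

```python
# Sketch: building an OpenAI-style chat-completions payload for a
# Devstral Small 1.1 server. Endpoint URL, system prompt, and sampling
# settings below are illustrative assumptions, not vendor defaults.
import json

def build_chat_request(prompt, model="mistralai/Devstral-Small-2507",
                       system="You are a software engineering agent."):
    """Assemble an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.15,
    }

payload = build_chat_request("List the files that define the CLI entry point.")
print(json.dumps(payload, indent=2))
# POST this body to e.g. http://localhost:8000/v1/chat/completions
# on a local vLLM server started with this model.
```

The same payload shape works unchanged against Ollama's and LM Studio's OpenAI-compatible endpoints; only the base URL and model name differ.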
Quality Index: 15.2 (258th of 442, top 59%)
Coding Index: 12.1 (236th of 352, top 67%)
Math Index: 29.3 (186th of 268, top 69%)
Price (blended, per 1M tokens): $0.15 (258th cheapest, 52% below median, top 40%)
Speed: 188 tok/s (top 8%)
TTFT: 0.34s
Context window: 131K tokens (145th largest, top 63%)
Pricing (per 1M tokens):
Input: $0.10
Output: $0.30
Blended: $0.15
Cheaper than 60% of models; median price is $0.31/1M tokens.
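The blended figure above is consistent with a weighted average of the input and output prices. The 3:1 input-to-output token ratio below is an assumption (it reproduces the quoted $0.15 exactly); check the source site's methodology for the actual weighting.

```python
# Sketch of deriving a "blended" per-token price from split input/output
# pricing. The 75/25 input/output weighting is an assumed usage mix,
# chosen because it reproduces the $0.15 blended figure quoted above.
INPUT_PRICE = 0.10   # $ per 1M input tokens
OUTPUT_PRICE = 0.30  # $ per 1M output tokens

def blended_price(input_price, output_price, input_share=0.75):
    """Weighted average price per 1M tokens for a given input-token share."""
    return input_price * input_share + output_price * (1 - input_share)

print(round(blended_price(INPUT_PRICE, OUTPUT_PRICE), 4))  # 0.15
```

With this weighting, 0.75 × $0.10 + 0.25 × $0.30 = $0.075 + $0.075 = $0.15 per 1M tokens.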
Estimated usage cost: $0.15 daily / $4.50 monthly
Throughput: 188 tokens/sec (faster than 92% of models)
TTFT: 0.34 seconds (faster than 56% of models)
Market median throughput: 46 tok/s (312% faster)
Median TTFT: 0.42s (18% faster)
Throughput per dollar: 1252 tok/s per $/1M
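The comparison figures above follow from simple arithmetic on the stats listed on this page. A small sketch, using only numbers quoted here; the one-point differences from the published 312% and 1252 values are rounding in the site's underlying (unrounded) medians and prices.

```python
# Reproducing the speed-comparison arithmetic from the figures on this
# page. Small discrepancies vs the quoted percentages come from the
# source site rounding its medians before display.
THROUGHPUT = 188.0        # tok/s
MEDIAN_THROUGHPUT = 46.0  # tok/s
TTFT = 0.34               # seconds to first token
MEDIAN_TTFT = 0.42
BLENDED_PRICE = 0.15      # $ per 1M tokens

speed_vs_median = (THROUGHPUT - MEDIAN_THROUGHPUT) / MEDIAN_THROUGHPUT * 100
ttft_vs_median = (MEDIAN_TTFT - TTFT) / MEDIAN_TTFT * 100
throughput_per_dollar = THROUGHPUT / BLENDED_PRICE

print(f"{speed_vs_median:.0f}% faster than median throughput")  # ~309%
print(f"{ttft_vs_median:.0f}% faster than median TTFT")         # ~19%
print(f"{throughput_per_dollar:.0f} tok/s per $/1M")            # ~1253
```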
Context window: 131K tokens (larger than 37% of models)