Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
**Pricing**

| Rate | Price per 1M tokens |
| --- | --- |
| Input | $0.05 |
| Output | $0.08 |
| Blended | $0.06 |

At a blended $0.06 per 1M tokens, Mistral Small 3 ranks as the 202nd-cheapest model tracked (top 30%): cheaper than 70% of models and 81% below the median price of $0.31/1M tokens. Estimated usage cost: $0.06 per day, about $1.72 per month.
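The blended figure above can be reproduced with a short sketch. The per-token rates come from the table; the 3:1 input-to-output token ratio used for the blend is an assumption, since the page does not state how the blended rate is weighted.

```python
# Sketch of the arithmetic behind the quoted prices.
# Rates are from the pricing table above; the 3:1 blend ratio is an assumption.
INPUT_RATE = 0.05   # USD per 1M input tokens
OUTPUT_RATE = 0.08  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the quoted rates."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# Blended rate assuming 3 input tokens for every output token:
blended = (3 * INPUT_RATE + 1 * OUTPUT_RATE) / 4
print(round(blended, 4))  # 0.0575, displayed as $0.06/1M
```

Under the same assumption, a steady $0.0575/1M daily spend comes to roughly $1.72 over 30 days, matching the monthly estimate shown.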
**Context Window**

| Metric | Value |
| --- | --- |
| Context window | 33K tokens |
| Max output | 16K tokens (50% of the context window) |

The 33K context window ranks 291st largest, larger than 9% of models.

[Chart: Context Window Comparison]