DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Reasoning behaviour can be controlled with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)

The model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarks show performance roughly on par with V3.1 across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs.
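A minimal sketch of how the reasoning toggle might look in a chat-completions request body. The model slug `deepseek/deepseek-v3.2-exp` is an assumption; check the model page for the exact identifier, and see the docs link above for the authoritative parameter shape.

```python
import json

# Assumed endpoint; sending a real request also needs an Authorization header
# with an OpenRouter API key, omitted here.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, enable_reasoning: bool) -> dict:
    """Build a chat-completions body; `reasoning.enabled` toggles reasoning."""
    return {
        "model": "deepseek/deepseek-v3.2-exp",  # assumed slug
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": enable_reasoning},
    }

body = build_request("Summarize sparse attention in one sentence.", True)
print(json.dumps(body, indent=2))
```

Setting `enabled` to `False` produces the same body with reasoning switched off, so the flag can be flipped per request.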
Price/1M: $0.30 (338th cheapest, 2% below median, Top 50%)

Context Window: 164K (135th largest, Top 41%)
Input: $0.27 per 1M tokens

Output: $0.41 per 1M tokens

Blended: $0.30 per 1M tokens

Cheaper than 50% of models. Median price is $0.31/1M tokens.
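A small sketch of how the separate input and output prices above translate into the cost of a single request; the token counts in the example are illustrative, not from the source.

```python
# Listed per-1M-token prices for this model.
INPUT_PRICE = 0.27   # USD per 1M input tokens
OUTPUT_PRICE = 0.41  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Exact USD cost of one request from separate input/output counts."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Example: a long-context request with 100K tokens in and 10K tokens out.
print(f"${request_cost(100_000, 10_000):.4f}")  # prints "$0.0311"
```

The blended $0.30/1M figure is a single-number summary; actual spend depends on each request's input/output split, which is why costing with the two separate rates is more precise.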
Daily: $0.30

Monthly: $9.15
Context Window: 164K tokens (larger than 59% of models)

Max Output: 66K tokens (40% of context)
Context Window Comparison