Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
Price/1M
$0.08
214th cheapest
75% below median
Top 32%
Context Window
262K
61st largest
Top 25%
Input
$0.07
per 1M tokens
Output
$0.10
per 1M tokens
Blended
$0.08
per 1M tokens
Cheaper than 68% of models. The median price is $0.31/1M tokens.
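The listed rates translate directly into per-request costs. A minimal sketch, assuming simple linear pricing at the Input/Output rates above (the weighting behind the "Blended" figure is not specified here):

```python
# Per-token rates derived from the listed prices ($0.07 and $0.10 per 1M tokens).
INPUT_PRICE = 0.07 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.10 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 10K-token prompt producing a 2K-token completion:
print(f"${request_cost(10_000, 2_000):.6f}")  # → $0.000900
```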
Daily
$0.08
Monthly
$2.35
Context Window
262K
tokens
Larger than 75% of models
Context Window Comparison