Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model has 480 billion total parameters, with 35 billion active per forward pass (8 of 160 experts). Pricing on the Alibaba endpoints is tiered by context length: once a request exceeds 128K input tokens, the higher rate applies.
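The tiered billing can be sketched as a simple rate selector. Note the page lists only one set of rates, so the higher-tier figure below is a placeholder, not a published price:

```python
def rate_per_million(input_tokens: int, base_rate: float, high_rate: float) -> float:
    """Pick the per-1M-token rate for a request.

    Alibaba's endpoints switch to a higher rate once a request exceeds
    128K input tokens; the actual higher-tier rates are not listed on
    this page, so both rates are supplied by the caller.
    """
    TIER_BOUNDARY = 128_000  # 128K input tokens
    return high_rate if input_tokens > TIER_BOUNDARY else base_rate

# Hypothetical example: $0.22/1M at or below the boundary,
# an assumed $0.44/1M above it.
cost_rate = rate_per_million(200_000, base_rate=0.22, high_rate=0.44)
```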
Price/1M (blended): $0.42 — 370th cheapest, 34% above median, top 54%
Context window: 262K tokens — 61st largest, top 25%
Per-1M-token pricing:
- Input: $0.22
- Output: $1.00
- Blended: $0.42

Cheaper than 46% of models; the median price is $0.31/1M tokens.
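The $0.42 blended figure is consistent with a weighted average over a 3:1 input-to-output token mix, a common convention for blended prices (an assumption here; the page does not state the ratio):

```python
input_price = 0.22   # $ per 1M input tokens
output_price = 1.00  # $ per 1M output tokens

# Weighted average assuming 3 input tokens for every output token (3:1 mix).
blended = (3 * input_price + 1 * output_price) / 4
# blended ≈ 0.415, matching the listed $0.42 blended price
```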
Daily: $0.42
Monthly: $12.45
Context window: 262K tokens — larger than 75% of models