LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design to reduce communication overhead and achieve high throughput, while maintaining training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows up to 128K tokens and shows competitive performance across reasoning, coding, instruction following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
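The variable activation range (18.6B–31.3B per token) comes from routing: different tokens select different experts, and some selections do less work than others. The sketch below is a hypothetical illustration of that idea, not LongCat-Flash's actual architecture; all expert counts, parameter sizes, and names are made up for the example.

```python
import random

# Illustrative (made-up) configuration for a single MoE layer:
TOTAL_EXPERTS = 64         # real experts that carry parameters
ZERO_EXPERTS = 8           # "no-op" experts that contribute no compute
PARAMS_PER_EXPERT = 0.4e9  # parameters per real expert (hypothetical)
TOP_K = 8                  # experts the router selects per token

def activated_params(selected):
    """Parameters actually used for one token.

    Expert ids >= TOTAL_EXPERTS are the no-op experts, so they add
    nothing; tokens that route to them activate fewer parameters.
    """
    return sum(PARAMS_PER_EXPERT for e in selected if e < TOTAL_EXPERTS)

# Simulate routing a few tokens: each picks TOP_K distinct expert ids
# from the combined pool of real and no-op experts.
pool = list(range(TOTAL_EXPERTS + ZERO_EXPERTS))
for tok in range(3):
    selected = random.sample(pool, TOP_K)
    print(f"token {tok}: {activated_params(selected) / 1e9:.1f}B activated")
```

Because the per-token count depends on how many real experts each token hits, the activated-parameter figure is naturally a range with an average, as in the description above.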
Price/1M: $0.35 (346th cheapest, 13% above median, Top 52%)
Context Window: 131K (145th largest, Top 63%)
Input: $0.20 per 1M tokens
Output: $0.80 per 1M tokens
Blended: $0.35 per 1M tokens
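The listed blended price is consistent with a 3:1 input-to-output token mix, a common convention for blended pricing; the exact ratio this listing uses is an assumption.

```python
# Blended price as a weighted average of input and output prices,
# assuming a 3:1 input:output token ratio (assumption, not confirmed).
input_price = 0.20   # $ per 1M input tokens
output_price = 0.80  # $ per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.2f} per 1M tokens")  # $0.35, matching the listed figure
```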
Cheaper than 48% of models. Median price is $0.31/1M tokens.
Daily: $0.35
Monthly: $10.50
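The monthly figure appears to be the daily figure scaled by 30 days; that multiplier is an inference from the numbers shown, not stated by the listing.

```python
# Monthly cost from the daily cost, assuming a 30-day month (inferred).
daily = 0.35
monthly = daily * 30
print(f"${monthly:.2f}")  # $10.50, matching the listed monthly figure
```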
Context Window: 131K tokens (larger than 37% of models)
Max Output: 131K tokens (100% of context)
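The "131K" and "128K" figures describe the same window: presumably 131K is the exact count 131,072 tokens, i.e. 128 × 1024, rounded to thousands.

```python
# 128K tokens in the binary sense is 128 * 1024 = 131,072,
# which rounds to the "131K" shown in the stats.
context_tokens = 128 * 1024
print(context_tokens)  # 131072
```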