Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token via 4-of-256 expert routing. It excels at creative writing, storytelling, role-play, chat, and real-time voice assistance, outperforming typical reasoning models in these areas. We're also introducing newer agentic capabilities: it was trained to navigate agent harnesses like OpenCode, Cline, and Kilo Code, and to handle complex toolchains and long, constraint-heavy prompts. The architecture natively supports context windows up to 512K tokens; the Preview API currently serves 128K context using 8-bit quantization for practical deployment. Trinity-Large-Preview reflects Arcee's efficiency-first design philosophy: a production-oriented frontier model with open weights and permissive licensing, suitable for real-world applications and experimentation.
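The "4-of-256" figure means each token's router scores all 256 experts but only the top 4 are activated, which is why just 13B of the 400B parameters run per token. A minimal sketch of top-k gating (illustrative only, not Arcee's actual router; function and variable names are hypothetical):

```python
import math

def moe_route(router_logits, k=4):
    """Top-k gating for a sparse MoE layer: select the k highest-scoring
    experts and softmax-normalize their weights, so only those k expert
    networks execute for this token."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    top = ranked[:k]
    # Softmax over the selected experts only (max-subtracted for stability).
    m = max(router_logits[i] for i in top)
    exps = [math.exp(router_logits[i] - m) for i in top]
    total = sum(exps)
    gates = [e / total for e in exps]
    return top, gates

# 4-of-256 routing: each token activates only 4 of 256 experts.
logits = [((i * 37) % 101) / 101 for i in range(256)]  # deterministic dummy scores
experts, weights = moe_route(logits, k=4)
```

The token's output is then the gate-weighted sum of the 4 selected experts' outputs, which is how the model keeps frontier-scale capacity while paying only active-parameter compute.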
Price/1M: $0.00 (1st cheapest; 100% below median; Top 27%)
Context Window: 131K (222nd largest; Top 64%)
Pricing (per 1M tokens)
Input: $0.00
Output: $0.00
Blended: $0.00
Cheaper than 73% of models. Median price is $0.31/1M tokens.
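A "blended" price is typically a weighted average of input and output rates; the weighting used on this page is not stated, so the sketch below assumes a common 3:1 input:output token mix (the function name and ratio are assumptions):

```python
def blended_price(input_per_m, output_per_m, input_share=0.75):
    """Blended $/1M tokens under an assumed 3:1 input:output token mix.
    The actual weighting used by this listing is not specified."""
    return input_per_m * input_share + output_per_m * (1 - input_share)

# Trinity-Large-Preview is currently free on both sides, so any
# weighting yields a blended price of $0.00.
blended_price(0.00, 0.00)
```

With nonzero rates the weighting matters: for example, $1.00 input and $3.00 output blend to $1.50/1M under this 3:1 assumption.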
Daily: $0.00
Monthly: $0.00
Context Window: 131K tokens (larger than 36% of models)