LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, which makes it well suited to phones, tablets, and laptops.
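A rough illustration of what the active/total parameter split above implies for per-token compute and for resident memory (a sketch; the 2-bytes-per-parameter figure assumes bf16/fp16 weights and ignores quantization and activations):

```python
# Sketch: per-token compute fraction for a sparse MoE model,
# using the parameter counts quoted above (8.3B total, ~1.5B active).
TOTAL_PARAMS = 8.3e9   # all experts must stay resident in memory
ACTIVE_PARAMS = 1.5e9  # parameters actually exercised per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"active per token: {active_fraction:.0%}")  # ~18%

# Memory still scales with TOTAL parameters; at 2 bytes/param (bf16, an
# assumption) the full weight set occupies roughly:
weights_gib = TOTAL_PARAMS * 2 / 2**30
print(f"weights: {weights_gib:.1f} GiB")
```

This is why an MoE model can run with roughly the per-token cost of a ~1.5B dense model while its memory footprint remains that of the full 8.3B parameter set.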
- Quality Index: 7.0 (426th of 442, top 97%)
- Coding Index: 2.3 (330th of 352, top 94%)
- Math Index: 25.3 (197th of 268, top 74%)
- Price: $0.00 per 1M tokens (1st cheapest, 100% below median, top 27%)
- Speed: 0 tok/s
- TTFT: 0.00s
- Context Window: 33K tokens (291st largest, top 91%)
Pricing (per 1M tokens):
- Input: $0.00
- Output: $0.00
- Blended: $0.00

Cheaper than 73% of models; the median price is $0.31 per 1M tokens.
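A "blended" price is typically a weighted average of the input and output rates; a 3:1 input:output token ratio is a common leaderboard convention, assumed here rather than stated on this page:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of input/output prices per 1M tokens.

    The 3:1 input:output weighting is an assumed convention,
    not taken from this page.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# With this model's listed rates ($0.00 in, $0.00 out) the blend is trivially $0.00.
print(blended_price(0.00, 0.00))  # 0.0
# A hypothetical model priced at $0.30 in / $0.60 out would blend to:
print(blended_price(0.30, 0.60))  # 0.375
```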
Estimated cost:
- Daily: $0.00
- Monthly: $0.00
- Speed: 0 tokens/sec (faster than 0% of models)
- TTFT: 0.00 seconds (faster than 61% of models)
- Market median speed: 46 tok/s (100% slower than median)
- Median TTFT: 0.42s (100% faster than median)
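The "100% slower" and "100% faster" figures above can be reproduced as a simple relative difference against the market median (a sketch; the site's exact rounding rules are an assumption):

```python
def pct_vs_median(value: float, median: float) -> float:
    """Relative difference from the median, as a percentage."""
    return (value - median) / median * 100

# Throughput: 0 tok/s vs. a 46 tok/s median -> -100%, i.e. 100% slower.
print(pct_vs_median(0, 46))       # -100.0
# TTFT: 0.00 s vs. a 0.42 s median -> also -100%, but for latency lower
# is better, so the same sign reads as "100% faster".
print(pct_vs_median(0.00, 0.42))  # -100.0
```

Both percentages hit exactly 100% only because the measured speed and TTFT on this card are zero, which suggests unpopulated benchmark data rather than a genuinely instantaneous model.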
Context Window: 33K tokens (291st largest; larger than 9% of models)
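The rank-to-percentile figures used throughout this card follow from a simple rank computation; a sketch, assuming roughly 320 models were compared for context window — the exact count for this metric isn't stated on the page:

```python
def larger_than_pct(rank: int, n_models: int) -> float:
    """Share of models strictly below a given rank (rank 1 = largest)."""
    return (n_models - rank) / n_models * 100

# "291st largest" and "larger than 9% of models" are mutually consistent
# if about 320 models were compared (an assumed count):
print(round(larger_than_pct(291, 320)))  # 9
```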