Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices such as phones, laptops, and tablets. It accepts multimodal inputs (text, visual data, and audio), enabling tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n selectively activates model parameters based on the task or device capabilities, significantly reducing runtime memory and compute requirements. The model was trained in over 140 languages and features a flexible 32K token context window, making it well suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/)
**Price/1M:** $0.00 (1st cheapest; 100% below median; top 27%)

**Context Window:** 8K (331st largest; top 96%)
| | Price per 1M tokens |
|---|---|
| Input | $0.00 |
| Output | $0.00 |
| Blended | $0.00 |

Cheaper than 73% of models. Median price is $0.31/1M tokens.
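The blended figure above combines input and output prices into a single per-1M-token number. A common convention is a weighted average at a 3:1 input-to-output token ratio; that ratio is an assumption here (the page does not state which weighting it uses), and the helper name is ours. Since Gemma 3n E4B-it is listed as free, any weighting yields $0.00:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per 1M tokens.

    The 3:1 input:output weighting is an assumed convention,
    not something this page specifies.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

print(blended_price(0.00, 0.00))  # Gemma 3n E4B-it: 0.0
```

For a paid model priced at, say, $4/1M input and $8/1M output, the same 3:1 weighting would give a $5.00 blended price.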
**Daily:** $0.00 · **Monthly:** $0.00
**Context Window:** 8K tokens (larger than 4% of models)

**Max Output:** 2K tokens (25% of context)
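The "25% of context" figure is simple arithmetic: the maximum output allowance divided by the context window. A quick check, assuming the K figures expand to the usual 2,048 and 8,192 tokens (the helper name is ours):

```python
def output_share(max_output_tokens: int, context_tokens: int) -> float:
    """Fraction of the context window available for generation."""
    return max_output_tokens / context_tokens

# 2K of an 8K window -> 0.25
print(f"{output_share(2_048, 8_192):.0%}")  # 25%
```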
*Context Window Comparison chart omitted.*