Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it well suited to industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).
Price/1M
$0.05
196th cheapest
84% below median
Top 29%
Context Window
131K
145th largest
Top 63%
Input
$0.05
per 1M tokens
Output
$0.05
per 1M tokens
Blended
$0.05
per 1M tokens
Cheaper than 71% of models. Median price is $0.31/1M tokens.
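Since input and output share the listed $0.05/1M rate, per-request cost is a single linear calculation. A minimal sketch (the token counts in the example are hypothetical, not from this page):

```python
# Listed rates for Llama 3.2 11B Vision: $0.05 per 1M tokens,
# the same for input and output.
PRICE_PER_1M_INPUT = 0.05   # USD per 1M input tokens
PRICE_PER_1M_OUTPUT = 0.05  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed per-token rates."""
    return (input_tokens * PRICE_PER_1M_INPUT
            + output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

# Example: 100K input tokens plus the 16K maximum output
print(f"${request_cost(100_000, 16_000):.4f}")  # → $0.0058
```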
Daily
$0.05
Monthly
$1.47
Context Window
131K
tokens
Larger than 37% of models
Max Output
16K
tokens
13% of context
Context Window Comparison (chart; range shown: 1.6K–268.7K tokens)
16–24 GB VRAM (e.g. RTX 4090 / M2 Max)