
# Meta: Llama 3.2 11B Vision Instruct

Meta · Vision

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels at tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it well suited to industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md) for details. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Tags: `text+image->text`, `top-provider`
| Spec | Value |
|---|---|
| Parameters | 11B |
| Context Window | 131K tokens |
| License | Proprietary |
| Released | Sep 25, 2024 |

## Pricing

| Direction | Price |
|---|---|
| Input | $0.05 per 1M tokens |
| Output | $0.05 per 1M tokens |
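At these rates, request cost scales linearly with token counts. A minimal sketch of the arithmetic, with the per-1M-token prices hard-coded from the table above:

```python
# Prices taken from the pricing table above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.05
OUTPUT_PRICE_PER_M = 0.05

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt (image tokens included) with a 500-token reply.
print(f"${estimate_cost(2000, 500):.6f}")  # $0.000125
```

Note that image inputs are converted to tokens by the provider, so actual input token counts for vision requests depend on image size.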

## API Available

This model is accessible via API for integration into your applications.
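Many hosts serve Llama 3.2 Vision behind an OpenAI-compatible chat-completions interface. The sketch below only constructs the request body for an image-plus-text prompt; the model identifier string is an assumption that varies by provider, so check your host's documentation before sending the request.

```python
import json

# Hypothetical model identifier -- the exact string depends on your provider.
MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat-completions body with one text part and one image part."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

body = build_vision_request("Describe this image.", "https://example.com/photo.jpg")
print(json.dumps(body, indent=2))
```

To use it, POST `body` as JSON to your provider's chat-completions route with your API key in the `Authorization` header.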