Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative sparse Mixture-of-Experts model from Mistral AI, fine-tuned by Mistral for chat and instruction following. Each layer contains 8 experts (feed-forward networks), for a total of roughly 47 billion parameters, of which only about 13 billion are active per token. #moe
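The routing idea behind the architecture can be illustrated with a toy sketch. This is not Mixtral's implementation; it is a minimal top-2 gating example with made-up tiny dimensions, showing why only a fraction of the parameters run per token.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # Mixtral has 8 expert FFNs per layer
TOP_K = 2         # only the top-2 experts process each token
D_MODEL = 16      # toy hidden size for illustration only

# Toy experts: one weight matrix each (real experts are full MLPs)
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1  # gating network

def moe_layer(x):
    """Route a single token vector through its top-2 experts."""
    logits = x @ router                    # one routing score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the 2 highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other 6 experts are
    # skipped entirely, which is why far fewer parameters are active per token
    # than the ~47B total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_layer(token)
print(out.shape)
```

The weighted combination keeps the layer differentiable while the hard top-2 selection keeps the per-token compute close to that of a much smaller dense model.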
47B
Parameters
33K tokens
Context Window
Apache 2.0
License
Dec 10, 2023
Released
💰 Pricing
Input
$0.54
per 1M tokens
Output
$0.54
per 1M tokens
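Since input and output share the same per-token rate, estimating a request's cost is a single multiplication. A quick sketch (the $0.54/1M figure is the rate listed above):

```python
PRICE_PER_M = 0.54  # USD per 1M tokens; same rate for input and output

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request at the listed rate."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M

# e.g. 500k prompt tokens plus 200k completion tokens
print(round(request_cost(500_000, 200_000), 4))  # 0.378
```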
API Available
This model is accessible via API for integration into your applications.
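As a sketch of what an integration might look like, the snippet below builds an OpenAI-style chat-completions request body. The model identifier and field names are assumptions that depend on your provider; it constructs the payload only and does not send a network request.

```python
import json

def build_chat_request(prompt: str,
                       model: str = "mistralai/mixtral-8x7b-instruct") -> dict:
    """Assemble a chat-completions payload (hypothetical model id; check
    your provider's docs for the exact identifier and endpoint URL)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request("Summarize mixture-of-experts in one sentence.")
print(json.dumps(payload, indent=2))
```

You would POST this JSON body to your provider's chat-completions endpoint with your API key in the `Authorization` header.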
⭐ Related Models
Claude 3.5 Sonnet
Anthropic
The model that defined a generation. Fast, smart, and incredibly capable across coding, analysis, and creative tasks.
Claude 3.5 Haiku
Anthropic
Ultra-fast and cost-effective. Best for high-volume tasks where speed matters more than peak intelligence.
GPT-4o Mini
OpenAI
Compact and affordable. Surprisingly capable for its price point, ideal for high-volume applications.