The top open-weight AI models you can run yourself.
Last updated: April 2026
Meta
Meta's flagship open-weight MoE model with 400B total parameters and 17B active parameters. Strong multilingual and coding performance.
DeepSeek
671B MoE model with only 37B active parameters. Open-weight, excels at math, coding, and Chinese language tasks.
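The total-versus-active parameter split in MoE entries like the ones above translates directly into memory and compute requirements: all weights must be resident, but each token only exercises the active subset. A minimal sizing sketch (the bytes-per-parameter values are assumptions that depend on quantization):

```python
def moe_footprint_gb(total_params_b: float, active_params_b: float,
                     bytes_per_param: float = 2.0) -> tuple[float, float]:
    """Rough sizing for a mixture-of-experts model.

    total_params_b / active_params_b: parameter counts in billions.
    bytes_per_param: 2.0 for BF16/FP16, 1.0 for 8-bit, 0.5 for 4-bit (assumed).
    Returns (resident_weight_gb, active_weight_gb): memory needed to hold all
    weights, and the portion of weights touched per token.
    """
    # billions of parameters * bytes per parameter = gigabytes
    return (total_params_b * bytes_per_param,
            active_params_b * bytes_per_param)

# 671B total / 37B active at BF16: ~1342 GB must be resident,
# but each token's forward pass touches only ~74 GB of weights.
total_gb, active_gb = moe_footprint_gb(671, 37)
```

This is why a 671B MoE can serve tokens far cheaper than a dense model of the same size, while still demanding multi-GPU memory just to load.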
Alibaba
Alibaba's latest open-weight model family with hybrid thinking modes. Strong across coding, math, and multilingual tasks.
Reasoning-specialized model trained with reinforcement learning. Shows chain-of-thought reasoning transparently.
Efficient MoE model with 109B total parameters. Fits on a single H100 GPU while delivering strong performance.
Moonshot AI
1T+-parameter MoE model with strong long-context and multi-step reasoning. Open weights, competitive with top models.
Reasoning-focused model that thinks step by step. Open-weight alternative to o1/o3 for reasoning tasks.
Mistral
Mistral's flagship with strong multilingual support, function calling, and code generation. Open weights.
Black Forest Labs
Open-weight version of Flux for local deployment and fine-tuning. Strong community and LoRA ecosystem.
Google
Google's open-weight model family. Small, efficient, and capable — runs on a single GPU or even a laptop.
Open-source video generation model supporting text-to-video and image-to-video with high temporal coherence.
MiniMax
456B parameter MoE model with 4M token context — one of the longest context windows available. Strong at long-doc tasks.
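Context windows this long are dominated by KV-cache memory, not weights: the cache stores a key and value tensor for every layer and every token. A sketch of the standard per-sequence estimate, using a purely hypothetical architecture config for illustration (not MiniMax's actual layer/head counts):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_elem: float = 2.0) -> float:
    """Per-sequence KV-cache size in GB.

    Each token stores a key and a value vector (factor of 2) per layer,
    each of size n_kv_heads * head_dim, at bytes_per_elem precision.
    """
    per_token_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token_bytes * context_tokens / 1e9

# Hypothetical config: 60 layers, 8 KV heads of dim 128, BF16.
# At a 4M-token context, the cache alone approaches a terabyte.
cache_gb = kv_cache_gb(60, 8, 128, 4_000_000)
```

Grouped-query attention (few KV heads) and quantized caches are the usual levers for making multi-million-token contexts feasible at all.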
Mature and battle-tested open model. Excellent balance of performance, efficiency, and fine-tunability.
OpenAI
Latest speech recognition model with improved accuracy across 100+ languages, real-time streaming, and speaker diarization.
Refined version of Llama 3.1 70B with performance approaching 405B while maintaining 70B efficiency.
Microsoft
Small but mighty. Microsoft's compact model punches well above its weight class in reasoning and coding.
The largest dense open-weight model. Competitive with GPT-4 class models across benchmarks.
Stability AI
Latest in the Stable Diffusion family. Open-weight with multiple size variants for different hardware.