โ† Back to all models
๐Ÿ’ฌ

Llama 4 Scout

Meta · Text Generation

An efficient mixture-of-experts (MoE) model with 109B total parameters (17B active per token). It fits on a single H100 GPU while delivering strong performance.
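As a rough sanity check on the single-GPU claim, the weights alone fit comfortably in an H100's 80 GB of memory under 4-bit quantization (the assumption here; other precisions scale proportionally):

```python
# Back-of-envelope memory estimate for the model weights.
# Assumption: int4 quantization, i.e. 0.5 bytes per parameter.
TOTAL_PARAMS = 109e9          # 109B total parameters (from the card above)
BYTES_PER_PARAM_INT4 = 0.5    # 4 bits = 0.5 bytes
H100_MEMORY_GB = 80           # a single H100 provides 80 GB of HBM

weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM_INT4 / 1e9
print(f"int4 weights: ~{weights_gb:.1f} GB of {H100_MEMORY_GB} GB available")
```

This leaves headroom for the KV cache and activations, though long contexts eat into it quickly.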

#moe #open-weight #efficient
🧮 Parameters: 109B
📏 Context Window: 512K tokens
🔓 License: Open Source
📅 Released: Apr 5, 2025

⚡ Strengths

✓ Fits on a single GPU
✓ Open weights
✓ 512K context
✓ Cost-effective deployment

🎯 Use Cases

Self-hosting · Cost-sensitive apps · Fine-tuning · Edge deployment
🔌 API Available

This model is accessible via API for integration into your applications.
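The card does not name a provider, but open-weight Llama models are commonly served behind OpenAI-compatible chat endpoints. A minimal sketch of assembling such a request, where the base URL and model id are placeholders, not real values:

```python
import json

# Hypothetical endpoint and model id -- substitute your provider's values.
BASE_URL = "https://api.example.com/v1"   # placeholder, not a real endpoint
MODEL_ID = "llama-4-scout"                # placeholder model identifier

def build_chat_request(prompt: str, api_key: str) -> dict:
    """Assemble an OpenAI-compatible chat-completions request (not sent here)."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        }),
    }

req = build_chat_request("Summarize MoE routing in one sentence.", "YOUR_KEY")
print(req["url"])
```

Sending it is one call with a client such as `requests` (`requests.post(req["url"], headers=req["headers"], data=req["body"])`); supported sampling and streaming options vary by provider.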