Qwen: Qwen2.5 Coder 7B Instruct
Qwen2.5-Coder-7B-Instruct is a 7B-parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates RoPE, SwiGLU, RMSNorm, and grouped-query attention (GQA), and supports context lengths of up to 128K tokens via YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding, giving it robust performance across programming languages and agentic coding workflows. The model is part of the Qwen2.5-Coder family and works well with tools like vLLM for efficient deployment. Released under the Apache 2.0 license.
7B
Parameters
33K tokens
Context Window
Apache 2.0
License
Apr 15, 2025
Released
💰 Pricing
Input
$0.03
per 1M tokens
Output
$0.09
per 1M tokens
API Available
This model is accessible via API for integration into your applications.
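The page does not document the API itself, but vLLM (mentioned above) and most hosted providers expose an OpenAI-compatible chat completions endpoint. Below is a minimal sketch of building such a request; the base URL, API key, and model identifier are placeholders you would replace with your provider's actual values.

```python
import json
from urllib import request

# Hypothetical endpoint and credentials -- substitute your provider's values.
BASE_URL = "https://api.example.com/v1"
API_KEY = "sk-..."

def build_request(prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat completions request for the model."""
    payload = {
        "model": "qwen/qwen2.5-coder-7b-instruct",  # model ID varies by provider
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Write a Python function that reverses a string.")
# To send it: request.urlopen(req) -- omitted here, since it requires a live endpoint.
```

Because the endpoint is OpenAI-compatible, the official `openai` Python client (pointed at the provider's base URL) works equally well.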
⭐ Related Models
Codestral
Mistral
Mistral's code-specialized model. Trained specifically for code generation, completion, and understanding.
Qwen 3
Alibaba
Alibaba's latest open-weight model family with hybrid thinking modes. Strong across coding, math, and multilingual tasks.
Qwen 2.5 72B
Alibaba
Mature and battle-tested open model. Excellent balance of performance, efficiency, and fine-tunability.