Qwen: Qwen3 235B A22B Thinking 2507
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant strengthens structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It always operates in a reasoning mode, emitting chain-of-thought that it terminates with a </think> tag, and is designed for high-token outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models on structured reasoning benchmarks.
235B (22B active)
Parameters
262K tokens
Context Window
Apache 2.0
License
Jul 25, 2025
Released
💰 Pricing
Input
$0.15
per 1M tokens
Output
$1.50
per 1M tokens
API Available
This model is accessible via API for integration into your applications.
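Because the thinking variant closes its chain-of-thought with a </think> tag (the opening <think> is typically injected by the chat template, so raw completions may contain only the closing tag), API consumers usually want to separate the reasoning trace from the final answer. A minimal sketch, assuming that exact tag format; the sample completion text is illustrative, not real model output:

```python
def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a raw completion into (reasoning, final_answer).

    Assumes the model emits its chain-of-thought first and ends it
    with a </think> tag, as described for this thinking-only variant.
    """
    reasoning, sep, answer = completion.partition("</think>")
    if not sep:  # no tag found: treat the whole text as the answer
        return "", completion.strip()
    return reasoning.strip(), answer.strip()

# Mock completion in the format described above (illustrative only).
raw = "First, 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68.\n</think>\n17 * 24 = 408."
thought, answer = split_reasoning(raw)
print(answer)  # -> 17 * 24 = 408.
```

In production you would apply this to the message content returned by your chosen endpoint; some gateways already strip or surface the reasoning in a separate field, so check your provider's response schema first.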
⭐ Related Models
Claude 4 Opus
Anthropic
Anthropic's most powerful reasoning model with extended thinking. Excels at complex analysis, multi-step math, advanced coding, and nuanced writing.
Claude 4 Sonnet
Anthropic
Balanced intelligence and speed. Strong reasoning with faster response times and lower cost than Opus.
o3
OpenAI
OpenAI's most powerful reasoning model. Uses chain-of-thought to solve complex math, science, and coding problems.