Tongyi DeepResearch 30B A3B
Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters of which only about 3 billion are activated per token. It is optimized for long-horizon, deep information-seeking tasks and delivers state-of-the-art results on agentic search benchmarks including Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES, outperforming prior models on complex agentic search, reasoning, and multi-step problem-solving.
The model is trained with a fully automated synthetic data pipeline that scales across pre-training, fine-tuning, and reinforcement learning. Large-scale continual pre-training on diverse agentic data strengthens reasoning and keeps the model's knowledge current. Training finishes with end-to-end on-policy reinforcement learning using a customized Group Relative Policy Optimization (GRPO), with token-level policy gradients and negative-sample filtering for stable training.
At inference time, the model supports the standard ReAct paradigm for evaluating its core abilities, plus an IterResearch-based 'Heavy' mode that maximizes performance through test-time scaling. It is well suited to advanced research agents, tool use, and heavy inference workflows.
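For context, ReAct interleaves model reasoning with tool calls and observations until the model commits to an answer. The sketch below shows the shape of such a loop; `call_model`, the `search` tool, and the message format are illustrative assumptions, not part of Tongyi DeepResearch's released interface.

```python
# Minimal ReAct-style loop (illustrative sketch, not the official Tongyi
# DeepResearch harness). `call_model`, `search`, and the message format
# are hypothetical stand-ins.
import json

def search(query: str) -> str:
    """Hypothetical web-search tool; swap in a real retriever."""
    return f"(results for: {query})"

TOOLS = {"search": search}

def call_model(messages: list[dict]) -> dict:
    """Hypothetical model call. Expected to return either
    {'tool': name, 'args': {...}} or {'answer': text}."""
    raise NotImplementedError("connect this to an inference endpoint")

def react_loop(question: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = call_model(messages)          # Thought + Action from the model
        if "answer" in step:                 # model chose to finish
            return step["answer"]
        observation = TOOLS[step["tool"]](**step["args"])  # Act, then observe
        messages.append({"role": "assistant", "content": json.dumps(step)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "step budget exhausted"
```

The 'Heavy' mode differs in that it restructures the context each round (per IterResearch) rather than appending observations indefinitely, which is what makes the test-time scaling tractable over long horizons.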
Parameters: 30B total (~3B active per token)
Context Window: 131K tokens
License: Apache 2.0
Released: Sep 18, 2025
💰 Pricing
Input: $0.09 per 1M tokens
Output: $0.45 per 1M tokens
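At these rates, a request costs input_tokens/1M × $0.09 plus output_tokens/1M × $0.45. A minimal sketch with illustrative token counts:

```python
# Per-request cost at the listed rates (USD per 1M tokens).
INPUT_RATE, OUTPUT_RATE = 0.09, 0.45

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# Illustrative: a 50K-token research context producing a 4K-token report.
print(f"${request_cost(50_000, 4_000):.4f}")  # 0.0045 + 0.0018 = $0.0063
```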
API Available
This model is accessible via API for integration into your applications.
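A minimal sketch of one way to call the model from Python is below, assuming an OpenAI-compatible endpoint. The DashScope compatible-mode base URL is one plausible host; the model identifier and environment variable name are assumptions to verify against your provider's documentation.

```python
# Illustrative API call (assumes an OpenAI-compatible endpoint; the
# model ID below is a guess, confirm it in your provider's model list).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var name
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="tongyi-deepresearch-30b-a3b",  # assumed identifier
    messages=[{"role": "user", "content": "Survey recent work on agentic RL for web research."}],
)
print(response.choices[0].message.content)
```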
⭐ Related Models
Claude 3.5 Sonnet
Anthropic
The model that defined a generation. Fast, smart, and incredibly capable across coding, analysis, and creative tasks.
Claude 3.5 Haiku
Anthropic
Ultra-fast and cost-effective. Best for high-volume tasks where speed matters more than peak intelligence.
GPT-4o Mini
OpenAI
Compact and affordable. Surprisingly capable for its price point, ideal for high-volume applications.