Tongyi DeepResearch 30B A3B

alibaba · Text Generation

Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters of which only about 3 billion are activated per token. It is optimized for long-horizon, deep information-seeking tasks and delivers state-of-the-art results on benchmarks such as Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES, outperforming prior models on complex agentic search, reasoning, and multi-step problem solving.

Training combines a fully automated synthetic data pipeline for scalable pre-training, fine-tuning, and reinforcement learning with large-scale continual pre-training on diverse agentic data, which strengthens reasoning and keeps the model's knowledge current. Reinforcement learning is end-to-end and on-policy, using a customized Group Relative Policy Optimization (GRPO) with token-level gradients and negative-sample filtering for stable training (sketched below).

At inference time the model supports a standard ReAct mode for evaluating core abilities and an IterResearch-based 'Heavy' mode that scales test-time compute for maximum performance. It is well suited to advanced research agents, tool use, and heavy inference workflows.
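The training description mentions a customized Group Relative Policy Optimization with token-level gradients and negative-sample filtering. The snippet below is a minimal, hypothetical sketch of what such an objective could look like; the function name, tensor shapes, clipping threshold, and the specific filtering rule (dropping rollouts whose group-relative advantage is negative) are assumptions for illustration, not Tongyi Lab's actual training code.

```python
import torch

def grpo_loss(token_logprobs, old_logprobs, rewards, mask, clip_eps=0.2):
    """Sketch of a GRPO-style objective.

    token_logprobs, old_logprobs, mask: [group, seq_len]; rewards: [group].
    """
    # Group-relative advantage: each rollout's reward minus the group mean,
    # normalized by the group standard deviation.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)

    # Negative-sample filtering (assumed rule): drop rollouts with negative
    # group-relative advantage so they do not contribute gradients.
    keep = (adv >= 0).float()

    # Token-level importance ratio and clipped surrogate objective.
    ratio = torch.exp(token_logprobs - old_logprobs)
    surr1 = ratio * adv.unsqueeze(-1)
    surr2 = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv.unsqueeze(-1)
    per_token = torch.min(surr1, surr2) * mask * keep.unsqueeze(-1)

    # Token-level gradients: average over all valid tokens of kept rollouts
    # rather than scoring each sequence as a single unit.
    denom = (mask * keep.unsqueeze(-1)).sum().clamp(min=1.0)
    return -per_token.sum() / denom
```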

Tags: text->text · top provider
🧮 Parameters: 30B total (~3B active per token)
📏 Context Window: 131K tokens
🔒 License: Proprietary
📅 Released: Sep 18, 2025

💰 Pricing

Input: $0.09 per 1M tokens
Output: $0.45 per 1M tokens
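With these rates, per-request cost is a simple linear function of token counts. The helper below is a hypothetical convenience function using the listed prices as defaults; the token counts are example values chosen for illustration.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.09, output_rate: float = 0.45) -> float:
    """Estimate cost in USD from per-1M-token rates (defaults from the listing)."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a long research query with 200K prompt tokens and 4K completion tokens.
print(f"${request_cost(200_000, 4_000):.4f}")  # -> $0.0198
```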

🔌 API Available

This model is accessible via API for integration into your applications.
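A typical integration is a standard chat-completions call. The sketch below assumes an OpenAI-compatible endpoint; the base URL, API key environment variable, and model slug are placeholders rather than values from this listing, so check the provider's API documentation for the real ones.

```python
import os
from openai import OpenAI

# Placeholder endpoint and key variable; replace with the provider's values.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["EXAMPLE_API_KEY"],
)

response = client.chat.completions.create(
    model="alibaba/tongyi-deepresearch-30b-a3b",  # hypothetical model slug
    messages=[
        {"role": "user",
         "content": "Survey recent work on agentic reinforcement learning for web research."}
    ],
)
print(response.choices[0].message.content)
```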