โ† Back to all models
๐Ÿ’ฌ

Inception: Mercury 2

inception · Text Generation

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, it produces and refines multiple tokens in parallel, achieving over 1,000 tokens/sec on standard GPUs. Mercury 2 is more than 5x faster than leading speed-optimized LLMs such as Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. It supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output. It is built for coding workflows where latency compounds, for real-time voice and search, and for agent loops, and it is OpenAI API compatible. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury-2).

🧮 Parameters: Undisclosed
📏 Context Window: 128K tokens
🔒 License: Proprietary
📅 Released: Mar 4, 2026

💰 Pricing

Input: $0.25 per 1M tokens
Output: $0.75 per 1M tokens

🔌 API Available

This model is accessible via API for integration into your applications.
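Because the model is OpenAI API compatible, requests use the standard chat-completions payload shape. A minimal sketch of building such a payload, assuming the model identifier is `mercury-2` (the real identifier may differ; check Inception's API documentation):

```python
import json

def build_chat_request(prompt: str, model: str = "mercury-2") -> dict:
    """Assemble an OpenAI-style chat-completions payload.

    The model name "mercury-2" is an assumption for illustration,
    not confirmed by this page.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Request JSON output, matching the model's schema-aligned
        # JSON output feature.
        "response_format": {"type": "json_object"},
    }

payload = build_chat_request("Summarize this commit as JSON.")
print(json.dumps(payload, indent=2))
```

The same payload can be sent with any OpenAI-compatible client by pointing its base URL at Inception's endpoint, so existing tooling works without code changes.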