Mercury 2
Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM).
Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel.
- Creator: Inception Labs
- Lifecycle: Active
- Context: 128.0K tokens
- Max output: 50.0K tokens
- Released: Mar 4, 2026
- Status: unknown
- Input: $0.25 / 1M tokens
- Output: $0.75 / 1M tokens
- Cached read: $0.03 / 1M tokens
- Cached write: — / 1M tokens
- Batch discount: —
- Source: OpenRouter
- Verified: Apr 5, 2026 (High)
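The per-token rates above can be turned into a quick cost estimate. A minimal sketch, assuming cached-read tokens are billed at the lower cached rate in place of the input rate (the exact billing mechanics are not stated on this page):

```python
# Cost estimate for a Mercury 2 call at the listed per-1M-token rates (USD).
RATES = {"input": 0.25, "output": 0.75, "cached_read": 0.03}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_read_tokens: int = 0) -> float:
    """Estimated USD cost; cached-read tokens replace same-priced input tokens."""
    billable_input = input_tokens - cached_read_tokens
    return (
        billable_input * RATES["input"]
        + output_tokens * RATES["output"]
        + cached_read_tokens * RATES["cached_read"]
    ) / 1_000_000

print(estimate_cost(400_000, 100_000))  # 0.175
```

For example, 400K input tokens plus 100K output tokens comes to $0.175 before any caching.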
Capabilities
- Modalities: text→text
- Capabilities: reasoning, promptCaching, functionCalling, structuredOutputs
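Since the model is listed via OpenRouter with structured-outputs support, a request could use the OpenAI-compatible `response_format` JSON-schema shape. This is a sketch of the request body only; the model slug `inception/mercury-2` is an assumption, not confirmed by this page:

```python
import json

def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat request constraining the reply to JSON."""
    return {
        "model": "inception/mercury-2",  # hypothetical slug
        "messages": [{"role": "user", "content": prompt}],
        # structuredOutputs: force the reply to match a JSON schema
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "answer",
                "schema": {
                    "type": "object",
                    "properties": {"answer": {"type": "string"}},
                    "required": ["answer"],
                },
            },
        },
    }

print(json.dumps(build_request("What is a diffusion LLM?"))[:30])
```

The body would then be POSTed to the provider's chat-completions endpoint with an API key.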