losclouds


Mercury 2

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel.
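As a rough, hedged illustration of the idea (a toy sketch, not Mercury 2's actual algorithm), diffusion-style decoding starts from a fully masked sequence and refines all positions over a few parallel steps, rather than appending one token per step:

```python
import random

def toy_parallel_decode(length, vocab, steps=4, seed=0):
    """Toy sketch of diffusion-style decoding: begin with an all-masked
    sequence and fill in positions over a fixed number of refinement
    steps, instead of generating one token at a time left-to-right.
    In a real dLLM a learned denoiser predicts every position at once;
    here we simply commit a growing fraction of positions per step."""
    rng = random.Random(seed)
    seq = ["<mask>"] * length
    for step in range(steps):
        commit = int(length * (step + 1) / steps)  # positions finalized so far
        for i in range(commit):
            if seq[i] == "<mask>":
                seq[i] = rng.choice(vocab)
    return seq

out = toy_parallel_decode(6, ["a", "b", "c"])
print(out)  # all six positions are filled after the final step
```

The point of the sketch is only the shape of the loop: the number of model invocations is the (small, fixed) step count, not the sequence length, which is where the speed advantage comes from.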

Creator: Inception Labs
Lifecycle: Active
Context: 128.0K tokens
Max output: 50.0K tokens
Released: Mar 4, 2026
Status: unknown
Input: $0.25 / 1M tokens
Output: $0.75 / 1M tokens
Cached read: $0.03 / 1M tokens
Cached write: n/a
Batch discount: n/a
Source: OpenRouter
Verified: Apr 5, 2026 (High)
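A worked example of what these rates imply for a single request, assuming the OpenRouter prices listed above (check the provider for current pricing; the helper name is ours):

```python
def request_cost(input_tokens, output_tokens, cached_read_tokens=0):
    """Estimate request cost in USD from the listed Mercury 2 rates:
    $0.25 / 1M input, $0.75 / 1M output, $0.03 / 1M cached reads.
    Cached reads are billed at the cached rate instead of the input rate."""
    PER_M = 1_000_000
    uncached_input = input_tokens - cached_read_tokens
    return (uncached_input * 0.25
            + cached_read_tokens * 0.03
            + output_tokens * 0.75) / PER_M

# 100K input tokens (40K served from cache) plus 20K output tokens:
print(round(request_cost(100_000, 20_000, cached_read_tokens=40_000), 6))  # → 0.0312
```

Here the cache saves (40,000 × ($0.25 − $0.03)) / 1M ≈ $0.0088 versus paying the full input rate on every token.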

Capabilities

Modalities: text → text
Capabilities: reasoning, promptCaching, functionCalling, structuredOutputs
Official Links

Benchmark Coverage

Benchmark | Version | Score | Date | Source | Notes

Release History

Release | Alias | Lifecycle | Release Date | Deprecation | Shutdown | Summary
Mercury 2 | inception-mercury-2 | Active | Mar 4, 2026 | | | Model available via OpenRouter.

Host Coverage

Host | Type | Context | Pricing Note | Differences
OpenRouter | aggregator | 128.0K | $0.25/1M in · $0.75/1M out via OpenRouter |
Migration Guidance

Change Events
Date | Type | Title | Description | Source
Mar 4, 2026 | family_added | Mercury 2 published | Model made available via OpenRouter. | OpenRouter

Other models from Inception Labs

Mercury, Mercury Coder