MiniMax M1
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a "lightning attention" mechanism, supporting a context window of up to 1M tokens.
- Input: $0.40 / 1M tokens
- Output: $2.20 / 1M tokens
- Cached read: — / 1M tokens
- Cached write: — / 1M tokens
- Batch discount: —%
- Source: OpenRouter
- Verified: Apr 5, 2026 (High)
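The per-token prices above translate into request costs by simple proportion. A minimal sketch (cached-read/write rates are unlisted, so caching is ignored here):

```python
# Estimate request cost from the listed per-1M-token rates (USD).
INPUT_RATE = 0.40   # $/1M input tokens
OUTPUT_RATE = 2.20  # $/1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# A long-context request: 100k tokens in, 5k tokens out.
print(f"${estimate_cost(100_000, 5_000):.4f}")  # → $0.0510
```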
Capabilities
- Modalities: text→text
- Capabilities: reasoning, function calling
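The function-calling capability is exercised by passing a tool list in the OpenAI-compatible `tools` format that OpenRouter exposes. A sketch; the tool name and parameters (`get_weather`, `city`) are illustrative, not part of any real API:

```python
# Hypothetical tool declaration in the OpenAI-compatible "tools" schema.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The tool list rides alongside the messages in the request body.
request_body = {
    "model": "minimax/minimax-m1",  # assumed OpenRouter slug; verify before use
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [get_weather_tool],
}
```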
Official Links
Benchmark Coverage
| Benchmark | Version | Score | Date | Source | Notes |
|---|---|---|---|---|---|
Release History
| Release | Alias | Lifecycle | Release Date | Deprecation | Shutdown | Summary |
|---|---|---|---|---|---|---|
| MiniMax M1 | minimax/minimax-m1 | Active | Jun 17, 2025 | — | — | Model available via OpenRouter. |
Host Coverage
| Host | Type | Context | Pricing Note | Differences |
|---|---|---|---|---|
| OpenRouter | aggregator | 1.0M | $0.40/1M in · $2.20/1M out via OpenRouter | — |
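Since OpenRouter is the only listed host, access goes through its OpenAI-compatible chat-completions endpoint. A minimal stdlib sketch, assuming the slug `minimax/minimax-m1` and an `OPENROUTER_API_KEY` environment variable; the request is built but not sent:

```python
import json
import os
import urllib.request

URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for MiniMax M1."""
    body = json.dumps({
        "model": "minimax/minimax-m1",  # assumed slug; check the host's catalog
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(URL, data=body, headers=headers, method="POST")

req = build_request("Summarize lightning attention in one sentence.")
# urllib.request.urlopen(req)  # uncomment to send; requires a valid key
```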
Migration Guidance
Change Events
| Date | Type | Title | Description | Source |
|---|---|---|---|---|
| Jun 17, 2025 | family_added | MiniMax M1 published | Model made available via OpenRouter. | OpenRouter |
Other models from MiniMax
MiniMax M2, MiniMax M2-her, MiniMax M2.1, MiniMax M2.5, MiniMax M2.7, MiniMax-01