
Mixtral 8x7B Instruct

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, tuned for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.
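To make the sparse Mixture-of-Experts idea concrete: for each token, a small gating network scores all 8 experts and only the top 2 are run, with their outputs blended by renormalized gate weights. The sketch below is illustrative only (the gate logits and helper names are invented for the example), not Mistral AI's implementation:

```python
import math

def softmax(xs):
    """Standard softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top2_route(gate_logits):
    """Pick the 2 highest-scoring experts and renormalize their
    gate weights, Mixtral-style: only these 2 of the 8 expert
    feed-forward networks run for this token."""
    probs = softmax(gate_logits)
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    weights = [probs[i] for i in top2]
    total = sum(weights)
    return [(i, w / total) for i, w in zip(top2, weights)]

# 8 experts, one (made-up) gate logit per expert for a single token
routing = top2_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3])
print(routing)  # two (expert_index, weight) pairs whose weights sum to 1
```

This is why the model has 47B total parameters but a much smaller active parameter count per token: only the selected experts' weights participate in each forward pass.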

Creator: Mistral AI
Lifecycle: Active
Context: 32.8K tokens
Max output: 16.4K tokens
Released: Dec 10, 2023
Status: up
Input: $0.54 / 1M tokens
Output: $0.54 / 1M tokens
Cached read: —
Cached write: —
Batch discount: —
Source: OpenRouter
Verified: Apr 5, 2026 (High)
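Since input and output are both billed at $0.54 per 1M tokens, estimating a request's cost is a simple per-token multiplication. A minimal sketch using the rates listed above (the function name is our own):

```python
# Rates from the listing above: $0.54 per 1M tokens, both directions.
INPUT_PER_M = 0.54
OUTPUT_PER_M = 0.54

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed OpenRouter rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Worst case: a request filling the full 32.8K context and 16.4K max output.
print(f"${cost_usd(32_800, 16_400):.6f}")
```

At these rates even a maximal request costs under three cents; actual billing may differ per provider, so treat this as a ballpark only.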

Capabilities

Modalities: text → text
Capabilities: function calling, structured outputs
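The function-calling capability is exercised through OpenRouter's OpenAI-compatible chat completions endpoint by attaching tool definitions to the request. The sketch below only builds the JSON payload (the `get_weather` tool is hypothetical, and the model slug is inferred from the alias listed on this page, so verify both against OpenRouter's docs before use):

```python
import json

# Assumed model slug, derived from the alias "mistralai-mixtral-8x7b-instruct".
payload = {
    "model": "mistralai/mixtral-8x7b-instruct",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# POST this JSON to https://openrouter.ai/api/v1/chat/completions with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header; the response may then
# contain a tool_calls entry instead of plain assistant text.
print(json.dumps(payload)[:60])
```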
Benchmark Coverage

No benchmark results are recorded for this model.

Release History

Release: Mixtral 8x7B Instruct
Alias: mistralai-mixtral-8x7b-instruct
Lifecycle: Active
Release Date: Dec 10, 2023
Deprecation: —
Shutdown: —
Summary: Model available via OpenRouter.

Host Coverage

Host: OpenRouter
Type: aggregator
Context: 32.8K
Pricing Note: $0.54/1M in · $0.54/1M out via OpenRouter
Differences: —

Change Events
Date: Dec 10, 2023
Type: family_added
Title: Mixtral 8x7B Instruct published
Description: Model made available via OpenRouter.
Source: OpenRouter

Other models from Mistral AI

Codestral, Codestral 2508, Devstral 2, Devstral 2 2512, Devstral Medium, Devstral Small 1.1, Ministral 3 14B 2512, Ministral 3 3B 2512, Ministral 3 8B 2512, Mistral 7B Instruct v0.1, Mistral Large, Mistral Large 2407, Mistral Large 2411, Mistral Large 3, Mistral Large 3 2512, Mistral Medium 3, Mistral Medium 3.1, Mistral Nemo, Mistral Small 3, Mistral Small 3.1 24B, Mistral Small 3.2 24B, Mistral Small 4, Mistral Small Creative, Mixtral 8x22B Instruct, Pixtral Large 2411, Saba, Voxtral Small 24B 2507