LFM2-24B-A2B
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. It is built as a 24B-parameter Mixture-of-Experts model with only 2B active parameters.
- Creator: Liquid AI
- Lifecycle: Active
- Context: 32.8K tokens
- Max output: —
- Released: Feb 25, 2026
- Status: unknown
- Input: $0.03 / 1M tokens
- Output: $0.12 / 1M tokens
- Cached read: — / 1M tokens
- Cached write: — / 1M tokens
- Batch discount: —
- Source: OpenRouter
- Verified: Apr 5, 2026 (High)
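The listed per-token prices translate to request cost with simple arithmetic. A minimal sketch, assuming the rates above remain current (the function name and example token counts are illustrative, not from the source):

```python
# Rough cost estimate for LFM2-24B-A2B at the listed OpenRouter rates.
# Prices are USD per 1M tokens; verify current pricing before relying on this.
INPUT_PRICE_PER_M = 0.03   # $0.03 / 1M input tokens
OUTPUT_PRICE_PER_M = 0.12  # $0.12 / 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
print(f"${request_cost(10_000, 2_000):.6f}")  # → $0.000540
```

Output tokens cost 4x input tokens at these rates, so long completions dominate the bill.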
Capabilities
- Modalities: text→text