GLM 4.5
GLM-4.5 is Zhipu AI's flagship foundation model, purpose-built for agent-based applications. It uses a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128K tokens.
- Creator: Zhipu AI
- Lifecycle: Active
- Context: 131.1K tokens
- Max output: 98.3K tokens
- Released: Jul 25, 2025
- Status: unknown
- Input: $0.60 / 1M tokens
- Output: $2.20 / 1M tokens
- Cached read: $0.11 / 1M tokens
- Cached write: —
- Batch discount: —
- Source: OpenRouter
- Verified: Apr 5, 2026 (High)
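The per-token rates above make request costs straightforward to estimate. A minimal sketch of a cost calculator using the listed GLM-4.5 rates (the function name and billing breakdown are illustrative; it assumes cached input tokens bill at the cached-read rate and all other input tokens at the standard input rate):

```python
# Listed GLM-4.5 rates, USD per 1M tokens (cached write and batch discount
# are not published on this page, so they are omitted here).
INPUT_RATE = 0.60
OUTPUT_RATE = 2.20
CACHED_READ_RATE = 0.11

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate request cost in USD under the assumed billing breakdown."""
    uncached = input_tokens - cached_input_tokens
    return (uncached * INPUT_RATE
            + cached_input_tokens * CACHED_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: 10K input tokens (2K served from cache) plus 1K output tokens.
print(round(estimate_cost(10_000, 1_000, cached_input_tokens=2_000), 5))  # 0.00722
```

At these rates, cached reads cut input cost by roughly 80%, so prompt caching pays off quickly for agent loops that resend a long system prompt on every turn.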
Capabilities
- Modalities: text → text
- Capabilities: reasoning, promptCaching, functionCalling, structuredOutputs
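Since the capability list includes functionCalling, here is a minimal sketch of what a tool-calling request payload might look like, assuming an OpenAI-compatible chat-completions schema (the model identifier, endpoint shape, and `get_weather` tool are assumptions for illustration, not confirmed by this page; check the provider's API reference before use):

```python
import json

# Assumed OpenAI-compatible request shape; "glm-4.5" is an assumed model id.
payload = {
    "model": "glm-4.5",
    "messages": [
        {"role": "user", "content": "What's the weather in Beijing?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# The payload serializes to plain JSON, ready to POST to a compatible endpoint.
body = json.dumps(payload)
print(len(body) > 0)
```

The structuredOutputs capability is typically exercised the same way, by attaching a JSON Schema that constrains the model's response rather than describing a callable tool.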
Other models from Zhipu AI
GLM 4 32B, GLM 4.5 Air, GLM 4.5V, GLM 4.6, GLM 4.6V, GLM 4.7, GLM 4.7 Flash, GLM 5, GLM 5 Turbo, GLM 5V Turbo