GLM-4.5V
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters and 12B activated parameters, it achieves state-of-the-art results in video understanding, image Q&A, OCR, and document parsing, with strong gains in front-end web coding, grounding, and spatial reasoning. It offers a hybrid inference mode: a "thinking mode" for deep reasoning and a "non-thinking mode" for fast responses. Reasoning behavior can be toggled via the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
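As a rough sketch of how that toggle might look in a request (assuming OpenRouter's chat-completions endpoint and the `z-ai/glm-4.5v` model slug; the API key is a placeholder):

```python
import requests

# Hypothetical call toggling GLM-4.5V's thinking mode via OpenRouter's
# `reasoning.enabled` flag. Request shape follows OpenRouter's
# chat-completions API; see the reasoning-tokens docs linked above.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "z-ai/glm-4.5v",
        "messages": [
            {"role": "user", "content": "Summarize the chart in this image."}
        ],
        # True enables the deep-reasoning "thinking mode";
        # False requests fast "non-thinking" responses.
        "reasoning": {"enabled": True},
    },
)
print(response.json()["choices"][0]["message"]["content"])
```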
- Input: $0.600 / 1M tokens
- Output: $1.80 / 1M tokens
- Context window: 66K tokens
- Provider: Z AI
- Cached input: $0.110 / 1M tokens
- Knowledge cutoff: 2024-12-31
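The listed rates make per-request cost straightforward to estimate. A minimal sketch, using the prices above and hypothetical token counts:

```python
# Estimate request cost from GLM-4.5V's listed per-token prices.
# Rates come from the pricing list above; token counts are
# illustrative example values, not measurements.
INPUT_PER_M = 0.600         # $ per 1M input tokens
OUTPUT_PER_M = 1.80         # $ per 1M output tokens
CACHED_INPUT_PER_M = 0.110  # $ per 1M cached input tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated USD cost of one request."""
    fresh_input = input_tokens - cached_tokens
    return (
        fresh_input * INPUT_PER_M
        + cached_tokens * CACHED_INPUT_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000

# Example: a 50K-token prompt (30K of it cached) with a 2K-token answer
# comes to about $0.019.
print(f"${estimate_cost(50_000, 2_000, cached_tokens=30_000):.4f}")
```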
Performance
Median streaming throughput and first-token latency measured by Artificial Analysis.
- Output tokens / sec: 40 t/s
- Time to first token: 29.35 s
Benchmarks
Intelligence, coding, and math indexes plus the underlying evaluation scores.
- Intelligence Index: 13
- Coding Index: 11
- Math Index: 15
- MMLU-Pro: 75.1%
- GPQA: 57.3%
- HLE: 3.6%
- LiveCodeBench: 35.2%
- SciCode: 18.8%
- MATH-500: —
- AIME: —
Benchmarks via Artificial Analysis