Qwen3 32B
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling.
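The mode switch described above is typically exposed through the model's chat template. As a minimal sketch, the snippet below builds an OpenAI-compatible request payload with the `enable_thinking` flag; the model ID is real, but the `chat_template_kwargs` pass-through is an assumption about the serving stack (e.g. a vLLM-style server), not part of the model itself.

```python
# Sketch: toggling Qwen3's thinking mode via an OpenAI-compatible payload.
# Assumption: the serving stack forwards `chat_template_kwargs` to the
# tokenizer's chat template (as vLLM-style servers do); verify against
# your provider's API before relying on this.
def build_request(prompt: str, thinking: bool) -> dict:
    return {
        "model": "Qwen/Qwen3-32B",
        "messages": [{"role": "user", "content": prompt}],
        # thinking=True lets the model emit <think>...</think> reasoning
        # before its answer; thinking=False skips it for faster replies.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

fast = build_request("Summarize this paragraph.", thinking=False)
deep = build_request("Prove that sqrt(2) is irrational.", thinking=True)
```

Reasoning-heavy prompts (math, coding) would use `thinking=True`; routine dialogue would use `thinking=False` to cut latency and output-token cost.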
- Input / 1M tokens: $0.080
- Output / 1M tokens: $0.240
- Context window: 41K tokens
- Provider: Qwen
- Cached input / 1M tokens: $0.040
- Knowledge cutoff: 2025-03-31
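The per-million-token rates above can be combined into a per-request cost estimate. The helper below is a hypothetical sketch (not a provider API); it assumes cached input tokens are a subset of total input tokens and are billed at the discounted rate.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     cached_input_tokens: int = 0) -> float:
    """Estimate one request's cost at the listed Qwen3-32B rates.

    Hypothetical helper; rates are USD per 1M tokens from the table above.
    `cached_input_tokens` must be a subset of `input_tokens`.
    """
    INPUT_RATE = 0.080         # $/1M uncached input tokens
    CACHED_INPUT_RATE = 0.040  # $/1M cached input tokens
    OUTPUT_RATE = 0.240        # $/1M output tokens
    uncached = input_tokens - cached_input_tokens
    return (uncached * INPUT_RATE
            + cached_input_tokens * CACHED_INPUT_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. 10K input tokens (4K of them cached) and 2K output tokens
cost = request_cost_usd(10_000, 2_000, cached_input_tokens=4_000)
```

Note that output tokens cost 3x input tokens here, so thinking-mode responses (which emit extra reasoning tokens) dominate the bill for long derivations.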
Performance
Median streaming throughput and first-token latency measured by Artificial Analysis.
- Output tokens / sec: —
- Time to first token: —