
Mistral Saba

Mistral Saba is a 24B-parameter language model designed specifically for the Middle East and South Asia, delivering accurate, contextually relevant responses while maintaining efficient performance. Trained on curated regional datasets, it supports several Indian-origin languages, including Tamil and Malayalam, alongside Arabic, making it a versatile option for a range of regional and multilingual applications. Read more in the [blog post](https://mistral.ai/en/news/mistral-saba).

| Spec | Value |
| --- | --- |
| Provider | Mistral |
| Context window | 33K tokens |
| Input / 1M tokens | $0.200 |
| Cached input / 1M tokens | $0.020 |
| Output / 1M tokens | $0.600 |
| Knowledge cutoff | 2024-09-30 |
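Given the per-million-token rates above, per-request cost is straightforward arithmetic. Below is a minimal sketch of a cost estimator; the `estimate_cost` helper and its parameter names are hypothetical, not part of any Mistral SDK, and the rates are simply copied from this card.

```python
# Hypothetical helper: estimate Mistral Saba request cost from the
# per-1M-token rates listed on this card (USD).
PRICE_PER_M = {
    "input": 0.200,         # uncached input tokens
    "cached_input": 0.020,  # cached input tokens
    "output": 0.600,        # output tokens
}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Return the estimated USD cost for a single request."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cached_input"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# e.g. 10K input tokens (half served from cache) and 2K output tokens:
print(f"${estimate_cost(10_000, 2_000, cached_tokens=5_000):.6f}")  # → $0.002300
```

Note that cached input is billed at one tenth of the uncached rate, so prompts with large, repeated prefixes (system prompts, shared context) are substantially cheaper on repeat calls.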

Performance

Median streaming throughput and first-token latency measured by Artificial Analysis.


Benchmarks

Intelligence, coding, and math indexes plus the underlying evaluation scores.

| Benchmark | Score |
| --- | --- |
| Intelligence Index | 12 |
| Coding Index | |
| Math Index | |
| MMLU-Pro | 61.1% |
| GPQA | 42.4% |
| HLE | 4.1% |
| LiveCodeBench | |
| SciCode | 24.1% |
| MATH-500 | 67.7% |
| AIME | 13.0% |

Benchmarks via Artificial Analysis