Anthropic

Claude Haiku 4.5

2025-10-15

Claude Haiku 4.5 is Anthropic's fastest and most cost-efficient model, delivering near-frontier intelligence at a fraction of the cost of larger Claude models. It matches Claude Sonnet 4's coding performance at one-third the cost and more than twice the speed, scoring 73.3% on SWE-bench Verified, which places it among the world's top coding models. With support for extended thinking, tool use, computer use, and a 200K-token context window, it is ideal for real-time applications, parallelized sub-agents, and high-volume deployments.

Availability: Anthropic Free, Anthropic Pro, Anthropic Max (5x), Anthropic Max (20x), API | Capabilities: Vision, Reasoning, Web Search | Proprietary Model
Knowledge Cutoff
2025-07
Input → Output Format
Context Memory
200K in / 64K out
Cost/1M Tokens
$1 in / $5 out
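At the listed rates ($1 per 1M input tokens, $5 per 1M output tokens), per-request cost is a simple weighted sum. A minimal sketch (the function name and the example token counts are illustrative, not from the source):

```python
# Estimate USD cost of one Claude Haiku 4.5 request at the listed rates:
# $1 per 1M input tokens, $5 per 1M output tokens.

INPUT_PRICE_PER_MTOK = 1.00
OUTPUT_PRICE_PER_MTOK = 5.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# A 10K-token prompt with a 2K-token reply:
# (10_000 * 1 + 2_000 * 5) / 1e6 = $0.02
print(f"${estimate_cost(10_000, 2_000):.4f}")
```

The same asymmetry (output tokens cost 5x input tokens) is why long prompts with short answers are disproportionately cheap on this model.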

AI Performance Evaluation

Arena Overall Score
1408 ±3 (as of 2026-04-23)
Overall Rank
No. 90 (63,329 votes)
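Arena scores are Elo-style ratings, so rating gaps map to expected head-to-head win rates. Assuming the standard Elo logistic with a 400-point scale (an assumption about this leaderboard's exact formula), the conversion is:

```python
# Convert an Elo-style rating gap into an expected win probability,
# assuming the standard logistic curve with a 400-point scale.

def expected_win_rate(r_a: float, r_b: float) -> float:
    """Probability that a model rated r_a beats one rated r_b."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 29-point gap (e.g. this model's 1437 Hard Prompts score vs its
# 1408 overall score) is only a modest edge, ~54% expected win rate.
print(f"{expected_win_rate(1437, 1408):.3f}")
```

This is why ranks can shift substantially on small rating differences: dozens of models sit within a band where any pair is near a coin flip.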
Arena by Ability
Hard Prompts: 1437 ±4, No. 74
Expert Knowledge: 1447 ±10, No. 62
Instruction Following: 1412 ±5, No. 70
Conversation Memory: 1422 ±6, No. 69
Creative: 1385 ±7, No. 80
Coding: 1476 ±6, No. 53
Math: 1392 ±10, No. 115
Arena by Occupation
Creative Writing: 1395 ±6, No. 82
Social Sciences: 1422 ±7, No. 91
Media: 1382 ±6, No. 82
Business: 1414 ±6, No. 74
Healthcare: 1416 ±11, No. 108
Legal: 1409 ±10, No. 98
Software: 1459 ±5, No. 64
Mathematics: 1420 ±11, No. 76
Overall
AA Intelligence Index: 37% (↓1%)
LiveBench: 43% (↓17%)
ForecastBench: 59% (↑0%)
Reasoning & Math
AA Math Index: 84% (↑10%)
GPQA Diamond: 67% (↓14%)
HLE: 9.7% (↓7%)
MMLU-Pro: 76% (↓6%)
AIME 2025: 84% (↑10%)
LB Reasoning: 34% (↓26%)
LB Math: 58% (↓16%)
LB Data: 45% (↓4%)
Coding
AA Coding Index: 33% (↓1%)
LiveCodeBench: 62% (↓4%)
LB Coding: 72% (↓1%)
LB Agentic: 33% (↓10%)
TAU2: 55% (↓19%)
TerminalBench: 27% (↓4%)
SciCode: 43% (↑2%)
Language & Instructions
IFBench: 54% (↓2%)
AA-LCR: 70% (↑9%)
Hallucination (HHEM): 9.8% (↑0%)
Factual (HHEM): 90% (↑0%)
LB Language: 57% (↓15%)
LB IF: 18% (↓28%)
Output Speed
Standard Mode: 99 tok/s (↑17), first output 0.51 s
Reasoning Mode: 151 tok/s (↑63), first output 13.70 s
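The throughput and time-to-first-token figures above combine into a rough end-to-end latency estimate: time to first token, plus the remaining tokens at the streaming rate. A minimal sketch using the standard-mode numbers (the function name and the 500-token example are illustrative; real latency also varies with prompt length and load):

```python
# Rough end-to-end latency from the listed standard-mode figures:
# 0.51 s to first output token, then ~99 tokens/s streaming.

def estimate_latency(output_tokens: int,
                     ttft_s: float = 0.51,
                     tokens_per_s: float = 99.0) -> float:
    """Time to first token plus streaming time for the output."""
    return ttft_s + output_tokens / tokens_per_s

# A 500-token answer: 0.51 + 500/99 ≈ 5.56 s
print(f"{estimate_latency(500):.2f} s")
```

Note the trade-off the table implies: reasoning mode streams faster (151 tok/s) but pays a much larger up-front cost (13.70 s to first output), so standard mode wins for short, latency-sensitive responses.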