A coding benchmark built from real competitive programming problems, continuously updated to prevent data contamination. Score is accuracy (%).
Source: Artificial Analysis

| Rank | Model | Accuracy |
|---|---|---|
| #1 | Gemini 3 Flash | 90.8% |
| #2 | Anthropic Claude Opus 4.5 | 87.1% |
| #3 | DeepSeek V3.2 | 86.2% |
| #4 | xAI Grok 4.1 Fast (Reasoning) | 82.2% |
| #5 | Baidu ERNIE 5.0 Thinking | 81.2% |
| #6 | LG AI Research K-EXAONE | 80.7% |
| #7 | Gemini 2.5 Pro | 80.1% |
| #8 | OpenAI GPT-5 Nano | 76.3% |
| #9 | Anthropic Claude Sonnet 4.5 | 71.4% |
| #10 | OpenAI GPT OSS 120B | 70.7% |
| #11 | Gemini 2.5 Flash | 69.5% |
| #12 | Anthropic Claude Sonnet 4 | 65.5% |
| #13 | Anthropic Claude Opus 4.1 | 65.4% |
| #14 | Gemini 2.5 Flash Lite | 64.1% |
| #15 | Anthropic Claude Opus 4 | 63.6% |
| #16 | Anthropic Claude Haiku 4.5 | 61.5% |
| #17 | OpenAI GPT-5 Mini | 54.5% |
| #18 | OpenAI GPT-5 | 54.3% |
| #19 | Baidu ERNIE 4.5 300B A47B | 46.7% |
| #20 | OpenAI GPT-4.1 | 45.7% |
| #21 | xAI Grok 4.1 Fast | 39.9% |
| #22 | Meta Llama 4 Maverick | 39.7% |
| #23 | Amazon Nova 2 Lite | 34.6% |
| #24 | Meta Llama 4 Scout | 29.9% |
| #25 | Mistral AI Mistral Small 4 | 11.1% |
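
Accuracy here is simply the share of problems a model solves, expressed as a percentage. A minimal sketch of that computation, assuming binary pass/fail grading per problem (the benchmark's actual harness and problem count are not specified above):

```python
def accuracy_percent(results: list[bool]) -> float:
    """Share of problems solved, as a percentage.

    Each entry in `results` is one pass/fail flag per problem; this
    binary grading is an assumption, not documented methodology.
    """
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

# Illustrative only: 59 of 65 problems solved rounds to 90.8%.
print(f"{accuracy_percent([True] * 59 + [False] * 6):.1f}%")  # 90.8%
```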