A benchmark measuring how accurately a model follows complex instructions with constraints on format, length, and style. The score is the compliance rate (%).
Leaderboard (model - organization):

1. Grok 4.20 (Reasoning) - Grok
2. DeepSeek V4 Flash - DeepSeek
3. Gemini 3 Flash - Google
4. Gemini 3.1 Flash Lite - Google
5. Gemini 3.1 Pro - Google
6. DeepSeek V4 Pro - DeepSeek
7. GLM-5.1 - Z.ai
8. Muse Spark - Meta
9. MiniMax M2.7 - MiniMax
10. Gemma 4 31B - Google
11. Qwen3.6 Plus - Alibaba
12. GPT-5.4 - OpenAI
13. GLM-5 - Z.ai
14. MiniMax M2.5 - MiniMax
15. Nemotron 3 Super - NVIDIA
16. Kimi K2.5 - Moonshot AI
17. MiMo-V2-Pro - Xiaomi
18. K-EXAONE - LG AI Research
19. GPT-5 Nano - OpenAI
20. GPT-5.4 Mini - OpenAI
21. GPT-5.4 Nano - OpenAI
22. DeepSeek V3.2 - DeepSeek
23. Claude Opus 4.7 - Anthropic
24. GPT OSS 120B - OpenAI
25. Claude Opus 4.5 - Anthropic
26. Claude Sonnet 4.5 - Anthropic
27. Trinity Large Thinking - Arcee AI
28. Claude Opus 4.1 - Anthropic
29. Claude Sonnet 4 - Anthropic
30. Claude Haiku 4.5 - Anthropic
31. Claude Opus 4 - Anthropic
32. Claude Opus 4.6 - Anthropic
33. Grok 4.1 Fast (Reasoning) - Grok
34. Qwen3.5 397B A17B - Alibaba
35. Gemini 2.5 Flash - Google
36. Grok 4.20 - Grok
37. Gemini 2.5 Pro - Google
38. Mistral Small 4 - Mistral AI
39. GPT-5 Mini - OpenAI
40. GPT-5 - OpenAI
41. Longcat Flash Chat - Meituan
42. GPT-4.1 - OpenAI
43. Llama 4 Maverick - Meta
44. Claude Sonnet 4.6 - Anthropic
45. Gemini 2.5 Flash Lite - Google
46. ERNIE 5.0 Thinking - Baidu
47. Nova 2 Lite - Amazon
48. Llama 4 Scout - Meta
49. ERNIE 4.5 300B A47B - Baidu
50. Grok 4.1 Fast - Grok
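The compliance-rate metric described above can be sketched in a few lines. This is a minimal illustration, not the benchmark's actual harness: the constraint checks (`meets_constraints`, the word limit, and the required prefix) are hypothetical examples of the kinds of format, length, and style rules such a benchmark applies; the score is simply the percentage of responses that satisfy every check.

```python
# Hypothetical sketch of a compliance-rate metric. The specific checks below
# (word limit, required opening phrase) are illustrative stand-ins for a
# benchmark's real format/length/style constraints.

def meets_constraints(response: str, max_words: int, required_prefix: str) -> bool:
    """Return True only if the response passes every constraint check."""
    within_length = len(response.split()) <= max_words
    correct_format = response.startswith(required_prefix)
    return within_length and correct_format

def compliance_rate(responses, max_words=50, required_prefix="Answer:") -> float:
    """Percentage of responses that satisfy all constraints."""
    passed = sum(meets_constraints(r, max_words, required_prefix) for r in responses)
    return 100.0 * passed / len(responses)

outputs = [
    "Answer: Paris is the capital of France.",  # compliant
    "Paris.",                                   # missing the required prefix
    "Answer: " + "word " * 60,                  # exceeds the word limit
]
print(compliance_rate(outputs))  # one of three responses passes
```

Real instruction-following benchmarks score per-instruction checks programmatically in the same spirit, then aggregate pass rates across many prompts.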