MiniMax M2.7 is a next-generation flagship model that builds on M2.5 with a self-evolving training paradigm: it autonomously ran over 100 rounds of scaffold optimization during training, yielding a claimed 30% performance improvement. It is built for complex agentic workflows, including Agent Teams, dynamic tool search, and elaborate productivity tasks. The model scores 56.22% on SWE-Pro (matching GPT-5.3-Codex) and 57.0% on Terminal Bench 2, demonstrating system-level comprehension. Built on a 230B-parameter sparse MoE architecture, it offers frontier performance at just $0.30 per million input tokens.
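To make the quoted price concrete, here is a minimal sketch of input-side cost arithmetic. It assumes only the figures on this card ($0.30 per million input tokens, a 205K-token input window); output-token pricing is not listed here, so it is deliberately left out rather than guessed.

```python
def request_cost_usd(input_tokens: int,
                     input_price_per_m: float = 0.30) -> float:
    """Estimate the input-side cost of one request in USD.

    input_price_per_m is USD per 1M input tokens; $0.30 is the
    figure quoted for MiniMax M2.7. Output pricing is not stated
    on this card, so only input tokens are counted.
    """
    return input_tokens / 1_000_000 * input_price_per_m

# A prompt filling the full 205K-token input window:
print(f"${request_cost_usd(205_000):.4f}")  # prints $0.0615 (input side only)
```

So even a maximally long prompt costs only about six cents of input tokens per request at the listed rate.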
Availability: API | Open Model (proprietary)
Knowledge Cutoff: Unknown
Input → Output Format
Context Memory: 205K in / 128K out
AI Performance Evaluation
Arena Overall Score: 1404 ±6 (as of 2026-04-23)
Overall Rank: No. 95 (10,307 votes)
Arena by Ability
Hard Prompts: 1426 ±8 (No. 90)
Expert Knowledge: 1429 ±20 (No. 85)
Instruction Following: 1395 ±11 (No. 94)
Conversation Memory: 1406 ±14 (No. 97)
Creative: 1350 ±17 (No. 117)
Coding: 1466 ±11 (No. 68)
Math: 1402 ±22 (No. 99)
Arena by Occupation
Creative Writing: 1369 ±13 (No. 114)
Social Sciences: 1407 ±15 (No. 110)
Media: 1347 ±15 (No. 122)
Business: 1413 ±14 (No. 80)
Healthcare: 1423 ±24 (No. 102)
Legal: 1406 ±23 (No. 103)
Software: 1456 ±9 (No. 69)
Mathematics: 1413 ±24 (No. 88)
Source: Arena Intelligence
Overall
AA Intelligence Index: 50% (↑11%)
LiveBench: 65% (↑5%)

Reasoning & Math
GPQA Diamond: 87% (↑6%)
HLE: 28% (↑11%)
LB Reasoning: 75% (↑15%)
LB Math: 81% (↑7%)
LB Data: 56% (↑7%)
Coding
AA Coding Index: 42% (↑8%)
LB Coding: 55% (↓19%)
LB Agentic: 50% (↑7%)
TAU2: 85% (↑12%)
TerminalBench: 39% (↑8%)
SciCode: 47% (↑6%)
Language & Instructions
IFBench: 76% (↑19%)
AA-LCR: 69% (↑7%)
LB Language: 67% (↓5%)
LB IF: 61% (↑15%)
Output Speed
Standard Mode: 47 tok/s (↓35)
First Output: 53.78 s