
MiniMax M2.7 is a next-generation flagship model that builds on M2.5 with a self-evolving training paradigm: during training it autonomously ran over 100 rounds of scaffold optimization, yielding a 30% performance improvement. It is built for complex agentic workflows, including Agent Teams, dynamic tool search, and elaborate productivity tasks. The model scores 56.22% on SWE-Pro (matching GPT-5.3-Codex) and 57.0% on Terminal Bench 2, demonstrating system-level comprehension. Built on a 230B-parameter sparse mixture-of-experts (MoE) architecture, it offers frontier performance at just $0.30 per million input tokens.

Author: MiniMax
Release Date: 2026-03-18
Knowledge Cutoff: Unknown
License: Open Model
I/O Format:
Context Length: 205K / 128K
API Pricing (per 1M tokens): $0.30 / $1.20 (input / output)
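The listed per-token rates translate directly into a per-request cost estimate. A minimal sketch; the token counts in the example are hypothetical:

```python
# Cost estimate from the listed API rates:
# $0.30 per 1M input tokens, $1.20 per 1M output tokens.
INPUT_RATE = 0.30 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.20 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10K-token prompt with a 2K-token completion.
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0054
```

Because output tokens cost 4x input tokens here, long completions dominate the bill even for prompt-heavy workloads.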
How to Use: API Access
Output Speed: 47 tok/s
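Since access is via API, a chat-style request is the typical entry point. The sketch below only builds the request; the base URL and model identifier are placeholders, not values confirmed by this card:

```python
# Sketch of a chat-completion request for an OpenAI-style endpoint.
# API_BASE and MODEL are hypothetical placeholders.
import json
import urllib.request

API_BASE = "https://api.example.com/v1"   # hypothetical endpoint
MODEL = "MiniMax-M2.7"                    # hypothetical model id

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for a chat completion."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize this diff.", api_key="sk-...")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) returns the completion; at the listed 47 tok/s, a 2K-token completion takes roughly 43 seconds to stream.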
Arena Overall: 1404
Intelligence Index: 49.6
Coding Index: 41.9
Math Index:
LiveBench: 65.0
ForecastBench:
GPQA Diamond: 87.4%
HLE: 28.1%
MMLU-Pro:
AIME 2025:
MATH-500:
LB Reasoning: 74.8
LB Math: 80.5
LB Data Analysis: 56.3
LiveCodeBench:
LB Coding: 54.9
LB Agentic: 50.0
TAU2: 84.8%
TerminalBench: 39.4%
SciCode: 47.0%
IFBench: 75.7%
AA-LCR: 0.7
Hallucination (HHEM):
Factual Consistency (HHEM):
LB Language: 66.8
LB Instruction Following: 61.1