MiniMax M2.7 is a next-generation flagship model that builds on M2.5 with a self-evolving training paradigm: during training it autonomously ran over 100 rounds of scaffold optimization, yielding a 30% performance improvement over M2.5. It is built for complex agentic workflows, including Agent Teams, dynamic tool search, and elaborate productivity tasks. The model scores 56.22% on SWE-Pro (matching GPT-5.3-Codex) and 57.0% on Terminal Bench 2, demonstrating system-level comprehension. Built on a 230B-parameter sparse Mixture-of-Experts (MoE) architecture, it offers frontier performance at just $0.30 per million input tokens.