
K-EXAONE is LG AI Research's Korean-specialized frontier large language model, employing a Mixture-of-Experts (MoE) architecture with 236B total parameters and only 23B active during inference for efficient frontier-level performance. Its Hybrid Attention Mechanism combines sliding window attention with global attention, reducing memory and computational requirements by 70% compared to the previous generation. An expanded 150K-word tokenizer and Multi-Token Prediction (MTP) boost inference speed by 150%. The model supports a 260K-token context length (approximately 400 A4 pages) and ranked 1st in 10 out of 13 categories in South Korea's national AI foundation model evaluation. It placed 7th globally on the Artificial Analysis Intelligence Index — the only model from outside the US and China in the global top 10. With a KGC-SAFETY score of 96.2, it leads in Korean sociocultural safety standards, and its A100-grade GPU compatibility makes frontier AI accessible to organizations with limited infrastructure.
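The hybrid attention idea described above can be sketched as a mask over a causal attention matrix: most queries see only a recent sliding window, while a few designated global tokens stay visible to everyone, which is what cuts memory versus full attention. A minimal illustrative sketch follows; K-EXAONE's actual window size, global-token layout, and layer interleaving are not public, so `window` and `global_every` are made-up parameters.

```python
import numpy as np

def hybrid_attention_mask(seq_len: int, window: int = 4, global_every: int = 8):
    """Causal mask mixing sliding-window and periodic global attention.

    Illustrative only: the real model's hybrid scheme is undisclosed;
    `window` and `global_every` are arbitrary example values.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for q in range(seq_len):
        for k in range(q + 1):               # causal: keys at or before the query
            is_local = q - k < window        # key inside the sliding window
            is_global = k % global_every == 0  # periodic "global" token
            mask[q, k] = is_local or is_global
    return mask

# Intuition for the memory saving: count attended entries vs a full causal mask.
m = hybrid_attention_mask(64)
full = 64 * 65 // 2
print(f"attended entries: {m.sum()} of {full} ({m.sum() / full:.0%})")
```

The fraction of attended entries shrinks roughly linearly with sequence length under a fixed window, which is why the savings grow at long contexts.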

Author: LG AI Research
Release Date: 2026-01-12
Knowledge Cutoff: 2024-12
License: Open Model
I/O Format:
Context Length: 256K
API I/O (1M): $0.2 / $0.8
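The listed API pricing ($0.2 per 1M input tokens, $0.8 per 1M output tokens) makes per-request cost a simple weighted sum. A small sketch, where the rates come from the card above and the token counts are an arbitrary example:

```python
# Rates from the card: USD per 1M input / output tokens.
INPUT_PER_M = 0.2
OUTPUT_PER_M = 0.8

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 200K-token prompt with a 4K-token answer:
print(f"${api_cost(200_000, 4_000):.4f}")  # → $0.0432
```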
How to Use: API Access
Output Speed:
Arena Overall:
Intelligence Index: 32.0
Coding Index:
Math Index:
LiveBench:
ForecastBench:
GPQA Diamond:
HLE:
MMLU-Pro: 83.8%
AIME 2025: 92.8%
MATH-500:
LB Reasoning:
LB Math:
LB Data Analysis:
LiveCodeBench: 80.7%
LB Coding:
LB Agentic:
TAU2: 73.2%
TerminalBench:
SciCode:
IFBench: 67.3%
AA-LCR:
Hallucination (HHEM):
Factual Consistency (HHEM):
LB Language:
LB Instruction Following: