LG AI Research

K-EXAONE

2026-01-12

K-EXAONE is LG AI Research's Korean-specialized frontier large language model. It employs a Mixture-of-Experts (MoE) architecture with 236B total parameters, of which only 23B are active during inference, delivering frontier-level performance efficiently. Its hybrid attention mechanism combines sliding-window attention with global attention, reducing memory and computational requirements by 70% compared to the previous generation, while an expanded 150K-entry tokenizer vocabulary and Multi-Token Prediction (MTP) boost inference speed by 150%. The model supports a 256K-token context length (approximately 400 A4 pages) and ranked 1st in 10 of 13 categories in South Korea's national AI foundation model evaluation. It placed 7th globally on the Artificial Analysis Intelligence Index, the only model from outside the US and China in the global top 10. With a KGC-SAFETY score of 96.2, it leads on Korean sociocultural safety standards, and its compatibility with A100-class GPUs makes frontier AI accessible to organizations with limited infrastructure.
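K-EXAONE's routing internals are not published, but the MoE principle the summary describes (only a small fraction of total parameters active per token) can be sketched with a generic top-k softmax gate. Everything below is a hypothetical illustration, not K-EXAONE's actual implementation:

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def top_k_moe(x, gate_w, experts, k=2):
    """Route token vector x to its k highest-scoring experts only.
    With many experts and a small k, only a small fraction of expert
    parameters runs per token -- the same idea behind 23B active
    out of 236B total. (Generic sketch; gating details are assumed.)"""
    # Gate: one dot-product score per expert.
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_w]
    chosen = sorted(range(len(experts)), key=scores.__getitem__, reverse=True)[:k]
    mix = softmax([scores[i] for i in chosen])
    # Weighted sum of the chosen experts' outputs; unchosen experts never run.
    out = [0.0] * len(x)
    for weight, idx in zip(mix, chosen):
        for j, v in enumerate(experts[idx](x)):
            out[j] += weight * v
    return out

# Toy setup: 4 experts that each just scale the input vector.
experts = [lambda v, s=s: [s * vi for vi in v] for s in (0.5, 1.0, 1.5, 2.0)]
gate_w = [[0.1, 0.2], [0.9, -0.3], [0.0, 1.0], [-0.5, 0.4]]
y = top_k_moe([1.0, 2.0], gate_w, experts, k=2)
print(len(y))  # 2
```

Because only the top-k experts execute, compute per token scales with k rather than with the number of experts, which is how a 236B-parameter model can infer at a 23B-parameter cost.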

K-EXAONE (API | Reasoning | Open Model)

Knowledge Cutoff: 2024-12
Input → Output Format:
Context Length: 256K tokens
Cost / 1M Words: $0.2 in / $0.8 out

AI Performance Evaluation

Category                 Benchmark              Score  Change
Overall                  AA Intelligence Index  32%    ↓6%
Reasoning & Math         MMLU-Pro               84%    ↑2%
                         AIME 2025              93%    ↑19%
Coding                   LiveCodeBench          81%    ↑15%
                         TAU2                   73%    ↑0%
Language & Instructions  IFBench                67%    ↑11%