Diffusion LLM Part 3: LLaDA -- Building an 8B LLM with Masked Diffusion

Variable Masking, Fisher Consistency, In-Context Learning, Reversal Curse -- how LLaDA built a real LLM with diffusion.

In Part 2, we explored how D3PM and MDLM define Diffusion in discrete spaces. We also confirmed that Absorbing State Diffusion using [MASK] tokens is the most effective approach for text.

However, prior work remained at relatively small scales. The question "Can we actually build a real LLM with diffusion?" was answered by LLaDA (Large Language Diffusion with mAsking).

Nie et al. (2025) scaled Masked Diffusion to 8B parameters, compared it directly against LLaMA3 8B, and demonstrated that Diffusion LLMs can exhibit the core capabilities of AR models -- In-Context Learning and Instruction Following.

Core Idea: Variable Masking Ratio
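
As a rough illustration of the idea: instead of masking a fixed fraction of tokens (as BERT does with 15%), LLaDA's training samples a masking ratio t uniformly from (0, 1) for each sequence, masks each token independently with probability t, and computes cross-entropy only on the masked positions, weighted by 1/t. Below is a minimal PyTorch sketch of that recipe; the function names and tensor shapes are illustrative assumptions, not LLaDA's actual code.

```python
import torch
import torch.nn.functional as F

def mask_with_variable_ratio(x0: torch.Tensor, mask_token_id: int):
    """Forward process with a variable masking ratio.

    x0: (batch, seq_len) tensor of token ids.
    Samples t ~ U(0, 1) per sequence and masks each token
    independently with probability t.
    """
    batch, seq_len = x0.shape
    t = torch.rand(batch, 1, device=x0.device)                     # one ratio per sequence
    is_masked = torch.rand(batch, seq_len, device=x0.device) < t   # Bernoulli(t) per token
    xt = torch.where(is_masked, torch.full_like(x0, mask_token_id), x0)
    return xt, is_masked, t.squeeze(1)

def masked_diffusion_loss(logits: torch.Tensor, x0: torch.Tensor,
                          is_masked: torch.Tensor, t: torch.Tensor):
    """Cross-entropy on masked positions only, scaled by 1/t.

    logits: (batch, seq_len, vocab) predictions for the clean tokens.
    The 1/t weight keeps lightly masked sequences (small t, few masked
    tokens) from being under-weighted in the objective.
    """
    ce = F.cross_entropy(logits.transpose(1, 2), x0, reduction="none")  # (batch, seq_len)
    per_seq = (ce * is_masked).sum(dim=1) / t.clamp_min(1e-6)
    return per_seq.mean()
```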
