Learning-Deep-Learning

DLCM: Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space

January 2026

tl;dr: Concept models are latent CoT models with interpretability.

Overall impression

Collapse multiple language tokens into one “concept”, reason in concept space, and then decode back into language tokens.
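The collapse–reason–decode loop above can be sketched as follows. This is a minimal toy illustration, not the paper's architecture: the dimensions, the mean-pooling encoder, the single linear reasoning step, and the linear decoder are all my assumptions standing in for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual dimensions are unknown.
D = 8  # token embedding dimension
K = 4  # number of language tokens collapsed into one concept

def encode_concepts(token_embs: np.ndarray) -> np.ndarray:
    """Collapse every K consecutive token embeddings into one concept
    vector (mean pooling as a stand-in for a learned encoder)."""
    n = token_embs.shape[0] // K
    return token_embs[: n * K].reshape(n, K, D).mean(axis=1)

def reason_in_concept_space(concepts: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One latent reasoning step: a residual nonlinear map standing in
    for the model's concept-space layers."""
    return concepts + np.tanh(concepts @ W)

def decode_to_tokens(concepts: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """Expand each concept back into K token embeddings, so the latent
    chain of thought remains explicitly decodable into language."""
    return (concepts @ W_dec).reshape(-1, D)

tokens = rng.normal(size=(12, D))          # 12 token embeddings
W = rng.normal(size=(D, D)) * 0.1
W_dec = rng.normal(size=(D, K * D)) * 0.1

concepts = encode_concepts(tokens)         # (3, D): 4x shorter sequence
latent = reason_in_concept_space(concepts, W)
decoded = decode_to_tokens(latent, W_dec)  # (12, D): back to token space

print(concepts.shape, decoded.shape)
```

The point of the sketch is the shape bookkeeping: reasoning happens over a sequence K times shorter than the token sequence, and the decoder restores the original token-space resolution.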

This is better than latent CoT methods such as Coconut in that the concept latents can be decoded back into language space, whereas latent CoT cannot be explicitly decoded into language tokens.

Concept models are essentially doing speculative decoding (specdec) not only in the final output space but also in model (latent) space.

Caution: the speedup from concept models will cannibalize the gains from specdec.

Key ideas

Technical details

Notes