Universal Properties of Activation Sparsity in Modern Large Language Models
Dec 6, 2025

Filip Szatkowski
Patryk Będkowski
Alessio Devoto
Jan Dubiński
Pasquale Minervini
Mikołaj Piórczyński
Simone Scardapane
Bartosz Wójcik
Abstract
Input-dependent activation sparsity is a notable property of deep learning models, which has been extensively studied in networks with ReLU activations and is associated with efficiency, robustness, and interpretability. However, the approaches developed for ReLU-based models depend on exact zero activations and do not transfer directly to modern large language models (LLMs), which have abandoned ReLU in favor of other activation functions. As a result, current work on activation sparsity in LLMs is fragmented, model-specific, and lacks consensus on which components to target. We propose a general framework to assess sparsity robustness and present a systematic study of the phenomenon in the FFN layers of modern LLMs, including diffusion LLMs. Our findings reveal universal patterns of activation sparsity in LLMs, provide insights into this phenomenon, and offer practical guidelines for exploiting it in model design and acceleration.
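For intuition only, here is a minimal sketch (not the framework from the paper) of why exact-zero sparsity does not carry over to modern gated FFNs, and how near-zero sparsity can instead be measured against a magnitude threshold. The SwiGLU-style block, the threshold `tau`, and all dimensions below are illustrative assumptions.

```python
# Sketch: measuring input-dependent activation sparsity in a SwiGLU-style FFN.
# Unlike ReLU networks, hidden activations here are rarely exactly zero, so
# "sparsity" is defined as the fraction of activations with magnitude < tau.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFFN(nn.Module):
    """Simplified SwiGLU-style FFN block, as used in many modern LLMs."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden, bias=False)
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hidden activations: gated (SiLU) projection times the up projection.
        h = F.silu(self.gate(x)) * self.up(x)
        self.last_hidden = h  # cache so sparsity can be inspected afterwards
        return self.down(h)


def activation_sparsity(h: torch.Tensor, tau: float = 1e-3) -> float:
    """Fraction of hidden activations whose magnitude falls below tau."""
    return (h.abs() < tau).float().mean().item()


# Usage: the measured fraction changes with the input, hence "input-dependent".
ffn = GatedFFN(d_model=64, d_hidden=256)
x = torch.randn(8, 16, 64)  # (batch, sequence, d_model)
_ = ffn(x)
print(f"near-zero activation fraction: {activation_sparsity(ffn.last_hidden):.3f}")
```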
Publication
In UniReps Workshop - Unifying Representations in Neural Models, NeurIPS 2025