I am a PhD student at the Warsaw University of Technology, supervised by Professor Tomasz Trzciński. My research focuses on efficiency in deep learning, spanning adaptive computation, early exits, activation sparsity, speculative decoding, and continual learning.
During my PhD, I have published at top conferences such as NeurIPS and ICML, and collaborated with European institutions including the Computer Vision Center in Barcelona and Sapienza University of Rome. I also have industry experience, most recently as an Applied Scientist Intern at Amazon AWS AI, and earlier as an NLP Intern at Samsung R&D in Warsaw. I am also active in the Polish ML community, organizing major events such as the ML in PL conferences and summer schools, as well as the ELLIS Doctoral Symposium 2025 in Warsaw.
We challenge the use of calibration metrics in early-exit models and show cases where calibration fails to accurately reflect network performance. We argue for failure prediction as a more reliable performance proxy that better correlates with efficiency gains in early-exit networks.
We propose a general framework for assessing sparsity robustness in modern LLMs and conduct a systematic study of activation sparsity in such models. Our study reveals universal patterns of sparsity in LLMs and provides practical guidelines for model acceleration and design.
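As a minimal illustration of the quantity being studied, activation sparsity can be measured as the fraction of near-zero entries in a layer's output. The sketch below is a toy example with hypothetical layer sizes, not the framework from the paper:

```python
import numpy as np

def activation_sparsity(hidden: np.ndarray, threshold: float = 0.0) -> float:
    """Fraction of activations whose magnitude is at or below the threshold."""
    return float(np.mean(np.abs(hidden) <= threshold))

# Toy example: ReLU output of a random linear map (hypothetical sizes).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
w = rng.standard_normal((16, 64))
h = np.maximum(x @ w, 0.0)  # ReLU zeroes roughly half the entries here
print(activation_sparsity(h))
```

For modern LLMs with non-ReLU activations, exact zeros are rare, which is why a magnitude threshold (rather than strict equality) is the natural generalization.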
We investigate intermediate representations in neural networks during class-incremental learning and propose to leverage them via auxiliary early-exit classifiers. Interestingly, we find that in continual learning scenarios, networks enhanced with such classifiers are not only more efficient, but also show improved performance and reduced forgetting across task sequences.
We propose a method to convert dense transformers into dynamic Mixture-of-Experts models, which leverages the natural activation sparsity in neural networks. Crucially, we propose to enforce activation sparsity during a short continued-training process via an additional sparsity regularization, and argue for the use of dynamic-k expert routing in MoEfied models. Finally, we show that with an efficient implementation our method achieves computational savings while maintaining performance.
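To make the routing idea concrete, the sketch below implements a generic dynamic-k rule: each token activates every expert whose router probability exceeds a threshold, so the number of active experts varies per token. The threshold `tau` and the tensor shapes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_k_route(router_logits: np.ndarray, tau: float = 0.2):
    """Per token, select every expert whose router probability exceeds tau,
    always keeping at least the top-1 expert (tau is a hypothetical knob)."""
    probs = softmax(router_logits)
    mask = probs > tau
    top1 = probs.argmax(axis=-1)
    mask[np.arange(mask.shape[0]), top1] = True  # guarantee >= 1 expert
    return mask, np.where(mask, probs, 0.0)

rng = np.random.default_rng(0)
mask, weights = dynamic_k_route(rng.standard_normal((4, 8)))  # 4 tokens, 8 experts
print(mask.sum(axis=-1))  # number of active experts varies per token
```

Unlike fixed top-k routing, this rule lets "easy" tokens use fewer experts, which is where the computational savings come from.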
We propose Zero-Time Waste (ZTW), an early-exit network architecture that reduces computational waste via cascading connections between early-exit classifiers and an ensembling mechanism. ZTW achieves better efficiency-accuracy trade-offs in pre-trained models and offers a practical architectural solution for deploying early-exit neural networks.
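For readers unfamiliar with early exits, the sketch below shows the basic confidence-threshold exit rule that such architectures build on: run classifier heads in order and stop at the first one that is confident enough. This is a generic toy sketch, not the ZTW cascading-and-ensembling mechanism itself; the heads, sizes, and threshold are all assumed for illustration:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def early_exit_predict(x: np.ndarray, heads: list, threshold: float = 0.9):
    """Return (prediction, exit_index) from the first head whose max softmax
    confidence clears the threshold; otherwise fall through to the last head."""
    for depth, w in enumerate(heads):
        probs = softmax(x @ w)
        if probs.max() >= threshold:
            return int(probs.argmax()), depth
    return int(probs.argmax()), len(heads) - 1

# Toy setup: three linear heads standing in for classifiers at three depths.
rng = np.random.default_rng(0)
heads = [rng.standard_normal((32, 10)) for _ in range(3)]
x = rng.standard_normal(32)
pred, depth = early_exit_predict(x, heads, threshold=0.3)
```

Plain threshold-based exiting discards the computation of earlier heads whenever it proceeds deeper; ZTW's contribution is precisely to reuse those earlier predictions instead of wasting them.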