We investigate intermediate representations in neural networks during class-incremental learning and propose to leverage them via auxiliary early-exit classifiers. Interestingly, we find that in continual learning scenarios networks enhanced with such classifiers are not only more efficient, but also show improved performance and reduced forgetting across task sequences (a minimal sketch of the early-exit idea follows below).
Jul 1, 2025
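
The early-exit idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's architecture: the two-block backbone, the `exit1`/`exit2` heads, and the confidence threshold are hypothetical placeholders showing how auxiliary classifiers attached to intermediate representations let inference stop early.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy backbone with an auxiliary early-exit classifier (illustrative only)."""

    def __init__(self, num_classes: int, threshold: float = 0.9):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit1 = nn.Linear(32 * 8 * 8, num_classes)  # auxiliary head on intermediate features
        self.exit2 = nn.Linear(64 * 4 * 4, num_classes)  # final head
        self.threshold = threshold  # confidence needed to stop at the first exit

    def forward(self, x):
        # Training: return logits from every exit so each head gets a loss term.
        h1 = self.block1(x)
        logits1 = self.exit1(h1.flatten(1))
        h2 = self.block2(h1)
        logits2 = self.exit2(h2.flatten(1))
        return logits1, logits2

    @torch.no_grad()
    def predict(self, x):
        # Inference: stop at the first exit whose softmax confidence clears the
        # threshold (assumes batch size 1 for simplicity).
        h1 = self.block1(x)
        logits1 = self.exit1(h1.flatten(1))
        conf, pred = logits1.softmax(dim=1).max(dim=1)
        if conf.item() >= self.threshold:
            return pred  # early exit: block2 is never computed
        h2 = self.block2(h1)
        return self.exit2(h2.flatten(1)).argmax(dim=1)
```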
We investigate the stability gap in continual learning and identify the critical role played by the classification head. We then suggest the nearest-mean classifier as a potential remedy for improved model stability (sketched below).
Jan 1, 2025
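
A nearest-mean classifier replaces the learned linear head with class prototypes in feature space, sidestepping the head's instability during task transitions. Below is a minimal sketch assuming a model that exposes a hypothetical `model.features(x)` embedding function; it illustrates the general technique, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def class_means(model, loader, num_classes, feat_dim, device="cpu"):
    """Accumulate one feature-space prototype (mean embedding) per class."""
    sums = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, device=device)
    for x, y in loader:
        feats = model.features(x.to(device))  # hypothetical feature extractor
        sums.index_add_(0, y.to(device), feats)
        counts += torch.bincount(y.to(device), minlength=num_classes)
    return sums / counts.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def nmc_predict(model, x, means):
    """Classify each input by its nearest class mean in feature space."""
    feats = model.features(x)          # (batch, feat_dim)
    dists = torch.cdist(feats, means)  # (batch, num_classes)
    return dists.argmin(dim=1)
```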
We examine knowledge distillation in exemplar-free continual learning and find that allowing the teacher network to adapt during training through batch-normalization updates improves knowledge transfer across several continual learning methods (sketched below).
Jan 1, 2024
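
The core change can be sketched as follows: the teacher's weights stay frozen, but its BatchNorm layers remain in train mode so their running statistics adapt to current-task data. The helper names and the temperature value are illustrative assumptions; only the BN-adaptation idea comes from the summary above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def prepare_teacher(teacher: nn.Module) -> nn.Module:
    """Freeze the teacher's weights but let its BatchNorm statistics adapt."""
    for p in teacher.parameters():
        p.requires_grad_(False)  # no gradient updates to the teacher
    teacher.eval()
    for m in teacher.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()  # train mode: running mean/var track new-task data
    return teacher

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    """Standard temperature-scaled KL distillation loss."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
```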
We propose a progressive latent replay mechanism that enhances generative rehearsal in continual learning by efficiently managing memory and computational resources while maintaining model performance (a simplified sketch follows below).
Nov 1, 2022
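
Latent replay in general feeds generated intermediate features directly into the deeper layers of the network, skipping the shallow ones. The sketch below shows a single replay depth with a hypothetical conditional `generator`; the progressive mechanism described above additionally varies where and how often replay happens, which is not reproduced here.

```python
import torch
import torch.nn as nn

class LatentReplayNet(nn.Module):
    """Split backbone: replayed latents enter the deep part only (illustrative)."""

    def __init__(self, latent_dim: int = 64, num_classes: int = 10):
        super().__init__()
        self.shallow = nn.Sequential(  # assumes flattened 28x28 inputs
            nn.Flatten(), nn.Linear(784, latent_dim), nn.ReLU())
        self.deep = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, num_classes))

    def forward(self, x):
        return self.deep(self.shallow(x))

def train_step(net, generator, x_new, y_new, y_replay, criterion):
    """One rehearsal step: current data passes through the whole network, while
    generated latents are injected at the replay layer, saving the shallow pass."""
    latents = generator(y_replay)   # hypothetical conditional latent generator
    logits_new = net(x_new)
    logits_old = net.deep(latents)  # replayed samples skip net.shallow entirely
    return criterion(logits_new, y_new) + criterion(logits_old, y_replay)
```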