Lecture slides and coding scripts from the 2025 Osnabrück University block course.
We dove into research on continual learning and recurrence over two Friday afternoons, and reproduced results from two research papers from scratch (with plenty of help from ChatGPT) on two Saturdays.
S2/3: Reproducing and analyzing Cheung et al. 2019
Cheung et al.'s continual learning algorithm stands out in the zoo of CL algorithms as a generalized, gating-based architectural approach:
- Randomly sampled binary vectors (entries in {-1, +1}) are mostly orthogonal in high dimensions.
- Using such vectors as task contexts (elementwise multiplied with layer activations) alleviates catastrophic interference between tasks; a minimal sketch follows this list.
- The algorithm constrains learning-related representational drift to the null space of each task's readout, so performance on earlier tasks does not suffer.
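
A minimal NumPy sketch of the context-gating idea (the dimension, task count, and variable names are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_tasks = 2048, 5  # illustrative sizes, not the paper's

# Random binary (-1, +1) context vectors, one per task.
contexts = rng.choice([-1.0, 1.0], size=(n_tasks, dim))

# Pairwise cosine similarities: off-diagonal entries concentrate near 0,
# i.e. the task contexts are nearly orthogonal in high dimensions.
print(np.round(contexts @ contexts.T / dim, 3))

# Context gating: elementwise-multiply a layer's activations with the
# current task's context to rotate them into a task-specific subspace.
h = rng.standard_normal(dim)     # stand-in for hidden activations
task_id = 2
h_gated = h * contexts[task_id]
```

Because the contexts are nearly orthogonal, the gated representations of different tasks barely overlap, which is what keeps learning one task from overwriting another.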
S5/6: Reproducing and analyzing Thorat et al. 2021
Thorat et al.'s approach sheds light on the computations inside recurrent neural networks that allow them to refine their category inferences over time.
- Recurrent CNNs (RCNNs) in which recurrence modulates the feedforward drive multiplicatively show clutter reduction at the input.
- Additionally, information about an auxiliary variable, the location of the intact digit, increases with time and layer depth.
- This clutter reduction is not observed with additive modulation, even though the decoding results are the same. The difference between the computations driven by multiplicative and additive modulation is intriguing (see the sketch below).
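
A minimal PyTorch sketch contrasting the two interaction types. This is a generic recurrent conv block, not the exact architecture from the paper; the class name `ModulatedConvBlock`, the `interaction` flag, and all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ModulatedConvBlock(nn.Module):
    """One recurrent conv block whose feedforward drive is modulated by
    its own previous output, either multiplicatively or additively."""

    def __init__(self, channels: int, interaction: str = "multiplicative"):
        super().__init__()
        self.ff = nn.Conv2d(channels, channels, 3, padding=1)   # feedforward path
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # recurrent path
        self.interaction = interaction

    def forward(self, x: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        drive = self.ff(x)
        mod = self.rec(h_prev)
        if self.interaction == "multiplicative":
            # Recurrence scales the feedforward drive (gain control),
            # which can suppress responses to cluttered input regions.
            return torch.relu(drive * (1.0 + mod))
        # Additive interaction: recurrence biases the drive instead.
        return torch.relu(drive + mod)

# Unroll over time: the output at step t feeds back as h_prev at t + 1.
block = ModulatedConvBlock(channels=8, interaction="multiplicative")
x = torch.randn(1, 8, 28, 28)
h = torch.zeros_like(x)
for t in range(4):
    h = block(x, h)
```

Swapping `interaction` to `"additive"` changes only how the recurrent signal enters the block, which is the contrast the decoding and clutter-reduction analyses above are probing.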







