Hands-on-NeuroAI

Lecture slides and coding scripts from the 2025 Osnabrück University block course.

We dove into research on continual learning and recurrence over two Friday afternoons, and on two Saturdays we reproduced results from two research papers from scratch (with plenty of help from ChatGPT).


S2/3: Reproducing and analyzing Cheung et al. 2019

Cheung et al.'s continual learning algorithm is unique in the zoo of CL algorithms. It is a generalized, gating-based architectural approach:

[Figure: Cheung setup]

  1. Randomly sampled binary vectors (entries in {-1, +1}) in high dimensions are nearly orthogonal.

[Figure: Random-vector alignment]
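To see the near-orthogonality numerically, here is a minimal numpy sketch (not taken from the course scripts; the dimensions and sample counts are arbitrary): the cosine similarity between independent random {-1, +1} vectors concentrates around zero, with a spread that shrinks like 1/sqrt(dimension).

```python
import numpy as np

rng = np.random.default_rng(0)

for dim in (16, 256, 4096):
    # 100 random binary context vectors with entries in {-1, +1}
    vecs = rng.choice([-1.0, 1.0], size=(100, dim))
    # Cosine similarity between all pairs; each vector has norm sqrt(dim)
    cos = (vecs @ vecs.T) / dim
    off_diag = cos[~np.eye(100, dtype=bool)]
    print(f"dim={dim:5d}  mean |cos| = {np.abs(off_diag).mean():.3f}  "
          f"std = {off_diag.std():.3f}")
```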

  2. Using such vectors as per-task contexts (elementwise multiplied with layer activations) alleviates catastrophic interference between tasks.

[Figure: CL performance]
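As a rough sketch of the gating idea (the layer sizes, the single hidden layer, and exactly where the context is applied are assumptions for illustration, not the paper's implementation): each task gets a fixed random binary context vector that is elementwise-multiplied with the hidden activations, so different tasks route through nearly orthogonal gated versions of the same shared weights.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden, d_out, n_tasks = 784, 512, 10, 5

# Shared weights (training is omitted; this only shows the gated forward pass)
W1 = rng.normal(0, 1 / np.sqrt(d_in), (d_hidden, d_in))
W2 = rng.normal(0, 1 / np.sqrt(d_hidden), (d_out, d_hidden))

# One fixed {-1, +1} context vector per task, sampled once and never trained
contexts = rng.choice([-1.0, 1.0], size=(n_tasks, d_hidden))

def forward(x, task_id):
    h = np.maximum(W1 @ x, 0.0)      # shared hidden layer
    h = h * contexts[task_id]        # task-specific binary gating of activations
    return W2 @ h                    # shared readout

x = rng.normal(size=d_in)
print(forward(x, task_id=0)[:3])
print(forward(x, task_id=1)[:3])     # same input, different task context
```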

  3. The algorithm constrains learning-related representational drift to the null space of each task's readout, so performance on earlier tasks does not take a hit.

[Figure: Nullspace drift]
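The null-space argument can be checked directly: if the component of the drift that lies in the readout's row space is removed, the readout output (and hence task performance) is untouched. The projector construction below is a small illustration of this point, not the paper's analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)
d_hidden, d_out = 512, 10

W_out = rng.normal(size=(d_out, d_hidden))   # task readout
h = rng.normal(size=d_hidden)                # a hidden representation
drift = rng.normal(size=d_hidden)            # arbitrary representational drift

# Project the drift onto the null space of W_out by removing the component
# in the row space of W_out (pinv gives the orthogonal projector).
row_space_proj = W_out.T @ np.linalg.pinv(W_out.T)
null_drift = drift - row_space_proj @ drift

print(np.allclose(W_out @ (h + null_drift), W_out @ h))   # True: readout unchanged
print(np.allclose(W_out @ (h + drift), W_out @ h))        # False in general
```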


S5/6: Reproducing and analyzing Thorat et al. 2021

Thorat et al.'s approach sheds light on the computations inside recurrent neural networks that allow them to refine their category inferences over time.

[Figure: Refinement setup]

  1. Recurrent convolutional neural networks (RCNNs) with multiplicative modulation as the feedforward-recurrence interaction show clutter reduction at the input.

[Figure: Clutter reduction]
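A toy, fully connected sketch of the multiplicative interaction (the paper's models are convolutional and trained; the weights, shapes, and the `1 + feedback` gain form here are assumptions for illustration): recurrent feedback is turned into a per-dimension gain on the input, so recurrence can scale input dimensions up or down across time steps.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid = 64, 32

W_ff = rng.normal(0, 1 / np.sqrt(d_in), (d_hid, d_in))    # feedforward weights
W_fb = rng.normal(0, 1 / np.sqrt(d_hid), (d_in, d_hid))   # feedback into input space

def run_multiplicative(x, n_steps=4):
    h = np.zeros(d_hid)
    for _ in range(n_steps):
        gain = 1.0 + W_fb @ h        # feedback-derived per-dimension gain
        x_mod = x * gain             # multiplicative modulation of the input
        h = np.maximum(W_ff @ x_mod, 0.0)
        yield x_mod, h               # inspect the modulated input over time

x = rng.normal(size=d_in)
for t, (x_mod, h) in enumerate(run_multiplicative(x)):
    print(f"t={t}  ||x_mod|| = {np.linalg.norm(x_mod):.2f}")
```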

  2. Additionally, information about an auxiliary variable (the location of the intact digit) increases with time and layer depth.

[Figure: Decoding results]
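The decoding analysis itself is a per-time-step (and per-layer) linear readout. The template below uses synthetic activations whose location signal grows with time purely by construction, just to show the shape of the analysis; scikit-learn's LogisticRegression and cross_val_score are assumed available here and are not necessarily what the paper used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_steps, d_feat = 200, 4, 50
locations = rng.integers(0, 4, size=n_trials)   # auxiliary variable: 4 possible locations

# Synthetic "activations" whose location signal grows with time step (by construction)
signal = rng.normal(size=(4, d_feat))
acts = rng.normal(size=(n_steps, n_trials, d_feat))
for t in range(n_steps):
    acts[t] += (t + 1) * 0.3 * signal[locations]

# Fit a separate linear decoder at each time step and compare accuracies
for t in range(n_steps):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          acts[t], locations, cv=5).mean()
    print(f"time step {t}: location decoding accuracy = {acc:.2f}")
```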

  3. This clutter reduction is not observed with additive modulation, even though the same decoding results are seen. This difference between multiplicative and additive modulation-driven computations is intriguing!

[Figure: Additive modulation]
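For contrast, the additive variant under the same toy assumptions as the multiplicative sketch above: the feedback adds an offset to the input instead of rescaling each input dimension, which is one intuition (not a claim from the paper) for why gain-like suppression of clutter would look different.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid = 64, 32

W_ff = rng.normal(0, 1 / np.sqrt(d_in), (d_hid, d_in))
W_fb = rng.normal(0, 1 / np.sqrt(d_hid), (d_in, d_hid))

def run_additive(x, n_steps=4):
    h = np.zeros(d_hid)
    for _ in range(n_steps):
        x_mod = x + W_fb @ h         # additive modulation: feedback offsets the input
        h = np.maximum(W_ff @ x_mod, 0.0)
        yield x_mod, h

x = rng.normal(size=d_in)
for t, (x_mod, _) in enumerate(run_additive(x)):
    print(f"t={t}  ||x_mod|| = {np.linalg.norm(x_mod):.2f}")
```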
