> [!meta]+ Metadata
> Professor: Benjamin Grewe
> Assistant Lecturers: Pau Aceituno, Martino Sorbaro
> Guest Lecturers: Giacomo Indiveri, Friedemann Zenke, Christoph von der Malsburg
> Semester: Fall 2022
> [!PDFs]+ PDFs
> (2022) Elia's Lecture Notes: [[ETH - Deep Learning in Artificial & Biological Neuronal Networks - PDF.pdf]]
> (2019) TAs Lecture Notes: [[HS2019_LearningInDeepArtificialAndBiologicalNeuronalNetworks.pdf]]
- ## Introduction
- Lecture Notes: [[ETH/ETH - Deep Learning in Artificial & Biological Neuronal Networks/Lecture Notes - ETH Deep Learning in Artificial & Biological Neuronal Networks/Introduction]]
- Extracted Topics:
- [[Course Overview]]
- [[Human Brain - Deep Networks Analogies]]
- ## Plasticity in the Brain
- Lecture Notes: [[Plasticity in the Brain]]
- Extracted Topics:
- [[Synaptic Plasticity]] (a pair-based STDP sketch follows this section)
- [[The Hippocampus as a Model System to Study Neural Plasticity]]
- [[Non-Synaptic Plasticity]]
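The canonical quantitative model behind [[Synaptic Plasticity]] is the pair-based spike-timing-dependent plasticity (STDP) window. A minimal NumPy sketch; the amplitudes and time constants below are illustrative placeholders, not the lecture's values:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change as a function of the spike-timing
    difference delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses.
    All constants here are illustrative, not the lecture's values.
    """
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau_plus),    # LTP branch
        -a_minus * np.exp(delta_t / tau_minus),  # LTD branch
    )

# The classic asymmetric STDP window, sampled at a few timing differences
timing = np.linspace(-50, 50, 11)
print(np.round(stdp_dw(timing), 5))
```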
- ## Training Methods for Deep ANNs
- Lecture Notes: [[Training Methods for Deep ANNs]]
- Extracted Topics:
- [[The Backpropagation of the Error Method (BP)]]
- [[Feedback Alignment (FA)]] (contrasted with backprop in the sketch below)
- [[Target Propagation]]
- [[Local (layer-wise) Training for Deep Neural Networks]]
- [[The Deep Feedback Control Method]]
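A minimal NumPy sketch contrasting the backward pass of [[The Backpropagation of the Error Method (BP)]] with [[Feedback Alignment (FA)]]: FA propagates the output error to the hidden layer through a fixed random matrix B instead of the transposed forward weights. Network sizes and weight scales are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(0, 0.5, size=(n_hid, n_in))    # forward weights, layer 1
W2 = rng.normal(0, 0.5, size=(n_out, n_hid))   # forward weights, layer 2
B = rng.normal(0, 0.5, size=(n_hid, n_out))    # fixed random feedback (FA only)

x, target = rng.normal(size=n_in), rng.normal(size=n_out)
h = np.tanh(W1 @ x)                            # hidden activity
y = W2 @ h                                     # linear output
e = y - target                                 # output error (squared-error loss)

# Backprop sends the error to the hidden layer through the transpose W2.T ...
delta_bp = (W2.T @ e) * (1 - h**2)
# ... feedback alignment uses the fixed random matrix B instead
delta_fa = (B @ e) * (1 - h**2)

# The weight-update form is identical; only the hidden delta differs
dW1_bp, dW1_fa = np.outer(delta_bp, x), np.outer(delta_fa, x)
dW2 = np.outer(e, h)

cos = delta_bp @ delta_fa / (np.linalg.norm(delta_bp) * np.linalg.norm(delta_fa))
print("BP vs FA hidden-delta alignment (cosine):", round(float(cos), 3))
```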
- ## Learning Rules
- Lecture Notes: [[Learning Rules]]
- Extracted Topics:
- [[Why are (local) Neuronal Learning Rules Important?]]
- [[Perceptron Learning Rule]]
- [[ADALINE & Delta Learning Rule]]
- [[Hebbian Learning Rule]]
- [[Oja's Learning Rule]] (sketched below, together with the Hebbian rule)
- [[Covariance Learning Rule]]
- [[Sanger's Learning Rule]]
- [[Calcium Rule]]
- [[Sejnowski's Infomax Network (ICA) Rule & Bienenstock-Cooper-Munro (BCM) Rule]]
- [[Triplet Rule (Pfister & Gerstner)]]
- [[Extensions of Hebbian Learning Rules (Neo-Hebbian)]]
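Two of the rules above side by side: the plain [[Hebbian Learning Rule]] dw = eta·y·x grows without bound, while [[Oja's Learning Rule]] dw = eta·y·(x - y·w) adds a normalizing decay term, so the weight vector converges to unit norm along the first principal component of the input. A sketch on toy data (input distribution and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy zero-mean inputs with most variance along the first axis
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])

def hebb(w, x, y, eta):
    return w + eta * y * x              # dw = eta * y * x  (unbounded growth)

def oja(w, x, y, eta):
    return w + eta * y * (x - y * w)    # dw = eta * y * (x - y * w)

w_hebb = rng.normal(size=2) * 0.1
w_oja = rng.normal(size=2) * 0.1
for x in X:
    w_hebb = hebb(w_hebb, x, w_hebb @ x, eta=0.01)
    w_oja = oja(w_oja, x, w_oja @ x, eta=0.01)

print("Hebb weight norm (diverges):", np.linalg.norm(w_hebb))
print("Oja weight (unit norm, aligned with first PC):", w_oja)
```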
- ## Reinforcement Learning
- Lecture Notes: [[Reinforcement Learning]]
- Extracted Topics:
- [[Introduction - Reinforcement Learning & the Brain]]
- [[Rescorla-Wagner Rule]] (sketched below, together with Q-learning)
- [[Temporal Difference Rule & Q-Learning]]
- [[Basic Components of Reinforcement Learning - Policy & Value Functions]]
- [[Markov Chains (MC), Markov Reward Processes (MRPs) & Markov Decision Processes (MDPs)]]
- [[Bellman (Expectation) Equation]]
- [[Deep Reinforcement (Q) Learning]]
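The [[Rescorla-Wagner Rule]] and the [[Temporal Difference Rule & Q-Learning]] updates share one error-driven form: move the estimate toward a target by a fraction of the prediction error. A toy sketch (the single-stimulus simplification, the chain environment, and all constants are illustrative):

```python
import numpy as np

# Rescorla-Wagner (single stimulus, lambda = 1): V <- V + alpha * (reward - V)
V, alpha = 0.0, 0.1
for _ in range(50):
    V += alpha * (1.0 - V)              # repeated pairing with reward = 1
print("learned associative strength:", round(V, 3))

# Tabular Q-learning on a toy chain: s0 -> s1 -> s2 (reward 1, terminal)
n_states, n_actions = 3, 1
Q = np.zeros((n_states, n_actions))
gamma, alpha = 0.9, 0.5
for _ in range(100):
    for s in (0, 1):
        s_next = s + 1
        r = 1.0 if s_next == 2 else 0.0
        bootstrap = 0.0 if s_next == 2 else gamma * Q[s_next].max()
        # TD error: delta = r + gamma * max_a' Q(s', a') - Q(s, a)
        Q[s, 0] += alpha * (r + bootstrap - Q[s, 0])
print("Q-values:", Q.ravel())           # expect approximately [0.9, 1.0, 0.0]
```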
- ## Un- and Self-Supervised Learning
- Lecture Notes: [[Un- and Self-Supervised Learning]]
- Extracted Topics:
- [[Unsupervised Learning - Introduction & Motivation]]
- [[Unsupervised Learning (UL) in the Brain]]
- [[Sparse Coding & Relation to Neuroscience]]
- [[Non-Probabilistic UL - PCA, ICA (Infomax)]]
- [[Non-Probabilistic UL - Autoencoders & Supervised Autoencoders]] (a minimal sketch follows this section)
- [[Non-Probabilistic UL - Contracting Autoencoders]]
- [[Non-Probabilistic UL - Denoising & Sparse Autoencoders]]
- [[Non-Probabilistic UL - "Homomorphism" Autoencoders]]
- [[Non-Probabilistic UL - Competitive Network Learning]]
- [[Probabilistic (Generative) Unsupervised Learning]]
- [[Probabilistic (Generative) UL - Boltzmann Machines]]
- [[Probabilistic (Generative) UL - Contrastive Divergence]]
- [[Self-Supervised Learning - Pixel-RNN]]
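A minimal sketch of the autoencoder idea that threads through the non-probabilistic topics above: a tied-weight linear autoencoder trained by gradient descent on the reconstruction error, whose optimal weights span the same subspace PCA finds, which is why the two sit side by side in this lecture. Data dimensions, learning rate, and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 6 dimensions, most variance in the first two directions
X = rng.normal(size=(500, 6)) * np.array([3.0, 2.0, 0.5, 0.5, 0.5, 0.5])
X -= X.mean(axis=0)                        # center the data

k, eta = 2, 0.005
W = rng.normal(0, 0.1, size=(k, 6))        # tied weights: encode W, decode W.T

for _ in range(2000):
    Z = X @ W.T                            # encode: latent codes
    E = Z @ W - X                          # decode and take reconstruction error
    # Gradient of ||Z W - X||^2 w.r.t. W with tied weights has two chain-rule
    # terms (encoder and decoder); the constant factor 2 is absorbed into eta
    grad = (Z.T @ E + W @ E.T @ X) / len(X)
    W -= eta * grad

print("reconstruction MSE:", round(float(np.mean((X @ W.T @ W - X) ** 2)), 3))
```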
- ## Meta-Learning
- Lecture Notes: [[Meta-Learning]]
- Extracted Topics:
- [[Meta-Learning with ANNs - What is Meta-Learning?]]
- [[Meta-Learning with ANNs - Metric-Based (Prototypical, Siamese, Matching and Relation Networks)]]
- [[Meta-Learning with ANNs - Model-Based (Meta & Hyper Networks)]]
- [[Meta-Learning with ANNs - Optimization-Based (Model-Agnostic Meta-Learning)]] (first-order sketch below)
- [[Meta-Learning in the Brain]]
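A sketch of the optimization-based approach, using the first-order variant of Model-Agnostic Meta-Learning (FOMAML) so the NumPy code needs no second-order derivatives: the outer loop improves the shared initialization theta with the gradient evaluated at the task-adapted parameters. The task family, model, and step sizes are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, dim = 0.1, 0.05, 3             # inner / outer learning rates

def sample_task():
    """Toy task family: linear regressions clustered around a common weight."""
    w_true = np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=dim)
    X = rng.normal(size=(20, dim))
    return X, X @ w_true

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(X)   # gradient of the mean squared error

theta = np.zeros(dim)                       # the meta-learned initialization
for _ in range(1000):
    X, y = sample_task()
    w_task = theta - alpha * grad(theta, X, y)       # inner adaptation step
    # First-order MAML: the outer step uses the gradient at the adapted params
    theta -= beta * grad(w_task, X, y)

X, y = sample_task()                        # a fresh task
w = theta - alpha * grad(theta, X, y)       # one gradient step from theta
print("post-adaptation MSE:", round(float(np.mean((X @ w - y) ** 2)), 4))
```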
- ## Continual Learning
- Lecture Notes: [[Continual Learning]]
- Extracted Topics:
- [[Continual Learning - Introduction]]
- [[Continual Learning - Strategies]]
- [[Continual Learning - Regularization Methods (Elastic Weight Consolidation & Synaptic Intelligence)]] (EWC penalty sketched below)
- [[Continual Learning - Data Replay Methods]]
- [[Continual Learning & the Brain]]
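The heart of Elastic Weight Consolidation is a quadratic penalty that anchors the parameters important for earlier tasks: L(theta) = L_new(theta) + (lambda/2) · sum_i F_i (theta_i - theta*_i)^2, with F_i a diagonal Fisher-information estimate. A minimal sketch (all numbers below are placeholders):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def ewc_grad(theta, theta_star, fisher, lam):
    """Its gradient, added to the new task's loss gradient during training."""
    return lam * fisher * (theta - theta_star)

theta_star = np.array([1.0, -0.5, 2.0])   # parameters after task A
fisher = np.array([5.0, 0.1, 1.0])        # diagonal Fisher estimate (placeholder)
theta = np.array([0.8, 0.3, 2.0])         # current parameters during task B

print("penalty:", ewc_penalty(theta, theta_star, fisher, lam=10.0))
print("gradient:", ewc_grad(theta, theta_star, fisher, lam=10.0))
```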
- ## Why Spikes?
- Lecture Notes: [[Why Spikes?]]
- Extracted Topics:
- [[What is a Neuronal Spike]]
- [[Digital vs Non-Digital Communication]]
- [[Non-Spiking Biological Systems & Different Types of Action Potentials]]
- [[How To Measure Spiking Activity in a Biological Neuron]]
- [[Temporal Coding Schemes with Spikes]]
- [[Deep Learning with "Time to First Spike"]]
- [[Neuronal Spiking Dynamics (Hodgkin-Huxley Model)]]
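A forward-Euler simulation of the Hodgkin-Huxley model with the textbook squid-axon parameters; the lecture's exact constants and stimulus may differ:

```python
import numpy as np

# Standard HH parameters (squid giant axon; potentials in mV, time in ms)
C_m = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n
a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                         # Euler step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting-state initial conditions
spikes = 0
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0      # step current, uA/cm^2
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V_new = V + dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                   # count upward zero crossings as spikes
        spikes += 1
    V = V_new
print("spikes in", T, "ms:", spikes)
```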
- ## Deep Learning with Spikes
- Lecture Notes: [[Deep Learning With Spikes]]
- Extracted Topics:
- [[Spiking Neuron Models]] (a leaky integrate-and-fire sketch follows this section)
- [[Supervised Learning in Multi-Layer Spiking Networks - Introduction]]
- [[Supervised Learning in Multi-Layer Spiking Networks - Seq2Seq Learning]]
- [[Neuromorphic Hardware & Spiking Neural Networks]]
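The simplest of the [[Spiking Neuron Models]] is the leaky integrate-and-fire (LIF) neuron: integrate a leaky membrane equation, emit a spike at threshold, then reset. A minimal simulation with illustrative constants:

```python
tau_m, v_rest, v_reset, v_th = 20.0, -65.0, -70.0, -50.0   # ms and mV
r_m, i_ext, dt = 10.0, 2.0, 0.1                            # MOhm, nA, ms

v, spike_times = v_rest, []
for step in range(int(200.0 / dt)):            # simulate 200 ms
    # Leaky integration: tau_m * dv/dt = -(v - v_rest) + r_m * i_ext
    v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)
    if v >= v_th:                              # threshold crossing: emit a spike
        spike_times.append(step * dt)
        v = v_reset                            # hard reset after the spike
print("spike times (ms):", [round(t, 1) for t in spike_times])
```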
- ## Learning in Recurrent Neuronal Networks
- Lecture Notes: [[Learning in Recurrent Neuronal Networks]]
- Extracted Topics:
- [[RNNs in the Brain - Circuit-Level Recurrence - Anatomical & Functional Evidence]]
- [[RNNs in Machine Learning & Back-Propagation Through Time]]
- [[RNNs in Theoretical Neuroscience - Hopfield Networks, Reservoir Computing & Self-Organizing Recurrent Networks (SORN)]] (Hopfield recall sketched below)
- [[Long Short-Term Memory (LSTM) Networks]]
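A sketch of Hopfield-network storage and recall: patterns are stored with a Hebbian outer-product rule, and a corrupted probe is cleaned up by iterating sign updates until it falls into the nearest stored attractor. Pattern count and sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5                                  # well below capacity ~0.138 n
patterns = rng.choice([-1, 1], size=(p, n)).astype(float)

# Hebbian storage: W = (1/n) * sum_p x_p x_p^T, with no self-connections
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

probe = patterns[0].copy()                     # corrupt a stored pattern
flip = rng.choice(n, size=15, replace=False)
probe[flip] *= -1                              # flip 15 of the 100 bits

s = probe
for _ in range(10):                            # synchronous sign updates
    s = np.sign(W @ s)
    s[s == 0] = 1.0                            # break the rare ties
print("overlap with stored pattern:", float(s @ patterns[0]) / n)   # ~1.0
```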
- ## Predictive Coding
- Lecture Notes: [[Predictive Coding]]
- Extracted Topics:
- [[Information Coding]]
- [[Temporal Predictions]]
- [[Predictive Coding - Circuits & Learning]] (a linear sketch follows this section)
- [[Predictive Coding - Problems]]
- [[Bayesian Brain & Free Energy Principle]]
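The circuit scheme of [[Predictive Coding - Circuits & Learning]] in its simplest linear form, in the spirit of Rao & Ballard: top-down predictions W r are subtracted from the input to give an error signal, inference relaxes the latent activity r along this error, and learning updates W with the same error. Sizes, rates, and the Gaussian "stimuli" are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_latent = 16, 4
W = rng.normal(0, 0.1, size=(n_input, n_latent))   # generative weights

def infer(x, W, steps=50, eta_r=0.1):
    """Inference: settle r by descending ||x - W r||^2 + ||r||^2."""
    r = np.zeros(n_latent)
    for _ in range(steps):
        e = x - W @ r                  # prediction error (error units)
        r += eta_r * (W.T @ e - r)     # bottom-up error drive minus a prior decay
    return r, x - W @ r

eta_w = 0.01
for _ in range(500):                   # learning loop over random "stimuli"
    x = rng.normal(size=n_input)
    r, e = infer(x, W)
    W += eta_w * np.outer(e, r)        # Hebbian-like weight update: dW ~ e r^T
print("residual error norm on the last stimulus:", np.linalg.norm(e))
```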
- ## Neuromorphic Intelligence
- Lecture Notes: [[Neuromorphic Intelligence]]
- Extracted Topics:
- [[Neuromorphic Intelligence - Introduction & Relation to Neuroscience]]
- [[Neuromorphic Engineering Approach]]
- [[Neuromorphic Synapse Analog Circuits]]
- [[Neuromorphic Processors]]
- [[Neuromorphic Pros & Cons]]
- [[Neuromorphic Applications]]
- ## Attention Is All You Need
- Lecture Notes: [[Attention Is All You Need]]
- Extracted Topics:
- [[Attention in Neuroscience - Selective Attention & Visual Saliency]]
- [[Attention Models in Machine Learning - Self-Attention, Transformers and GPTs]]
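The machine-learning half of this lecture revolves around one formula, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A single-head self-attention sketch in NumPy (all dimensions arbitrary):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise token similarities
    return softmax(scores) @ V                 # attention-weighted mixture

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))        # one token embedding per row
Wq, Wk, Wv = (rng.normal(0, 0.5, size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 4)
```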
- ## How Can Biological Learning Be So Efficient?
- Lecture Notes: [[How Can Biological Learning Be So Efficient?]]
- Extracted Topics:
- [[Ontogenesis, Kolmogorov Complexity & Self-Reinforcing Networks]]
- [[Representation, Perception & Learning]]