**RNNs in Theoretical Neuroscience**
**Hopfield Network**
A Hopfield network is a form of recurrent artificial neural network popularized by John Hopfield in 1982, but described earlier by Little in 1974. Hopfield nets serve as content-addressable ("associative") memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, but will sometimes converge to a false pattern (wrong local minimum) rather than the stored pattern (expected local minimum).
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image252.png]]
A Hopfield network consists of units with a binary state (1/0). The units are updated asynchronously or synchronously according to the following rule:
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image253.png]]
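Written out, the standard binary-threshold update (a reconstruction in the 1/0 convention used here; the exact form is given in the figure above) is:
$S_{i} \leftarrow \begin{cases} 1 & \text{if } \sum_{j} w_{ij}S_{j} \geq \theta_{i} \\ 0 & \text{otherwise} \end{cases}$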
Here, $S_{i}$ is the state of the *i*-th unit of the Hopfield network, $w_{ij}$ is the connection weight between units *i* and *j*, and $\theta_{i}$ is the threshold of unit *i*. One can define an energy term as:
$E = - \frac{1}{2}\sum_{i,j}w_{ij}S_{i}S_{j} + \sum_{i}\theta_{i}S_{i}$
Under asynchronous updates with symmetric weights, the energy either stays constant or decreases with each update step, which is what guarantees convergence to a local minimum.
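As a concrete illustration (not from the lecture), here is a minimal NumPy sketch of Hebbian pattern storage, asynchronous recall, and the energy function; it uses the common ±1 state convention with zero thresholds, so the constants differ slightly from the 1/0 rule above.

```python
import numpy as np

# Minimal Hopfield sketch (illustrative; ±1 states, zero thresholds).
rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian weights from a set of ±1 patterns, no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Energy of state s (thresholds set to zero in this sketch)."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=500):
    """Asynchronous updates; the energy never increases."""
    s = s.copy()
    n = len(s)
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = rng.choice([-1, 1], size=(3, 50))                  # three stored patterns
W = store(patterns)
noisy = patterns[0] * rng.choice([1, 1, 1, 1, -1], size=50)   # flip ~20% of bits
print(energy(W, noisy), energy(W, recall(W, noisy)))          # energy decreases
```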
**Reservoir Computing**
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image254.png]]
Reservoir computing is a framework for computation that may be viewed as an extension of neural networks. An input signal is fed into a fixed (random) dynamical system called a reservoir (for example an RNN), whose dynamics map the input into a higher-dimensional space. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage while the reservoir stays fixed. A striking consequence is that essentially any (abstract or physical) dynamical system can be used as the reservoir, including a water tank, an electronic circuit, or parts of the brain itself.
- **Echo State Network**: recurrent neural network with a random, sparsely connected (~1% connectivity) hidden layer acting as the reservoir; operates in discrete time and supports different activity-update modes.
- **Liquid State Machine**: biologically plausible spiking RNN reservoir, randomly connected, continuous in time with asynchronous integration; uses linear discriminant units for the readout.
Where is the memory? In the dynamic traces of activity.
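A minimal echo-state-network sketch, purely illustrative: all sizes, scalings, and the toy task are assumptions, but it shows the fixed sparse random reservoir and the linear (ridge-regression) readout that is the only trained part.

```python
import numpy as np

# Illustrative echo state network: fixed sparse random reservoir, trained readout.
rng = np.random.default_rng(1)
n_res, n_in, sparsity = 200, 1, 0.01

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res)) * (rng.random((n_res, n_res)) < sparsity)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run(u_seq):
    """Collect reservoir states for an input sequence (discrete time)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
u = np.sin(np.arange(1000) * 0.1)
X, y = run(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
print(np.mean((X @ W_out - y) ** 2))              # training error of the linear readout
```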
**Learning Dynamics/Algorithms**
- Learning via Backpropagation Through Time (**BPTT**)
- Learning via Real Time Recurrent Learning (**RTRL**)
- **Reservoir Computing**: the fixed reservoir captures features of the dynamics, and a trained readout can use them to generate the desired output.
- **FORCE**: First-Order, Reduced and Controlled Error (FORCE) learning. In all three cases shown in the following figure, a recurrent generator network with firing rates **r** drives a linear readout unit with output *z* through weights **w** (red) that are modified during training. Only connections shown in red are subject to modification.
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image255.png]]
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image256.png]]
(A) Feedback to the generator network (large network circle) is provided by the readout unit. (B) Feedback to the generator is provided by a separate feedback network (smaller network circle). Neurons of the feedback network are recurrently connected and receive input from the generator network through synapses, which are modified during training. (C) A network with no external feedback. Instead, feedback is generated within the network and modified by applying FORCE learning to the synapses internal to the network.
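A minimal sketch of FORCE learning in the spirit of panel (A): a fixed random rate network with feedback from the readout, whose weights **w** are trained online with recursive least squares. Network size, gains, and the target function are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# FORCE sketch: generator network drives readout z = w·r, z is fed back,
# and only w is modified during training via recursive least squares (RLS).
rng = np.random.default_rng(2)
N, dt, g = 300, 0.1, 1.5
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights (fixed)
w_fb = rng.uniform(-1, 1, N)                    # feedback weights (fixed)
w = np.zeros(N)                                 # readout weights (trained)
P = np.eye(N)                                   # running inverse correlation estimate

x = rng.normal(0, 0.5, N)
r = np.tanh(x)
z = 0.0
t = np.arange(0, 200, dt)
target = np.sin(0.2 * t)                        # target output f(t)

for ft in target:
    x += dt * (-x + J @ r + w_fb * z)           # generator dynamics
    r = np.tanh(x)
    z = w @ r
    # RLS update of the readout weights (the FORCE step)
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - ft) * k

print(abs(w @ r - target[-1]))                  # final readout error (small after training)
```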
**Self-Organizing Recurrent Networks (SORN)**
![[ETH/ETH - Introduction to Neuroinformatics/Images - ETH Introduction to Neuroinformatics/image257.png]]
The SORN combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. It learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are essential for the network's success.
1. STDP rule.
2. Weight Normalization Rule.
3. Intrinsic Plasticity Rule (threshold adaptation).
Process:
1. Start with a random network.
2. Add biologically plausible local learning rules.
3. Train the readout layer linearly.
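A very reduced SORN-style sketch, assuming the three plasticity rules are STDP, synaptic (weight) normalization, and intrinsic plasticity (threshold adaptation) as in the original SORN model; all parameters and the toy readout task are illustrative.

```python
import numpy as np

# Reduced SORN-style network: binary threshold units, three local plasticity
# rules applied online, and a linear readout trained afterwards.
rng = np.random.default_rng(3)
N, eta_stdp, eta_ip, target_rate = 100, 0.001, 0.01, 0.1

W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse random weights
np.fill_diagonal(W, 0.0)
theta = rng.uniform(0.0, 0.5, N)                       # unit thresholds

def step(x, u):
    """One discrete-time update of the binary network state."""
    return (W @ x + u - theta > 0).astype(float)

states, x = [], np.zeros(N)
inputs = rng.random((2000, N)) < 0.05                  # random spatio-temporal drive
for u in inputs.astype(float):
    x_new = step(x, u)
    # 1. STDP: strengthen pre-before-post pairs, weaken post-before-pre.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    # 2. Synaptic (weight) normalization: incoming weights sum to a constant.
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    # 3. Intrinsic plasticity: nudge thresholds toward a target firing rate.
    theta += eta_ip * (x_new - target_rate)
    states.append(x_new)
    x = x_new

# Linear readout trained afterwards (least squares on the collected states).
X = np.array(states)
y = inputs[:, 0].astype(float)                         # toy target signal
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.mean((X @ w_out - y) ** 2))
```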