**Covariance Rule**
Another modification of the Hebbian rule above uses an idea similar to mean-centering of the data; however, instead of transforming the data itself, the weights $w$ are updated using mean-centered inputs $x$ and outputs $y$.
![[ETH/ETH - Deep Learning in Artificial & Biological Neuronal Networks/Images - ETH Deep Learning in Artificial & Biological Neuronal Networks/image73.png]]
The last line is the difference between the mean of the product of $x$ and $y$, $\left\langle y \times x \right\rangle$, and the product of the means, $\left\langle y \right\rangle \times \left\langle x \right\rangle$. This rule addresses a similar problem to Oja's rule, namely the unbounded growth of the weights over training. Because the means are subtracted in the update, weight updates can be negative as well as positive: the weights $w$ increase when pre- and post-synaptic firing are positively correlated, and the change is proportional to the covariance of the firing rates.
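A minimal NumPy sketch of this update, assuming a linear neuron $y = w \cdot x$, a learning rate `eta`, and toy data chosen for illustration (none of these specifics come from the notes); the final assertion checks that the update direction indeed equals the empirical covariance of $y$ with $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01  # learning rate (assumed value)

# Toy data: 2-D inputs with a nonzero mean and correlated components.
X = rng.normal(loc=[2.0, -1.0], scale=1.0, size=(500, 2))
X[:, 1] += 0.8 * (X[:, 0] - 2.0)  # make the two components covary

w = rng.normal(scale=0.1, size=2)

for _ in range(100):
    y = X @ w  # linear neuron output for every sample
    # Covariance rule: dw proportional to <y*x> - <y><x>.
    # The subtracted means allow negative (depressive) updates.
    dw = eta * ((y[:, None] * X).mean(axis=0) - y.mean() * X.mean(axis=0))
    w += dw

# Sanity check: <y*x> - <y><x> is the (biased) covariance of y with x.
assert np.allclose(dw / eta, np.cov(np.vstack([y, X.T]), bias=True)[0, 1:])
print(w)
```

Note that in this sketch the update is the batch covariance of pre- and post-synaptic rates, so components of $x$ that merely have a large mean (but do not covary with $y$) leave $w$ unchanged, unlike under the plain Hebbian rule.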