**Perceptron Learning Rule**

The perceptron learning rule was inspired by contemporary models of the neuron, chiefly the McCulloch-Pitts neuron. This model describes basic action potential propagation: multiple presynaptic neurons connect to the soma of a postsynaptic neuron, each contributing according to its synaptic strength. The postsynaptic neuron fires an action potential when sufficiently many presynaptic neurons are active that their joint effect exceeds its activation threshold.

Analogously, the perceptron consists of multiple input values, each connected with its own weight to an output node. The value emitted by the output node depends on whether the weighted sum of the inputs exceeds a threshold. Learning is the process of adjusting the input weights and the output threshold to achieve the desired behavior. Formally, the perceptron computes a thresholded linear transformation: given an input vector x, corresponding weights w, and a threshold b, the output takes one of two values depending on whether the linear transformation lies above or below the threshold (see also the written-out form after the algorithm below):

![[ETH/ETH - Deep Learning in Artificial & Biological Neuronal Networks/Images - ETH Deep Learning in Artificial & Biological Neuronal Networks/image67.png]]

From a machine learning perspective, the perceptron is therefore a linear classifier: its decision is based on the result of a linear transformation. The standard training algorithm, developed by Rosenblatt, loops through the data samples and updates the weights w if and only if the current sample x is misclassified:

![[ETH/ETH - Deep Learning in Artificial & Biological Neuronal Networks/Images - ETH Deep Learning in Artificial & Biological Neuronal Networks/image68.png]]
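Written out in one common convention (with output values 0 and 1; the embedded formula may use a different sign or label convention), the threshold rule above is:

$$
y(\mathbf{x}) =
\begin{cases}
1 & \text{if } \mathbf{w} \cdot \mathbf{x} \geq b \\
0 & \text{otherwise}
\end{cases}
$$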
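As a concrete illustration, here is a minimal Python sketch of the Rosenblatt-style loop described above. It assumes 0/1 labels, replaces the explicit threshold b with a bias term (bias = -b, so the unit fires when w·x + bias > 0), and introduces a learning-rate parameter `lr`; none of these conventions are fixed by the notes.

```python
import numpy as np

def perceptron_train(X, y, lr=1.0, epochs=100):
    """Rosenblatt-style perceptron training (sketch).

    X: (n_samples, n_features) array of inputs.
    y: (n_samples,) array of 0/1 labels.
    Returns learned weights w and bias b (bias = -threshold).
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)  # input weights
    b = 0.0                   # bias term, standing in for -threshold
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            y_hat = 1 if np.dot(w, xi) + b > 0 else 0  # thresholded linear unit
            if y_hat != yi:  # update weights only on misclassified samples
                w += lr * (yi - y_hat) * xi
                b += lr * (yi - y_hat)
                errors += 1
        if errors == 0:  # every sample classified correctly; stop early
            break
    return w, b

# Usage on a hypothetical toy problem: a logical AND gate,
# which is linearly separable, so the loop converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
```

Because the update fires only on misclassified samples, the weights stop changing once the data are perfectly separated, which is why the loop can terminate early when an epoch produces no errors.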