### Lateral Geniculate Nucleus (LGN): A Few Facts
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image52.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image53.png]] |
|---|---|
The LGN has 6 layers, as can be seen in the macaque section shown on the left. The figures in the middle and on the right show that each LGN (left and right hemisphere) receives input from both eyes, but these inputs are not mixed: they remain segregated across the layers.
Layers 1 and 2 are **magnocellular**: their cells are larger, have particularly high contrast sensitivity, and are not color selective. Layers 3, 4, 5 and 6 are **parvocellular**: their cells are smaller than magnocellular cells and, while less contrast sensitive, show red/green color opponency. **Koniocellular** layers contain the smallest cells and are sensitive to short wavelengths, i.e., blue/yellow color opponency. The cells in the LGN share the center-surround receptive field structure of retinal ganglion cells.
The LGN acts mainly as a relay station, projecting visual information on to other areas of the cortex. It does not appear to add much processing beyond what the retina has already done.
**Primary Visual Cortex** (V1 = Area 17 = Striate Cortex)
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image54.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image55.png]] |
|---|---|
### Cortical Processing
**V1 "Simple Cells" & Selectivity for Stimulus Orientation and Direction**
**Orientation** refers to the orientation of the bar, while **direction** refers to the bar's direction of motion.
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image56.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image57.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image58.png]] |
|---|---|---|
### Receptive Field of a Complex Cell
Complex cells show superimposed on/off subregions, so their receptive fields do not have clearly oriented subregions. Nonetheless, they are strongly orientation selective, even though their preferred orientation cannot easily be read off the receptive field map. Complex cells make up the majority of cells in the cortex.
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image59.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image60.png]] |
|---|---|
### Hubel & Wiesel's Feedforward Model of Simple Cells
In this successful model, four LGN neurons project onto a single simple cell in the cortex. On the left we see the receptive fields of the LGN cells, which are aligned in space. The simple cell is maximally activated when the on-centers of all the LGN cells are stimulated, which happens when a properly oriented bar is presented. The model is accurate qualitatively but not quantitatively: orientation selectivity has been shown to be sharpened by cortical connections as well.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image62.png]]
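A minimal numerical sketch of this feedforward idea (parameters are illustrative, not those of the original model): each LGN input is modeled as a difference-of-Gaussians receptive field, and summing four fields whose on-centers are aligned along a line yields an elongated, orientation-selective receptive field.

```python
import numpy as np

def difference_of_gaussians(x, y, x0, y0, sigma_c=0.5, sigma_s=1.0):
    """On-center/off-surround LGN receptive field centered at (x0, y0)."""
    center = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

# Spatial grid in degrees of visual field (illustrative scale).
x, y = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))

# Four LGN receptive fields whose on-centers are aligned along the vertical axis.
lgn_rfs = [difference_of_gaussians(x, y, 0.0, y0) for y0 in (-1.5, -0.5, 0.5, 1.5)]

# The simple cell sums the aligned LGN inputs -> an elongated, oriented RF.
simple_rf = np.sum(lgn_rfs, axis=0)

def response(rf, stimulus):
    """Weighted sum of the stimulus followed by half-wave rectification."""
    return max(0.0, float(np.sum(rf * stimulus)))

# A properly oriented (vertical) bar drives the cell much more than a horizontal one.
vertical_bar = (np.abs(x) < 0.5).astype(float)
horizontal_bar = (np.abs(y) < 0.5).astype(float)
print(response(simple_rf, vertical_bar), response(simple_rf, horizontal_bar))
```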
### Hubel & Wiesel's Feedforward Model of Complex Cells
In this model, the complex cell receives input from three simple cells. We should imagine the simple cells' receptive fields as superimposed rather than spread out as drawn here, which makes it look misleadingly easy to read off the preferred orientation of the complex cell.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image61.png]]
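A sketch of the same idea in code, assuming the simple-cell receptive fields are Gabor-shaped and offset in position (hypothetical parameters): summing their rectified outputs gives a response that keeps the simple cells' orientation preference but tolerates changes in stimulus position.

```python
import numpy as np

def gabor(x, y, x_offset, sigma=1.0, freq=0.5):
    """Vertically oriented simple-cell-like RF (Gabor), shifted by x_offset."""
    xs = x - x_offset
    return np.exp(-(xs**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xs)

x, y = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))

# Three simple cells with the same orientation preference but offset positions.
simple_rfs = [gabor(x, y, x_offset) for x_offset in (-2.0, 0.0, 2.0)]

def complex_response(stimulus):
    """Complex cell: sum of the half-wave-rectified simple-cell responses."""
    return sum(max(0.0, float(np.sum(rf * stimulus))) for rf in simple_rfs)

# A vertical bar excites the complex cell at several positions (position invariance),
# while a horizontal bar produces only a weak response.
for x0 in (-2.0, 0.0, 2.0):
    bar = (np.abs(x - x0) < 0.5).astype(float)
    print(f"vertical bar at x={x0:+.1f} -> {complex_response(bar):.1f}")
print(f"horizontal bar        -> {complex_response((np.abs(y) < 0.5).astype(float)):.1f}")
```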
### Perceptual and Neural Sensitivity: Data from a Monkey
The plot shows the relation between spatial frequency and contrast sensitivity, indicating the contrast at which the whole animal (x symbols) and a single cell (o symbols) can detect a stimulus of a given spatial frequency. The graph shows that a single cell is tuned to a narrow range of spatial frequencies.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image63.png|500]]
### For Simple Cells, Knowing the Receptive Field is Knowing Spatial Frequency Tuning
As we saw for ganglion cells, knowing the shape of the receptive field allows us to predict the spatial-frequency tuning of a cell. If you map the receptive fields of these simple cells and know how many on/off subregions they have, along with their size and sensitivity, you can predict the spatial-frequency tuning with an almost perfect match. The more on/off subregions a cell has, the narrower its spatial-frequency tuning. Not all cells have the same spatial-frequency selectivity: they peak at different frequencies, and this, in turn, explains the whole-observer perceptual contrast-sensitivity curve across spatial frequencies that we saw before.
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image64.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image65.png]] |
|---|---|
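For a linear cell, the spatial-frequency tuning is (up to a constant) the amplitude spectrum of the receptive field. A sketch of this prediction, assuming a Gabor-shaped one-dimensional receptive-field profile with made-up parameters:

```python
import numpy as np

# 1-D cross-section of a simple-cell RF: a Gabor (assumed shape, not measured data).
x = np.linspace(-5, 5, 1024)
sigma, preferred_freq = 1.0, 0.8          # cycles per degree (illustrative)
rf = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * preferred_freq * x)

# The amplitude spectrum of the RF is the predicted spatial-frequency tuning curve.
spectrum = np.abs(np.fft.rfft(rf))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])

print(f"predicted peak tuning: {freqs[np.argmax(spectrum)]:.2f} cycles/deg "
      f"(RF built with {preferred_freq} cycles/deg)")
# A wider RF with more on/off lobes (larger sigma) gives a narrower spectrum,
# i.e., sharper spatial-frequency tuning, as noted in the text.
```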
### Selectivity in V1 is Extremely Sharp
![[image66.jpeg|500]]
The grating at the center of the matrix is the optimal grating for this particular cell. The gratings around it are ones to which the cell no longer responds: changing the orientation only slightly (vertical axis) while keeping the spatial frequency constant abolishes the response, and the same happens when the spatial frequency is changed while the orientation is kept constant (horizontal axis).
### Retinotopy
**Cortical Representation Measured with 2-Deoxy-Glucose**
Retinotopy means that the image projected onto the retina can be recognized in the pattern of cortical activity in V1: there is an orderly representation in which neighboring points on the retina are represented by neighboring neurons in V1. As the figure on the right shows, the right hemisphere of the monkey's V1 is retinotopically organized.
The Simpsons' image shows how the image would be represented in the cortex, i.e., upside-down, reversed and magnified in the foveal region.
|![[image67.jpeg]] |![[image68.jpeg]] |
|---|---|
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image69.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image70.png]] |
### Cortical Magnification
The foveal region of the retina is "magnified" in V1 and takes up most of the cortical area (as can be seen in the top right figure). The figure below shows the relation between distance along the cortical representation and eccentricity in the retina: the foveal region is magnified and the relation is strongly nonlinear, with eccentricity growing roughly exponentially with cortical distance (equivalently, cortical distance grows roughly logarithmically with eccentricity).
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image71.png|500]]
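A sketch of this compressive mapping, using the commonly quoted magnification formula M(E) = A/(E + E2); the constants below are rough literature values for human V1 (an assumption), not the numbers behind the lecture figure.

```python
import numpy as np

# Cortical magnification M(E) = A / (E + E2) mm per degree (illustrative constants).
A, E2 = 17.3, 0.75

def cortical_distance(eccentricity_deg):
    """Distance (mm) from the foveal representation, integrating M(E) from 0 to E."""
    return A * np.log(1.0 + eccentricity_deg / E2)

for ecc in (1, 2, 5, 10, 20, 40):
    print(f"{ecc:>2} deg -> {cortical_distance(ecc):5.1f} mm of cortex")
# The first few degrees around the fovea take up a disproportionate share of V1.
```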
### Ocular Dominance
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image73.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image74.png]] |![[image72.jpeg]] |
|---|---|---|
The leftmost figure shows how the preferred orientation of cells in V1 varies gradually along oblique electrode insertions, indicating a columnar organization into **orientation preference columns**. With the same kind of recording, it was found that certain cells were dominated by input from one particular eye, indicating a columnar organization into **ocular dominance columns**. The middle picture shows another technique for mapping ocular dominance regions in the monkey's cortex (red: left eye, blue: right eye). V1 is the first area where information from the left and right eyes is combined.
### The "Ice-Cube" Model of Hubel & Wiesel
![[image75.jpeg]]
This model explains how visual information would be analyzed by the brain. Each "module" (a green plus a light-blue slab of cells) analyzes a specific region of the visual field: green and light blue represent the ocular dominance columns, and the finer subdivisions represent the orientation preferences of the cells within them. The next module analyzes the neighboring part of the visual field, consistent with retinotopy.
**Orientation and Ocular Dominance Columns**
**Orientation Columns Measured with Optical Imaging**
![[image76.jpeg|500]]
As might be expected, the model proposed by Hubel & Wiesel is not perfectly accurate; in biology we rarely see such well-defined columns. What has been measured, and is shown here, is the organization of cells into orientation columns.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image77.png]]
The figure above shows the relation between ocular dominance columns (thick dark lines) and orientation preference columns (grey lines). Notably, the two types of columns tend to intersect at right angles.
**Columnar Organization**
The column is the elementary computational unit of the visual cortex: the signals from both eyes, for all orientations, are represented within one column.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image78.png]]
### Linear Model of V1 Simple Cells
Responses are a weighted average of the stimulus intensity, where the receptive field is the map of the weights.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image79.png|500]]
**The Linear Model**
At the bottom of the figure we see a light stimulus, represented by a sine wave, followed by the response of the cell. The modelling idea is that we have a receptive field as shown at the top left of the figure (vertically oriented in this case); the pointwise product of light stimulus and receptive field is summed and passed through a simple nonlinearity, a threshold. We obtain a "rectified sinewave": only the positive part of the linear response makes it through.
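A minimal sketch of this two-stage model (linear weighting by the receptive field, then a threshold), with an assumed vertical Gabor receptive field and a drifting vertical grating:

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))

# Vertically oriented RF (Gabor; illustrative parameters).
rf = np.exp(-(x**2 + y**2) / 2.0) * np.cos(2 * np.pi * 0.5 * x)

def linear_response(phase):
    """Weighted sum of a drifting vertical grating, then half-wave rectification."""
    grating = np.cos(2 * np.pi * 0.5 * x - phase)
    drive = float(np.sum(rf * grating))   # linear stage: the RF is the map of the weights
    return max(0.0, drive)                # output nonlinearity: threshold at zero

phases = np.linspace(0, 4 * np.pi, 17)
responses = [linear_response(p) for p in phases]
print(np.round(responses, 1))             # a half-wave-rectified sinusoid over time
```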
**For a Linear Cell, Knowing the Receptive Field is Knowing Everything**
With this linear model, you can predict the responses of neurons, which depend on where the light falls within the receptive field.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image80.png|500]]
**Dependence of Responses on Orientation**
The same holds for orientation: you can multiply (convolve) the receptive field with stimuli of different orientations, and the model returns the predicted neuronal response.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image81.png|500]]
### Nonlinearities in V1 Responses
Studies of V1 cells have revealed several types of nonlinearities. Below we discuss the situations in which the linear model becomes insufficient because the "laws" governing linear systems are violated (see the sketch after the list for a numerical check).
Linear systems L(x) obey:
- **Homogeneity**: L(a\*x) = a L(x)
- **Superposition** (Additivity): L(x+y) = L(x) + L(y)
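A quick numerical check of these two laws, using an arbitrary set of receptive-field weights (made-up values): the purely linear stage passes both tests, while adding a threshold already breaks superposition.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                        # receptive-field weights (made up)

linear = lambda s: float(np.dot(w, s))                 # purely linear stage
rectified = lambda s: max(0.0, float(np.dot(w, s)))    # linear stage + threshold

x_pref = w.copy()        # stimulus matched to the RF  -> positive linear drive
x_anti = -w.copy()       # sign-inverted stimulus      -> negative linear drive

# The linear stage obeys both laws:
print(np.isclose(linear(2 * x_pref), 2 * linear(x_pref)))                     # homogeneity
print(np.isclose(linear(x_pref + x_anti), linear(x_pref) + linear(x_anti)))   # superposition

# A threshold breaks superposition: the two drives cancel before rectification,
# so the response to the sum (0) is far below the sum of the individual responses.
print(rectified(x_pref + x_anti), rectified(x_pref) + rectified(x_anti))
```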
**A Basic Nonlinearity: Thresholding**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image82.png|500]]
The figure shows the membrane potential recorded with an intracellular electrode during presentation of a light stimulus. The membrane potential varies sinusoidally, and whenever it crosses threshold the cell fires an action potential. The lower traces show the measurement made with an extracellular electrode, i.e., the action potentials alone. On the right are the curves predicted by the linear model, which match the measured ones quite closely.
**A Violation of Homogeneity: Saturation**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image83.png|500]]
The figure above shows that as the contrast of the image increases, the neuronal response increases as well. However, it does not increase linearly: the response at 100% contrast is only slightly greater than at 50% contrast. The graph on the right shows this relationship, which is linear at low contrasts but saturates at higher contrasts, violating the homogeneity property of linear systems.
**Saturation Depends on Contrast**
The figure below shows that saturation is set by the stimulus contrast rather than by the response level: for different stimulus orientations the response saturates at different firing rates, but at roughly the same contrast. This shows that saturation is not a biophysical limit of the neuron's output, but is due to something else.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image84.png|500]]
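The saturating contrast-response relation described above is commonly summarized with a hyperbolic-ratio (Naka-Rushton) function; a sketch with illustrative parameters, not fitted to the data in the figures:

```python
import numpy as np

def contrast_response(c, r_max=50.0, c50=0.2, n=2.0):
    """Hyperbolic-ratio (Naka-Rushton) contrast-response function (illustrative parameters)."""
    return r_max * c**n / (c**n + c50**n)

for c in (0.05, 0.1, 0.25, 0.5, 1.0):
    print(f"contrast {c:4.2f} -> {contrast_response(c):5.1f} spikes/s")
# The curve flattens at high contrast: doubling the contrast from 0.5 to 1.0
# barely increases the response, which is the violation of homogeneity described above.
```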
**A Violation of Superposition: Masking**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image85.png|700]]
In the figure above we have a cell whose preferred orientation is horizontal and which shows practically no response to vertical stimuli. This experiment illustrates masking: the response of a V1 neuron to a visual stimulus is suppressed by the presence of a superimposed or nearby stimulus, referred to as the "mask". Specifically, the authors proposed that the response of a V1 neuron is suppressed by the activity of other neurons whose receptive fields overlap with its own but whose preferred orientations differ.
### A Nonlinear Model of V1 Simple Cells
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image86.png|500]]
To include masking in the model of V1 neurons, the model shown in the figure above has been proposed. It adds a divisive inhibition term, which represents the suppression of a neuron's response by the activity of other neurons. This divisive inhibition term is sensitive to the similarity between the preferred orientations of the neuron and its neighboring neurons: when the preferred orientations are similar, the inhibition is stronger, leading to greater masking.
Overall, the masking effect helps to sharpen the tuning of V1 neurons for specific visual features by reducing their responses to stimuli that are similar but not identical to their preferred features. This tuning is important for extracting relevant visual information from a complex visual scene.
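A sketch of this divisive-normalization idea, assuming Gaussian orientation tuning for the linear drives and an orientation-untuned normalization pool (all parameters are illustrative): an orthogonal mask barely drives the neuron's own filter, yet it feeds the pool and suppresses the response to the preferred grating.

```python
import numpy as np

def linear_drive(stim_ori, pref_ori, contrast, bandwidth=30.0):
    """Orientation-tuned linear drive to one grating component (Gaussian tuning)."""
    d = (stim_ori - pref_ori + 90) % 180 - 90   # orientation difference, wrapped to +/-90 deg
    return contrast * np.exp(-0.5 * (d / bandwidth) ** 2)

def response(pref_ori, components, pool_oris=np.arange(0, 180, 15), sigma=0.1, n=2.0):
    """Divisive normalization: the neuron's own drive is divided by the pooled
    drive of a population of neurons spanning all preferred orientations."""
    own = sum(linear_drive(o, pref_ori, c) for o, c in components)
    pool = np.mean([sum(linear_drive(o, p, c) for o, c in components) for p in pool_oris])
    return own ** n / (sigma ** n + pool ** n)

# Neuron preferring horizontal (0 deg): an orthogonal mask (90 deg) hardly drives
# the neuron's own filter, but it feeds the normalization pool and therefore
# suppresses the response to the preferred test grating.
test_alone = [(0, 0.5)]
test_plus_mask = [(0, 0.5), (90, 0.5)]
print(response(0, test_alone), response(0, test_plus_mask))
```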
### Adaptation
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image87.png|200]]
Motion aftereffect due to adaptation: a perceptual phenomenon in which a stationary stimulus appears to move in the opposite direction after prolonged exposure to a moving stimulus. The effect occurs because the visual system adapts to the prolonged motion in one direction, causing a temporary imbalance in the neural response to motion.
The motion aftereffect can be explained by the process of adaptation, which is a fundamental property of the visual system. Adaptation refers to the ability of neurons in the visual system to adjust their sensitivity to a particular stimulus feature based on prior exposure. In the case of the motion aftereffect, prolonged exposure to a moving stimulus causes neurons in the visual system to adapt to the specific direction and speed of motion. This adaptation results in a decreased response to the adapted motion and an increased response to the opposite motion, which leads to the perception of motion in the opposite direction.
Adaptation is an important mechanism for the visual system to adjust to changes in the environment and maintain sensitivity to relevant stimuli. It allows the visual system to filter
out unchanging or irrelevant features of a visual scene and focus on relevant information. In addition to the motion aftereffect, adaptation is responsible for a wide range of perceptual phenomena, including color aftereffects, orientation aftereffects, and size aftereffects.
**Adaptation in a V1 Neuron (Violates Linearity)**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image88.png|250]]
The figure above describes one of the first experiments demonstrating contrast adaptation in the cortex. A low-contrast stimulus was first presented to one eye; then a high-contrast stimulus was presented to the other eye; finally, the low-contrast stimulus was presented again to the first eye, and the measured response was lower than the response elicited by the first presentation. In addition, adaptation is already visible during the presentation of the high-contrast stimulus, as the neuronal response decreases over time.
**Contrast Adaptation Controls V1 Neuron Sensitivity**
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image90.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image89.png]] |
|---|---|
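One simple way to capture these observations is to let a slow gain-control variable (here, the semi-saturation contrast of a Naka-Rushton function) track the recently experienced contrast; a toy sketch with made-up time constants, not the model from the lecture:

```python
import numpy as np

def contrast_response(c, c50, r_max=50.0, n=2.0):
    """Same hyperbolic-ratio form as in the saturation sketch above."""
    return r_max * c ** n / (c ** n + c50 ** n)

# Toy adaptation: the semi-saturation contrast c50 slowly drifts toward the
# recently experienced contrast, shifting the whole contrast-response curve.
c50, tau = 0.2, 20.0
stimulus = [0.1] * 50 + [0.8] * 50 + [0.1] * 50     # low -> high -> low contrast epochs
for t, c in enumerate(stimulus):
    c50 += (c - c50) / tau                          # slow gain-control variable
    if t in (0, 50, 99, 100, 149):
        print(f"t={t:3d}  contrast={c:.1f}  response={contrast_response(c, c50):5.1f}")
# During the high-contrast epoch the response declines (t=50 vs t=99), and after
# adaptation the same low-contrast test (t=100) evokes far less than it did at t=0.
```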
### Learning - Hebbian Learning
Learning is another type of nonlinearity. By learning we mean a change in neuronal response based on the experience the neuron has had. In neuroscience we often discuss Hebbian learning, which postulates that the synapses between two neurons are plastic and dynamic: "neurons that fire together, wire together". Dynamic synapses represent a nonlinearity, because the response to the same stimulus can change over time. Is Hebbian learning happening in V1 simple cells?
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image91.png|500]]
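The Hebbian postulate is often written as a covariance-style update, Δw ∝ pre × (post − θ); a highly reduced sketch of the pairing idea, with two hypothetical input channels standing in for two orientations:

```python
import numpy as np

# Two input channels standing in for two orientations (hypothetical, highly reduced):
# channel 0 = initially preferred orientation (S-), channel 1 = non-preferred (S+).
w = np.array([1.0, 0.4])            # initial synaptic weights
eta, theta = 0.05, 0.5              # learning rate and postsynaptic "covariance" threshold

for _ in range(100):
    # Pairing protocol: S+ is presented while the cell is driven to a high rate,
    # S- while its activity is held low (as in the experiment described below).
    for pre, post in ((np.array([0.0, 1.0]), 1.0), (np.array([1.0, 0.0]), 0.1)):
        w += eta * pre * (post - theta)       # covariance-style Hebbian update
        w = np.clip(w, 0.0, 2.0)

print(w)   # the S+ synapse is strengthened, the S- synapse weakened: preference shifts
```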
**Evidence for Hebbian Learning in V1**
The figure below shows the orientation tuning of simple cells in cat V1. In the experiment, the visual response of a single cell to a light bar was driven to a "high" level (via electrode stimulation) when an initially nonpreferred orientation (S+) was presented, and alternately reduced to a "low" level when the preferred orientation (S−) was presented. The goal was to test the possible role of neuronal coactivity in controlling the plasticity of orientation selectivity. Approximately 40% of the tested cells showed significant, long-lasting changes in their relative orientation preference. The results support the hypothesis that the covariance between pre- and postsynaptic activity determines the sign and amplitude of the change in efficacy of cortical synapses, i.e., that V1 simple cells exhibit Hebbian-learning nonlinearities.
|![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image92.png]] |![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image93.png]] |
|---|---|
### V1 Simple & Complex Cells
Both simple and complex cells are orientation selective. In simple cells, presenting a bar in different parts of the receptive field has different effects because of the on/off subregions; in complex cells this does not happen, and the response is roughly constant across the whole receptive field.
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image94.png|450]]
**Simple vs Complex Cells Responses**
The figure above compares the responses of simple and complex cells to sinusoidal stimuli. For a simple cell, we obtain a rectified sinewave, as seen before. For a complex cell, which receives input from two simple cells with opposite receptive fields, the response modulates at twice the stimulus frequency: **frequency doubling**.
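A sketch of where frequency doubling comes from, assuming the complex cell simply sums two half-wave-rectified simple cells with opposite receptive-field polarity:

```python
import numpy as np

t = np.linspace(0, 2, 200, endpoint=False)    # two cycles of a drifting grating
drive = np.sin(2 * np.pi * t)                 # linear drive to one simple cell

simple_on = np.maximum(0.0, drive)            # half-wave-rectified simple cell
simple_off = np.maximum(0.0, -drive)          # simple cell with inverted RF polarity
complex_cell = simple_on + simple_off         # complex cell summing the rectified pair

freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
for name, r in (("simple ", simple_on), ("complex", complex_cell)):
    spectrum = np.abs(np.fft.rfft(r - r.mean()))
    print(f"{name} cell: strongest modulation at {freqs[np.argmax(spectrum)]:.1f}x"
          " the grating frequency")
# simple -> 1.0x (a rectified sinewave); complex -> 2.0x (frequency doubling)
```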
**Could Simple and Complex Cells be Extremes of a Continuum?**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image95.png|450]]
Simple cells in V1 have segregated on/off subregions in their receptive fields, while complex cells have overlapping on/off subregions. These two cell types may form the extremes of a continuum of receptive field types. Hubel & Wiesel suggested a hierarchical scheme of processing whereby spatially offset simple cells drive complex cells. In their pioneering studies of V1 they described two classes of cells, which they termed "simple" and "complex"; the original classification was based on a number of partly subjective tests of linear spatial summation. Later investigators adopted an objective classification based on the ratio between the amplitude of the first harmonic of the response and the mean spike rate (the F1/F0 ratio) when the neuron is stimulated with drifting sinusoidal gratings. This measure is bimodally distributed over the population and divides neurons into two classes that correspond closely to the classical definitions of Hubel & Wiesel. However, a simple rectification model can reproduce the observed bimodal distribution of F1/F0 in V1 even when the distributions of the intracellular response modulation and mean are unimodal. Thus, contrary to common belief, the bimodality of F1/F0 does not necessarily imply the existence of two discrete cell classes.
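A sketch of the F1/F0 criterion itself, computed for two toy response types (half-wave and full-wave rectified sinusoids standing in for simple-like and complex-like responses):

```python
import numpy as np

def f1_f0(response, n_cycles):
    """F1/F0: response modulation at the grating drift frequency over the mean rate."""
    f0 = response.mean()
    spectrum = np.fft.rfft(response) / response.size
    f1 = 2 * np.abs(spectrum[n_cycles])       # amplitude at the drift frequency
    return f1 / f0

t = np.linspace(0, 4, 400, endpoint=False)    # four stimulus cycles
drive = np.sin(2 * np.pi * t)

simple_like = np.maximum(0.0, drive)                             # strongly modulated
complex_like = np.maximum(0.0, drive) + np.maximum(0.0, -drive)  # weak modulation at F1

print(f1_f0(simple_like, n_cycles=4), f1_f0(complex_like, n_cycles=4))
# Simple-like responses give F1/F0 well above 1, complex-like responses give F1/F0
# near 0, matching the bimodal classification criterion described above.
```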
### End-stopping and Hypercomplex Cells
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image96.png]]
Hubel & Wiesel found that certain cells, which they called hypercomplex cells, show a decrease in response when the stimulus extends beyond the border of the cell's receptive field. This phenomenon is called **surround suppression**.
**Surround Suppression Example**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image97.png]]
Decreasing firing rate with increasing stimulus size outside the classical receptive field.
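One common way to sketch such size tuning is a ratio-of-Gaussians model, in which both excitation and suppression grow with stimulus size but the suppressive pool is broader; the parameters below are purely illustrative:

```python
import numpy as np
from math import erf

def size_tuning(diameter, k_i=0.6, sigma_e=1.0, sigma_i=3.0):
    """Ratio-of-Gaussians sketch: excitation and suppression both grow with stimulus
    size, but the suppressive pool is broader, so the response peaks and then falls."""
    r = diameter / 2.0
    excite = erf(r / (np.sqrt(2) * sigma_e))       # drive from the classical RF
    suppress = erf(r / (np.sqrt(2) * sigma_i))     # drive from the suppressive surround
    return excite / (1.0 + k_i * suppress)

for d in (1, 2, 4, 8, 16):
    print(f"stimulus diameter {d:2d} deg -> response {size_tuning(d):.2f}")
# The response first grows with stimulus size and then declines once the stimulus
# extends into the suppressive surround, as in the figure above.
```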
**Surround and Saliency**
![[ETH/ETH - Systems Neuroscience/Images - ETH Systems Neuroscience/image98.png]]
In the figure, the same sinewave grating is presented three times with different surrounds. Changing the surround changes the apparent contrast of the central grating: when the surround orientation differs from that of the center, the center becomes more salient and "pops out" from the background.