LGN-Retinal Filter II
Now that we know how to create a filter, it is time to use it to calculate how an LGN neuron would react to an incoming stimulus.
In this entry we will create a white noise stimulus in order to see how an LGN neuron reacts to it. This approach has the advantage that we can then
recover the filter by reverse correlation, that is, by calculating a stimulus-triggered average.
In the same spirit as the last post, we define the spatial and temporal parameters that determine the lengths and resolutions in those dimensions:
import numpy as np
import matplotlib.pyplot as plt

# Time parameters
dt = 1.0  # Resolution of the response (in milliseconds)
dt_kernel = 5.0  # Resolution of the kernel (in milliseconds)
dt_stimuli = 10.0  # Resolution of the stimuli (in milliseconds)
kernel_size = 25  # Number of time points in the kernel
T_simulation = 2 * 10 ** 2.0  # Total time of the simulation in ms
Nt_simulation = int(T_simulation / dt)  # Number of simulation points
N_stimuli = int(T_simulation / dt_stimuli)  # Number of stimulus frames
# Space parameters
dx = 1.0
Lx = 20.0
Nx = int(Lx / dx)
dy = 1.0
Ly = 20.0
Ny = int(Ly / dy)
Now we call our kernel, which we wrapped up as a function in the last post:
# Call the kernel
# Size of center area in the center-surround profile
sigma_center = 15
# Size of surround area in the center-surround profile
sigma_surround = 3
kernel = create_kernel(dx, Lx, dy, Ly, sigma_surround,
                       sigma_center, dt_kernel, kernel_size)
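`create_kernel` was built in the last post; for readers landing here first, the sketch below shows the kind of spatio-temporal profile it produces, a difference of Gaussians in space times a temporal decay. The function name, the alpha-function time course, and the normalization are assumptions for illustration, not the exact code from that post:

```python
import numpy as np

def create_kernel_sketch(dx, Lx, dy, Ly, sigma_surround, sigma_center,
                         dt_kernel, kernel_size):
    """Illustrative kernel: difference of Gaussians in space,
    alpha-function decay in time (assumed, not the post's exact code)."""
    x = np.arange(-Lx / 2, Lx / 2, dx)
    y = np.arange(-Ly / 2, Ly / 2, dy)
    X, Y = np.meshgrid(x, y, indexing='ij')
    r2 = X ** 2 + Y ** 2
    # Normalized Gaussians for the center and surround components
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    spatial = center - surround
    # Assumed temporal profile: alpha function with a 10 ms time constant
    t = np.arange(kernel_size) * dt_kernel
    temporal = (t / 10.0) * np.exp(-t / 10.0)
    # Separable kernel: time axis first, then the two spatial axes
    return temporal[:, np.newaxis, np.newaxis] * spatial[np.newaxis, ...]

kernel = create_kernel_sketch(1.0, 20.0, 1.0, 20.0, 3, 15, 5.0, 25)
print(kernel.shape)  # (25, 20, 20)
```

The important point for everything below is only the shape convention: the kernel is an array of `kernel_size` images of `Nx` by `Ny` pixels.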
With this in hand we can use NumPy's random functions to create our white noise stimulus. Here we use the realization of white noise called ternary noise, which consists of the values -1, 0 and 1 assigned randomly to each pixel of the stimulus:
# Call the stimuli
stimuli = np.random.randint(-1, 2, size=(N_stimuli, Nx, Ny))
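A quick check (illustrative, not part of the post's pipeline) confirms that this call produces ternary noise: the high end of `np.random.randint` is exclusive, so the drawn values are exactly -1, 0 and 1, with mean close to zero as white noise should have:

```python
import numpy as np

# Same recipe as above, on a small throwaway sample
sample = np.random.randint(-1, 2, size=(1000, 5, 5))

print(np.unique(sample))  # [-1  0  1]
print(abs(sample.mean()) < 0.1)  # approximately zero mean
```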
Before we can proceed to calculate the convolution we need to do some preliminary work. The convolution problem involves three time scales with different resolutions: the resolution of the response dt, the resolution of the kernel dt_kernel, and the resolution of the stimulus dt_stimuli. Operations with the kernel involve constantly jumping from one scale to another, so we need a mechanism to keep track of that. In short, we would like a mechanism that performs the transformations between coordinate systems in one specific place, instead of scattering the conversions throughout the code.
Furthermore, in the convolution the kernel is multiplied by a specific set of images for each point in time. For the sake of efficiency we would like a mechanism that computes these associations once and for all. With this in mind I have built a set of indexes for each scale that associates each element of the response to its respective set of images. We also have a vector that associates every possible delay time in the kernel to the corresponding indexes in the response. We illustrate the mechanism in the next figure.
We can appreciate the three different time scales in the image. We have a set of indexes called stimuli indexes that maps each response time to its respective image, and another set of indexes called delay indexes that maps each delay in the kernel to its respective response. We can create these sets of indexes with the following code:
# Scale factors
input_to_image = dt / dt_stimuli # Transforms input to image
kernel_to_input = dt_kernel / dt # Transforms kernel to input
input_to_kernel = dt / dt_kernel # Transforms input to kernel
working_indexes = np.arange(Nt_simulation).astype(int)
# Remove the initial points where the kernel does not yet
# fully overlap with the stimulus
remove_start = int(kernel_size * kernel_to_input)
signal_indexes = np.arange(remove_start,
                           Nt_simulation).astype(int)
# Kernel time points
kernel_times = np.arange(kernel_size).astype(int)
# Delay indexes
delay_indexes = np.floor(kernel_times * kernel_to_input)
delay_indexes = delay_indexes.astype(int)
# Image indexes
stimuli_indexes = np.floor(working_indexes * input_to_image)
stimuli_indexes = stimuli_indexes.astype(int)
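To make the mapping concrete, here is a toy version with the resolutions defined above (dt = 1 ms, dt_kernel = 5 ms, dt_stimuli = 10 ms); the shortened `kernel_size = 4` and the 30-point range are just for this example:

```python
import numpy as np

dt, dt_kernel, dt_stimuli = 1.0, 5.0, 10.0
kernel_size = 4  # shortened kernel, just for this example

input_to_image = dt / dt_stimuli
kernel_to_input = dt_kernel / dt

kernel_times = np.arange(kernel_size)
delay_indexes = np.floor(kernel_times * kernel_to_input).astype(int)
stimuli_indexes = np.floor(np.arange(30) * input_to_image).astype(int)

# Consecutive kernel delays are 5 ms = 5 response steps apart
print(delay_indexes)  # [ 0  5 10 15]
# The response at t = 23 ms sees stimulus frame number 2
print(stimuli_indexes[23])  # 2
```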
Now we can calculate the response of a neuron with a center-surround receptive field by performing the convolution between its filter and the stimulus. We also plot the resulting signal to see how it looks:
signal = np.zeros(Nt_simulation)

for index in signal_indexes:
    # Map each kernel delay to its stimulus frame
    delay = stimuli_indexes[index - delay_indexes]
    # Do the calculation
    signal[index] = np.sum(kernel[kernel_times, ...]
                           * stimuli[delay, ...])
t = np.arange(remove_start * dt, T_simulation, dt)
plt.plot(t, signal[signal_indexes], '-',
         label='Kernel convolved with noise')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Convolution')
plt.grid()
plt.show()
We can see that the effective resolution of the response is only as fine as the resolution of the kernel, and this explains the step-like discontinuities in the figure above.
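Before moving on, we can verify that the indexed loop really computes a discrete convolution by collapsing the problem to a single pixel with equal resolutions and comparing against NumPy's own convolution. This is a standalone sketch; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
kernel_1d = rng.normal(size=5)
stim_1d = rng.integers(-1, 2, size=50).astype(float)  # ternary noise

# Same structure as the loop in the post, with dt = dt_kernel = dt_stimuli = 1
kernel_size = kernel_1d.size
signal = np.zeros(stim_1d.size)
for index in range(kernel_size, stim_1d.size):
    delays = index - np.arange(kernel_size)
    signal[index] = np.sum(kernel_1d * stim_1d[delays])

# NumPy's convolution, truncated to the same length
reference = np.convolve(stim_1d, kernel_1d, mode='full')[:stim_1d.size]
print(np.allclose(signal[kernel_size:], reference[kernel_size:]))  # True
```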
As a sanity check we can calculate a response-weighted average of the stimulus, the analogue of the spike-triggered average (STA), to recover the filter:
## Calculate the STA
kernel_size = kernel_times.size
Nside = np.shape(stimuli)[2]
sta = np.zeros((kernel_size, Nside, Nside))

for tau, delay_index in zip(kernel_times, delay_indexes):
    # For every tau we calculate the possible delay
    # and take the appropriate image indexes
    delay = stimuli_indexes[signal_indexes - delay_index]
    # Now we weight the appropriate images by the response
    weighted_stimuli = np.sum(signal[signal_indexes, np.newaxis, np.newaxis]
                              * stimuli[delay, ...], axis=0)
    # Finally we divide by the sample size
    sta[tau, ...] = weighted_stimuli / signal_indexes.size
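Why should this recover the filter? For white noise the stimulus values at different times and pixels are uncorrelated, so the response-weighted average at delay tau is proportional to the kernel at tau, scaled by the stimulus variance (2/3 for ternary noise). A one-dimensional toy version makes this visible; the names and the hand-picked filter below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
kernel_1d = np.array([0.0, 1.0, 0.5, -0.3, 0.0])  # illustrative filter
K = kernel_1d.size
stim = rng.integers(-1, 2, size=20000).astype(float)  # ternary noise

# Response: convolution of the filter with the noise
signal = np.zeros(stim.size)
for n in range(K, stim.size):
    signal[n] = np.sum(kernel_1d * stim[n - np.arange(K)])

# Response-weighted average of the stimulus at each delay tau
sta = np.array([np.mean(signal[K:] * stim[K - tau:stim.size - tau])
                for tau in range(K)])

# Ternary noise has variance 2/3, so sta / (2/3) approximates the filter
print(np.allclose(sta / (2.0 / 3.0), kernel_1d, atol=0.1))  # True
```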
We can plot this in a convenient way with the following set of instructions:
## Visualize the STA
closest_square_to_kernel = int(np.sqrt(kernel_size)) ** 2

# Define the color map
cdict1 = {'red': ((0.0, 0.0, 0.0),
                  (0.5, 0.0, 0.1),
                  (1.0, 1.0, 1.0)),

          'green': ((0.0, 0.0, 0.0),
                    (1.0, 0.0, 0.0)),

          'blue': ((0.0, 0.0, 1.0),
                   (0.5, 0.1, 0.0),
                   (1.0, 0.0, 0.0))
          }

from matplotlib.colors import LinearSegmentedColormap
blue_red1 = LinearSegmentedColormap('BlueRed1', cdict1)

n = int(np.sqrt(closest_square_to_kernel))

# Plot the filters
for i in range(closest_square_to_kernel):
    plt.subplot(n, n, i + 1)
    plt.imshow(sta[i, :, :], interpolation='bilinear',
               cmap=blue_red1)
    plt.colorbar()
plt.show()


