Neural Ensemble News | 2020-08-08 | CARLsim5 Released!<h2 style="text-align: left;"><span style="font-size: x-large;"><span style="font-family: Arial; font-weight: 700; white-space: pre-wrap;">Introduction</span></span></h2><span id="docs-internal-guid-2cdfef22-7fff-222f-d8b0-6447d7f689eb"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">CARLsim5 is an efficient, easy-to-use, GPU-accelerated library for simulating large-scale spiking neural network (SNN) models with a high degree of biological detail. It allows execution of networks of Izhikevich spiking neurons with realistic synaptic dynamics using multiple off-the-shelf GPUs and x86 CPUs. The simulator provides a PyNN-like programming interface in C/C++, which allows details and parameters to be specified at the synapse, neuron, and network level.</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">The present release, CARLsim 5, builds on the efficiency and scalability of earlier releases (Nageswaran et al., 2009; Richert et al., 2011; Beyeler et al., 2015; Chou et al., 2018).
The functionality of the simulator has been greatly expanded with a number of features that enable and simplify the creation, tuning, and simulation of complex networks with spatial structure.</span></p><br /><h2 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;"><span style="font-size: x-large;">New Features</span></span></h2><div><span style="font-family: Arial; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;"><span style="font-size: x-small;"><br /></span></span></div><h3 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-size: medium;">1. PyNN Compatibility</span></span></h3><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">pyCARL is an interface between the simulator-independent language PyNN and a CARLsim5-based back-end.
In other words, you can write the code for an SNN model once, using the PyNN API and the Python programming language, and then run it without modification on CARLsim5, now one of the simulators that PyNN supports.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"></p><h4 style="text-align: left;"><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Principal APIs supported: </span></span></h4><ul style="text-align: left;"><li><span id="docs-internal-guid-2cdfef22-7fff-222f-d8b0-6447d7f689eb"><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Neuron Models: pyCARL currently supports Izhikevich spiking neurons with either current-based or conductance-based synapses. Support for LIF neurons is planned for the future.
Groups of neurons can be arranged in anything from a one-dimensional array to a three-dimensional grid.</span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Synapse: pyCARL supports the following synapse models:</span></span></span></li><ul><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Static Synapse - A synapse with fixed weight and delay.</span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Spike-timing-dependent plasticity - STDP mechanisms can be constructed using weight-dependence and timing-dependence models.</span></span></span></li></ul><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Connector Types: The pyCARL interface currently supports the following connectors:</span></span></span></li><ul><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">AllToAllConnector - Each neuron in the pre-synaptic population is connected to every neuron in the post-synaptic population.</span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">OneToOneConnector - The neuron with index i in the pre-synaptic population is connected to the neuron with index i in the post-synaptic
population.</span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">FixedProbabilityConnector - Each possible connection between all pre-synaptic neurons and all post-synaptic neurons is created with probability p.</span></span></span></li></ul><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Spike sources: pyCARL currently supports a Poisson source (SpikeSourcePoisson) and an array-based spike source (SpikeSourceArray). </span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Monitoring: Currently, pyCARL supports spike and connection monitoring. CARLsim SpikeMonitors and ConnectionMonitors are internally defined for every group (Population) and connection (Projection) in an application.</span></span></span></li><li><span><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">Homeostasis: Homeostatic synaptic scaling has been observed experimentally and may serve to stabilize plasticity mechanisms that can otherwise undergo runaway behavior.
CARLsim implements a version of homeostatic synaptic scaling that helps stabilize STDP.</span></span></span></ul><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;"><i></i></span></span><p></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;">pyCARL is a work in progress, and new APIs and features will continue to be added to the interface. Its</span></span><span style="font-family: Arial;"><span style="font-size: 14.6667px; white-space: pre-wrap;"> sources and installation instructions are now available as part of CARLsim5’s software release. Please refer to the CARLsim5 documentation at </span></span><span style="font-family: Arial; font-size: 14.6667px; white-space: pre-wrap;"><a href="https://uci-carl.github.io/CARLsim5/ch14_pyCARL.html">https://uci-carl.github.io/CARLsim5/ch14_pyCARL.html</a> </span><span style="font-family: Arial; font-size: 14.6667px; white-space: pre-wrap;">for details.</span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-size: 14.6667px; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial;"><br /></span></span></p><h3 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-size: medium; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><b>2.
Neuron monitor</b></span></h3><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">The neuron monitor now supports recording the voltage and current traces of individual neurons, providing a useful tool for analyzing network dynamics during a simulation. </span></p><br /><h3 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-size: medium; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><b>3. Docker images for Windows users and computer cluster users</b></span></h3><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Using CARLsim on Windows was inconvenient because it required installing </span><span style="background-color: white; color: #24292e; font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Microsoft Visual Studio 2015 and keeping it up to date. We therefore release Docker images in which CARLsim5 is pre-installed on an Ubuntu system and ready for immediate use. </span></p><br /><h3 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><b>4.
Saving and Loading</b></span></h3><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">CARLsim5 now supports saving the network at run time. The saved network can be loaded again by reading the saved file when setting up a new simulation. All connection information, including weights, delays, and source and target neurons, is saved. For more information, see </span><a href="https://uci-carl.github.io/CARLsim5/ch8_saving_loading.html" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">https://uci-carl.github.io/CARLsim5/ch8_saving_loading.html</span></a></p><br /><h3 style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span style="font-family: Arial; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-size: medium;">5. Improved ECJ Interface (Coming soon)</span></span></h3><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">CARLsim5 currently supports ECJ-23 for evolutionary parameter tuning. The new ECJ-28 will be released and integrated into CARLsim5 soon.</span></p><div><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div></span><style type="text/css">
@page { margin: 0.79in }
p { margin-bottom: 0.1in; direction: ltr; line-height: 120%; text-align: left; orphans: 2; widows: 2 }
a:link { color: #0563c1 }
</style>Jinwei Xing

2018-08-10 | NeuroML2/LEMS is moving into Neural Mass Models and whole brain networks<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">In the last months, as part of the Google Summer of Code 2018, I have been working on a project that aimed to implement neuronal models which represent averaged population activity on NeuroML2/LEMS. The project was supported by the INCF organisation and my mentor, Padraig Gleeson, and I had 3 months to shape and bring to life all the ideas that we had in our heads. This blog post summarises the core motivation of the project, the technical challenges, what I have done, and future steps.</span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span id="docs-internal-guid-2b42ffb6-7fff-9f82-bb90-62c669712720"><span style="font-size: 16pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">Background</span></span></span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">NeuroML version 2 and LEMS were introduced in order to standardise the description of neuroscience computational models and facilitate the shareability of results among different research groups</span><span style="color: #2c3f51; font-family: "arial"; font-size: 6.6pt; vertical-align: super; white-space: pre-wrap;">1</span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">. However, so far, NeuroML2/LEMS have focused on modelling spiking neurons and how information is exchanged between them in networks. With the introduction of neural mass models, NeuroML2/LEMS can be extended to study interactions between large-scale systems such as cortical regions and indeed whole brain dynamics. To achieve this, my project consisted of implementing the basic structures needed in NeuroML2/LEMS to simulate Neural Mass Models and compare the results with previously published papers. </span><br />
<span style="font-family: "arial"; font-size: 16pt; white-space: pre-wrap;"><br /></span>
<span style="font-family: "arial"; font-size: 16pt; white-space: pre-wrap;">What I did</span><br />
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">During the project I focused on the implementation of three previously described Neural Mass Models into NeuroML2/LEMS:</span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">1. </span><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 700; vertical-align: baseline;">del Molino et al., 2017</span><span style="color: #2c3f51; font-size: 6.6pt; vertical-align: super;">2</span><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">: This study analyses the dynamics and interaction between excitatory neurons and three types of interneurons (parvalbumin (PV), somatostatin (SST) and vasoactive intestinal peptide (VIP) expressing). It first looks at the interactions between single units representing each population and then it scales up to analyse the interaction between a network with multiple interacting units in each population. A detailed description of the model that I have implemented in NeuroML and an illustrative Jupyter notebook that reproduces the main findings from the paper can be found at this </span><a href="https://github.com/OpenSourceBrain/del-Molino2017" style="text-decoration-line: none;"><span style="color: #1155cc; font-size: 11pt; vertical-align: baseline;">git repository</span></a><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">.</span></span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"><br /></span></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-g1jGdV5iEVU/W23R9-cv-MI/AAAAAAAAAFA/Shhfgl5cLxQ3wrSNoysIVmjSFIODbnZ0ACK4BGAYYCw/s1600/Blog.png" imageanchor="1"><img border="0" height="131" src="https://2.bp.blogspot.com/-g1jGdV5iEVU/W23R9-cv-MI/AAAAAAAAAFA/Shhfgl5cLxQ3wrSNoysIVmjSFIODbnZ0ACK4BGAYYCw/s400/Blog.png" width="400" /></a></div>
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;"><br /></span>
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">Overview of the del Molino et al., 2017 model implemented in NeuroML2/LEMS. The scheme on the left illustrate how the different populations are connected and the entry point of the top-down modulatory input. Once the dynamics of the interaction between single units have been analysed, we scale up to look at the interaction of a network of multiple interacting units in each population. The network population is illustrated on the right</span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2. </span><span style="font-family: "arial"; font-size: 11pt; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;">Wilson and Cowan, 1972</span><span style="color: #2c3f51; font-family: "arial"; font-size: 6.6pt; vertical-align: super; white-space: pre-wrap;">3</span><span style="font-family: "arial"; font-size: 11pt; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;">:</span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> This classic model describes the interaction between populations of excitatory and inhibitory neurons. In this project, I have implemented a NEURON interpretation of the Wilson and Cowan model into NeuroML and compared the dynamics of the model by looking at the dynamics of the generated results. The repository with the Wilson and Cowan simulations can be found </span><a href="https://github.com/OpenSourceBrain/WilsonCowan" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">here</span></a><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.</span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-qVGIODQ60CE/W23STSB6iXI/AAAAAAAAAFQ/4JZhe9YRjUY--_TaguMCOZCHMOSSBxNsQCK4BGAYYCw/s1600/Blog_img.png" imageanchor="1"><img border="0" height="122" src="https://3.bp.blogspot.com/-qVGIODQ60CE/W23STSB6iXI/AAAAAAAAAFQ/4JZhe9YRjUY--_TaguMCOZCHMOSSBxNsQCK4BGAYYCw/s400/Blog_img.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="color: black; font-family: "arial"; font-size: 11pt; margin-left: 1em; margin-right: 1em; vertical-align: baseline;"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="color: black; font-family: "arial"; font-size: 11pt; margin-left: 1em; margin-right: 1em; vertical-align: baseline;"><br /></span></div>
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">Illustration of the population modelled with Wilson and Cowan simulation and how the dynamics over time change with (Drive) and without (No Drive) an additional external input current </span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">3. </span><span style="font-family: "arial"; font-size: 11pt; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;">Mejias et al., 2016</span><span style="color: #2c3f51; font-family: "arial"; font-size: 6.6pt; vertical-align: super; white-space: pre-wrap;">4</span><span style="font-family: "arial"; font-size: 11pt; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;">:</span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> analyses the dynamics across multiple scales of the primate cortex (intralaminar, interlaminar, interareal and whole cortex). This </span><a href="https://github.com/OpenSourceBrain/MejiasEtAl2016" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">git repo</span></a><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> so far implements the intralaminar and interlaminar simulations of the cortex in Python and provides the methods needed for analysing the results from the NeuroML2/LEMS simulation. It also contains a first model of the interlaminar simulation using NeuroML2/LEMS that will be further extended to simulate the firing rate at interlaminar, interareal and whole cortex level.</span><br />
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-aU1Zhqf0JPo/W27N9LOwrRI/AAAAAAAAAFo/3NshbNrTTFAQKyUJngoNmNXzfaUVTAQ3ACK4BGAYYCw/s1600/Mejias-2016.png" imageanchor="1"><img border="0" height="178" src="https://1.bp.blogspot.com/-aU1Zhqf0JPo/W27N9LOwrRI/AAAAAAAAAFo/3NshbNrTTFAQKyUJngoNmNXzfaUVTAQ3ACK4BGAYYCw/s320/Mejias-2016.png" width="320" /></a></div>
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;"><span id="docs-internal-guid-87b69d2b-7fff-0b08-8a02-067cbec3775a"></span><br /></span>
<span style="color: #666666; font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">Illustration of the Mejias et al., 2016 models implemented so far: While at the intralaminar level analyses the dynamics of the excitatory (in red) and inhibitory population (in blue) for each layer are considered independent, in the interlaminar the interaction between supra- (Layer 2/3) and infragranular (Layer 5/6) layers are taken into account </span><br />
<h2 dir="ltr" style="line-height: 1.38; margin-bottom: 6pt; margin-top: 18pt; text-align: justify;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 16pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">The technical challenges </span></h2>
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">In order to be able to simulate Neural Mass Models, we had to extend previously defined NeuroML2 components used to simulate spiking models. To this end we defined two new core components: </span><br />
<br />
<ul>
<li><span style="font-family: "arial";"><span style="font-size: 14.6667px; white-space: pre-wrap;"><span id="docs-internal-guid-e477b008-7fff-8dd4-b442-05fd264abd52"><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"><i>baseRateUnit</i></span><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">: which extends the </span><span style="font-size: 11pt; font-style: italic; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">baseCellMembPot</span><span style="font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"> but instead of exposing the membrane potential it exposes the population’s firing rate.</span></span></span></span></li>
<li><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span id="docs-internal-guid-0935755b-7fff-5552-0725-7a0c8bc2eabc"><span style="font-size: 11pt; font-style: italic; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">rateSynapse</span></span>: In the spiking models a change in current is only triggered if the cell membrane exceeds a specific threshold. In a rate base model, however, there is a continuous transmission of currents between the populations. Therefore we extended the </span><span style="font-family: "arial"; font-size: 11pt; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">baseSynapse </span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">component so that it allows the continuous transmission of currents between the population using continuous connections. </span></li>
</ul>
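The difference between the two transmission schemes can be illustrated outside LEMS with a toy Python sketch (the function names and numbers here are invented for illustration, and are not part of the NeuroML/LEMS definitions): an event-driven synapse contributes current only on a threshold crossing, while a rate synapse contributes current continuously on every timestep.

```python
def spiking_current(v_pre, w, threshold=-50.0):
    # Event-driven: the weight is delivered only when the pre-synaptic
    # membrane potential reaches the firing threshold.
    return w if v_pre >= threshold else 0.0

def rate_current(r_pre, w):
    # Continuous: every timestep carries a current proportional to the
    # pre-synaptic population's firing rate.
    return w * r_pre

# Over a short trace, the spiking synapse produces isolated pulses while
# the rate synapse transmits on every step.
voltages = [-70.0, -60.0, -45.0, -70.0, -48.0]   # mV; two threshold crossings
rates = [4.0, 5.0, 6.0, 5.0, 4.0]                # Hz
spike_trace = [spiking_current(v, w=0.5) for v in voltages]
rate_trace = [rate_current(r, w=0.5) for r in rates]
```

This is the behaviour the continuous connections of the new rateSynapse component are meant to capture.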
<div>
<span id="docs-internal-guid-541aabfc-7fff-832a-c703-542833057165"></span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">The detailed implementation of the two components can be found </span><a href="https://github.com/OpenSourceBrain/del-Molino2017/blob/master/NeuroML/RateBased.xml" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre;">here</span></a><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;"><br /></span></span>
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;"><span id="docs-internal-guid-01e79bd8-7fff-6012-bcd0-2eb847fab49f"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;">The future</span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="font-family: "arial"; font-size: 11pt; white-space: pre-wrap;">The projects I have worked on during these 3 months were a proof of concept to explore how NeuroML2/LEMS can be extended to simulate Neural Mass Models. They provided valuable insight into the necessary components to extend NeuroML2/LEMS to large-scale dynamics and a proof of concept that the generated signal is comparable to those generated with other tools.</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">These are, however, just the first steps into a very interesting direction: just imagine how incredible it would be to extend this data to a whole brain simulation. One possible candidate would be to use mouse data for the simulation (e.g. from the <a href="http://brain-map.org/">Allen Institute mouse connectivity datasets</a></span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">). So stay tuned for future updates!</span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;"><br /></span></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span id="docs-internal-guid-414cf3b8-7fff-3cd9-b824-0ab9087a55ec"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;">The experience</span></span></span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span id="docs-internal-guid-eff71f1b-7fff-0053-327a-e4a1236a34a6"><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline;">Working on this project was not only a great way of getting to learn the intricacies of NeuroML, get a better understanding of Neural Mass Models but it was also a great opportunity to get my hands dirty with the code. It was also very satisfying to produce in a short time something from the beginning to the end. In addition to all these, it was also my first contact with the open source community. Thank you very much Padraig, for the help and the guidance during these months!</span></span></span><br />
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;"><br /></span></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span id="docs-internal-guid-a8ee7391-7fff-c572-b415-0aecc4e13afb"><span style="font-family: "arial"; font-size: 16pt; vertical-align: baseline;">References</span></span></span><br />
<div dir="ltr" style="line-height: 1.92; margin-bottom: 12pt; margin-top: 0pt; text-align: justify;">
<span style="font-size: x-small;"><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">1 </span><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Cannon, R. C., Gleeson, P., Crook, S., Ganapathy, G., Marin, B., Piasini, E., &amp; Silver, R. A. (2014). LEMS: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning NeuroML 2. Frontiers in Neuroinformatics, 8, 79 </span><a href="https://doi.org/10.3389/fninf.2014.00079" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">https://doi.org/10.3389/fninf.2014.00079</span></a><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> .</span></span></div>
<div dir="ltr" style="line-height: 1.92; margin-bottom: 12pt; margin-top: 0pt; text-align: justify;">
<span style="font-size: x-small;"><span style="background-color: transparent; color: #2c3f51; font-family: "arial"; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="background-color: transparent; color: #2c3f51; font-family: "arial"; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Garcia Del Molino, Luis Carlos, Guangyu Robert Yang, Jorge F. Mejias, and Xiao-Jing Wang. 2017a. “Paradoxical Response Reversal of Top-down Modulation in Cortical Circuits with Three Interneuron Types.” eLife 6 (December). </span><a href="https://doi.org/10.7554/eLife.29742" style="text-decoration: none;"><span style="background-color: transparent; color: #1980e6; font-family: "arial"; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">https://doi.org/10.7554/eLife.29742</span></a><span style="background-color: transparent; color: #2c3f51; font-family: "arial"; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> . </span></span></div>
<div dir="ltr" style="line-height: 1.92; margin-bottom: 12pt; margin-top: 0pt; text-align: justify;">
<span style="font-size: x-small;"><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">3</span><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Wilson, H. R., &amp; Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical journal, 12(1), 1-24 </span><a href="http://dx.doi.org/10.1016/S0006-3495(72)86068-5" style="text-decoration: none;"><span style="background-color: transparent; color: #1980e6; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">http://dx.doi.org/10.1016/S0006-3495(72)86068-5</span></a><span style="background-color: transparent; color: #2c3f51; font-family: &quot;arial&quot;; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> . </span></span></div>
<div dir="ltr" style="line-height: 1.92; margin-bottom: 12pt; margin-top: 0pt; text-align: justify;">
<span style="font-size: x-small;"><span style="color: #2c3f51; font-family: "arial"; vertical-align: baseline; white-space: pre-wrap;">4</span><span style="color: #2c3f51; font-family: "arial"; vertical-align: baseline; white-space: pre-wrap;"> Mejias, Jorge F., John D. Murray, Henry Kennedy, and Xiao-Jing Wang. 2016a. “Feedforward and Feedback Frequency-Dependent Interactions in a Large-Scale Laminar Network of the Primate Cortex.” </span><a href="https://doi.org/10.1101/065854"><span style="color: #1980e6; font-family: "arial"; vertical-align: baseline; white-space: pre-wrap;">https://doi.org/10.1101/065854</span></a><span style="color: #2c3f51; font-family: "arial"; vertical-align: baseline; white-space: pre-wrap;"> . </span></span></div>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="background-color: transparent; color: #666666; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;">
<span style="background-color: transparent; color: #666666; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><br /></span></div>
</div>
Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-73903413139568172.post-90309677235964602132017-04-20T11:11:00.000+02:002017-04-20T11:11:45.619+02:00PyNN 0.9.0 released<div>
<p>I'm happy to announce the release of PyNN 0.9.0!</p>
<p>This version of PyNN adopts the new, simplified <a class="reference external" href="http://neuralensemble.org/neo">Neo</a> object model, first released as Neo 0.5.0, for the data structures returned by <code>Population.get_data()</code>. For more information on the new Neo API, see the <a class="reference external" href="http://neo.readthedocs.io/en/0.5.0/releases/0.5.0.html">Neo release notes</a>.</p>
<p>The main difference for a PyNN user is that the <code>AnalogSignalArray</code> class has been renamed to <code>AnalogSignal</code>, and similarly the <code>Segment.analogsignalarrays</code> attribute is now called <code>Segment.analogsignals</code>.</p>
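Code that needs to run against both old and new Neo releases can smooth over this rename with a small compatibility helper. The sketch below is illustrative and not part of PyNN or Neo; the `FakeOldSegment`/`FakeNewSegment` classes are stand-ins for real Neo segments so the helper can be shown self-contained.

```python
def analog_signals(segment):
    """Return a segment's analog signals under either Neo API.

    Neo >= 0.5 exposes them as ``segment.analogsignals``; earlier
    releases used ``segment.analogsignalarrays``.
    """
    try:
        return segment.analogsignals       # Neo >= 0.5
    except AttributeError:
        return segment.analogsignalarrays  # Neo < 0.5


# Stand-ins for real Neo segments, just to show the helper in use.
class FakeOldSegment:
    analogsignalarrays = ["vm_trace"]

class FakeNewSegment:
    analogsignals = ["vm_trace"]

print(analog_signals(FakeOldSegment()))  # ['vm_trace']
print(analog_signals(FakeNewSegment()))  # ['vm_trace']
```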
<h3>What is PyNN?</h3>
<p><a href="http://neuralensemble.org/PyNN/">PyNN</a> (pronounced 'pine') is a simulator-independent language for building neuronal network models.</p>
<p>In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently <a href="http://www.neuron.yale.edu/neuron">NEURON</a>, <a href="http://www.nest-simulator.org/">NEST</a> and <a href="http://www.briansimulator.org/">Brian</a> as well as the <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/">SpiNNaker</a> and <a href="http://www.kip.uni-heidelberg.de/vision/research/">BrainScaleS</a> neuromorphic hardware systems).</p>
<p>Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models.</p>
<p>The code is released under the <a href="http://www.cecill.info/">CeCILL licence</a> (GPL-compatible).</p>
</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-26340742364313930962016-09-13T13:24:00.001+02:002016-09-13T13:24:10.983+02:00Neo 0.5.0-alpha1 released<div dir="ltr" style="text-align: left;" trbidi="on">
<p>We are pleased to announce the first alpha release of Neo 0.5.0.</p>
<p>Neo is a Python library which provides data structures for working with electrophysiology data, whether from biological experiments or from simulations, together with a library of input-output modules for reading a wide range of electrophysiology file formats (and for writing to a somewhat smaller subset, including HDF5 and Matlab).</p>
<p>For Neo 0.5, we have taken the opportunity to simplify the Neo object model. Although this will require an initial time investment for anyone who has written code with an earlier version of Neo, the benefits will be greater simplicity, both in your own code and within the Neo code base, which should allow us to move more quickly in fixing bugs, improving performance and adding new features. For details of what has changed and what has been added, see the <a href="http://neo.readthedocs.io/en/neo-0.5.0alpha1/releases/0.5.0.html">Release notes</a>.</p>
<p>If you are already using Neo for your data analysis, we encourage you to give the alpha release a try. The more feedback we get about the alpha release, the quicker we can find and fix bugs. If you do find a bug, please <a href="https://github.com/NeuralEnsemble/python-neo/issues">create a ticket</a>. If you have questions, please post them on the <a href="https://groups.google.com/forum/#!forum/neuralensemble">mailing list</a> or in the comments below.</p>
<dl>
<dt>Documentation:</dt>
<dd><a href="http://neo.readthedocs.io/en/neo-0.5.0alpha1/">http://neo.readthedocs.io/en/neo-0.5.0alpha1/</a></dd>
<dt>Licence:</dt>
<dd>Modified BSD</dd>
<dt>Source code:</dt>
<dd><a href="https://github.com/NeuralEnsemble/python-neo">https://github.com/NeuralEnsemble/python-neo</a></dd>
</dl>
</div>Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com2tag:blogger.com,1999:blog-73903413139568172.post-25915152408014865102016-05-26T23:02:00.000+02:002016-05-26T23:02:54.633+02:00Updated Docker images for biological neuronal network simulations with Python<div dir="ltr" style="text-align: left;" trbidi="on">
The NeuralEnsemble <a href="https://www.docker.com/">Docker</a> images for biological neuronal network simulations with Python have been updated to contain <a href="http://www.nest-simulator.org/">NEST</a> 2.10, <a href="http://www.neuron.yale.edu/neuron/">NEURON</a> 7.4, <a href="http://briansimulator.org/">Brian</a> 2.0rc1 and <a href="http://neuralensemble.org/PyNN/">PyNN</a> 0.8.1.<br />
<br />
In addition, the default images (which are based on <a href="http://neuro.debian.net/">NeuroDebian</a> Jessie) now use Python 3.4. Images with Python 2.7 and Brian 1.4 are also available (using the "py2" tag). There is also an image with older versions (NEST 2.2 and PyNN 0.7.5).<br />
<br />
The images are intended as a quick way to get simulation projects up-and-running on Linux, OS X and Windows. They can be used for teaching or as the basis for reproducible research projects that can easily be shared with others.<br />
<br />
The images are available on <a href="https://hub.docker.com/u/neuralensemble/">Docker Hub</a>.<br />
<br />
To quickly get started, once you have Docker installed, run<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">docker pull neuralensemble/simulation</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">docker run -i -t neuralensemble/simulation /bin/bash</span><br />
<br />
For Python 2.7:<br />
<br />
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">docker pull neuralensemble/simulation:py2</span><br />
<br />
For older versions:<br />
<br />
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">docker pull neuralensemble/pynn07</span><br />
<br />
For ssh/X11 support, use the "simulationx" image instead of "simulation". <a href="https://github.com/NeuralEnsemble/neuralensemble-docker/blob/master/simulationx/README.md">Full instructions are available here</a>.<br />
<br />
If anyone would like to help out, or suggest other tools that should be installed, please contact me, or <a href="https://github.com/NeuralEnsemble/neuralensemble-docker/issues">open a ticket on Github</a>.</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com1tag:blogger.com,1999:blog-73903413139568172.post-18811734814172480972016-05-26T22:51:00.001+02:002016-05-26T22:51:44.244+02:00PyNN 0.8.1 released<div dir="ltr" style="text-align: left;" trbidi="on">
Having forgotten to blog about the release of PyNN 0.8.0, I'm making up for it by announcing PyNN 0.8.1!<div>
<br />
For all the API changes between PyNN 0.7 and 0.8 see the <a href="http://neuralensemble.org/docs/PyNN/releases/0.8.0.html">release notes for 0.8.0</a>. The main change with PyNN 0.8.1 is support for NEST 2.10.<br />
<br />
PyNN 0.8.1 can be installed with <span style="font-family: Courier New, Courier, monospace;">pip</span> from <a href="https://pypi.python.org/pypi/PyNN/0.8.1">PyPI</a>.<br />
<h3 style="text-align: left;">
What is PyNN?</h3>
<br />
<a href="http://neuralensemble.org/PyNN/">PyNN</a> (pronounced 'pine') is a simulator-independent language for building neuronal network models.<br />
<br />
In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently <a href="http://www.neuron.yale.edu/neuron">NEURON</a>, <a href="http://www.nest-simulator.org/">NEST</a> and <a href="http://www.briansimulator.org/">Brian</a> as well as the <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/">SpiNNaker</a> and <a href="http://www.kip.uni-heidelberg.de/vision/research/">BrainScaleS</a> neuromorphic hardware systems).<br />
<br />
Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models.<br />
<br />
The code is released under the <a href="http://www.cecill.info/">CeCILL licence</a> (GPL-compatible).<br />
<div>
<br /></div>
</div>
</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-22398901698028435412016-04-01T10:53:00.003+02:002016-04-02T00:55:55.381+02:00EU Human Brain Project Releases Platforms to the Public"<b><i>Geneva, 30 March 2016</i></b> — The Human Brain
Project (HBP) is pleased to announce the release of initial versions of
its six Information and Communications Technology (ICT) Platforms to
users outside the Project. These Platforms are designed to help the
scientific community to accelerate progress in neuroscience, medicine,
and computing.<br />
<br />
[...]<br />
<br />
<br />
The six HBP Platforms are:<br />
<ul>
<li> The <b>Neuroinformatics Platform: </b>registration, search, analysis of neuroscience data.</li>
<li> The <b>Brain Simulation Platform:</b> reconstruction and simulation of the brain.</li>
<li> The <b>High Performance Computing Platform:</b> computing and storage facilities to run complex simulations and analyse large data sets.</li>
<li> The <b>Medical Informatics Platform:</b> searching of real patient data to understand similarities and differences among brain diseases.</li>
<li> The <b>Neuromorphic Computing Platform</b>: access to computer systems that emulate brain microcircuits and apply principles similar to the way the brain learns.</li>
<li> The <b>Neurorobotics Platform:</b> testing of virtual models of the brain by connecting them to simulated robot bodies and environments.</li>
</ul>
All the Platforms can be accessed via the <a href="https://collab.humanbrainproject.eu/">HBP Collaboratory</a>,
a web portal where users can also find guidelines, tutorials and
information on training seminars. Please note that users will need to
register to access the Platforms and that some of the Platform resources
have capacity limits."<br />
<br />
... More in the official press release <a href="https://www.humanbrainproject.eu/en_GB/-/huma?redirect=https%3A%2F%2Fwww.humanbrainproject.eu%2Fen_GB%2Fhome%3Fp_p_id%3D101_INSTANCE_sSPR9sHVXNRc%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-2%26p_p_col_count%3D1%26_101_INSTANCE_sSPR9sHVXNRc_struts_action%3D%252Fasset_publisher%252Fview">here</a>.<br />
<br />
The HBP held an online release event on 30 March:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/2XXz2quUWFQ/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/2XXz2quUWFQ?feature=player_embedded" width="320"></iframe></div>
<br />
Prof. Felix Schürmann (EPFL-BBP, Geneva), Dr. Eilif Muller (EPFL-BBP, Geneva), and Prof. Idan Segev (HUJI, Jerusalem) present an overview of the mission, tools, capabilities and science of the EU Human Brain Project (HBP) Brain Simulation Platform:<br />
<div class="separator" style="clear: both; text-align: center;">
<br /><iframe width="320" height="266" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/C3bU_U3vx68/0.jpg" src="https://www.youtube.com/embed/C3bU_U3vx68?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
<br />
A publicly accessible forum for the BSP is here:<br />
<a href="https://forum.humanbrainproject.eu/c/bsp">https://forum.humanbrainproject.eu/c/bsp</a><br />
and for community models<br />
<a href="https://forum.humanbrainproject.eu/c/community-models">https://forum.humanbrainproject.eu/c/community-models</a><br />
and for community models of hippocampus in particular<br />
<a href="https://forum.humanbrainproject.eu/c/community-models/hippocampus">https://forum.humanbrainproject.eu/c/community-models/hippocampus</a>eilifhttp://www.blogger.com/profile/09717715572079097672noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-89174899641307269842016-03-07T11:47:00.001+01:002016-03-07T11:47:45.448+01:00 Released - BluePyOpt 0.2 : Leveraging OSS and the cloud to optimise models to neuroscience data<div class="cooked">
<h2>
BluePyOpt</h2>
<br />
The BlueBrain Python Optimisation Library (BluePyOpt) is an extensible framework for data-driven model parameter optimisation that wraps and standardises several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures.<br />
<br />
The code is available here:<br /><a class="onebox" href="https://github.com/BlueBrain/BluePyOpt" target="_blank">https://github.com/BlueBrain/BluePyOpt</a><br />
A preprint to the paper is available here:<br /><a class="onebox" href="http://arxiv.org/abs/1603.00500" target="_blank">http://arxiv.org/abs/1603.00500</a><br />
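The separation of evaluation from optimisation described above can be illustrated with a schematic sketch. This is not BluePyOpt's actual API (see the linked repository for that): the objective function, parameter names, and random-search loop below are invented purely to show how a reusable evaluator stays independent of the optimiser that drives it. BluePyOpt itself wraps established evolutionary algorithms rather than random search.

```python
import random

# Hypothetical target "features"; in real data-driven optimisation the
# evaluator would run a neuron simulation and compare features of the
# result against experimental recordings.
TARGET = {"gnabar": 0.12, "gkbar": 0.036}

def evaluate(params):
    """Score a candidate parameter set; lower is better."""
    return sum((params[k] - TARGET[k]) ** 2 for k in TARGET)

def random_search(evaluate, bounds, n_iter=2000, seed=42):
    """Toy optimiser: sample uniformly within bounds, keep the best.

    The optimiser knows nothing about neurons -- it only calls the
    evaluator, which is the point of the abstraction.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = evaluate(candidate)
        if score < best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

bounds = {"gnabar": (0.0, 0.5), "gkbar": (0.0, 0.1)}
params, score = random_search(evaluate, bounds)
print(params, score)
```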
</div>
eilifhttp://www.blogger.com/profile/09717715572079097672noreply@blogger.com1tag:blogger.com,1999:blog-73903413139568172.post-50073240669404962512016-02-26T21:02:00.001+01:002016-02-26T21:03:01.659+01:00Stop plotting your data -- HoloViews 1.4 released!We are pleased to announce the fifth public release of HoloViews, a Python package for exploring and visualizing scientific data:<br />
<br />
http://holoviews.org<br />
<br />
HoloViews provides composable, sliceable, declarative data structures for building even complex visualizations easily. Instead of requiring you to explicitly and laboriously plot your data, HoloViews lets you simply annotate your data so that any part of it visualizes itself automatically. You can now work with large datasets as easily as you work with simple datatypes at the Python prompt.<br />
<br />
The new version can be installed using conda:<br />
<br />
conda install -c ioam holoviews<br />
<br />
Release 1.4 introduces major new features, incorporating over 1700 new commits and closing 142 issues:<br />
<br />
- Now supports both Bokeh (bokeh.pydata.org) and matplotlib backends, with Bokeh providing extensive interactive features such as panning and zooming linked axes, and customizable callbacks<br />
<br />
- DynamicMap: Allows exploring live streams from ongoing data collection or simulation, or parameter spaces too large to fit into your computer's or your browser's memory, from within a Jupyter notebook<br />
<br />
- Columnar data support: Underlying data storage can now be in Pandas dataframes, NumPy arrays, or Python dictionaries, allowing you to define HoloViews objects without copying or reformatting your data<br />
<br />
- New Element types: Area (area under or between curves), Spikes (sequence of lines, e.g. spectra, neural spikes, or rug plots), BoxWhisker (summary of a distribution), QuadMesh (nonuniform rasters), Trisurface (Delaunay-triangulated surface plots)<br />
<br />
- New Container type: GridMatrix (grid of heterogeneous Elements)<br />
<br />
- Improved layout handling, with better support for varying aspect ratios and plot sizes<br />
<br />
- Improved help system, including recursively listing and searching the help for all the components of a composite object<br />
<br />
- Improved Jupyter/IPython notebook support, including improved export using nbconvert, and standalone HTML output that supports dynamic widgets even without a Python server<br />
<br />
- Significant performance improvements for large or highly nested data<br />
<br />
And of course we have fixed a number of bugs found by our very dedicated users; please keep filing Github issues if you find any!<br />
<br />
For the full list of changes, see:<br />
<br />
https://github.com/ioam/holoviews/releases<br />
<br />
HoloViews is now supported by Continuum Analytics, and is being used in a wide range of scientific and industrial projects. HoloViews remains freely available under a BSD license, is Python 2 and 3 compatible, and has minimal external dependencies, making it easy to integrate into your workflow. Try out the extensive tutorials at holoviews.org today!<br />
<br />
Jean-Luc R. Stevens<br />
Philipp Rudiger<br />
James A. Bednar<br />
<br />
Continuum Analytics, Inc., Austin, TX, USA<br />
School of Informatics, The University of Edinburgh, UK<br />
<div>
<br /></div>
Jim Bednarhttp://www.blogger.com/profile/01375388412687533096noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-38928981437121847872015-08-28T16:54:00.004+02:002015-08-28T16:54:58.680+02:00Docker images for neuronal network simulation<div dir="ltr" style="text-align: left;" trbidi="on">
I've created some <a href="https://www.docker.com/">Docker</a> images for biological neuronal network simulations with Python.<br />
<br />
The images contain <a href="http://www.nest-simulator.org/">NEST</a> 2.6, <a href="http://www.neuron.yale.edu/neuron/">NEURON</a> 7.3, <a href="http://briansimulator.org/">Brian</a> 1.4 and <a href="http://neuralensemble.org/PyNN/">PyNN</a> 0.8.0rc1, together with IPython, numpy, scipy and matplotlib.<br />
<br />
The images are intended as a quick way to get simulation projects up-and-running on Linux, OS X and Windows (the latter two via the <a href="https://www.docker.com/toolbox">Docker Toolbox</a>, which runs Docker in a VM). They can be used for teaching or as the basis for reproducible research projects that can easily be shared with others.<br />
<br />
The images are available on <a href="https://hub.docker.com/u/neuralensemble/">Docker Hub</a>.<br />
<br />
To quickly get started, once you have Docker installed, run<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker pull neuralensemble/simulation</span><br />
<span style="font-family: 'Courier New', Courier, monospace;">docker run -i -t neuralensemble/simulation /bin/bash</span><br />
<br />
then inside the container<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">source ~/env/simulation/bin/activate</span><br />
<br />
For ssh/X11 support, use the "simulationx" image instead of "simulation". <a href="https://github.com/NeuralEnsemble/neuralensemble-docker/blob/master/simulationx/README.md">Full instructions are available here</a>.<br />
<br />
I plan to add further images for neuroscience data analysis, providing <a href="http://neuralensemble.org/neo/">Neo</a>, <a href="http://neuralensemble.org/elephant/">Elephant</a>, <a href="http://neuralensemble.org/OpenElectrophy/">OpenElectrophy</a>, <a href="http://neuralensemble.org/SpykeViewer/">SpykeViewer</a>, the <a href="https://github.com/G-Node">G-Node Python tools</a>, <a href="http://klusta-team.github.io/">KlustaSuite</a>, etc. If anyone would like to help out, or suggest other tools that should be installed, please contact me, or <a href="https://github.com/NeuralEnsemble/neuralensemble-docker/issues">open a ticket on Github</a>.</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com3tag:blogger.com,1999:blog-73903413139568172.post-23121052789479524022015-07-03T14:55:00.000+02:002015-07-03T14:55:06.448+02:00Sumatra 0.7 released<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="background-color: white; color: #222222; line-height: 18px;"><span style="font-family: Arial, Helvetica, sans-serif;">We would like to announce the release of version 0.7.0 of Sumatra, a tool for automated tracking of simulations and computational analyses, so that they can easily be replicated at a later date.</span></span><br />
<span style="letter-spacing: -0.14000000059604645px; line-height: 21px;"><span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></span>
<span style="letter-spacing: -0.14000000059604645px; line-height: 21px;"><span style="font-family: Arial, Helvetica, sans-serif;">This version of Sumatra brings some major improvements for users, including an improved web browser interface, improved support for the R language, Python 3 compatibility, a plug-in interface making Sumatra easier to extend and customize, and support for storing data using WebDAV.</span></span><br />
<div style="letter-spacing: -0.14000000059604645px; line-height: 21px; margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">In addition, there have been many changes under the hood, including a move to Github and improvements to the test framework, largely supported by the use of Docker.</span></div>
<div style="letter-spacing: -0.14000000059604645px; line-height: 21px; margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Last but not least, we have changed licence from the CeCILL licence (GPL-compatible) to a BSD 2-Clause Licence, which should make it easier for other developers to use Sumatra in their own projects.</span></div>
<div class="section" id="updated-and-extended-web-interface" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Updated and extended web interface<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#updated-and-extended-web-interface" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Thanks to Felix Hoffman’s Google Summer of Code project, the web browser interface now provides the option of viewing the history of your project either in a “process-centric” view, as in previous versions, where each row in the table represents a computation, or in a “data-centric” view, where each row is a data file. Where the output from one computation is the input to another, additional links make it possible to follow these connections.</span></div>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">The web interface has also had a cosmetic update and several other improvements, including a more powerful comparison view (see screenshot). Importantly, the interface layout no longer breaks in narrower browser windows.</span></div>
<div class="figure align-center" style="margin: 0.5em auto; padding: 0.5em; text-align: center;">
<a class="reference internal image-reference" href="http://sumatra.readthedocs.org/en/latest/_images/compare_records.png" style="color: #ca7900;"><span style="font-family: Arial, Helvetica, sans-serif;"><img alt="../_images/compare_records.png" src="http://sumatra.readthedocs.org/en/latest/_images/compare_records.png" style="border: 0px; max-width: 100%; width: 841.59375px;" /></span></a></div>
</div>
<div class="section" id="bsd-licence" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">BSD licence<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#bsd-licence" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">The Sumatra project aims to provide not only tools for scientists as end users (such as <code class="docutils literal" style="background-color: #f2f2f2; border-bottom-color: rgb(221, 221, 221); border-bottom-style: solid; border-bottom-width: 1px; color: #333333; letter-spacing: 0.01em;"><span class="pre">smt</span></code> and <code class="docutils literal" style="background-color: #f2f2f2; border-bottom-color: rgb(221, 221, 221); border-bottom-style: solid; border-bottom-width: 1px; color: #333333; letter-spacing: 0.01em;"><span class="pre">smtweb</span></code>), but also library components for developers to add Sumatra’s functionality to their own tools. To support this second use, we have switched licence from CeCILL (GPL-compatible) to the BSD 2-Clause Licence.</span></div>
</div>
<div class="section" id="python-3-support" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Python 3 support<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#python-3-support" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">In version 0.6.0, Sumatra already supported provenance capture for projects using Python 3, but required Python 2.6 or 2.7 to run. Thanks to Tim Tröndle, Sumatra now also runs in Python 3.4.</span></div>
</div>
<div class="section" id="plug-in-interface" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Plug-in interface<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#plug-in-interface" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">To support the wide diversity of workflows in scientific computing, Sumatra has always had an extensible architecture. It is intended to be easy to add support for new database formats, new programming languages, new version control systems, or new ways of launching computations.</span></div>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Until now, adding such extensions has required that the code be included in Sumatra’s code base. Version 0.7.0 adds a plug-in interface, so you can define your own local extensions, or use other people’s.</span></div>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">For more information, see <a class="reference internal" href="http://sumatra.readthedocs.org/en/latest/plugins.html" style="color: #ca7900;"><em>Extending Sumatra with plug-ins</em></a>.</span></div>
</div>
<div class="section" id="webdav-support" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">WebDAV support<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#webdav-support" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">The option to archive output data files has been extended to allow archiving to a remote server using the WebDAV protocol.</span></div>
</div>
<div class="section" id="support-for-the-r-language" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Support for the R language<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#support-for-the-r-language" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Sumatra will now attempt to determine the versions of all external packages loaded by an R script.</span></div>
</div>
<div class="section" id="other-changes" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Other changes<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#other-changes" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">For developers, there has been a significant change: the project has moved from Mercurial to Git, and is now hosted on <a class="reference external" href="https://github.com/open-research/sumatra" style="color: #ca7900;">GitHub</a>. Testing has also been significantly improved, with more system/integration testing, and the use of <a class="reference external" href="https://www.docker.com/" style="color: #ca7900;">Docker</a> for testing PostgreSQL and WebDAV support.</span></div>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Parsing of command-line parameters has been improved. The <code class="xref py py-class docutils literal" style="border: 0px; color: #333333; font-weight: bold; letter-spacing: 0.01em;"><span class="pre">ParameterSet</span></code> classes now have a <code class="xref py py-meth docutils literal" style="border: 0px; color: #333333; font-weight: bold; letter-spacing: 0.01em;"><span class="pre">diff()</span></code> method, making it easier to see the difference between two parameter sets, especially for large and hierarchical sets.</span></div>
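The kind of comparison that a diff() method enables can be sketched with plain nested dictionaries. The sketch below is an illustrative re-implementation, not Sumatra's actual code, and the function name diff_parameter_sets is made up for this example:

```python
def diff_parameter_sets(ps1, ps2, path=""):
    """Return {dotted_path: (value_in_ps1, value_in_ps2)} for entries that differ.

    Illustrative sketch only: nested plain dictionaries stand in for
    hierarchical parameter sets.
    """
    differences = {}
    for key in sorted(set(ps1) | set(ps2)):
        full_path = f"{path}.{key}" if path else key
        v1, v2 = ps1.get(key), ps2.get(key)
        if isinstance(v1, dict) and isinstance(v2, dict):
            # Recurse into nested parameter groups
            differences.update(diff_parameter_sets(v1, v2, full_path))
        elif v1 != v2:
            differences[full_path] = (v1, v2)
    return differences


base = {"tau_m": 20.0, "input": {"rate": 50.0, "weight": 0.1}}
variant = {"tau_m": 20.0, "input": {"rate": 80.0, "weight": 0.1}}
print(diff_parameter_sets(base, variant))  # {'input.rate': (50.0, 80.0)}
```

Walking the tree like this pinpoints exactly which leaves differ, which is the convenience a diff() method brings for large hierarchical sets.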
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">Following the <a class="reference external" href="https://mercurial.selenic.com/wiki/MercurialApi#Why_you_shouldn.27t_use_Mercurial.27s_internal_API" style="color: #ca7900;">recommendation of the Mercurial developers</a>, and to enable the change of licence to BSD, we no longer use the Mercurial internal API. Instead we use the Mercurial command line interface via the <a class="reference external" href="https://hgapi.readthedocs.org/" style="color: #ca7900;">hgapi</a> package.</span></div>
</div>
<div class="section" id="bug-fixes" style="letter-spacing: -0.14000000059604645px; line-height: 21px;">
<h2 style="margin: 1.3em 0px 0.2em; padding: 0px;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Bug fixes<a class="headerlink" href="http://sumatra.readthedocs.org/en/latest/releases/0.7.0.html#bug-fixes" style="color: black !important; margin-left: 6px; padding: 0px 4px; text-decoration: none !important; visibility: hidden;" title="Permalink to this headline"></a></span></h2>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;">A <a class="reference external" href="https://github.com/open-research/sumatra/issues?q=is%3Aissue+milestone%3A0.7+is%3Aclosed+label%3Abug" style="color: #ca7900;">fair number of bugs</a> have been fixed.</span></div>
<div style="margin-bottom: 0.5em; margin-top: 0.8em;">
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<h3 style="color: #222222; letter-spacing: normal; margin: 0px; position: relative;">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: small;">Download, support and documentation</span></h3>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div style="color: #222222; letter-spacing: normal; line-height: 18px;">
<span style="font-family: Arial, Helvetica, sans-serif;">The easiest way to get the latest version of Sumatra is</span></div>
<pre style="color: #222222; letter-spacing: normal; line-height: 18px;"><span style="font-family: Arial, Helvetica, sans-serif;"> $ pip install sumatra
</span></pre>
<div style="color: #222222; letter-spacing: normal; line-height: 18px;">
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div style="color: #222222; letter-spacing: normal; line-height: 18px;">
<span style="font-family: Arial, Helvetica, sans-serif;">Alternatively, Sumatra 0.7.0 may be downloaded from <a href="http://pypi.python.org/pypi/Sumatra" style="color: #888888; text-decoration: none;">PyPI</a> or from the <a href="http://software.incf.org/software/sumatra/download/" style="color: #888888; text-decoration: none;">INCF Software Center</a>. Support is available from the <a href="https://groups.google.com/forum/#!forum/sumatra-users" style="color: #888888; text-decoration: none;">sumatra-users</a> Google Group. <a href="http://sumatra.readthedocs.org/" style="color: #888888; text-decoration: none;">Full documentation is available on Read the Docs.</a></span></div>
</div>
</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-53211900479680831562015-04-08T16:01:00.000+02:002015-04-08T16:01:37.042+02:00Elephant 0.1.0 released<div dir="ltr" style="text-align: left;" trbidi="on">
We are pleased to announce the first release of the <a href="https://neuralensemble.org/elephant/">Elephant</a> toolbox for the analysis of neurophysiology data.<br />
<br />
Elephant builds on the Python scientific stack (NumPy, SciPy) to provide a set of well-tested analysis functions for spike train data and time series recordings from electrodes, such as spike train statistics, power spectrum analysis, filtering, cross-correlation and spike-triggered averaging. The toolbox also provides tools to generate artificial spike trains, either from stochastic processes or by controlled perturbations of real spike trains. Elephant is built on the <a href="http://neuralensemble.org/neo/">Neo</a> data model, and takes advantage of the broad file-format support provided by the Neo library. A bridge to the <a href="http://pandas.pydata.org/">Pandas</a> data analysis library is also provided.<br />
<br />
Elephant is a community-based effort, aimed at providing a common platform to test and distribute code from different laboratories, with the goal of improving the reproducibility of modern neuroscience research. If you are a neuroscience researcher or student using Python for data analysis, please consider <a href="http://elephant.readthedocs.org/en/latest/developers_guide.html">joining us</a>, either to contribute your own code or to help with code review and testing.<br />
<br />
Elephant is the direct successor to <a href="http://neuralensemble.org/NeuroTools/">NeuroTools</a> and maintains ties to complementary projects such as <a href="http://neuralensemble.org/OpenElectrophy/">OpenElectrophy</a> and <a href="http://spyke-viewer.readthedocs.org/">SpykeViewer</a>. It is also the default tool for electrophysiology data analysis in the <a href="https://www.humanbrainproject.eu/neuroinformatics-platform">Human Brain Project</a>.<br />
<br />
As a simple example, let's generate some artificial spike train data using a homogeneous Poisson process:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">from elephant.spike_train_generation import homogeneous_poisson_process</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">from quantities import Hz, s, ms</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">spiketrains = [</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> homogeneous_poisson_process(rate=10.0*Hz, t_start=0.0*s, t_stop=100.0*s)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> for i in range(100)]</span><br />
<br />
and visualize it in Matplotlib:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import matplotlib.pyplot as plt</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import numpy as np</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">for i, spiketrain in enumerate(spiketrains):</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> t = spiketrain.rescale(ms)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> plt.plot(t, i * np.ones_like(t), 'k.', markersize=2)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.axis('tight')</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.xlim(0, 1000)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.xlabel('Time (ms)', fontsize=16)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.ylabel('Spike Train Index', fontsize=16)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.gca().tick_params(axis='both', which='major', labelsize=14)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.show()</span><img alt="_images/tutorial_1_figure_1.png" src="http://elephant.readthedocs.org/en/latest/_images/tutorial_1_figure_1.png" style="width: 600px;" /><br />
Now we calculate the coefficient of variation of the inter-spike interval for each of the 100 spike trains.<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">from elephant.statistics import isi, cv</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">cv_list = [cv(isi(spiketrain)) for spiketrain in spiketrains]</span><br />
<br />
As expected for a Poisson process, the values cluster around 1:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.hist(cv_list)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.xlabel('CV', fontsize=16)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.ylabel('count', fontsize=16)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.gca().tick_params(axis='both', which='major', labelsize=14)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">plt.show()</span><br />
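For readers who want a numerical check rather than a histogram, the same clustering can be reproduced with the standard library alone. This sketch draws exponential inter-spike intervals directly instead of using Elephant, so it only illustrates the statistics, not the Elephant API:

```python
import random
import statistics

def poisson_train_cv(rate_hz, duration_s):
    """CV of the inter-spike intervals of a homogeneous Poisson spike train."""
    isis = []
    t = random.expovariate(rate_hz)
    while t < duration_s:
        isi = random.expovariate(rate_hz)  # ISIs of a Poisson process are exponential
        isis.append(isi)
        t += isi
    return statistics.stdev(isis) / statistics.mean(isis)

random.seed(0)
cvs = [poisson_train_cv(10.0, 100.0) for _ in range(100)]
print(round(statistics.mean(cvs), 2))  # close to 1, as expected for a Poisson process
```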
<br />
<img alt="_images/tutorial_1_figure_2.png" src="http://elephant.readthedocs.org/en/latest/_images/tutorial_1_figure_2.png" style="width: 600px;" /></div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-2473524516673880062015-04-01T03:45:00.001+02:002015-04-01T03:46:13.527+02:00ANN: HoloViews 1.0 data visualization and ImaGen 2.0 pattern generation in PythonWe are pleased to announce the first public release of HoloViews, a free Python package for scientific and engineering data visualization:<br />
<br />
<a href="http://ioam.github.io/holoviews">http://ioam.github.io/holoviews</a><br />
<br />
and version 2.0 of ImaGen, a free Python package for generating two-dimensional patterns useful for vision research and computational modeling:<br />
<br />
<a href="http://ioam.github.io/imagen">http://ioam.github.io/imagen</a><br />
<br />
HoloViews provides composable, sliceable, declarative data structures for building even complex visualizations of any scientific data very easily. With HoloViews, you can see your data as publication-quality figures almost instantly, so that you can focus on the data itself, rather than on laboriously putting together your figures. Even complex multi-subfigure layouts and animations are very easily built using HoloViews.<br />
<br />
ImaGen provides highly configurable, resolution-independent input patterns, directly visualizable using HoloViews but also available without any plotting package so that they can easily be incorporated directly into your computational modeling or visual stimulus generation code. With ImaGen, any software with a Python interface can immediately support configurable streams of 0D, 1D, or 2D patterns, without any extra coding.<br />
<br />
HoloViews and ImaGen are very general tools, but they were designed to solve common problems faced by vision scientists and computational modelers. HoloViews makes it very easy to visualize data from vision research, whether it is visual patterns, neural activity patterns, or more abstract measurements or analyses. Essentially, HoloViews provides a set of general, compositional, multidimensional data structures suitable for both discrete and continuous real-world data, and pairs them with separate customizable plotting classes to visualize them without extensive coding.<br />
<br />
ImaGen 2.0 uses the continuous coordinate systems provided by HoloViews to implement flexible resolution-independent generation of streams of patterns, with parameters controlled by the user and allowing randomness or other arbitrary changes over time. These patterns can be used for visual stimulus generation, testing or training computational models, initializing connectivity in models, or any other application where random or dynamic but precisely controlled streams of patterns are needed.<br />
<br />
Features:<br />
<br />
- Freely available under a BSD license<br />
- Python 2 and 3 compatible<br />
- Minimal external dependencies -- easy to integrate into your workflow<br />
- Declarative approach provides powerful compositionality with minimal coding<br />
- Includes extensive, continuously tested IPython Notebook tutorials<br />
- Easily reconfigurable using documented and validated parameters<br />
- Animations are supported natively, with no extra work<br />
- Supports reproducible research -- simple specification, archived in an IPython Notebook, providing a recipe for regenerating your results<br />
- HoloViews is one of three winners of the 2015 UK Open Source Awards<br />
<br />
To get started, check out <a href="http://ioam.github.io/holoviews">ioam.github.io/holoviews</a> and <a href="http://ioam.github.io/imagen">ioam.github.io/imagen</a>!<br />
<br />
Jean-Luc Stevens<br />
Philipp Rudiger<br />
Christopher Ball<br />
James A. BednarJim Bednarhttp://www.blogger.com/profile/01375388412687533096noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-53432450506786818472015-03-12T11:15:00.000+01:002015-03-12T11:15:04.908+01:00Students: spend the summer improving brain research software tools in Google Summer of Code<div dir="ltr" style="text-align: left;" trbidi="on">
From Malin Sandström at the INCF:<br />
<br />
<div style="font-family: Helvetica; font-size: 12px;">
Are you a student interested in brain research and software development? Or do you know one?<br /></div>
<span style="font-family: Helvetica; font-size: 12px;">This year again, INCF is participating as mentoring organization in the Google Summer of Code, a global program that offers students stipends to spend the summer writing code for open source projects. INCF has 27 project proposals offered by mentors from the international research community, many of them with a computational neuroscience slant. All projects deal with development and/or improvement of open source tools that are used in the neuroscience community.</span><br style="font-family: Helvetica; font-size: 12px;" /><br style="font-family: Helvetica; font-size: 12px;" /><span style="font-family: Helvetica; font-size: 12px;">You can see our full list of projects here: </span><a href="https://incf.org/gsoc/2015/proposals" style="font-family: Helvetica; font-size: 12px;">https://incf.org/gsoc/2015/proposals</a><br style="font-family: Helvetica; font-size: 12px;" /><br style="font-family: Helvetica; font-size: 12px;" /><span style="font-family: Helvetica; font-size: 12px;">To be eligible, students must fulfill the Google definition of 'student': </span><span style="font-family: Helvetica; font-size: x-small;">an individual enrolled in or accepted into an accredited institution including (but not necessarily limited to) colleges, universities, masters programs, PhD programs and undergraduate programs</span><span style="font-family: Helvetica; font-size: 12px;">.</span><br style="font-family: Helvetica; font-size: 12px;" /><br style="font-family: Helvetica; font-size: 12px;" /><span style="font-family: Helvetica; font-size: 12px;">Student applications open on </span><b style="font-family: Helvetica; font-size: 12px;">Monday, March 16.</b><br />
<b style="font-family: Helvetica; font-size: 12px;"><br /></b>
<span style="font-family: Helvetica; font-size: 12px;">GSoC questions welcome to: </span><a href="mailto:gsoc@incf.org" style="font-family: Helvetica; font-size: 12px;">gsoc@incf.org</a></div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-57689848760633424632015-02-26T15:47:00.002+01:002015-02-26T15:50:00.475+01:00Workshop Announcement - "HBP Hippocamp CA1: Collaborative and Integrative Modeling of Hippocampal Area CA1"Registration is now open for the workshop "HBP Hippocamp CA1: Collaborative and Integrative Modeling of Hippocampal Area CA1", to be held March 31st - April 1st, 2015 at UCL School of Pharmacy in London, supported by the Human Brain Project (www.humanbrainproject.eu).<br />
<br />
In short, the aims of the workshop are two-fold. First, to engage the larger community of experimentalists and modelers working on hippocampus, and highlight existing modeling efforts and strategic datasets for modeling hippocampal area CA1. Second, to define and bootstrap an inclusive community-driven model and data-integration process to achieve open pre-competitive reference models of area CA1 (and, ultimately, the rest of the hippocampus), which are well documented, validated, and released at regular intervals (supported in part by IT infrastructure funded by HBP). Involvement from the community interested in characterization and modeling of the hippocampus is highly encouraged. To keep the meeting focused on the task, participation will be limited to ~30 people, so registration is required.<br />
<br />
Please consult the meeting website at
<a href="http://neuralensemble.org/meetings/HippocampCA1/">http://neuralensemble.org/meetings/HippocampCA1/</a>
for registration and further details.<br />
<br />
<b>Organizing committee:</b>
Jo Falck (UCL), Szabolcs Káli (Hungarian Academy of Sciences), Sigrun Lange (UCL), Audrey Mercer (UCL), Eilif Muller (EPFL), Armando Romani (EPFL) and Alex Thomson (UCL).
eilifhttp://www.blogger.com/profile/09717715572079097672noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-4886209493743569012015-01-06T10:21:00.001+01:002015-01-06T10:21:42.399+01:00PyNN 0.8 beta 2 released<div dir="ltr" style="text-align: left;" trbidi="on">
We're happy to announce the second beta release of PyNN 0.8.<br />
<br />
With this release we are getting close to a first release candidate: the <a href="http://www.nest-simulator.org/?page=Software">NEST</a> and <a href="http://www.neuron.yale.edu/neuron/">NEURON</a> backends are essentially feature complete, although with a few bugs remaining. If you're using one of these simulators as your PyNN backend we recommend using this beta release for new projects; now would also be a good time to think about upgrading existing projects. The <a href="http://briansimulator.org/">Brian</a> backend is less well developed, but considerable progress has been made.<br />
<br />
For a list of the main changes between PyNN 0.7 and 0.8, see the <a href="http://neuralensemble.org/docs/PyNN/releases/0.8-alpha-1.html">release notes for the 0.8 alpha 1 release</a>.<br />
<br />
For the changes in this beta release see the <a href="http://neuralensemble.org/docs/PyNN/releases/0.8-beta-2.html">release notes</a>. In particular note that this release requires NEST version 2.4, and for the first time supports Python 3.2+ as well as Python 2.6 and 2.7. Support for NEST 2.6 will be provided in an upcoming release.<br />
<br />
The <a href="http://software.incf.org/software/pynn/download">source package</a> is available from the INCF Software Center.<br />
<h3 style="text-align: left;">
<br /></h3>
<h3 style="text-align: left;">
What is PyNN?</h3>
<br />
<a href="http://neuralensemble.org/PyNN/">PyNN</a> (pronounced 'pine') is a simulator-independent language for building neuronal network models.<br />
<br />
In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST and Brian).<br />
<br />
Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models.<br />
<br />
PyNN is also being used as a <a href="http://www.frontiersin.org/neuroinformatics/paper/10.3389/neuro.11/017.2009/">user-friendly interface to neuromorphic hardware systems</a>.<br />
<br />
The code is released under the <a href="http://www.cecill.info/">CeCILL licence</a> (GPL-compatible).<br />
<div>
<br /></div>
</div>
Andrew Davisonhttp://www.blogger.com/profile/13733080438835986816noreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-77376457885307861432014-08-20T01:48:00.000+02:002014-08-20T01:48:25.429+02:00GSoC Open Source Brain: Cortical Connections <html>
<head>
<title> Cortical Connections </title>
</head>
<body>
<h1> Cortical Connections </h1>
<p> In the same vein as the previous post, we show here how to construct the connections between the cortical layers. To do so we build a function that works for any arbitrary connectivity; we describe its structure in the following. First, as in the thalamo-cortical connectivity, we have a function that loops over the target population, extracting the relevant parameters that characterize these neurons. We then have another function that loops over the source population, creating the corresponding tuples for the connection list. It is in this last function that the particular connectivity rule is implemented. </p>
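The two-level structure described above can be sketched in a simulator-independent way. In this sketch (a schematic, not the actual model code) populations are plain index ranges and the connectivity rule is a placeholder function passed in as an argument:

```python
def connect_populations(target_population, source_population, rule):
    """Build a FromListConnector-style list of (source, target, weight, delay) tuples.

    `rule` maps a (source_index, target_index) pair to a (weight, delay)
    tuple, or None if the pair should not be connected.
    """
    connections = []
    for target_index in target_population:       # outer loop: target neurons
        for source_index in source_population:   # inner loop: source neurons
            result = rule(source_index, target_index)
            if result is not None:
                weight, delay = result
                connections.append((source_index, target_index, weight, delay))
    return connections


def toy_rule(source_index, target_index):
    """Toy rule: connect only pairs with matching parity, fixed weight and delay."""
    if (source_index + target_index) % 2 == 0:
        return (0.5, 0.1)
    return None


print(connect_populations(range(2), range(2), toy_rule))
# [(0, 0, 0.5, 0.1), (1, 1, 0.5, 0.1)]
```

The actual functions in this post follow the same pattern, with the inner loop implementing the receptive-field-based rule and the n_pick sampling.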
<p>In the particular case of the Troyer model, the connectivity between the cortical cells is determined by the correlation between the receptive fields of the neurons, the receptive fields here being Gabor functions. In more detail, neurons whose receptive fields are more correlated are more likely to have excitatory connections between them, while neurons whose receptive fields are less correlated are more likely to receive inhibitory connections. In this post we show two schemes that accomplish this connectivity. The first uses the parameters of the receptive fields to calculate the connectivity, and the second uses the receptive fields directly to calculate the correlations. We present the corresponding functions in that order below. </p>
<p>We now present the function that creates the connectivity for a given neuron pair. The circular distances between the orientations and between the phases are calculated as a proxy for how similar the receptive fields of the two neurons are. After that, each distance is weighted and normalized with a Gaussian function in order to obtain a value that we can interpret as a probability. Finally, in order to calculate the connectivity, we sample n_pick times with the given probability value to determine how strong a particular connection should be.</p>
<pre>
def cortical_to_cortical_connection(target_neuron_index, connections, source_population, n_pick, g, delay,
                                    source_orientations, source_phases, orientation_sigma, phase_sigma,
                                    target_neuron_orientation, target_neuron_phase, target_type):
    """
    Creates the connections from the source population to the target neuron
    """
    for source_neuron in source_population:
        # Extract index, orientation and phase of the source neuron
        source_neuron_index = source_population.id_to_index(source_neuron)
        source_neuron_orientation = source_orientations[source_neuron_index]
        source_neuron_phase = source_phases[source_neuron_index]

        # Calculate the phase and orientation distances
        or_distance = circular_dist(target_neuron_orientation, source_neuron_orientation, 180)

        if target_type:
            phase_distance = circular_dist(target_neuron_phase, source_neuron_phase, 360)
        else:
            phase_distance = 180 - circular_dist(target_neuron_phase, source_neuron_phase, 360)

        # Evaluate the Gaussian functions at the distances
        or_gauss = normal_function(or_distance, mean=0, sigma=orientation_sigma)
        phase_gauss = normal_function(phase_distance, mean=0, sigma=phase_sigma)

        # Normalize by the Gaussian at zero
        or_gauss = or_gauss / normal_function(0, mean=0, sigma=orientation_sigma)
        phase_gauss = phase_gauss / normal_function(0, mean=0, sigma=phase_sigma)

        # The probability is the product
        probability = or_gauss * phase_gauss
        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples

        synaptic_weight = (g / n_pick) * probability

        if synaptic_weight > 0:
            connections.append((source_neuron_index, target_neuron_index, synaptic_weight, delay))

    return connections
</pre>
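The function above relies on the helpers circular_dist and normal_function, which are not defined in this post. A minimal sketch of what they might look like, inferred from how they are used above (so these are my guesses, not the model's actual definitions):

```python
import math

def circular_dist(a, b, period):
    """Shortest distance between a and b on a circle with the given period."""
    d = abs(a - b) % period
    return min(d, period - d)

def normal_function(x, mean=0, sigma=1):
    """Value of the normal (Gaussian) probability density at x."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# 10 and 170 degrees are only 20 degrees apart on a 180-degree circle
print(circular_dist(10, 170, 180))  # 20
```

Note that because the connection function divides by normal_function(0, ...), any constant factor in front of the Gaussian cancels out, so only the exponential shape matters.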
<p> Note that the overall strength is weighted by the conductance value g that is passed as an argument. Furthermore, a delay, also passed as an argument, is added to each tuple as its last element. </p>
<p> Secondly, we present the full correlation scheme. In this scheme we use the kernels directly to calculate the spatial correlation between them. In particular, after flattening our kernels Z from matrices into one-dimensional series, we use the function pearsonr from scipy.stats to calculate the correlation. As in the scheme above, we use this probability to sample n_pick times and then calculate the relative connection strength from the result. </p>
<pre>
def cortical_to_cortical_connection_corr(target_neuron_index, connections, source_population, n_pick, g, delay,
                                         source_orientations, source_phases, target_neuron_orientation,
                                         target_neuron_phase, Z1, lx, dx, ly, dy, sigma, gamma, w, target_type):
    """
    Creates the connections from the source population to the target neuron
    """
    for source_neuron in source_population:
        # Extract position, index, orientation and phase of the source neuron
        x_source, y_source = source_neuron.position[0:2]
        source_neuron_index = source_population.id_to_index(source_neuron)
        source_neuron_orientation = source_orientations[source_neuron_index]
        source_neuron_phase = source_phases[source_neuron_index]

        # Build the receptive field (Gabor kernel) of the source neuron
        Z2 = gabor_kernel(lx, dx, ly, dy, sigma, gamma, source_neuron_phase, w, source_neuron_orientation,
                          x_source, y_source)

        # The correlation between the receptive fields gives the probability
        if target_type:
            probability = pearsonr(Z1.flat, Z2.flat)[0]
        else:
            probability = (-1) * pearsonr(Z1.flat, Z2.flat)[0]

        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples

        synaptic_weight = (g / n_pick) * probability

        if synaptic_weight > 0:
            connections.append((source_neuron_index, target_neuron_index, synaptic_weight, delay))

    return connections
</pre>
</pre>
<p> As before, the overall strength is weighted by the conductance value g passed as an argument, and the delay passed as an argument is added to each tuple as its last element. </p>
<p>We now show a plot that illustrates, for each scheme, how the probabilities change when the parameters that determine the Gabor function are varied. </p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-XpCld-apZ8o/U_PhOMn4egI/AAAAAAAABXo/kObMEyftwnw/s1600/correlation_profile.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-XpCld-apZ8o/U_PhOMn4egI/AAAAAAAABXo/kObMEyftwnw/s400/correlation_profile.png" /></a></div>
<p> In the upper part of the figure above we show, for the first scheme, how the probability for a neuron with phase 0 and orientation 0 changes as we vary the phase (left) and the orientation (right). The two graphs below show the same for the second scheme. </p>
</body>
</html>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-39665240864620659742014-08-18T18:06:00.000+02:002014-08-20T01:43:32.744+02:00GSoC Open Source Brain: Thalamo-Cortical Connections<html>
<head>
<title>Thalamo-cortical connections </title>
</head>
<body>
<h1>Thalamo-cortical connections </h1>
<p> In this post I will show how to build arbitrary custom connections in PyNN. We will illustrate the general technique
in the particular case of the Troyer model. In the Troyer model the connections from the LGN to the cortex are determined by
a Gabor profile, so I am going to describe the functions required to achieve this. </p>
<p> In the PyNN <a href="http://neuralensemble.org/docs/PyNN/connections.html" name="documentation">documentation</a> we find
that one of the ways of implementing arbitrary connectivity patterns is to use the FromListConnector utility. In this format
we construct a list with one tuple per connection. Each tuple contains the index of the source
neuron (the neuron from which the synapse originates), the index of the target neuron (the neuron on which the synapse terminates),
the weight and the delay. For example, (0, 1, 5, 0.1) indicates a connection from neuron 0 to neuron 1
with a synaptic weight of 5 and a delay of 0.1 ms. </p>
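<p> As a quick illustration of this format (with made-up values, not taken from the Troyer model), such a list can be built directly: </p>

```python
# A minimal sketch of the FromListConnector list format described above.
# Each tuple is (source_index, target_index, weight, delay); the numbers
# are illustrative only.
connection_list = [
    (0, 1, 5.0, 0.1),  # neuron 0 -> neuron 1, weight 5.0, delay 0.1 ms
    (0, 2, 2.0, 0.2),  # neuron 0 -> neuron 2, weight 2.0, delay 0.2 ms
]

# In PyNN this list would then be wrapped in a connector, e.g.:
# connector = simulator.FromListConnector(connection_list)
```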
<p> In the light of the explanation above, we need a function that constructs a list with the
appropriate weights given a target and a source population. In order to move towards this goal we will first write a function
that connects a given neuron in the target population to all the neurons in the source population. We present the function
below and explain it afterwards: </p>
<pre>
def lgn_to_cortical_connection(cortical_neuron_index, connections, lgn_neurons, n_pick, g, delay, polarity, sigma,
                               gamma, phi, w, theta, x_cortical, y_cortical):
    """
    Creates connections from the LGN to the cortex with a Gabor profile.

    This function adds all the connections from the LGN to the cortical cell with index = cortical_neuron_index. It
    requires as parameters the cortical_neuron_index, the current list of connections, the LGN population and also
    the parameters of the Gabor function.

    Parameters
    ----------
    cortical_neuron_index : the neuron in the cortex -target- that we are going to connect to
    connections : the list of connections to which we will append the new connections
    lgn_neurons : the source population
    n_pick : how many times we will sample per neuron
    g : how strong the connection per neuron is
    delay : the time it takes for the action potential to travel from the source neuron to the target neuron
    polarity : whether we are connecting from on cells or off cells
    sigma : controls the decay of the exponential term
    gamma : x:y proportionality factor, elongates the pattern
    phi : phase of the overall pattern
    w : frequency of the pattern
    theta : rotates the whole pattern by the angle theta
    x_cortical, y_cortical : the spatial coordinates of the cortical neuron
    """

    for lgn_neuron in lgn_neurons:
        # Extract the position
        x, y = lgn_neuron.position[0:2]

        # Calculate the Gabor probability
        probability = polarity * gabor_probability(x, y, sigma, gamma, phi, w, theta, x_cortical, y_cortical)
        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples

        synaptic_weight = (g / n_pick) * probability
        lgn_neuron_index = lgn_neurons.id_to_index(lgn_neuron)

        # The format of the connector list should be pre_neuron, post_neuron, weight, delay
        if synaptic_weight > 0:
            connections.append((lgn_neuron_index, cortical_neuron_index, synaptic_weight, delay))
</pre>
<p>The first thing to note about the function above is its arguments. It receives the source population and the particular
target neuron that we want to connect to. It also receives all the connectivity and Gabor-function-related parameters. In
the body of the function we have one loop over the whole source population that decides whether we add a connection from a
particular cell or not. In order to decide this we first calculate the probability from the Gabor function.
Once we have this we sample n_pick times and, for each neuron, append a correspondingly weighted synaptic weight to the list.</p>
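<p> The sampling step just described can be sketched in isolation as follows (n_pick, g and the probability below are illustrative values, not the ones used in the model): </p>

```python
import numpy as np

# Sketch of the sampling step: given a connection probability p, draw
# n_pick uniform samples, count the successes and scale by g / n_pick,
# so the expected synaptic weight is g * p.
np.random.seed(0)  # fixed seed so the sketch is reproducible

n_pick = 1000
g = 2.0
p = 0.3

successes = np.sum(np.random.rand(n_pick) < p)
synaptic_weight = (g / n_pick) * successes
```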
<p> In the function above the values of the Gabor function are passed as arguments. However, the values of each Gabor
function depend on the nature of the cell in the target population. In the light of this we will construct another function
that loops over the target population and extracts the appropriate Gabor values for each cell in this population.
We again present the function and then explain it:</p>
<pre>
def create_lgn_to_cortical(lgn_population, cortical_population, polarity, n_pick, g, delay, sigma, gamma, phases,
                           w, orientations):
    """
    Creates the connection from the LGN population to the cortical population with a Gabor profile. It also extracts
    the corresponding Gabor parameters that are needed in order to determine the connectivity.
    """

    print 'Creating connection from ' + lgn_population.label + ' to ' + cortical_population.label

    # Initialize connections
    connections = []

    for cortical_neuron in cortical_population:
        # Set the parameters
        x_cortical, y_cortical = cortical_neuron.position[0:2]
        cortical_neuron_index = cortical_population.id_to_index(cortical_neuron)
        theta = orientations[cortical_neuron_index]
        phi = phases[cortical_neuron_index]

        # Create the connections from the LGN to this cortical neuron
        lgn_to_cortical_connection(cortical_neuron_index, connections, lgn_population, n_pick, g, delay, polarity,
                                   sigma, gamma, phi, w, theta, x_cortical, y_cortical)

    return connections
</pre>
<p>This function requires as arguments the source and target populations as well as the parameters that characterize
each cell's connectivity: orientation and phase. In the body of the function we have a loop over the cortical population that
extracts the relevant parameters -position, orientation and phase- and then calls the function that we described previously
in order to create the connectivity from the source population to the cell at hand. </p>
<p> We now have the necessary functions to construct the list. Next, we can use FromListConnector to transform the list into a
connector and then use this connector to define a Projection. We define both the excitatory and inhibitory connections, and wrap this
complete procedure in the following function: </p>
<pre>
def create_thalamocortical_connection(source, target, polarity, n_pick, g, delay, sigma, gamma, w, phases, orientations, simulator):
    """
    Creates a connection from a layer in the thalamus to a layer in the cortex through the mechanism of Gabor sampling.
    """

    # Produce a list with the connections
    connections_list = create_lgn_to_cortical(source, target, polarity, n_pick, g, delay, sigma, gamma, phases, w, orientations)

    # Transform it into a connector
    connector = simulator.FromListConnector(connections_list, column_names=["weight", "delay"])

    # Create the excitatory and inhibitory projections
    simulator.Projection(source, target, connector, receptor_type='excitatory')
    simulator.Projection(source, target, connector, receptor_type='inhibitory')
</pre>
<p> With this we can create connections from one population to another in general. We can even swap the Gabor
function for any other function if we wish to experiment with different connectivity patterns. Finally we present below an example of
sampling from a Gabor function with the algorithm we just constructed: </p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-QAsnc0l1OyY/U_IjUlygLCI/AAAAAAAABXU/Ge0TnIxWlv4/s1600/sampling.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-QAsnc0l1OyY/U_IjUlygLCI/AAAAAAAABXU/Ge0TnIxWlv4/s400/sampling.png" /></a></div>
<p> The image shows, on the left, the sampling obtained from the ideal Gabor function shown on the right. </p>
</body>
</html>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-36594195564367692262014-08-16T08:55:00.000+02:002014-08-16T23:17:27.256+02:00 GSoC Open Source Brain: Arbitrary Spike-trains in PyNN<html>
<head>
<title> Arbitrary Spikes in PyNN </title>
</head>
<body>
<h1> Arbitrary Spike-trains in PyNN </h1>
<p> In this example we are going to create a population of cells with arbitrary spike trains. We will load the spike trains from a file where they are stored as a list of arrays with the times at which the spikes occurred. In order to do so we are going to use the SpikeSourceArray class model of <strong> PyNN </strong>. </p>
<p> First we start by importing <strong> PyNN </strong> with the NEST backend, together with all the other required libraries. Furthermore we start the simulator and define some general parameters. We assume that we have already produced spikes for the cells at 0.50 contrast: </p>
<pre>
import numpy as np
import matplotlib.pyplot as plt
import cPickle
import pyNN.nest as simulator
contrast = 0.50
Nside_lgn = 30
Ncell_lgn = Nside_lgn * Nside_lgn
N_lgn_layers = 4
t = 1000 # ms
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=5.0)
</pre>
<p> We are going to suppose that our data is stored in './data'. The spike-trains are stored as lists, one entry per cell in the population, where each element is an array with the times at which the spikes occurred for that particular neuron. In order to load them we use the following code: </p>
<pre>
directory = './data/'
format = '.cpickle'

spikes_on = []
spikes_off = []

for layer in xrange(N_lgn_layers):
    layer_mark = '_layer' + str(layer)
    contrast_mark = str(contrast)
    mark = '_spike_train'

    # On cells
    polarity = '_on'
    spikes_filename = directory + contrast_mark + mark + polarity + layer_mark + format
    f2 = open(spikes_filename, 'rb')
    spikes_on.append(cPickle.load(f2))
    f2.close()

    # Off cells
    polarity = '_off'
    spikes_filename = directory + contrast_mark + mark + polarity + layer_mark + format
    f2 = open(spikes_filename, 'rb')
    spikes_off.append(cPickle.load(f2))
    f2.close()
</pre>
<p> Now this is the crucial part. If we want to utilize the SpikeSourceArray model for a cell in PyNN, we can define a function that passes the spike-train to each cell in the population. In order to do so we use the following code: </p>
<pre>
def spike_times(simulator, layer, spikes_file):
    return [simulator.Sequence(x) for x in spikes_file[layer]]
</pre>
<p> Note that we have to convert every spike-train array to a Sequence before using it as a spike train. After defining this function we can create the LGN models:
</p>
<pre>
# Cell models for the LGN spikes (SpikeSourceArray)
lgn_spikes_on_models = []
lgn_spikes_off_models = []

for layer in xrange(N_lgn_layers):
    model = simulator.SpikeSourceArray(spike_times=spike_times(simulator, layer, spikes_on))
    lgn_spikes_on_models.append(model)
    model = simulator.SpikeSourceArray(spike_times=spike_times(simulator, layer, spikes_off))
    lgn_spikes_off_models.append(model)
</pre>
<p> Now that we have the corresponding model for the cells we can create the populations in the usual way: </p>
<pre>
# LGN populations
lgn_on_populations = []
lgn_off_populations = []

for layer in xrange(N_lgn_layers):
    population = simulator.Population(Ncell_lgn, lgn_spikes_on_models[layer], label='LGN_on_layer_' + str(layer))
    lgn_on_populations.append(population)
    population = simulator.Population(Ncell_lgn, lgn_spikes_off_models[layer], label='LGN_off_layer_' + str(layer))
    lgn_off_populations.append(population)
</pre>
<p> In order to analyze the spike-trains patterns for each population we need to declare a recorder for each population: </p>
<pre>
layer = 0 # We declare here the layer of our interest
population_on = lgn_on_populations[layer]
population_off = lgn_off_populations[layer]
population_on.record('spikes')
population_off.record('spikes')
</pre>
<p> Note here that we can choose the layer of interest by modifying the value of the layer variable. Finally we run the model with the usual instructions and extract the spikes: </p>
<pre>
#############################
# Run model
#############################
simulator.run(t) # Run the simulations for t ms
simulator.end()
#############################
# Extract the data
#############################
data_on = population_on.get_data() # Creates a Neo Block
data_off = population_off.get_data()
segment_on = data_on.segments[0] # Takes the first segment
segment_off = data_off.segments[0]
</pre>
<p>In order to visualize the spikes we use the following function: </p>
<pre>
# Plot spike trains
def plot_spiketrains(segment):
    """
    Plots the spikes of all the cells in the given segment
    """
    for spiketrain in segment.spiketrains:
        y = np.ones_like(spiketrain) * spiketrain.annotations['source_id']
        plt.plot(spiketrain, y, '*b')

    plt.ylabel('Neuron number')
    plt.xlabel('Spikes')
</pre>
<p> Here the spiketrain variable contains the spike train of each cell, that is, an array with the times at which the action potentials occurred for that cell. In order to tell the cells apart we assign each train the value of its cell id on the y axis. Finally we can plot the spikes of the on and off cells with the following code: </p>
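<p> As a side note, once the spike trains are extracted in this form, simple statistics are easy to compute. The sketch below estimates the mean firing rate of each cell; the spike times are hypothetical stand-ins for the arrays in segment.spiketrains: </p>

```python
import numpy as np

# Hypothetical spike-time arrays, one per cell (times in ms)
t_total_ms = 1000.0  # simulated time
spiketrains = [np.array([10.0, 200.0, 850.0]),  # cell with 3 spikes
               np.array([5.0, 500.0])]          # cell with 2 spikes

# Mean firing rate of each cell in spikes per second
rates_hz = [len(train) / (t_total_ms / 1000.0) for train in spiketrains]
```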
<pre>
plt.subplot(2, 1, 1)
plt.title('On cells ')
plot_spiketrains(segment_on)
plt.subplot(2, 1, 2)
plt.title('Off cells ')
plot_spiketrains(segment_off)
plt.show()
</pre>
<p>We now show the plot produced by the code above. Note that the on and off cells are out of phase by 180 degrees.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-i7P1OyrnSG0/U-7_rnVHTAI/AAAAAAAABXA/LRiCxQ2zoOU/s1600/raster_plots.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-i7P1OyrnSG0/U-7_rnVHTAI/AAAAAAAABXA/LRiCxQ2zoOU/s400/raster_plots.png" /></a></div>
</body>
</html>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-31756523662570448622014-08-14T17:53:00.000+02:002014-08-16T23:17:12.324+02:00 GSoC Open Source Brain: Firing Rate Induced by a Sinus Grating <html>
<head>
<title> Firing Rate Induced by a Sinus Grating </title>
</head>
<body>
<h1> Firing Rate induced by a Sinus Grating </h1>
<p> Now that we know how to do convolutions with our center-surround kernel, we can choose any other kind of stimulus to carry this out. In visual neuroscience it is very common to use a sinus grating in a wide array of experimental settings, so we are going to use one now. In short, in this post we are going to see what signal a center-surround kernel produces when it is convolved with a sinus grating.</p>
<h2> Center-Surround Kernel</h2>
<p> In order to do the convolution we define the kernel in the usual way, using a function from our earlier work: </p>
<pre>
# First we define the size and resolution of the space in which the convolution is going to happen
dx = 0.05
dy = 0.05
lx = 6.0 # In degrees
ly = 6.0 # In degrees
# Now we define the temporal parameters of the kernel
dt_kernel = 5.0 # ms
kernel_duration = 150 # ms
kernel_size = int(kernel_duration / dt_kernel)
# Now the center surround parameters
factor = 1 # Controls the overall size of the center-surround pattern
sigma_center = 0.25 * factor # Corresponds to 15'
sigma_surround = 1 * factor # Corresponds to 1 degree
# Finally we create the kernel
kernel_on = create_kernel(dx, lx, dy, ly, sigma_surround, sigma_center, dt_kernel, kernel_size)
</pre>
<h2> Sinus Grating </h2>
<p> Now we are going to construct our sinus grating. But first, we need to think about how long our stimulus is going to last, which is a function of how long we want to convolve and of the resolutions of the stimulus and the simulation: </p>
<pre>
## Now we define the temporal parameters of the sinus grating
dt_stimuli = 5.0 # ms
# We also need to add how long do we want to convolve
dt = 1.0 # Simulation resolution
T_simulation = 1 * 10 ** 3.0 # ms
T_simulation += int(kernel_size * dt_kernel) # Add the size of the kernel
Nt_simulation = int(T_simulation / dt) # Number of simulation points
N_stimuli = int(T_simulation / dt_stimuli) # Number of stimuli points
</pre>
<p> We now present the parameters that determine the sinus grating: first the spatial frequency (K), followed by the spatial phase (Phi) and the orientation (Theta). Furthermore we also have a parameter for the amplitude and another for the temporal frequency: </p>
<pre>
# And now the spatial parameters of the sinus grating
K = 0.8 # Cycles per degree
Phi = 0 # Spatial phase
Theta = 0 # Orientation
A = 1 # Amplitude
# Temporal frequency of sine grating
w = 3 # Hz
</pre>
<p> Now, with all the parameters in our possession, we can create the stimulus by calling the sine_grating function, whose definition we present below: </p>
<pre>
def sine_grating(dx, Lx, dy, Ly, A, K, Phi, Theta, dt_stimuli, N_stimuli, w):
    '''
    Returns a sine grating stimulus
    '''
    Nx = int(Lx / dx)
    Ny = int(Ly / dy)

    # Transform to the appropriate units
    K = K * 2 * np.pi  # Transforms K from cycles per degree to radians per degree
    w = w / 1000.0  # Transforms w from Hz to kHz

    x = np.arange(-Lx/2, Lx/2, dx)
    y = np.arange(-Ly/2, Ly/2, dy)
    X, Y = np.meshgrid(x, y)
    Z = A * np.cos(K * X * np.cos(Theta) + K * Y * np.sin(Theta) - Phi)

    t = np.arange(0, N_stimuli * dt_stimuli, dt_stimuli)
    f_t = np.cos(w * 2 * np.pi * t)

    stimuli = np.zeros((N_stimuli, Nx, Ny))
    for k, time_component in enumerate(f_t):
        stimuli[k, ...] = Z * time_component

    return stimuli

stimuli = sine_grating(dx, lx, dy, ly, A, K, Phi, Theta, dt_stimuli, N_stimuli, w)
</pre>
<h2>Convolution </h2>
<p> Now that we have the stimulus and the kernel we can do the convolution. In order to do that we again use the functions and indexes that we used in the last post: </p>
<pre>
## Now we can do the convolution
# First we define the necessary indexes to the convolution
signal_indexes, delay_indexes, stimuli_indexes = create_standar_indexes(dt, dt_kernel, dt_stimuli, kernel_size, Nt_simulation)
working_indexes, kernel_times = create_extra_indexes(kernel_size, Nt_simulation)
# Now we calculate the signal
signal = np.zeros(Nt_simulation)
for index in signal_indexes:
    signal[index] = convolution(index, kernel_times, delay_indexes, stimuli_indexes, kernel_on, stimuli)
</pre>
<p> We can visualize the signal with the following code:</p>
<pre>
#Plot the signal
t = np.arange(kernel_size*dt_kernel, T_simulation, dt)
plt.plot(t, signal[signal_indexes])
plt.show()
</pre>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-CKnwsuhHrNk/U-wM10qj_OI/AAAAAAAABWo/mlvTRXIehn0/s1600/sinus_grating.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-CKnwsuhHrNk/U-wM10qj_OI/AAAAAAAABWo/mlvTRXIehn0/s400/sinus_grating.png" /></a></div>
<p>
We can see that the signal is also sinusoidal, with a frequency consistent with that of the sinus grating.
</p>
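<p> This observation can also be checked numerically. The sketch below (not part of the original code) builds a sinusoid at the same temporal frequency, w = 3 Hz, sampled at dt = 1 ms, and recovers its dominant frequency with an FFT: </p>

```python
import numpy as np

dt = 1.0    # ms, sampling resolution
T = 1000.0  # ms, signal length
w = 3.0     # Hz, temporal frequency of the grating

t = np.arange(0, T, dt)
signal = np.cos(2 * np.pi * (w / 1000.0) * t)  # w converted to kHz to match ms

# The dominant frequency of the spectrum should match w
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt / 1000.0)  # frequency axis in Hz
dominant_frequency = freqs[np.argmax(spectrum)]
```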
</body>
</html>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-51234054635682017972014-06-20T16:15:00.002+02:002014-06-20T16:15:40.892+02:00GSoC Open Source Brain: Retinal Filter II <body>
<h1> LGN-Retinal Filter II </h1>
<p> Now that we know how to create a filter, it is time to use it to calculate how an LGN neuron would react to an incoming stimulus.
In this entry we will create a white-noise stimulus in order to see how an LGN neuron reacts to it. This approach has the advantage that we can then
recover the filter by <em> reverse correlation </em> as a sanity check. </p>
<p> In the same spirit of the last post, we will define the spatial and time parameters that determine the lengths and resolutions in those dimensions: </p>
<pre>
#Time parameters
dt = 1.0 # resolution of the response (in milliseconds)
dt_kernel = 5.0 # resolution of the kernel (in milliseconds)
dt_stimuli = 10.0 # resolution of the stimuli (in milliseconds)
kernel_size = 25 # The size of the kernel
T_simulation = 2 * 10 ** 2.0 # Total time of the simulation in ms
Nt_simulation = int(T_simulation / dt) #Simulation points
N_stimuli = int(T_simulation / dt_stimuli) #Number of stimuli
# Space parameters
dx = 1.0
Lx = 20.0
Nx = int(Lx / dx)
dy = 1.0
Ly = 20.0
Ny = int(Ly / dy )
</pre>
<p> Now we call our kernel, which we wrapped up as a function in the last post: </p>
<pre>
# Call the kernel
# Size of center area in the center-surround profile
sigma_center = 15
# Size of surround area in the center-surround profile
sigma_surround = 3
kernel = create_kernel(dx, Lx, dy, Ly, sigma_surround,
sigma_center, dt_kernel, kernel_size)
</pre>
<p> With this in our hands we can use the numpy random functions to create our white-noise stimulus. We use here a
realization of white noise called ternary noise, which consists of the values -1, 0 and 1 assigned randomly to each pixel of the stimulus: </p>
<pre>
# Call the stimuli
stimuli = np.random.randint(-1, 2, size=(N_stimuli, Nx, Ny))
</pre>
<p> Before we can proceed to calculate the convolution we need to do some preliminary work. The convolution problem involves three time scales with different resolutions. We have first the resolution of the response <i> dt </i>, the resolution of the kernel <i> dt_kernel </i> and finally the resolution of the stimulus <i> dt_stimuli </i>. Operations with the kernel involve jumping from one scale to another constantly, so we need a mechanism to keep track of that. In short, we would like a mechanism that transforms between the coordinate systems in one specific place instead of being scattered all over the code.</p>
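<p> To make the transformation concrete, here is a small sketch (with illustrative resolutions) of how a response index is mapped to the stimulus frame it should read from: </p>

```python
import numpy as np

dt = 1.0           # ms, resolution of the response
dt_stimuli = 10.0  # ms, resolution of the stimulus

# Response index i maps to stimulus frame floor(i * dt / dt_stimuli)
input_to_image = dt / dt_stimuli
response_indexes = np.arange(25)
stimuli_indexes = np.floor(response_indexes * input_to_image).astype(int)
```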
<p> Furthermore, in the convolution the kernel is multiplied by a specific set of images for each point in time. For the sake of efficiency
we would like a mechanism that does this once and for all. With this in mind I have built a set of indexes for each scale that allows us to associate each element of the response to its respective set of images. Also, we have a vector that associates every possible delay time in the kernel to the corresponding set of indexes in the response. We illustrate the mechanisms in the next figure: </p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-4BKGoWEANEA/U6Q9gsd9k2I/AAAAAAAABVs/LQVkPPqDkg4/s1600/scales2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-4BKGoWEANEA/U6Q9gsd9k2I/AAAAAAAABVs/LQVkPPqDkg4/s400/scales2.png" /></a></div>
<p> We can appreciate the three different time scales in the image. Furthermore, we have a set of indexes called stimuli indexes that maps each response point to its respective image, and another set called delay indexes that maps each of the delays to its respective response point. We can create these sets of indexes
with the following code: </p>
<pre>
# Scale factors
input_to_image = dt / dt_stimuli # Transforms input to image
kernel_to_input = dt_kernel / dt # Transforms kernel to input
input_to_kernel = dt / dt_kernel # Transforms input to kernel
working_indexes = np.arange(Nt_simulation).astype(int)
# Remove the initial part of the signal where the kernel does not yet fully overlap the stimulus
remove_start = int(kernel_size * kernel_to_input)
signal_indexes = np.arange(remove_start,
Nt_simulation).astype(int)
# Calculate kernel
kernel_times = np.arange(kernel_size)
kernel_times = kernel_times.astype(int)
# Delay indexes
delay_indexes = np.floor(kernel_times * kernel_to_input)
delay_indexes = delay_indexes.astype(int)
# Image Indexes
stimuli_indexes = np.zeros(working_indexes.size)
stimuli_indexes = np.floor(working_indexes * input_to_image)
stimuli_indexes = stimuli_indexes.astype(int)
</pre>
<p>
Now, we can calculate the response of a neuron with a center-surround receptive field by performing the convolution between its filter and the stimuli.
We also plot the stimuli to see how it looks:
</p>
<pre>
signal = np.zeros(Nt_simulation)

for index in signal_indexes:
    delay = stimuli_indexes[index - delay_indexes]
    # Do the calculation
    signal[index] = np.sum(kernel[kernel_times, ...] * stimuli[delay, ...])
t = np.arange(remove_start*dt, T_simulation, dt)
plt.plot(t, signal[signal_indexes], '-',
label='Kernel convoluted with noise')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Convolution')
plt.grid()
plt.show()
</pre>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-xygFRSI3HXI/U6Q86m2Ke0I/AAAAAAAABVc/D0V5kDmNmKE/s1600/retinal_filterII_convolution.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-xygFRSI3HXI/U6Q86m2Ke0I/AAAAAAAABVc/D0V5kDmNmKE/s400/retinal_filterII_convolution.png" /></a></div>
<p> We can see that the resolution of the response is only as good as the resolution of the filter, which explains the discontinuities in the figure above. </p>
<p> As a sanity check we can calculate a voltage-triggered average to recover the STA: </p>
<pre>
## Calculate the STA
kernel_size = kernel_times.size
Nside = np.shape(stimuli)[2]
sta = np.zeros((kernel_size, Nside, Nside))

for tau, delay_index in zip(kernel_times, delay_indexes):
    # For every tau we calculate the possible delay
    # and take the appropriate image index
    delay = stimuli_indexes[signal_indexes - delay_index]

    # Now we weight the images by the corresponding voltage
    weighted_stimuli = np.sum(signal[signal_indexes, np.newaxis, np.newaxis] * stimuli[delay, ...], axis=0)

    # Finally we divide by the sample size
    sta[tau, ...] = weighted_stimuli / signal_indexes.size
</pre>
<p> Which we can plot in a convenient way with the following set of instructions: </p>
<pre>
## Visualize the STA
closest_square_to_kernel = int(np.sqrt(kernel_size)) ** 2

# Define the color map
cdict1 = {'red': ((0.0, 0.0, 0.0),
                  (0.5, 0.0, 0.1),
                  (1.0, 1.0, 1.0)),

          'green': ((0.0, 0.0, 0.0),
                    (1.0, 0.0, 0.0)),

          'blue': ((0.0, 0.0, 1.0),
                   (0.5, 0.1, 0.0),
                   (1.0, 0.0, 0.0))
          }

from matplotlib.colors import LinearSegmentedColormap
blue_red1 = LinearSegmentedColormap('BlueRed1', cdict1)

n = int(np.sqrt(closest_square_to_kernel))

# Plot the filters
for i in range(closest_square_to_kernel):
    plt.subplot(n, n, i + 1)
    plt.imshow(sta[i, :, :], interpolation='bilinear', cmap=blue_red1)
    plt.colorbar()

plt.show()
</pre>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-GQExV7pXhpQ/U6Q846GC2bI/AAAAAAAABVU/he3Jb-WVM6A/s1600/retinal_filter_sta.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-GQExV7pXhpQ/U6Q846GC2bI/AAAAAAAABVU/he3Jb-WVM6A/s640/retinal_filter_sta.png" /></a></div>
</body>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-66825596893395688392014-06-18T18:23:00.000+02:002014-06-18T18:24:00.462+02:00GSoC Open Source Brain: Retinal Filter I <body>
<h1> LGN-Retinal filter </h1>
Treating the retina and the LGN together as a spatio-temporal filter is a common trait of the models that we are going to work with in this project.
In this series of articles I am going to explain the mechanism that I have developed to model this particular stage of processing.
<h1> Structure of the filter </h1>
In this particular example we are dealing with a separable spatio-temporal filter. That is, we can write our filter as a product of the spatial part and the temporal part. In the following sections, we will describe them in that order. <br/>
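<p> Separability can be sketched in a couple of lines: the full kernel is the outer product of the temporal part and the spatial part. The profiles below are illustrative placeholders for the biphasic temporal filter and the center-surround spatial filter constructed later in this post: </p>

```python
import numpy as np

# A separable kernel K(t, x, y) = f(t) * g(x, y) built as an outer product
f_t = np.array([1.0, -0.5, 0.25])  # temporal part, 3 time steps
g_xy = np.ones((4, 4))             # spatial part, 4 x 4 grid

kernel = f_t[:, None, None] * g_xy[None, ...]  # shape (3, 4, 4)
```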
<h2> Spatial filter</h2>
In order to calculate the spatial filter we need two things: first, how wide our filters are going to be, and then the resolution level. So lx and ly
will stand for the size of the receptive filters (in degrees) in the x and y directions respectively. The same goes for dx and dy with respect to the resolution.
With this in our hands we can proceed to create a two-dimensional structure for our functions with the help of the meshgrid function: <br/>
<br/>
<code>
## Spatial parameters <br/>
x = np.arange(-lx/2, lx/2, dx) <br/>
y = np.arange(-ly/2, ly/2, dy) <br/>
<br/>
X, Y = np.meshgrid(x, y) # Create the 2D dimensional coordinates <br/>
</code>
<br/>
Now that we have the coordinates it is just a matter of calculating the function with the appropriate formula: <br/>
<br/>
<code>
R = np.sqrt(X**2 + Y**2) # Distance <br/>
center = (17.0 / sigma_center**2) * np.exp(-(R / sigma_center)**2) <br/>
surround = (16.0 / sigma_surround**2) * np.exp(-(R / sigma_surround)**2) <br/>
Z = surround - center <br/>
</code>
<br/>
With the formula in our hands we can plot it as a contour map to have an idea of how it looks in space: <br/>
<code>
# Plot contour map <br/>
plt.contourf(X, Y, Z, 50, alpha=0.75, cmap=plt.cm.hot)<br/>
plt.colorbar()<br/>
<br/>
C = plt.contour(X, Y, Z, 10, colors='black', linewidth=.5)<br/>
plt.clabel(C, inline=10, fontsize=10)<br/>
plt.show()<br/>
</code>
<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ZIgAesaGZew/U6GmfFbsSDI/AAAAAAAABUc/9GEQFT_519U/s1600/center_surround.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ZIgAesaGZew/U6GmfFbsSDI/AAAAAAAABUc/9GEQFT_519U/s400/center_surround.png" /></a></div>
<br/>
We can also show a side view at y=0 in order to gain further insight into the structure of the filter: <br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-MBh-uny-ygU/U6GmhBiD1eI/AAAAAAAABUk/gnwzql2Squg/s1600/side_view.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-MBh-uny-ygU/U6GmhBiD1eI/AAAAAAAABUk/gnwzql2Squg/s400/side_view.png" /></a></div>
<br/>
<h2> Temporal filter </h2>
The temporal behaviour of these filters is usually called a biphasic response. This consists of a response that initially rises above the mean level,
then dips below the mean level, and finally returns to it within a time window of approximately 200 ms.
The particular function and parameters that we are going to use to illustrate this behaviour were taken from the paper by Cai et al. (reference below).
But first, we have to define our kernel size and resolution, as in the code down here: <br/>
<br/>
<code>
# First the kernel size and resolution <br/>
kernel_size = 200<br/>
dt_kernel = 1.0<br/>
t = np.arange(0, kernel_size * dt_kernel, dt_kernel) # Time vector <br/>
</code>
<br/>
Now, we construct the function in the following way: <br/>
<code>
## Temporal parameters <br/>
K1 = 1.05 <br/>
K2 = 0.7<br/>
c1 = 0.14<br/>
c2 = 0.12<br/>
n1 = 7.0<br/>
n2 = 8.0<br/>
t1 = -6.0<br/>
t2 = -6.0<br/>
td = 6.0<br/>
<br/>
p1 = K1 * ((c1*(t - t1))**n1 * np.exp(-c1*(t - t1))) / ((n1**n1) * np.exp(-n1)) <br/>
p2 = K2 * ((c2*(t - t2))**n2 * np.exp(-c2*(t - t2))) / ((n2**n2) * np.exp(-n2)) <br/>
p3 = p1 - p2 <br/>
</code>
<br/>
We plot the function to have an idea of how it looks now: <br/>
<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-O6UR_36liyQ/U6GmiOPPk4I/AAAAAAAABUs/I96ym1nbvEU/s1600/time_filter.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-O6UR_36liyQ/U6GmiOPPk4I/AAAAAAAABUs/I96ym1nbvEU/s400/time_filter.png" /></a></div>
<br/>
<h2> Spatio-Temporal Filter </h2>
Finally, in order to build the kernel, given that our filter is separable, we just have to multiply the temporal part by the spatial part at each point in space:
<br/>
<pre>
# Initialize and fill the spatio-temporal kernel
kernel = np.zeros((kernel_size, int(lx/dx), int(ly/dy)))
for k, p3_component in enumerate(p3):
    kernel[k, ...] = p3_component * Z
</pre>
<br/>
Which we can show now in the following plot:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-TQfeujG3kc8/U6Gmi5-R2qI/AAAAAAAABU0/hjkUvy2cCBU/s1600/kernel.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-TQfeujG3kc8/U6Gmi5-R2qI/AAAAAAAABU0/hjkUvy2cCBU/s640/kernel.png" /></a></div>
<h1> References </h1>
<ul>
<li> Cai, Daqing, Gregory C. Deangelis, and Ralph D. Freeman. "Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens." Journal of Neurophysiology 78.2 (1997): 1045-1061. </li>
</ul>
</body>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-40033273713383081442014-06-05T13:23:00.000+02:002014-06-10T10:48:16.888+02:00GSoC Open Source Brain: How to Create Connections, Two Neurons Example <body>
<h1> Two Neurons </h1>
<p> In this example we are going to define one of the simplest networks possible:
one with three elements. This will allow us to introduce a couple of fundamental
concepts in <a href="http://neuralensemble.org/PyNN/">PyNN</a>. </p>
<p> As in our past example we start by importing PyNN and the necessary libraries.
Also we declare some initialization variables and we start the simulator: </p>
<code>
import pyNN.nest as simulator <br/>
import pyNN.nest.standardmodels.electrodes as elect <br/>
import matplotlib.pyplot as plt <br/>
import numpy as np <br/>
<br/>
N = 3 # Number of neurons <br/>
t = 150.0 # Simulation time <br/>
<br/>
# Has to be called at the beginning of the simulation <br/>
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)
</code>
<p> Now we have to declare our cell model. But before doing so, let's check which of its parameters
we can play with. We can consult the available parameters of each model class
by inspecting <i>default_parameters</i>. In this example we are interested in the integrate-and-fire model
with current-based synapses. We can run the following code to see the available parameters:</p>
<code>
simulator.IF_curr_exp.default_parameters
</code>
<p>This returns the list of available parameters. We set them and then declare
our model with the following instructions: </p>
<code>
# Neuron Model's parameters <br/>
i_offset = 0 <br/>
R = 20 <br/>
tau_m = 20.0 <br/>
tau_refractory = 50 <br/>
v_thresh = 0 <br/>
v_rest = -60 <br/>
tau_syn_E = 5.0 <br/>
tau_syn_I = 5.0 <br/>
cm = tau_m / R <br/>
<br/>
# Declare our cell model <br/>
model = simulator.IF_curr_exp(cm=cm, i_offset=i_offset, tau_m=tau_m, tau_refrac=tau_refractory, tau_syn_E=tau_syn_E, tau_syn_I=tau_syn_I, v_reset=v_rest, v_thresh=v_thresh) <br/>
<br/>
# Declare a population <br/>
neurons = simulator.Population(N, model) <br/>
</code>
<p> In order to modify a specific subset of a given population in PyNN we use the concept of a <strong>View</strong>.
Views allow us to use Python array notation to access a given subset of a population and modify it for our purposes.
In this particular case we are going to select two sub-populations (neurons 1 and 2), modify their parameters,
and create a connection to them from the remaining neuron. In order to modify the parameters of
a given population we use the method <i>set_parameters</i> that each population possesses: </p>
<code>
# Create views <br/>
neuron1 = neurons[[0]] <br/>
neuron2 = neurons[1, 2] <br/>
<br/>
# Modify second neuron <br/>
tau_m2 = 10.0 <br/>
cm2 = tau_m2 / R <br/>
neurons[1].set_parameters(cm=cm2, tau_m=tau_m2) <br/>
<br/>
tau_m3 = 5.0 <br/>
cm3 = tau_m3 / R <br/>
neurons[2].set_parameters(cm=cm3, tau_m=tau_m3) <br/>
</code>
<p> Now, in order to create <a href="http://neuralensemble.org/docs/PyNN/connections.html">connections</a> in
PyNN we need the concept of a projection. As stated in the <a href="http://neuralensemble.org/docs/PyNN/connections.html">
tutorial </a> of PyNN, a projection needs the following elements to be declared: </p>
<ul>
<li> The pre-synaptic population </li>
<li> The post-synaptic population </li>
<li> A connection algorithm </li>
<li> A synapse type </li>
</ul>
<p>The general form of the <i>Projection</i> method is given by</p>
<code>
Projection(presynaptic population, postsynaptic population, connection algorithm, synapse type)
</code>
<p> In our example the pre-synaptic population is going to be neuron 0, and the post-synaptic population
is going to be composed of neurons 1 and 2, as declared in the views above. As a connection algorithm we are going to
use the <i>AllToAllConnector</i> method, which connects every member of the pre-synaptic population to the post-synaptic
population. Finally, we can define a static synapse with the method <i>StaticSynapse</i>. It requires two
attributes: a weight that determines the strength of the connection, and a delay that determines how long after a spike
in the pre-synaptic neuron the post-synaptic neuron elicits a response. Our code below is: </p>
<code>
# Synapses <br/>
syn = simulator.StaticSynapse(weight=10, delay=0.5) <br/>
# Projections <br/>
connections = simulator.Projection(neuron1, neuron2, simulator.AllToAllConnector(), <br/>
syn, receptor_type='excitatory') <br/>
</code>
<p> Finally we set the current for <i>neuron1</i> and the recorder, as in the previous example: </p>
<code>
# DC source <br/>
current = elect.DCSource(amplitude=3.5, start=20.0, stop=100.0) <br/>
current.inject_into(neuron1) <br/>
#neurons.inject(current) <br/>
<br/>
# Record the voltage <br/>
neurons.record('v') <br/>
<br/>
simulator.run(t) # Run the simulations for t ms <br/>
simulator.end() <br/>
</code>
<p> Lastly, we extract the data and plot the results: </p>
<code>
# Extracts the data <br/>
data = neurons.get_data() # Creates a Neo Block <br/>
segment = data.segments[0] # Takes the first segment <br/>
vm = segment.analogsignalarrays[0] # Takes the analog signal array <br/>
<br/>
# Extract the data for neuron 1 <br/>
vm1 = vm[:, 0] <br/>
vm2 = vm[:, 1] <br/>
vm3 = vm[:, 2] <br/>
<br/>
# Plot the data <br/>
plt.plot(vm.times, vm1, label='pre-neuron') <br/>
plt.plot(vm.times, vm2, label='post-neuron 1') <br/>
plt.plot(vm.times, vm3, label= 'post-neuron 2') <br/>
<br/>
plt.xlabel('time') <br/>
plt.ylabel('Voltage') <br/>
plt.legend() <br/>
<br/>
plt.show()
</code>
<p> A particular example of the simulation running is shown next. Playing with the parameters in
this example can provide a clear idea of how the model and synapse parameters work: </p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-cZhM73tT02c/U5BSykOIDYI/AAAAAAAABTE/2FSJ_EoSQr8/s1600/two_neurons.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-cZhM73tT02c/U5BSykOIDYI/AAAAAAAABTE/2FSJ_EoSQr8/s400/two_neurons.png" /></a></div>
</body>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-78853259785084521302014-06-05T13:16:00.000+02:002014-06-05T13:29:28.406+02:00GSoC Open Source Brain: First Example, Integrate and Fire neuron <body>
<h1> First Example </h1>
<p> In this entry I am going to describe a very basic example with <a href="http://neuralensemble.org/PyNN/">PyNN</a>. Our aim is to build a system with a simple Integrate and Fire neuron under the influence of a direct current. The first thing that we need to do is to import the simulator that we are going to use with the instruction: </p>
<code>
import pyNN.nest as simulator
</code>
<p> Note that this could also be <a href="http://www.neuron.yale.edu/neuron/">Neuron</a> or <a href="http://briansimulator.org">Brian</a> instead of <a href="http://www.nest-initiative.org/Software:About_NEST">Nest</a>. <br/> </p>
<p> Now we need to define our model, but before that we need to do some pre-settings. So we first set the number of neurons that our simulation is going to run, and also the total time it will take. </p>
<code >
N = 1 # Number of neurons <br/>
t = 100.0 #Simulation time <br/>
<br/>
# Has to be called at the beginning of the simulation <br/>
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)
</code>
<p> Now we can define a <a href="http://neuralensemble.org/docs/PyNN/standardmodels.html"> neuron model </a> for our neuron. For this example we are going to choose a leaky integrate-and-fire neuron with exponentially decaying post-synaptic current. </p>
<code>
model = simulator.IF_curr_exp()
<br/>
neurons = simulator.Population(N, model)
</code>
<p> Note that once we have defined our model and our number of neurons, we can declare a population with them. </p>
<p> As a next step we define the current and inject it into the neurons </p>
<code>
# DC source <br/>
current = simulator.DCSource(amplitude=0.5, start=20.0, stop=80.0) <br/>
#current = elect.DCSource(amplitude=0.5, start=20.0, stop=80.0) <br/>
current.inject_into(neurons) <br/>
#neurons.inject(current) <br/>
</code>
<p> And finally we can indicate our simulator to run with the instruction </p>
<code>
simulator.run(t) # Run the simulations for t ms
</code>
<p>With this we have already simulated our neuron. However, as things stand right now, we are unable to visualize the time course of the voltage in our system. In order to extract the membrane potential from our data we have to declare a recorder for the voltage before running the simulation. </p>
<code>
neurons.record('v') # Record voltage <br/>
simulator.run(t) # Run the simulations for t ms <br/>
</code>
<p> Now, in order to extract our simulation results from the recorder, we use:</p>
<code>
data = neurons.get_data()
</code>
<p>This returns a <a href="http://neo.readthedocs.org/en/latest/core.html">Neo</a> Block. A Neo Block is a container of segments, which in turn contain the data recorded in a given experiment. In order to extract our data from the block above we use the following code:</p>
<code>
data = neurons.get_data() # Creates a Neo Block <br/>
segment = data.segments[0] # Takes the first segment <br/>
vm = segment.analogsignalarrays[0] # Takes the analog signal array <br/>
</code>
<p>Finally in order to plot our data: </p>
<code>
import matplotlib.pyplot as plt <br/>
plt.plot(vm.times, vm) <br/>
plt.xlabel('time') <br/>
plt.ylabel('Vm') <br/>
plt.show() <br/>
</code>
<p> Which produces the next plot: </p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-wAQsXEtusG8/U5BRTNpo-eI/AAAAAAAABS4/YQjDN54zVRI/s1600/first_plot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-wAQsXEtusG8/U5BRTNpo-eI/AAAAAAAABS4/YQjDN54zVRI/s400/first_plot.png" /></a></div>
</body>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-73903413139568172.post-7254682927118239502014-06-05T13:10:00.000+02:002014-06-05T13:34:24.551+02:00GSoC Open Source Brain: Google Summer of Code 2014 Presentation <body>
<h1> Presentation </h1>
<h2> Introduction </h2>
<p> My name is Ramon Heberto Martinez and I am a student in the Erasmus Mundus Master in Complex Systems Science. I will use the coming series of entries in this blog to describe and report my progress on my project for <a href="https://www.google-melange.com/gsoc/homepage/google/gsoc2014">Google Summer of Code 2014</a> (GSoC 2014). I have been lucky enough to have my proposal, entitled <a href="http://incf.org/gsoc/2014/open-source-cross-simulator-large-scale-cortical-models">Open source, cross simulator, large scale cortical models</a>, accepted by the <a href="http://incf.org/">International Neuroinformatics Coordinating Facility</a> (INCF). This project will be co-mentored by Andrew Davison at the INCF's French branch and Padraig Gleeson from the INCF branch in the UK. </p>
<h2 style="background-color:#ffffff;">The Project</h2>
<p> As we advance the study of the brain, we require more powerful tools to study it. In particular, more powerful computational tools have become available, as well as more elaborate simulation environments. An example of these efforts is the <a href="http://www.opensourcebrain.org/">Open Source Brain</a> (OSB) project, which provides a space where computational models can be built collaboratively and shared with open standards such as <a href="http://neuralensemble.org/PyNN/">PyNN</a> [Davison et al., 2008] and <a href="http://www.neuroml.org/">NeuroML</a> [Gleeson et al., 2010]. </p>
<p> On the other hand, there is a lack of well-tested open models that can serve as benchmarks to test and reliably compare the capabilities of the different environments. It is the spirit of this project to try to reduce the lack of such models. In particular, this project will consist of developing models of the visual system, which, at the date that I write, is an area not yet well covered by the OSB project. </p>
<p>Specifically, the work will consist of developing the code for the models in the papers below in PyNN and releasing them as free code on the Open Source Brain platform. <br/>
Papers:
<ul>
<li> Different roles for simple-cell and complex-cell inhibition in V1 [Lauritzen and Miller, 2003]. </li>
<li> Inhibitory stabilization of the cortical network underlies visual surround suppression [Ozeki et al., 2009] </li>
<li> Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual
cortex [Sadagopan and Ferster, 2012] </li>
</ul>
The common theme of these papers is a thorough characterization of the response properties of the visual system (V1), and in particular of the orientation invariance property.
</p>
<h2> References and Links </h2>
<h3> References </h3>
<ul>
<li>Andrew P Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics, 2, 2008. </li>
<li> Padraig Gleeson, Sharon Crook, Robert C Cannon, Michael L Hines, Guy O Billings, Matteo Farinella, Thomas M Morse, Andrew P Davison, Subhasis Ray, Upinder S Bhalla, et al. NeuroML: a language for describing data-driven models of neurons and networks with a high degree of biological detail. PLoS Computational Biology, 6(6):e1000815, 2010. </li>
<li> Thomas Z Lauritzen and Kenneth D Miller. Different roles for simple-cell and complex-cell inhibition in V1. The Journal of Neuroscience, 23(32):10201–10213, 2003. </li>
<li> Hirofumi Ozeki, Ian M Finn, Evan S Schaffer, Kenneth D Miller, and David Ferster. Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62(4):578–592, 2009. </li>
<li> Roger D Peng. Reproducible research in computational science. Science, 334(6060):1226, 2011. </li>
<li> Srivatsun Sadagopan and David Ferster. Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron, 74(5):911–923, 2012. </li>
</ul>
<h3>Links</h3>
<a href="http://incf.org/gsoc/2014/open-source-cross-simulator-large-scale-cortical-models">Project Page at INCF </a> <br/>
<a href="http://neuralensemble.org/PyNN/">PyNN</a> <br/>
<a href="http://www.neuroml.org/">NeuroML</a> <br/>
<a href="http://www.opensourcebrain.org/">Open Source Brain Project</a> <br/>
<a href="http://neuralensemble.org/">Neural Ensemble</a> <br/>
<a href="http://incf.org/">International Neuroinformatics Coordinating Facility</a> <br/>
<a href="https://www.google-melange.com/gsoc/homepage/google/gsoc2014">Google Summer of Code 2014</a>
</body>
Unknownnoreply@blogger.com0