Wednesday, January 22, 2014

Tools for cortical modelling from the Bednar Lab

Jim Bednar writes:

"We are pleased to announce the availability of a comprehensive new example of modelling topographic maps in the visual cortex, suitable as a ready-to-run starting point for future research. This example consists of:

1. A new J. Neuroscience paper (Stevens et al. 2013a) describing the GCAL model and showing that it is stable, robust, and adaptive, developing orientation maps like those observed in ferret V1.

2. A new open-source Python software package, Lancet, for launching simulations and collating the results into publishable figures.

3. A new Frontiers in Neuroinformatics paper (Stevens et al. 2013b) describing a lightweight and practical workflow for doing reproducible research using Lancet and IPython.

4. An IPython notebook showing the precise steps necessary to run the 842 simulations required to reproduce the complete set of figures and text of Stevens et al. (2013a), using the Topographica simulator.

5. A family of Python packages that were once part of Topographica but are now usable by a broader audience. These packages include 'param' for specifying parameters declaratively, 'imagen' for defining 0D, 1D, and 2D distributions (such as visual stimuli), and 'featuremapper' for analyzing the activity of neural populations (e.g. to estimate receptive fields, feature maps, or tuning curves).

The resulting recipe for building mechanistic models of cortical map development should be an excellent way for new researchers to start doing work in this area.

Jean-Luc R. Stevens, Judith S. Law, Jan Antolik, Philipp Rudiger, Chris Ball, and James A. Bednar

Computational Systems Neuroscience Group
The University of Edinburgh


_______________________________________________________________________________


1. STEVENS et al. 2013a: GCAL model

Our recent paper:

Jean-Luc R. Stevens, Judith S. Law, Jan Antolik, and James A. Bednar. Mechanisms for stable, robust, and adaptive development of orientation maps in the primary visual cortex. Journal of Neuroscience, 33:15747-15766, 2013. http://dx.doi.org/10.1523/JNEUROSCI.1037-13.2013

shows how the GCAL model was designed to replace previous models of V1 development that were unstable and not robust. The model in this paper accurately reproduces the process of orientation map development in ferrets, as illustrated in this animation comparing GCAL, a simpler model, and chronic optical imaging data from ferrets.


2. LANCET

Lancet is a lightweight Python package that offers a set of flexible components to allow researchers to declare their intentions succinctly and reproducibly. Lancet makes it easy to specify a parameter space,
run jobs, and collate the output from an external simulator or analysis tool. The approach is fully general, to allow the researcher to switch between different software tools and platforms as necessary.
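The declarative approach can be sketched with the standard library alone: declare the parameter space once, then derive every run from that single specification. This is an illustrative stand-in for the idea, not Lancet's actual API; the function and parameter names below are invented for the example.

```python
from itertools import product

def parameter_space(**specs):
    """Cartesian product of named parameter lists, yielding one
    dict of arguments per simulation run."""
    names = list(specs)
    for values in product(*(specs[n] for n in names)):
        yield dict(zip(names, values))

# Declare the space once; every run is derived from this single spec.
runs = list(parameter_space(learning_rate=[0.05, 0.1],
                            contrast=[10, 50, 100]))
# 2 x 3 = 6 parameter combinations, each a dict of arguments
```

Because the specification is a single declarative object, the same description can drive job launching, output collation, and figure generation without being repeated in each script.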


3. STEVENS et al. 2013b: LANCET/IPYTHON workflow

Jean-Luc R. Stevens, Marco I. Elver, and James A. Bednar.  An Automated and Reproducible Workflow for Running and Analyzing  Neural Simulations Using Lancet and IPython Notebook. Frontiers in Neuroinformatics, in press, 2013. http://www.frontiersin.org/Journal/10.3389/fninf.2013.00044/abstract

Lancet is designed to integrate well into an exploratory workflow within the Notebook environment offered by the IPython project. In an IPython notebook, you can generate data, carry out analyses, and plot the results interactively, with a complete record of all the code used.
Together with Lancet, it becomes practical to automate every step needed to generate a publication within IPython Notebook, concisely and reproducibly.  This new paper describes the reproducible workflow and shows how to use it in your own projects.


4. NOTEBOOKS for Stevens et al. 2013a

As an extended example of how to use Lancet with IPython to do reproducible research, the complete recipe for reproducing Stevens et al. 2013a is available in models/stevens.jn13 of Topographica's GitHub repository. The first of two notebooks defines the model, alternating between code specification, a textual description of the key model properties with figures, and interactive visualization of the model's initial weights and training stimuli. The second notebook can be run to quickly generate the last three published figures (at half resolution), but it can also launch all 842 high-quality simulations needed to reproduce all the published figures in the paper. Static copies of these notebooks, along with instructions for downloading runnable versions, can be viewed here:

  http://topographica.org/_static/gcal.html
  http://topographica.org/_static/stevens_jn13.html

5. PARAM, IMAGEN, and FEATUREMAPPER

The Topographica simulator has been refactored into several fully independent Python projects available on GitHub (http://ioam.github.io). These projects are intended to be useful to a wide audience of both computational and experimental neuroscientists:

  param: The parameters offered by param allow scientific Python programs to be written declaratively, with type and range checking, optional documentation strings, dynamically generated values, default values, and many other features.

  imagen: Imagen offers a set of 0D, 1D, and 2D pattern distributions. These patterns may be procedurally generated or loaded from files. They can be used to generate simple scalar values, such as values drawn from a specific random distribution, or to generate complex, resolution-independent composite image pattern distributions typically used as visual stimuli (e.g. Gabor and Gaussian patches or masked sinusoidal gratings).


  featuremapper: Featuremapper allows the response properties of a neural population to be measured from any simulator or experimental setup that can give estimates of the neural activity values in response to an input pattern. Featuremapper may be used to measure preference and selectivity maps for various stimulus features (e.g. orientation and direction of visual stimuli, or frequency for auditory stimuli), to compute tuning curves for these features, or to measure receptive fields, regardless of the underlying implementation of the model or experimental setup."
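As a rough illustration of the kind of measurement featuremapper automates, an orientation preference can be estimated from responses to a set of oriented stimuli by vector averaging on the doubled angle. This is a minimal numpy sketch of the underlying computation, not featuremapper's actual API.

```python
import numpy as np

def orientation_preference(angles, responses):
    """Estimate the preferred orientation (radians, in [0, pi)) from
    responses to oriented stimuli, by vector-averaging on the doubled
    angle (orientation is periodic with period pi, not 2*pi)."""
    z = np.sum(np.asarray(responses) * np.exp(2j * np.asarray(angles)))
    return (np.angle(z) / 2) % np.pi

# Synthetic tuning curve peaked at 45 degrees:
angles = np.linspace(0, np.pi, 8, endpoint=False)
responses = np.exp(np.cos(2 * (angles - np.pi / 4)))
pref = orientation_preference(angles, responses)
# pref is (numerically) pi/4, i.e. 45 degrees
```

Repeating this estimate for every unit in a population yields an orientation preference map; the magnitude of the same vector average gives a selectivity measure.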

Tuesday, December 10, 2013

BrainScaleS CodeJam #6

Registration is now open for the 6th BrainScaleS CodeJam workshop, which will take place 27th - 29th January 2014 in Jülich, Germany.

The CodeJam workshops are dedicated to bringing together scientists, graduate students, and scientific programmers to share ideas, present their work, and write code together. Mornings are dedicated to invited and contributed talks, leaving the afternoons free for discussions, tutorials and code sprints. These workshops have been hugely effective in catalyzing open-source neuroscience software development.

The 6th CodeJam has a focus on high-performance computing. We invite contributions on any topic related to software in neuroscience, but especially on topics related to the main theme. If you have ideas for organising code sprints, whether a feature that you would like to see added to an existing tool or an idea for new software, please also let us know.

Please visit the event website http://www.fz-juelich.de/ias/jsc/events/codejam to learn more and register. Places are limited, so early registration is recommended.

The local organisation is carried out by the Simulation Lab Neuroscience (SLNS), headed by Abigail Morrison. Originally funded by the FACETS project, the workshops enjoy ongoing funding and support from BrainScaleS, INCF, the Helmholtz Association, the Jülich-Aachen Research Alliance and the Bernstein Network.

Tuesday, November 19, 2013

PyNN 0.8 beta 1 released

We're very happy to announce the first beta release of PyNN 0.8.


For PyNN 0.8 we have taken the opportunity to make significant, backward-incompatible
changes to the API. The aim was fourfold:

  •   to simplify the API, making it more consistent and easier to remember;
  •   to make the API more powerful, so more complex models can be expressed with less code;
  •   to allow a number of internal simplifications so it is easier for new developers to contribute;
  •   to prepare for planned future extensions, notably support for multi-compartmental models.


For a list of the main changes between PyNN 0.7 and 0.8, see the release notes for the 0.8 alpha 1 release.

For the changes in this beta release see the release notes.

The biggest change with this beta release is that we now think the PyNN 0.8 development branch is stable enough to do science with. If you have an existing project using an earlier version of PyNN, you might not want to update, but if you're starting a new project, we recommend using this beta release.

The source package is available from the INCF Software Center


What is PyNN?

PyNN (pronounced 'pine') is a simulator-independent language for building neuronal network models.

In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, and Brian).

Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models.
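The write-once idea can be illustrated with a toy stand-in: the model-building code depends only on the interface the backend module exposes, so swapping backends requires no changes to the model script. The mini-interface below is a hypothetical mock for illustration only; see the PyNN documentation for the real API.

```python
class MockBackend:
    """A stand-in for a simulator backend module, exposing the
    minimal interface a model script would rely on."""
    def __init__(self, name):
        self.name = name
        self.populations = []

    def Population(self, size, celltype):
        # Record what the model script asked for.
        self.populations.append((size, celltype))
        return (size, celltype)

def build_network(sim):
    """Model description written once against the backend interface."""
    excitatory = sim.Population(80, "IF_cond_exp")
    inhibitory = sim.Population(20, "IF_cond_exp")
    return excitatory, inhibitory

# The same model function runs unchanged on any conforming backend:
for backend in (MockBackend("neuron"), MockBackend("nest")):
    build_network(backend)
    # each backend now holds the same two populations
```

In real PyNN the backend is chosen by a single import (e.g. of a NEURON, NEST, or Brian module), and the rest of the script stays identical.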


The code is released under the CeCILL licence (GPL-compatible).

Wednesday, October 23, 2013

Mozaik - an integrated workflow framework for large scale neural simulations (0.1 release)

Mozaik is intended to improve the efficiency of computational neuroscience projects by relieving users of the need to write boilerplate code for simulations involving complex heterogeneous neural network models, complex stimulation and experimental protocols, and subsequent analysis and plotting.

Mozaik integrates the model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant
meta-data are available to all workflow components. It is based on several widely used tools, including PyNN, Neo and Matplotlib. It offers a declarative way of specifying models and recording configurations, using hierarchically organized configuration files.

To install the stable 0.1 version run:

pip install mozaik

The code repository with the latest developmental version is at  https://github.com/antolikjan/mozaik

The Mozaik homepage, with full documentation, is http://neuralensemble.org/mozaik/

Friday, September 6, 2013

Spyke Viewer 0.4.0 released

We are pleased to announce the release of spykeutils and Spyke Viewer 0.4.0, available on PyPI, on NeuroDebian, and as binary versions.

Spyke Viewer is a multi-platform GUI application for navigating, analyzing and visualizing data from electrophysiological experiments or neural simulations.  It is based on the Neo library, which enables it to load a wide variety of data formats used in electrophysiology. At its core, Spyke Viewer includes functionality for navigating Neo object hierarchies and performing operations on them. spykeutils is a Python library containing analysis functions and plots for Neo objects.

A central design goal of Spyke Viewer is flexibility. For this purpose, it includes an embedded Python console for exploratory analysis, a filtering system, and a plugin system. Filters are used to semantically define data subsets of interest. Spyke Viewer comes with a variety of plugins implementing common neuroscientific plots (e.g. rasterplot, peristimulus time histogram, correlogram, and signal plot). Custom plugins for other analyses or plots can be easily created and modified using the integrated Python editor or external editors. Documentation and installation instructions are at http://spyke-viewer.readthedocs.org/en/0.4.0/index.html
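For instance, the peristimulus time histogram that one of the bundled plugins plots can be computed from spike times and stimulus onsets roughly as follows. This is a minimal numpy sketch of the computation, not spykeutils' actual API; the function and parameter names are invented for the example.

```python
import numpy as np

def psth(spike_times, onsets, window=(-0.1, 0.5), bin_width=0.05):
    """Peristimulus time histogram: spike counts relative to each
    stimulus onset, averaged over trials, in spikes per second."""
    n_bins = int(round((window[1] - window[0]) / bin_width))
    edges = np.linspace(window[0], window[1], n_bins + 1)
    counts = np.zeros(n_bins)
    for t0 in onsets:
        rel = np.asarray(spike_times) - t0   # spike times relative to onset
        counts += np.histogram(rel, bins=edges)[0]
    return edges, counts / (len(onsets) * bin_width)

# Two trials, one spike 0.12 s after each onset:
edges, rate = psth([1.12, 2.12], onsets=[1.0, 2.0])
# the bin covering +0.12 s has rate 20 spikes/s (2 spikes / 2 trials / 0.05 s)
```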

In addition, the Spyke Repository is now online. Find Spyke Viewer extensions (e.g. plugins, startup script snippets, IO plugins etc.) or share your own at http://spyke-viewer.g-node.org. Among the extensions currently hosted at the site are plugins for spike detection and spike sorting.

Some highlights from the changelogs since 0.2.0 (available for spykeutils and Spyke Viewer):
  • spykeutils & Spyke Viewer: Support for lazy loading features in current Neo development version 
  • spykeutils & Spyke Viewer: New features for spike waveform and correlogram plots 
  • Spyke Viewer: Splash screen while loading the application 
  • Spyke Viewer: Better support for starting plugins remotely: Progress bar and console output available 
  • Spyke Viewer: Additional data export formats 
  • Spyke Viewer: A startup script and an API enable more configuration and customization 
  • Spyke Viewer: Forcing a specific IO class is now possible. Graphical options for IOs with parameters 
  • Spyke Viewer: Filters are automatically deactivated on loading a selection if they would prevent parts of it from being shown
  • Spyke Viewer: Modified plugins are saved automatically before starting the plugin 
  • Spyke Viewer: Plugin configurations are now restored when saving or refreshing plugins and when restarting the program 
  • Spyke Viewer: Added context menu for navigation. Includes entries for removing objects and an annotation editor. 
  • Spyke Viewer: The editor now has search and replace functionality (accessed with Ctrl+F and Ctrl+H)
  • spykeutils: Fast and well tested implementations for many spike train metrics (contributed by Jan Gosmann) 

The new version can be installed over older versions and will keep previous configuration, filters and plugins.

Sunday, March 10, 2013

Call for contributions: Python in Neuroscience II

CALL FOR CONTRIBUTIONS

Research Topic: "Python in Neuroscience II"
Journal:
co-hosted by Frontiers in Neuroinformatics
and Frontiers in Brain Imaging Methods

URL:
http://www.frontiersin.org/Neuroinformatics/researchtopics/Python_in_Neuroscience_II/1591

Editors:
Andrew P. Davison, CNRS, France
Markus Diesmann, Research Center Juelich, Germany
Marc-Oliver Gewaltig, Ecole Polytechnique Federale de Lausanne, Switzerland
Satrajit S. Ghosh, Massachusetts Institute of Technology, USA
Fernando Perez, University of California at Berkeley, USA
Eilif B. Muller, Blue Brain Project, EPFL, Switzerland
James A. Bednar, The University of Edinburgh, United Kingdom
Bertrand Thirion, Institut National de Recherche en informatique et automatique, France
Yaroslav O. Halchenko, Dartmouth College, USA

Important Dates:

Abstract/outline submission deadline: April 7th, 2013.
Invitations for full paper submissions sent by April 21st, 2013.
Invited full paper submission deadline: July 15, 2013.

 
Research Topic Abstract

Frontiers in Neuroinformatics hosted the research topic “Python in Neuroscience” in 2008-2009, documenting the first wave of mature
tools to propel Python into common use in the field.  This widespread
convergence on Python as the systems integration language of choice in
neuroscience has brought with it exciting new possibilities for
cross-fertilization, collaboration, and interdisciplinary interaction. 

The Python ecosystem remains vibrant and inventive, and continues to
produce cutting edge tools for neuroscience research.  With this second
research topic on “Python in Neuroscience” we seek to showcase the most
exciting developments since 2009 that include, but are not limited to,
the following themes:

- interactive simulation and visualization
- workflows and automation
- brain-machine interfaces
- advances in neuroimaging analysis methods
- sharing, re-use, storage and databasing of models and data
- data analysis libraries and frameworks
- brain atlasing, ontologies, semantic web
- model description and abstraction languages

We invite contributions that promote innovative use of Python for
scientific work from any branch of neuroscience.

This research topic is dedicated to the memory of Prof. Dr. Rolf
Koetter, visionary, colleague and friend.


Submission Procedure

Researchers and practitioners are invited to submit on or before April
7th, 2013 a max. 1 page abstract/outline of work related to the focus
of the research topic to python.in.neuroscience@gmail.com for
consideration for inclusion as an elaborated full article in the
research topic.

Please include a provisional title, a full author list, and format the
subject of your email as follows: "[python RT] outline - Your Name".

Authors will be notified whether their contribution has been accepted
by April 21st, 2013.


Full Article Information

* Full articles will be solicited based on the abstracts/outlines we
receive by April 7th, 2013.

* The deadline for submission of full articles will be July 15, 2013. 

* Manuscripts should be clearly different from user manuals or web
pages. Rather they should focus on the underlying concepts and
innovations in architecture, algorithms, data-structures, workflows,
etc.

* The research topic is co-hosted by Frontiers in Neuroinformatics and
Frontiers in Brain Imaging Methods.  Authors choose the journal which
is named when citing their article (Front. Neuroinformatics or Front.
Brain Imaging Methods) by submitting the full article to the
respective journal.  All research topic articles will be listed in
the research topic page appearing in both journals.

* Article formatting will be as for standard Frontiers "Original
Research", “Methods” or “Review” articles.  Guidelines and instructions
for their preparation can be found here.

* Frontiers in Neuroinformatics and Frontiers in Brain Imaging Methods
are open access journals, following a pay-for-publication model.
Research Topic articles enjoy a generous discount, thanks to the
support of the Frontiers Research Foundation.  Details of the
publication fees can be found here.

* Further details will be provided to authors of accepted abstracts by
April 21st, 2013.


Confirmed Submissions

The following authors have been contacted in the preparation phase of
the research topic and have confirmed they would submit a manuscript.
Article titles and author lists are preliminary.

Henrik Lindén*, Espen Hagen*, Szymon Leski, Eivind S. Norheim, Klas H.
Pettersen, Gaute T. Einevoll (*equal contribution), "LFPy: A tool for
simulation of extracellular potentials with biophysically detailed
model neurons".

VK Jirsa, AR McIntosh, et al., "Integrating neuroinformatics tools in
TheVirtualBrain".

Andrew P. Davison, Eilif Muller, Jochen M. Eppler and Mikael Djurfeldt,
"Multisimulations in PyNN: Integrating PyNN and MUSIC".

Robert Pröpper and Klaus Obermayer, "Spyke Viewer: a flexible and
extensible electrophysiological data analysis platform".

Michael Hull and David Willshaw, "morphforge: an object-model for
simulating small networks of biologically detailed neurons in python".

Alexandre Abraham, Fabian Pedregosa, Andreas Muller, Jean Kossaifi,
Alexandre Gramfort, Bertrand Thirion, Gaël Varoquaux, “Statistical
learning for Neuroimaging with scikit-learn”.

A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C.
Brodbeck, M. Hamalainen, “MNE-Python: MEG and EEG data analysis with
Python”.

Thomas Vincent, Solveig Badillo, Lotfi Chaari, Christine Bakhous,
Florence Forbes, Philippe Ciuciu, "Flexible multivariate hemodynamics fMRI
data analyses and simulations with PyHRF"

Wiecki, Thomas V. and Sofer, Imri and Frank, Michael J. "HDDM:
Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python”.

M. Djurfeldt, A.P. Davison, J.M. Eppler: "Modeling connectivity: Connection-set Algebra in NEST and PyNN"

Friday, February 22, 2013

Sumatra 0.5 released

We would like to announce the release of version 0.5.0 of Sumatra, a tool for automated tracking of simulations and computational analyses so as to be able to easily replicate them at a later date.

Interfaces to documentation systems

The big addition to Sumatra in this version is a set of tools to include figures and other results generated by Sumatra-tracked computations in documents, with links to full provenance information: i.e. the full details of the code, input data and computational environment used to generate the figure/result.

The following tools are available:

  • for reStructuredText/Sphinx: an “smtlink” role and “smtimage” directive.
  • for LaTeX, a “sumatra” package, which provides the “\smtincludegraphics” command.

see Reproducible publications: including and linking to provenance information in documents for more details.
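In a LaTeX document this might look like the following fragment. The option and file path shown are illustrative assumptions about how the command is used, not taken from the package's documentation; the record label is hypothetical.

```latex
\usepackage{sumatra}
% ...
% Include a Sumatra-tracked figure, with a link back to the
% provenance record that generated it:
\smtincludegraphics[width=0.8\textwidth]{20130215-153442/figure3.pdf}
```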

Other changes

Sumatra 0.5 development has mostly been devoted to polishing. There were a bunch of small improvements, with contributions from several new contributors. The Bitbucket pull request workflow seemed to work well for this. The main changes are:

  • working directory now captured (as a parameter of LaunchMode);
  • data differences are now based on content, not name, i.e. henceforth two files with identical content but different names (e.g. because the name contains a timestamp) will evaluate as being the same;
  • improved error messages when a required version control wrapper is not installed;
  • dependencies now capture the source from which the version was obtained (e.g. repository url);
  • YAML-format parameter files are now supported (thanks to Tristan Webb);
  • added "upstream" attribute to the Repository class, which may contain the URL of the repository from which your local repository was cloned;
  • added MirroredFileSystemDataStore, which supports the case where files exist both on the local filesystem and on some web server (e.g. DropBox);
  • the name/e-mail of the user who launched the computation is now captured (first trying ~/.smtrc, then the version control system);
  • there is now a choice of methods for auto-generating labels when they are not supplied by the user: timestamp-based (the default and previously the only option) and uuid-based. Use the "-g" option to smt configure;
  • you can also specify the timestamp format to use (thanks to Yoav Ram);
  • improved API reference documentation.

Bug fixes

A handful of bugs have been fixed.

Download, support and documentation

The easiest way to get the latest version of Sumatra is

  $ pip install sumatra

Alternatively, Sumatra 0.5.0 may be downloaded from PyPI or from the INCF Software Center. Support is available from the sumatra-users Google Group. Full documentation is available on pythonhosted.org.