
Tutorials & E-Library

Would you like to learn how to use the tools and services available on EBRAINS? Here, you can find a list of EBRAINS offerings and links to their tutorials.

User Documentation
Level: Advanced

Segmentation intro: Dealing with simulations that generate a lot of data

Many users have asked how to deal with simulations that generate a lot of data that must be saved. The simplest strategy for saving simulation results is to capture them to Vectors during the simulation, and write them to one or more files after the simulation is complete. This works as long as everything fits into available memory.
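The capture-then-write pattern can be sketched schematically. Note that the toy dynamics and plain Python lists below are stand-ins for a real simulator's time stepping and NEURON's Vectors; none of the names are actual NEURON API calls.

```python
# Schematic sketch of the "capture during the run, write afterwards"
# strategy. run_and_capture() uses placeholder dynamics; in a real
# NEURON simulation the lists would be Vectors recording state variables.

def run_and_capture(n_steps, dt=0.025):
    """Advance a toy simulation, appending each sample to in-memory buffers."""
    t_vec, v_vec = [], []          # stand-ins for recording Vectors
    t, v = 0.0, -65.0
    for _ in range(n_steps):
        t += dt
        v += 0.1                   # placeholder dynamics, not a real model
        t_vec.append(t)            # capture to memory during the run
        v_vec.append(v)
    return t_vec, v_vec

def write_results(path, t_vec, v_vec):
    """After the run completes, write everything out in one pass."""
    with open(path, "w") as f:
        for t, v in zip(t_vec, v_vec):
            f.write(f"{t:g}\t{v:g}\n")

t_vec, v_vec = run_and_capture(4)
write_results("results.txt", t_vec, v_vec)
```

This works well precisely because nothing is written until the end; the cost is that every captured sample must stay in memory for the whole run.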

If memory limits are exceeded, the first thing to do is go back and decide whether you really need to keep all of those data. For example, when simulating spiking network models, the only data that must be saved are the pairs of spike times and cell identifiers (gids). Given these, it is possible to go back after the end of a simulation and reconstruct the details of activity in a subnet, but that is another story. In this document we presume that you have already selected just those variables that are absolutely essential.

The answer to your problem is to break the simulation into shorter segments, each of which is brief enough to avoid memory overrun, and save the results from each segment before advancing to the next. After the last segment has been executed and its results saved, you can assemble the segmented data into a single record, if you like. Here are a couple of simple examples to help you get started.
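The segmented strategy described above might be sketched as follows, again with placeholder dynamics standing in for an actual simulation. The key points are that each segment's buffer is flushed to disk and discarded before the next segment starts, and that the per-segment files are assembled into one record at the end.

```python
# Schematic of segmented saving: run n_segments short segments, flushing
# the in-memory buffer to a per-segment file after each one, then
# concatenate the segment files into a single record.

import os

def run_segment(state, n_steps):
    """Advance the toy state n_steps, returning the samples captured."""
    buf = []
    for _ in range(n_steps):
        state["t"] += 0.025
        state["v"] += 0.1                      # placeholder dynamics
        buf.append((state["t"], state["v"]))
    return buf

def run_segmented(n_segments, steps_per_segment):
    state = {"t": 0.0, "v": -65.0}             # carried across segments
    paths = []
    for i in range(n_segments):
        # each segment is brief enough to fit comfortably in memory
        buf = run_segment(state, steps_per_segment)
        path = f"segment_{i:03d}.txt"
        with open(path, "w") as f:             # save, then discard the buffer
            for t, v in buf:
                f.write(f"{t:g}\t{v:g}\n")
        paths.append(path)
    # assemble the segments into a single record
    with open("combined.txt", "w") as out:
        for path in paths:
            with open(path) as f:
                out.write(f.read())
            os.remove(path)
    return "combined.txt"

out_path = run_segmented(3, 5)
```

Because the simulation state is carried from one segment to the next, the combined record is identical to what a single long run would have produced, but peak memory use is bounded by the size of one segment.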
User Documentation
Level: Advanced

ModelView: Compact display of parameters for NEURON models.

The availability of a large number of models in ModelDB (Migliore et al 2003) helps investigators test their intuition of model behavior and provides building blocks for future models applied to the interpretation of experimental findings. However, the legacy NEURON (Hines and Carnevale 2001) model code entered by publication authors was generally not developed with presentation as a high priority. The original code can be difficult to analyze, and it sometimes happens that variables are reset so that their values at run time differ from the initial values declared at the top of the code. ModelView overcomes these problems by providing a run-time-state preview of the properties of a model (anatomy and biophysical attributes). Having this information available for viewing in ModelDB lets investigators quickly develop a conceptual picture of the model structure and compare parameter differences between runs. It makes it possible to ask detailed questions about the model that would have been time-consuming to answer without ModelView.
Level: Beginner

Live Papers - a step-by-step walkthrough of the Live Paper Platform

The EBRAINS Live Papers are structured, interactive supplementary documents that complement journal publications, allowing users to readily access, explore, and utilize the various kinds of data underlying scientific studies. Interactivity is a prominent feature, with several integrated tools and services that let users download, visualize, or simulate the data, models, and results presented in the corresponding publications.

In this combined tutorial and demonstration, you will learn how to view published Live Papers and how to access and explore the resources they provide.
Video Tutorial
Level: Advanced

Object Classification

A tutorial on the ilastik object classification workflow. As the name suggests, the object classification workflow aims to classify full objects, based on object-level features and user annotations. An object in this context is a set of pixels that belong to the same instance. Object classification requires a second input besides the usual raw image data: an image that indicates, for each pixel, whether it belongs to an object or not, i.e. pixel predictions, a segmentation, or a label image. This can be obtained, for example, using the Pixel Classification workflow. The workflow exists in two variants to handle different types of second input.
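To make the notion of an "object" concrete, here is a minimal toy sketch (not ilastik's implementation) that derives instance ids from a binary segmentation mask by 4-connected component labeling, so each object becomes one set of pixels sharing an id:

```python
# Toy connected-component labeling: turn a binary segmentation mask
# into an instance label image (0 = background, 1..n = objects).
# This only illustrates what "object" means; ilastik's own machinery
# is far more capable.

from collections import deque

def label_objects(mask):
    """Label 4-connected foreground components in a binary mask."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and labels[r][c] == 0:
                next_id += 1                      # start a new object
                q = deque([(r, c)])
                labels[r][c] = next_id
                while q:                          # flood-fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_id
                            q.append((ny, nx))
    return labels, next_id

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
labels, n_objects = label_objects(mask)
```

Once pixels are grouped into instances like this, object-level features (size, shape, mean intensity, and so on) can be computed per id and fed to a classifier.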
Video Tutorial
Level: Advanced

Pixel Classification workflow

The Pixel Classification workflow assigns labels to pixels based on pixel features and user annotations. The workflow offers a choice of generic pixel features, such as smoothed pixel intensity, edge filters, and texture descriptors. Once the features are selected, a Random Forest classifier is trained interactively from user annotations. Thanks to the Random Forest's excellent generalization properties, the overall workflow is applicable to a wide range of segmentation problems. Note that this workflow performs semantic, rather than instance, segmentation: it returns a probability map for each class, not individual objects.
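The distinction between a per-class probability map and individual objects can be illustrated with a toy sketch. The fixed logistic rule below merely stands in for a trained Random Forest and is in no way ilastik's implementation:

```python
# Toy semantic pixel classification: each pixel gets a probability per
# class, and taking the argmax yields a class label image. Two separate
# bright regions end up with the SAME class label, which is why a
# further (object-level) step is needed to obtain individual instances.

import math

def class_probs(score):
    """Toy two-class probability from a single score (logistic rule)."""
    p_fg = 1.0 / (1.0 + math.exp(-score))
    return (1.0 - p_fg, p_fg)          # (background, foreground)

def pixel_classify(image, threshold=0.5):
    """Return a per-pixel probability map and its argmax label image.

    A real workflow trains a Random Forest on user annotations; here a
    fixed intensity rule stands in for the trained classifier."""
    probs = [[class_probs(4.0 * (px - threshold)) for px in row]
             for row in image]
    labels = [[1 if p[1] > p[0] else 0 for p in row] for row in probs]
    return probs, labels

image = [
    [0.9, 0.9, 0.1, 0.8],
    [0.1, 0.1, 0.1, 0.8],
]
probs, labels = pixel_classify(image)
# The left blob and the right blob both receive class 1: semantic
# segmentation says "foreground", not "object A" and "object B".
```

Feeding such a probability map (or a thresholded segmentation derived from it) into an object-level step is exactly the hand-off from this workflow to object classification described above.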

Create an account

EBRAINS is open and free. Sign up now for complete access to our tools and services.