All Tools

ebrains-collaboratory

The EBRAINS Collaboratory (initially known as Collaboratory 2.0) offers researchers and developers an environment to work in teams and share their work with individual users, teams or the whole Internet. Workspaces in the Collaboratory are known as collabs. The Collaboratory is composed of a collection of web services:

- The IAM service manages user identification and team management for EBRAINS services. Users can be grouped into units, groups and collab teams for simpler management.
- The Wiki service hosts the main interface to access all other Collaboratory services. It also offers a handy way of documenting your work with a simple wiki user interface.
- The Drive service offers each collab its own storage space for files. The Drive provides easy access to files from Jupyter Notebooks, and all files are under version control. The Drive is intended for smaller files that change more often.
- The Lab service provides a JupyterLab environment for your notebooks with official releases of EBRAINS tools pre-installed. It is a great way of programming interactively and of sharing your notebooks with other users.
- The Office service handles Office documents (Word, PowerPoint or Excel), which can be edited collaboratively online. Whether it is for taking live minutes in a meeting or for finalizing and reviewing a report, the collaborative mode is very handy.
- The Bucket service offers each collab its own storage space for large files. The Bucket provides programmatic access to files from Jupyter Notebooks via the Bucket API. Datasets, videos and other files too large for the Drive should be stored here.
- The Chat service offers instant messaging with all users that have an EBRAINS account and have entered the chat at least once. The chat offers channels, discussions and direct messaging. Client apps are available for desktop and mobile devices, and users that are not active in the Chat also receive notifications by email.
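As an illustration of Drive access from a Lab notebook, here is a minimal, hypothetical sketch; the mount path and file name are assumptions and may differ in your Lab environment (check the JupyterLab file browser for the actual location of your collab's Drive).

```python
from pathlib import Path

# Hypothetical location where a collab's Drive appears inside a Lab session;
# the real mount point may differ -- browse the JupyterLab file panel to confirm.
drive_root = Path("/mnt/user/drive/My Libraries/My Library")

# List the files currently stored in the collab's Drive
for item in drive_root.iterdir():
    print(item.name)

# Read a (hypothetical) text file stored in the Drive
notes = (drive_root / "notes.txt").read_text()
print(notes)
```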

Factorisation-based Image Labelling

Rationale

The approach assumes that segmented (into GM, WM and background) images have been aligned, so it does not require the additional complexity of a convolutional approach. Segmented images are used to make the approach less dependent on particular image contrasts, so that it generalises better to a wider variety of brain scans. The approach assumes that there are only a relatively small number of labelled images, but many images that are unlabelled. It therefore uses a semi-supervised learning approach, with an underlying Bayesian generative model that has relatively few weights to learn.

Model

The approach is patch based. For each patch, a set of basis functions models both the (categorical) image to label and the corresponding (categorical) label map. A common set of latent variables controls the two sets of basis functions, and the results are passed through a softmax so that the model encodes the means of a multinoulli distribution (Böhning, 1992; Khan et al, 2010). Continuity over patches is achieved by modelling the probability of the latent variables within each patch conditional on the values of the latent variables in the six adjacent patches, which is a type of conditional random field (Zhang et al, 2015; Brudfors et al, 2019). This model (with Wishart priors) gives the prior mean and covariance of a Gaussian prior over the latent variables of each patch. Patches are updated using an iterative red-black checkerboard scheme.

Labelling

After training, labelling a new image is relatively fast because optimising the latent variables can be formulated within a scheme similar to a recurrent ResNet (He et al, 2016).
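To make the patch model concrete, here is a minimal numerical sketch (not the authors' implementation) of how a shared set of latent variables, combined with separate basis functions for the image and the label map, is pushed through a softmax to give multinoulli class probabilities; the array names and sizes are illustrative assumptions.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes for one patch: V voxels, K latent variables,
# C_img tissue classes in the image, C_lab classes in the label map.
V, K, C_img, C_lab = 125, 8, 3, 5

rng = np.random.default_rng(0)
W_img = rng.normal(size=(V, C_img, K))   # basis functions for the image part
W_lab = rng.normal(size=(V, C_lab, K))   # basis functions for the label part
z = rng.normal(size=K)                   # latent variables shared by both parts

# Softmax over classes gives the means of multinoulli distributions
p_img = softmax(W_img @ z, axis=1)       # (V, C_img): modelled tissue probabilities
p_lab = softmax(W_lab @ z, axis=1)       # (V, C_lab): modelled label probabilities
```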

fMRIPrep

Preprocessing of functional MRI (fMRI) involves numerous steps to clean and standardize the data before statistical analysis. Generally, researchers create ad hoc preprocessing workflows for each dataset, building upon a large inventory of available tools. The complexity of these workflows has snowballed with rapid advances in acquisition and processing. fMRIPrep is an analysis-agnostic tool that addresses the challenge of robust and reproducible preprocessing for task-based and resting-state fMRI data. fMRIPrep automatically adapts a best-in-breed workflow to the idiosyncrasies of virtually any dataset, ensuring high-quality preprocessing without manual intervention. fMRIPrep robustly produces high-quality results on diverse fMRI data. Additionally, fMRIPrep introduces less uncontrolled spatial smoothness than is observed with commonly used preprocessing tools. fMRIPrep equips neuroscientists with an easy-to-use and transparent preprocessing workflow, which can help ensure the validity of inference and the interpretability of results.

The workflow is based on Nipype and encompasses a large set of tools from well-known neuroimaging packages, including [FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/), ANTs, FreeSurfer, AFNI, and Nilearn. This pipeline was designed to provide the best software implementation for each stage of preprocessing, and will be updated as newer and better neuroimaging software becomes available. fMRIPrep performs basic preprocessing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skull-stripping etc.), providing outputs that can be easily submitted to a variety of group-level analyses, including task-based or resting-state fMRI, graph theory measures, surface- or volume-based statistics, etc.

fMRIPrep allows you to easily do the following:

- Take fMRI data from unprocessed (only reconstructed) to ready for analysis.
- Implement tools from different software packages.
- Achieve optimal data processing quality by using the best tools available.
- Generate preprocessing-assessment reports, with which the user can easily identify problems.
- Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
- Automate and parallelize processing steps, which provides a significant speed-up compared with typical linear, manual processing.
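As a rough illustration of how fMRIPrep is typically driven, the sketch below calls the command-line tool on a BIDS dataset from Python; the directory paths and participant label are placeholders, and depending on your installation you may instead use the fmriprep-docker wrapper or a container runtime.

```python
import subprocess

# Placeholder paths: a BIDS-formatted dataset and an output directory
bids_dir = "/data/my_bids_dataset"
out_dir = "/data/derivatives"

# Run fMRIPrep at the participant level for one subject.
# --participant-label takes the subject label without the "sub-" prefix.
subprocess.run(
    [
        "fmriprep", bids_dir, out_dir, "participant",
        "--participant-label", "01",
    ],
    check=True,
)
```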

ibc-public

This Python package provides the pipeline used to process the MRI data obtained in the Individual Brain Charting Project. More information on the data can be found in the IBC public protocols and on the IBC webpage. The latest collection of raw data is available on OpenNeuro, data accession no. 002685. The latest collection of unthresholded statistical maps can be found on NeuroVault, collection id 6618.

Install

Under the main working directory of this repository on your computer, run the following command in a command prompt:

```
pip install -e .
```

Example usage

One can import the entire package with `import ibc_public` or use specific parts of the package:

```python
from ibc_public import utils_data
utils_data.make_surf_db(derivatives="/path/to/ibc/derivatives", mesh="fsaverage5")
```

Details

These scripts make it possible to:

- preprocess the data
- run topup distortion correction
- run motion correction
- run coregistration of the fMRI scans to the individual T1 image
- run spatial normalization of the data
- run a general linear model to obtain brain activity maps for the main contrasts of the experiment.

Core scripts

The core scripts are in the scripts folder:

- pipeline.py launches the full analysis on fMRI data (pre-processing + GLM)
- glm_only.py launches GLM analyses on the data
- surface_based_analyses launches surface extraction and registration with FreeSurfer; it also projects fMRI data to the surface
- surface_glm_analysis.py runs GLM analyses on the surface
- dmri_preprocessing (WIP) is for diffusion data. It relies on dipy.
- anatomical mapping (WIP) yields T1w, T2w and MWF surrogates from anatomical acquisitions.
- script_retino.py yields some post-processing for retinotopic acquisitions (derivation of retinotopic representations from fMRI maps)

Dependencies

The dependencies are:

- FSL (topup)
- SPM12 for preprocessing
- FreeSurfer for surface-based analysis
- Nipype to call SPM12 functions
- Pypreprocess to generate preprocessing reports
- Nilearn for various functions
- Nistats to run general linear models.

The scripts have been used with the following versions of software and environment: Python 3.5, Ubuntu 16.04, Nipype v0.14.0, Pypreprocess v0.0.1.dev, FSL v5.0.9, SPM12 rev 7219, Nilearn v0.4.0, Nistats v0.0.1.a.

Future work

- More high-level analysis scripts
- Scripts for additional datasets not yet available
- Scripts for surface-based analysis

Contributions

Please feel free to report any issues and propose improvements on GitHub.

IntrAnat

IntrAnat is a software tool to visualize electrode implantations on image data and to prepare databases for group studies. Its features include:

- Multimodality and electrode implantation with 3D display and easy co-registration between modalities (MRI: T1, T2, FLAIR, fMRI, DTI; CT; PET).
- Semi-automatic estimation of the volume of resection.
- Importation of SEEG files (for now only .TRC, Micromed©).
- Display of cortico-cortical evoked potential mapping.
- Automatic exportation of "dictionaries" containing the information of contact positions in the native and MNI coordinate systems, associated parcels in different atlases (MarsAtlas, Destrieux – Freesurfer, Brodmann, AAL, etc.), white/grey matter labeling, and resection labeling.
- Automatic exportation of dictionaries containing the total volume of the resection and the percentage of MarsAtlas or Destrieux parcels covered by the resection (for now this assumes no brain deformation due to the resection).
- Display of epileptogenicity maps coregistered with other modalities (all statistical maps registered in the T1 pre space are loadable).
- groupDisplay can be used to visualize electrode contacts from many patients over images in the MNI referential and to search for patients according to different keywords.

The IntranatElectrodes software is based on BrainVISA, Morphologist and Cortical Surface. It uses ANTs and SPM12 for multimodality coregistration, and SPM12 for estimation of the deformation field used to convert into MNI space. A Matlab license is needed to run the normalisation and the groupDisplay interface.

Neo

Neo is a Python package for working with electrophysiology data, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon and Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5. The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to the representation of data, with no functions for data analysis or visualization. Neo is used by a number of other software tools, including SpykeViewer (data analysis and visualization), Elephant (data analysis), the G-Node suite (databasing), PyNN (simulations), tridesclous (spike sorting) and ephyviewer (data visualization). OpenElectrophy (data analysis and visualization) uses an older version of Neo. Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data, with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion. A project with similar aims but for neuroimaging file formats is NiBabel.
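To give a flavour of the object model, the short sketch below builds a Neo AnalogSignal carrying units from the quantities package and attaches it to a Segment within a Block; the signal values are random placeholder data.

```python
import numpy as np
import quantities as pq
import neo

# Placeholder data: 1 second of a 4-channel recording sampled at 1 kHz
data = np.random.randn(1000, 4)

# AnalogSignal behaves like a NumPy array but carries units and sampling metadata
signal = neo.AnalogSignal(data, units="mV", sampling_rate=1 * pq.kHz)

# Neo's hierarchical containers: a Block holds Segments, a Segment holds signals
segment = neo.Segment(name="trial 1")
segment.analogsignals.append(signal)
block = neo.Block(name="example session")
block.segments.append(segment)

print(signal.sampling_rate)        # 1.0 kHz
print(signal[:5].rescale("V"))     # automatic unit conversion to volts
```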

NeuroR

NeuroR is a collection of tools to repair morphologies. There are presently three types of repair, which are outlined below.

Sanitization

This is the process of sanitizing a morphological file. It currently:

- ensures it can be loaded with MorphIO
- raises if the morphology has no soma or is of an invalid format
- removes unifurcations
- sets negative diameters to zero
- raises if the morphology has a neurite whose type changes along the way
- removes segments with near-zero lengths (shorter than 1e-4)

Note: more functionality may be added in the future.

Cut plane repair

The cut plane repair aims at regrowing the parts of morphologies that were cut out when the cell was experimentally sliced. The sub-command neuror cut-plane repair contains the collection of CLIs to perform this repair. Additionally, there are CLIs for cut plane detection and for writing detected cut planes to JSON files. If the cut plane is aligned with one of the X, Y or Z axes, the cut plane detection can be done automatically with the CLIs:

- neuror cut-plane file
- neuror cut-plane folder

If the cut plane is not aligned with one of the X, Y or Z axes, the detection has to be performed through the helper web application, which can be launched with the following CLI:

- neuror cut-plane hint

Unravelling

Unravelling is the action of "stretching" a cell that has shrunk because of the dehydration caused by slicing. The unravelling CLI sub-group is neuror unravel. The unravelling algorithm can be described as follows: segments are unravelled iteratively; each segment direction is replaced by the averaged direction within a sliding window around this segment, while the original segment length is preserved; the start position of the new segment is the end of the latest unravelled segment.
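The following is a minimal, illustrative sketch (not NeuroR's actual implementation) of the sliding-window direction averaging described above; the window size and the example points are assumptions.

```python
import numpy as np

def unravel_section(points, window=5):
    """Illustrative unravelling of one section.

    Each segment's direction is replaced by the average direction of the
    segments inside a sliding window centred on it, the original segment
    length is kept, and each new segment starts where the previous one ended.
    """
    points = np.asarray(points, dtype=float)
    directions = np.diff(points, axis=0)            # original segment vectors
    lengths = np.linalg.norm(directions, axis=1)    # original segment lengths

    new_points = [points[0]]
    half = window // 2
    for i in range(len(directions)):
        lo, hi = max(0, i - half), min(len(directions), i + half + 1)
        mean_dir = directions[lo:hi].mean(axis=0)   # averaged direction in the window
        mean_dir /= np.linalg.norm(mean_dir)        # keep direction, drop magnitude
        new_points.append(new_points[-1] + mean_dir * lengths[i])
    return np.array(new_points)

# Example: a small zig-zag section gets smoothed while segment lengths are preserved
section = [[0, 0, 0], [1, 0.5, 0], [2, -0.5, 0], [3, 0.5, 0], [4, 0, 0]]
print(unravel_section(section, window=3))
```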

openMINDS-Python

openMINDS Python is a small library to support the creation and use of openMINDS metadata models and schemas in your Python application, with import and export in JSON-LD format. The package contains all openMINDS schemas as Python classes, in addition to schema base classes and utility methods.

Installation

```
pip install openMINDS
```

Usage

```python
from datetime import date
from openminds import Collection, IRI
import openminds.latest.core as omcore

# Create an empty metadata collection
collection = Collection()

# Create some metadata
mgm = omcore.Organization(
    full_name="Metro-Goldwyn-Mayer Studios, Inc.",
    short_name="MGM",
    homepage=IRI("https://www.mgm.com")
)
stan = omcore.Person(
    given_name="Stan",
    family_name="Laurel",
    affiliations=omcore.Affiliation(member_of=mgm, start_date=date(1942, 1, 1))
)
ollie = omcore.Person(
    given_name="Oliver",
    family_name="Hardy",
    affiliations=omcore.Affiliation(member_of=mgm, start_date=date(1942, 1, 1))
)

# Add the metadata to the collection
collection.add(stan, ollie, mgm)

# Check the metadata are valid
failures = collection.validate()

# Save the collection in a single JSON-LD file
collection.save("my_collection.jsonld")

# Save each node in the collection to a separate file
# (creates files within the 'my_collection' directory)
collection.save("my_collection", individual_files=True)

# Load a collection from file
new_collection = Collection()
new_collection.load("my_collection.jsonld")
```

Paraver

Paraver was developed to respond to the need for a qualitative global perception of application behavior by visual inspection, followed by detailed quantitative analysis of the problems identified. Expressive power, flexibility and the capability of efficiently handling large traces are key features addressed in the design of Paraver. The clear and modular structure of Paraver plays a significant role in achieving these targets. Paraver is a very flexible data browser that is part of the CEPBA-Tools toolkit. Its analysis power is based on two main pillars. First, its trace format has no semantics; extending the tool to support new performance data or new programming models requires no changes to the visualizer, only that such data be captured in a Paraver trace. Second, the metrics are not hardwired into the tool but programmed: to compute them, the tool offers a large set of time functions, a filter module, and a mechanism to combine two timelines. This approach allows a huge number of metrics to be displayed with the available data. To capture the expert's knowledge, any view or set of views can be saved as a Paraver configuration file; re-computing the view with new data is then as simple as loading the saved file. The tool has proven very useful for performance analysis studies, giving much more detail about application behaviour than most performance tools. Paraver features include support for:

- Detailed quantitative analysis of program performance
- Concurrent comparative analysis of several traces
- Customizable semantics of the visualized information
- Cooperative work, sharing views of the tracefile
- Building of derived metrics

ParaView

ParaView is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. Data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities. ParaView was developed to analyze extremely large datasets using distributed-memory computing resources. It can be run on supercomputers to analyze petascale datasets as well as on laptops for smaller data. ParaView is an application framework as well as a turn-key application. The ParaView code base is designed in such a way that all of its components can be reused to quickly develop vertical applications. This flexibility allows ParaView developers to quickly develop applications that have specific functionality for a specific problem domain. ParaView runs on distributed- and shared-memory parallel systems as well as single-processor systems. It has been successfully deployed on Windows, Mac OS X, Linux, SGI, IBM Blue Gene, Cray and various Unix workstations, clusters and supercomputers. Under the hood, ParaView uses the Visualization Toolkit (VTK) as the data processing and rendering engine and has a user interface written using Qt®. The goals of the ParaView team include the following:

- Develop an open-source, multi-platform visualization application.
- Support distributed computation models to process large data sets.
- Create an open, flexible, and intuitive user interface.
- Develop an extensible architecture based on open standards.
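As a rough illustration of ParaView's batch processing, the sketch below uses the paraview.simple Python module (available with pvpython or pvbatch) to create a simple source, render it and save a screenshot; the output filename is a placeholder.

```python
# Run with pvpython/pvbatch, which ship with ParaView and provide this module
from paraview.simple import Sphere, Show, Render, SaveScreenshot

# Create a simple geometric source and display it
sphere = Sphere(ThetaResolution=32, PhiResolution=32)
Show(sphere)
Render()

# Save the rendered view to an image file (placeholder name)
SaveScreenshot("sphere.png")
```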

PCI-st

The Perturbational Complexity Index (PCI) was recently introduced to assess the capacity of thalamocortical circuits to engage in complex patterns of causal interactions. While showing high accuracy in detecting consciousness in brain-injured patients, PCI depends on elaborate experimental setups and offline processing, and has restricted applicability to other types of brain signals beyond transcranial magnetic stimulation and high-density EEG (TMS/hd-EEG) recordings. We aim to address these limitations by introducing PCIST, a fast method for estimating perturbational complexity of any given brain response signal. PCIST is based on dimensionality reduction and state transitions (ST) quantification of evoked potentials. The index was validated on a large dataset of TMS/hd-EEG recordings obtained from 108 healthy subjects and 108 brain-injured patients, and tested on sparse intracranial recordings (SEEG) of 9 patients undergoing intracranial single-pulse electrical stimulation (SPES) during wakefulness and sleep. When calculated on TMS/hd-EEG potentials, PCIST performed with the same accuracy as the original PCI, while improving on the previous method by being computed in less than a second and requiring a simpler set-up. In SPES/SEEG signals, the index was able to quantify a systematic reduction of intracranial complexity during sleep, confirming the occurrence of state-dependent changes in the effective connectivity of thalamocortical circuits, as originally assessed through TMS/hd-EEG. PCIST represents a fundamental advancement towards the implementation of a reliable and fast clinical tool for the bedside assessment of consciousness as well as a general measure to explore the neuronal mechanisms of loss/recovery of brain complexity across scales and models.
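The following is a conceptual sketch (not the reference implementation) of the two ingredients the index combines: dimensionality reduction of the evoked response via SVD, followed by counting state transitions in each principal component above a threshold; the signal shape and the threshold are illustrative assumptions.

```python
import numpy as np

def pci_st_sketch(evoked, n_components=3, threshold=1.0):
    """Conceptual illustration only: reduce an evoked response (channels x time)
    to a few principal components and count threshold crossings
    ("state transitions") in each component."""
    # Dimensionality reduction via SVD of the channels-by-time response
    u, s, vt = np.linalg.svd(evoked, full_matrices=False)
    components = s[:n_components, None] * vt[:n_components]  # principal time courses

    # Count sign changes of (|component| > threshold) as crude state transitions
    n_transitions = 0
    for comp in components:
        above = np.abs(comp) > threshold
        n_transitions += int(np.sum(above[1:] != above[:-1]))
    return n_transitions

# Placeholder evoked response: 64 channels, 300 time samples
rng = np.random.default_rng(0)
evoked = rng.standard_normal((64, 300))
print(pci_st_sketch(evoked))
```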

SDA: Simulation of Diffusional Association

SDA7 can be used to carry out Brownian dynamics simulations of the diffusional association, in a continuum aqueous solvent, of two solute molecules, e.g. proteins, or of a solute molecule to an inorganic surface. SDA7 can also be used to simulate the diffusion of multiple proteins, in dilute or concentrated solutions, e.g. to study the effects of macromolecular crowding. If the 3D structure of the bound complex is unknown, SDA can be used for rigid-body docking to predict the structure of the diffusional encounter complex or the orientation in which a protein binds to a surface. The configurations obtained from SDA can subsequently be refined by running molecular dynamics simulations to obtain structures for fully bound complexes. If the 3D structure of the bound complex is known, SDA can be used to calculate bimolecular association rate constants. It can also be used to record Brownian dynamics trajectories or encounter complexes and to calculate bimolecular electron transfer rate constants. While these Brownian dynamics simulations are usually carried out with rigid solutes, SDA7 offers the possibility of assigning more than one conformation to each solute molecule. This allows some large-scale internal dynamics of macromolecules to be considered in the simulations.

In this SDA distribution, there is a single executable, sda_flex, which can execute different types of simulation:

- Compute the bimolecular diffusional association rate constant for 2 solutes using a user-defined set of intermolecular contact distances as reaction criteria
- Compute the rate constants for electron transfer from the relative diffusion of two proteins
- Perform rigid-body docking of two macromolecules
- Perform rigid-body docking of a solute and a surface
- Calculate the time during which user-defined contacts are maintained; this gives an approximation of the lifetimes of a complex. The starting configurations may come from a crystal structure or be recorded from a simulation
- Re-calculate energies for a recorded set of configurations
- Compute PMFs for protein/surface binding
- Perform simulations of the diffusion of multiple proteins

The simulations can be run in serial or in parallel mode on a shared-memory computer architecture.

STEPS

STEPS is a package for exact stochastic simulation of reaction-diffusion systems in arbitrarily complex 3D geometries. Our core simulation algorithm is an implementation of Gillespie's SSA, extended to deal with diffusion of molecules over the elements of a 3D tetrahedral mesh. While it was mainly developed for simulating detailed models of neuronal signaling pathways in dendrites and around synapses, it is a general tool and can be used for studying any biochemical pathway in which spatial gradients and morphology are thought to play a role. STEPS also supports accurate and efficient computation of local membrane potentials on tetrahedral meshes, with the addition of voltage-gated channels and currents. Tight integration between the reaction-diffusion calculations and the tetrahedral mesh potentials allows detailed coupling between molecular activity and local electrical excitability. We have implemented STEPS as a set of Python modules, which means STEPS users can use Python scripts to control all aspects of setting up the model, generating a mesh, controlling the simulation and generating and analyzing output. The core computational routines are still implemented as C/C++ extension modules for maximal speed of execution. STEPS 3.0.0 and above provide an early parallel solution for stochastic spatial reaction-diffusion and electric field simulation. STEPS 3.6.0 and above provide a new set of APIs (API2) to speed up STEPS model development. Models developed with the old API (API1) are still supported.
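To illustrate the Python-scripted workflow, here is a minimal well-mixed example written against the classic API (API1); it is a sketch rather than an official tutorial, and the species name, rate constant and compartment volume are arbitrary assumptions.

```python
import steps.model as smodel
import steps.geom as sgeom
import steps.rng as srng
import steps.solver as ssolver

# Model: one species A degraded by a first-order reaction A -> (nothing)
mdl = smodel.Model()
A = smodel.Spec('A', mdl)
vsys = smodel.Volsys('vsys', mdl)
smodel.Reac('degrade_A', vsys, lhs=[A], rhs=[], kcst=1.0)  # rate constant is arbitrary

# Geometry: a single well-mixed compartment (volume in m^3, arbitrary)
geom = sgeom.Geom()
comp = sgeom.Comp('comp', geom, vol=1.0e-18)
comp.addVolsys('vsys')

# Random number generator and well-mixed stochastic solver
rng = srng.create('mt19937', 512)
sim = ssolver.Wmdirect(mdl, geom, rng)

# Initial condition and a short run
sim.setCompCount('comp', 'A', 1000)
sim.run(0.1)
print(sim.getCompCount('comp', 'A'))
```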

Subcellular WebApp

The subcellular application was designed as a hub-like, web-based environment for the creation and simulation of reaction-diffusion models, integrated with the molecular repository. It also allows importing, combining and simulating existing models expressed in the BNGL and SBML languages. Two types of models are supported: rule-based models, which are convenient and computationally efficient for modeling large protein signaling complexes, and chemical reaction network models. The subcellular application is integrated with a number of solvers for reaction-diffusion systems of equations. It supports simulation of spatially distributed systems using STEPS (stochastic engine for pathway simulation), which provides spatial stochastic and deterministic solvers for simulation of reactions and diffusion on tetrahedral meshes. The application also provides a number of facilities for visualizing model geometry and simulation results. The molecular repository is a publicly available database of biological information relevant for brain molecular network modeling. It accommodates several types of biological information which are not available from existing public databases, such as concentrations of proteins in different subcellular compartments of neuronal and glial cells, kinetic data on protein interactions specific to brain and synaptic signaling and plasticity, and data on molecular mobility. The repository is integrated with the subcellular application: they share the same set of entities described by BioNetGen expressions. The molecular repository can be queried from the subcellular application, and the results of the query can be added to a molecular network model.

UG4

UG4 (Unstructured Grids 4) is an extensive, flexible, cross-platform open-source simulation framework for the numerical solution of systems of partial differential equations. Using Finite Element and Finite Volume methods on hybrid, adaptive, unstructured multigrid hierarchies, UG4 allows for the simulation of complex real-world models (physical, biological etc.) on massively parallel computer architectures. UG4 is implemented in the C++ programming language and provides grid management, discretization and (linear as well as non-linear) solver utilities. It is extensible and customizable via its plugin mechanism. The highly scalable MPI-based parallelization of UG4 has been shown to scale to hundreds of thousands of cores. Simulation workflows are defined either using the Lua scripting language or the graphical VRL interface (https://vrl-studio.mihosoft.eu/). Besides that, UG4 can be used as a library for third-party code. Several examples are provided in the Examples application; these can be used for simulations of the corresponding phenomena but also serve as demonstration modules for implementing user-defined plugins and scripts. By developing custom plugins, users can extend the functionality of the framework for their particular purposes. The framework provides coupling facilities for models implemented in different plugins. Key elements of UG4 are:

- Efficient solvers on distributed, adaptive multigrid hierarchies.
- A flexible, component-based discretization system.
- Efficient support for massively parallel computer architectures.
- Full scripting support.
- A modular, plugin-based architecture.

UQSA

UQSA provides uncertainty quantification via ABC-MCMC with copulas, as well as global sensitivity analysis, for ODE models in systems biology. This R package can approximate the posterior probability density of the parameters of ordinary differential equation models. The ABC sampler used here is developed to be fairly model agnostic, but the supplied tool set and R functions specifically target ODEs, as they are fast enough to simulate for Bayesian methods to be feasible. Bayesian methods for parameter estimation are resource intensive and therefore require some consideration of efficiency in simulation. Other modeling frameworks exist, with the benefits of higher accuracy in specific scenarios (e.g. low molecule counts) or reduced complexity (rule-based models). We have written a sibling library for R that facilitates the simulation of systems biology specific models using the GNU Scientific Library solvers (and models written in C). With powerful enough computing hardware, or small enough models, these frameworks can be combined with this package. We write models using the SBtab format and automatically generate C code as well as R code for them; the R code can be used with deSolve (an R package), while the C code is compatible with gsl_odeiv2 solvers. Code generation is done via SBtabVFGEN (an R package) and vfgen (a standalone software). In addition, we are writing our own substitute for vfgen, to avoid single points of failure. The model setup phase can also be completely sidestepped by writing the C code manually (or generating it in any other way).

Whole-brain linear effective connectivity (WBLEC) estimation

These Python notebooks reproduce some figures from the following preprint, using the libraries pyMOU and NetDynFlow: https://www.biorxiv.org/content/10.1101/531830v2

The notebook 1_MOUEC_Estimation.ipynb should be executed first to tune the model to the fMRI data. The other notebooks can be used for classification and interpretation of the model (using the flow for network analysis). The data files are:

- BOLD time series in ts_emp.npy
- structural connectivity in SC_anat.npy
- ROI labels in ROI_labels.npy

#### Notebook 1_MOUEC_Estimation.ipynb

This notebook calculates the functional connectivity and the model-based effective connectivity for each session (or run) and subject from the BOLD time series. The model is a multivariate Ornstein-Uhlenbeck (MOU) process, whose estimation procedure is implemented in the pyMOU library. The model estimates and other measures are stored in the form of arrays in the model_param_movie folder.

#### Notebooks 2a_ClassificationTasks.ipynb and 2b_ClassificationSubjects.ipynb

These notebooks compare the performance of several types of connectivity measures (including functional and effective connectivity) in identifying cognitive tasks and subjects. They rely on the scikit-learn library.

#### Notebook 3a_Flow.ipynb

This notebook uses the NetDynFlow library to calculate the flow, which is a network-oriented analysis of the MOU model fitted to the BOLD data. The flow corresponds to the input response of the network to perturbation (or stimulation of given regions). The flow captures network effects that arise from the recurrent connectivity, i.e. it also takes into account indirect paths between all pairs of regions.

#### Notebook 3b_Communities.ipynb

This notebook detects communities based on the flow: brain regions are grouped together if they exchange strong flow in the network. It also compares the community structure between rest and movie.
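As a minimal sketch of getting started with these data files (the array shapes are assumptions, and the pyMOU fitting call is shown only as a commented placeholder since its exact API may differ):

```python
import numpy as np

# Load the provided data files
ts_emp = np.load("ts_emp.npy")          # BOLD time series (assumed: sessions x time x ROIs)
sc_anat = np.load("SC_anat.npy")        # structural connectivity matrix (ROIs x ROIs)
roi_labels = np.load("ROI_labels.npy")  # ROI names

print(ts_emp.shape, sc_anat.shape, roi_labels.shape)

# Placeholder for the MOU estimation step performed in 1_MOUEC_Estimation.ipynb;
# the actual class/method names are defined by the pyMOU library and may differ.
# from pymou import MOU
# model = MOU()
# model.fit(ts_emp[0])   # fit the multivariate Ornstein-Uhlenbeck process to one session
```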
