Tools

Nilearn

Nilearn makes it easy to use many advanced machine learning, pattern recognition and multivariate statistical techniques on neuroimaging data for applications such as MVPA (Multi-Voxel Pattern Analysis), decoding, predictive modelling, functional connectivity, brain parcellations and connectomes. Nilearn can readily be used on task fMRI, resting-state or VBM data. For a machine-learning expert, the value of nilearn lies in domain-specific feature engineering: shaping neuroimaging data into a feature matrix well suited to statistical learning, and mapping the results back into brain space.
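The "feature matrix" step can be sketched in a few lines of plain Python (nilearn itself provides masker classes for this; the function and data below are our own toy illustration, not nilearn code): a 4D scan plus a binary brain mask becomes a samples-by-features matrix ready for scikit-learn style estimators.

```python
# Toy illustration (not nilearn code): flatten a 4D scan (x, y, z, t)
# into an (n_timepoints, n_voxels_in_mask) feature matrix.

def mask_to_feature_matrix(scan, mask):
    """scan: 4D nested list [x][y][z][t]; mask: 3D nested list of 0/1."""
    nx, ny, nz = len(mask), len(mask[0]), len(mask[0][0])
    nt = len(scan[0][0][0])
    coords = [(x, y, z)
              for x in range(nx) for y in range(ny) for z in range(nz)
              if mask[x][y][z]]
    return [[scan[x][y][z][t] for (x, y, z) in coords] for t in range(nt)]

# 2x1x1 volume, 3 timepoints; only the first voxel is inside the mask
scan = [[[[1, 2, 3]]], [[[9, 9, 9]]]]
mask = [[[1]], [[0]]]
X = mask_to_feature_matrix(scan, mask)
print(X)  # [[1], [2], [3]] -> 3 samples, 1 feature
```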

Data analysis and visualisation

NMODL Framework

The NMODL Framework is a code generation engine for the NEURON MODeling Language (NMODL). It is designed with modern compiler and code generation techniques to:
- Provide modular tools for parsing, analysing and transforming NMODL
- Provide an easy-to-use, high-level Python API
- Generate optimised code for modern compute architectures, including CPUs and GPUs
- Offer the flexibility to implement new simulator backends
- Support the full NMODL specification

Modelling and simulation

NRP

The HBP Neurorobotics Platform is the backbone of the EBRAINS Closed-Loop Neuroscience service. It provides access to an online, physically realistic environment within which users can simulate and use neural models (including spiking neural networks running on neuromorphic chips) composed into functional architectures and connected to physical incarnations (musculoskeletal models, robotic systems). The functional connection of neural models to physical agents allows users to explore performance on a range of tasks of interest in closed loop, from low-level sensorimotor tasks, through multimodal perception and dexterous motion control, to cognitive functions such as contextual awareness and decision making. The service allows cognitive and computational neuroscientists to explore, in an embodied setting, the connections between the neural models considered (their structure), their dynamical behaviour (activity), and their function, expressed through the physical agent. It provides roboticists direct access to cutting-edge neuroscience research, with the tools to investigate the efficacy of functional neural models in addressing problems in robotics and embodied AI. The service is currently available as an online platform accessible and usable remotely through a standard browser, or as a local version (using a Docker image or a full source installation).

NSuite

NSuite is a framework for maintaining and running benchmarks and validation tests for multi-compartment neural network simulations on HPC systems. NSuite automates the process of building simulation engines, and running benchmarks and validation tests. NSuite is specifically designed to allow easy deployment on HPC systems in testing workflows, such as benchmark-driven development or continuous integration. There are three motivations for the development of NSuite:
- The need for a definitive resource for comparing performance and correctness of simulation engines on HPC systems.
- The need to verify the performance and correctness of individual simulation engines as they change over time.
- The need to test that changes to an HPC system do not cause performance or correctness regressions in simulation engines.
The framework currently supports the simulation engines Arbor, NEURON, and CoreNeuron, while allowing other simulation engines to be added.
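The regression-testing workflow described above can be sketched as follows (a minimal illustration in the spirit of NSuite; the function names, engine list and tolerance are our own, not NSuite's interface): run a benchmark, compare against a recorded baseline, and flag any engine that slowed down beyond a tolerance.

```python
# Illustrative benchmark-regression check (names and numbers invented):
# a measurement "regresses" if it exceeds the baseline by more than a
# fractional tolerance.

def check_regression(baseline_s, measured_s, tolerance=0.10):
    """True if measured time is more than `tolerance` slower than baseline."""
    return measured_s > baseline_s * (1.0 + tolerance)

# Hypothetical baseline and current timings per engine (seconds)
baseline = {"arbor": 12.0, "neuron": 30.0}
measured = {"arbor": 12.5, "neuron": 36.0}

regressions = [name for name in baseline
               if check_regression(baseline[name], measured[name])]
print(regressions)  # ['neuron']: 36.0 exceeds 30.0 * 1.1
```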

Validation and inference

Nutil

Nutil aims to simplify the pre- and post-processing of 2D brain section image data from mouse, rat and other small animal models. It can be used to preprocess images in preparation for analysis, and as part of the QUINT workflow to perform spatial analysis of labelled features relative to a reference brain atlas. Nutil is developed as a stand-alone application with a simple user interface, requiring little to no prior experience to use.

Brain atlases
Data analysis and visualisation

ODE-toolbox

Choosing the optimal solver for systems of ordinary differential equations (ODEs) is a critical step in dynamical systems simulation. ODE-toolbox is a Python package that assists in solver benchmarking, and recommends solvers on the basis of a set of user-configurable heuristics. For all dynamical equations that admit an analytic solution, ODE-toolbox generates propagator matrices that allow the solution to be calculated at machine precision. For all others, first-order update expressions are returned based on the Jacobian matrix. In addition to continuous dynamics, discrete events can be used to model instantaneous changes in system state, such as a neuronal action potential. These can be generated by the system under test as well as applied as external stimuli, making ODE-toolbox particularly well-suited for applications in computational neuroscience.
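The propagator idea can be seen on the simplest analytically solvable case, dy/dt = -y/tau: the exact update over a step dt is multiplication by exp(-dt/tau), so the solution is reproduced at machine precision, whereas a first-order scheme accumulates truncation error. (This worked example is ours; ODE-toolbox derives such propagators automatically and in matrix form.)

```python
import math

# Exponential decay dy/dt = -y / tau: the scalar propagator over a step
# dt is exp(-dt / tau). Compare the propagator update with forward Euler.

tau, dt, y0 = 10.0, 1.0, 1.0
propagator = math.exp(-dt / tau)

y_exact = y0
y_euler = y0
for _ in range(100):
    y_exact *= propagator              # machine-precision update
    y_euler += dt * (-y_euler / tau)   # first-order forward Euler

analytic = y0 * math.exp(-100 * dt / tau)
print(abs(y_exact - analytic))   # ~0: propagator matches the closed form
print(abs(y_euler - analytic))   # visibly larger truncation error
```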

Modelling and simulation

openMINDS metadata for TVB-ready data

This Jupyter notebook contains Python code for creating openMINDS JSON-LD metadata collections for TVB-ready data. The code in this notebook was used to generate openMINDS metadata for curation of TVB-on-EBRAINS datasets using openMINDS v1. An overview over TVB-on-EBRAINS services is provided in the preprint https://arxiv.org/abs/2102.05888 The openMINDS schema standard specification is hosted in the repository https://github.com/HumanBrainProject/openMINDS

Whole-brain simulation
Modelling and simulation

openMINDS-Python

openMINDS Python is a small library to support the creation and use of openMINDS metadata models and schemas in your Python application, with import and export in JSON-LD format. The package contains all openMINDS schemas as Python classes, in addition to schema base classes and utility methods.

Installation:

```
pip install openMINDS
```

Usage:

```
from datetime import date
from openminds import Collection, IRI
import openminds.latest.core as omcore

# Create an empty metadata collection
collection = Collection()

# Create some metadata
mgm = omcore.Organization(
    full_name="Metro-Goldwyn-Mayer Studios, Inc.",
    short_name="MGM",
    homepage=IRI("https://www.mgm.com")
)

stan = omcore.Person(
    given_name="Stan",
    family_name="Laurel",
    affiliations=omcore.Affiliation(member_of=mgm, start_date=date(1942, 1, 1))
)

ollie = omcore.Person(
    given_name="Oliver",
    family_name="Hardy",
    affiliations=omcore.Affiliation(member_of=mgm, start_date=date(1942, 1, 1))
)

# Add the metadata to the collection
collection.add(stan, ollie, mgm)

# Check the metadata are valid
failures = collection.validate()

# Save the collection in a single JSON-LD file
collection.save("my_collection.jsonld")

# Save each node in the collection to a separate file
# (creates files within the 'my_collection' directory)
collection.save("my_collection", individual_files=True)

# Load a collection from file
new_collection = Collection()
new_collection.load("my_collection.jsonld")
```

OTF2

The Open Trace Format Version 2 (OTF2) is a highly scalable, memory-efficient event trace data format with an accompanying support library. It is the standard trace format for Scalasca, Vampir, and Tau, and is open for other tools. OTF2 is the common successor to the Open Trace Format (OTF) and the Epilog trace format. It preserves the essential features as well as most record types of both, and introduces new features such as support for multiple read/write substrates, in-place time stamp manipulation, and on-the-fly token translation. In particular, it avoids copying during the unification of parallel event streams.

Data

Paraver

Paraver was developed to respond to the need for a qualitative global perception of application behaviour by visual inspection, followed by detailed quantitative analysis of the problems found. Expressive power, flexibility and the capability of efficiently handling large traces are key features addressed in the design of Paraver, and its clear and modular structure plays a significant role in achieving these targets.

Paraver is a very flexible data browser that is part of the CEPBA-Tools toolkit. Its analysis power rests on two main pillars. First, its trace format has no semantics: extending the tool to support new performance data or new programming models requires no changes to the visualizer, only capturing such data in a Paraver trace. Second, metrics are not hardwired into the tool but programmed. To compute them, the tool offers a large set of time functions, a filter module, and a mechanism to combine two timelines. This approach allows a huge number of metrics to be displayed with the available data. To capture the expert's knowledge, any view or set of views can be saved as a Paraver configuration file; re-computing the view with new data is then as simple as loading the saved file.

The tool has proved very useful for performance analysis studies, giving far more detail about application behaviour than most performance tools. Paraver features include support for:
- Detailed quantitative analysis of program performance
- Concurrent comparative analysis of several traces
- Customizable semantics of the visualized information
- Cooperative work, sharing views of the tracefile
- Building of derived metrics

Data analysis and visualisation

ParaView

ParaView is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. Data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities. ParaView was developed to analyze extremely large datasets using distributed-memory computing resources. It can be run on supercomputers to analyze petascale datasets as well as on laptops for smaller data. ParaView is an application framework as well as a turn-key application. The ParaView code base is designed so that all of its components can be reused to quickly develop vertical applications with functionality specific to a particular problem domain. ParaView runs on distributed- and shared-memory parallel systems as well as single-processor systems. It has been successfully deployed on Windows, Mac OS X, Linux, SGI, IBM Blue Gene, Cray and various Unix workstations, clusters and supercomputers. Under the hood, ParaView uses the Visualization Toolkit (VTK) as the data processing and rendering engine and has a user interface written using Qt®. The goals of the ParaView team include the following:
- Develop an open-source, multi-platform visualization application.
- Support distributed computation models to process large data sets.
- Create an open, flexible, and intuitive user interface.
- Develop an extensible architecture based on open standards.

Data analysis and visualisation

PCI-st

The Perturbational Complexity Index (PCI) was recently introduced to assess the capacity of thalamocortical circuits to engage in complex patterns of causal interactions. While showing high accuracy in detecting consciousness in brain-injured patients, PCI depends on elaborate experimental setups and offline processing, and has restricted applicability to brain signals other than transcranial magnetic stimulation and high-density EEG (TMS/hd-EEG) recordings. PCI-ST addresses these limitations: it is a fast method for estimating the perturbational complexity of any given brain response signal, based on dimensionality reduction and quantification of state transitions (ST) in evoked potentials. The index was validated on a large dataset of TMS/hd-EEG recordings obtained from 108 healthy subjects and 108 brain-injured patients, and tested on sparse intracranial recordings (SEEG) of 9 patients undergoing intracranial single-pulse electrical stimulation (SPES) during wakefulness and sleep. When calculated on TMS/hd-EEG potentials, PCI-ST performed with the same accuracy as the original PCI, while improving on the previous method by being computed in less than a second and requiring a simpler set-up. In SPES/SEEG signals, the index quantified a systematic reduction of intracranial complexity during sleep, confirming the occurrence of state-dependent changes in the effective connectivity of thalamocortical circuits, as originally assessed through TMS/hd-EEG. PCI-ST represents a fundamental advancement towards a reliable and fast clinical tool for the bedside assessment of consciousness, as well as a general measure to explore the neuronal mechanisms of loss and recovery of brain complexity across scales and models.
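The state-transition idea can be illustrated with a deliberately stripped-down toy (the real PCI-ST first applies dimensionality reduction to the evoked potentials; this simplification, and the signals below, are ours): binarise a response around a threshold and count switches between the two states. A flat response yields few transitions, a richly structured one yields many.

```python
# Toy sketch of state-transition counting (not the published PCI-ST
# algorithm): threshold a signal into two states and count the number
# of transitions between them.

def count_state_transitions(signal, threshold):
    states = [1 if abs(v) > threshold else 0 for v in signal]
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

flat_response = [0.1, 0.2, 0.1, 0.15, 0.1, 0.2]      # stays below threshold
complex_response = [0.1, 2.0, 0.1, 1.5, 0.2, 1.8]    # repeatedly crosses it

print(count_state_transitions(flat_response, 1.0))     # 0
print(count_state_transitions(complex_response, 1.0))  # 5
```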

PIPSA

PIPSA (Protein Interaction Property Similarity Analysis) is a method to compare proteins according to their interaction properties. PIPSA may assist in function assignment, and in the estimation of binding properties and enzyme kinetic parameters. The PIPSA webserver, webPIPSA, computes protein electrostatic potentials and corresponding similarity indices for a user-defined set of proteins. The standalone code, multipipsa, which includes a Python wrapper, provides further options, including running PIPSA on multiple sites on a protein and performing a comparative analysis of the binding properties of user-defined groups of proteins.
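One similarity measure of the kind PIPSA computes is the Hodgkin similarity index, SI = 2(a·b) / (a·a + b·b), evaluated on electrostatic potentials sampled on a common grid around two proteins (the grid values below are invented for illustration; PIPSA restricts the comparison to a region around the protein surface).

```python
# Hodgkin similarity index between two potentials sampled on the same
# grid: +1 for identical fields, -1 for sign-flipped (anticorrelated) ones.

def hodgkin_index(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 2.0 * dot / (sum(x * x for x in a) + sum(y * y for y in b))

pot_a = [1.0, -0.5, 0.2, 0.0]
pot_b = [1.0, -0.5, 0.2, 0.0]   # identical potential
pot_c = [-1.0, 0.5, -0.2, 0.0]  # sign-flipped potential

print(hodgkin_index(pot_a, pot_b))  # 1.0
print(hodgkin_index(pot_a, pot_c))  # -1.0
```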

Modelling and simulation
Molecular and subcellular simulation

PLIViewer

The PLIViewer is visualization software for 3D-Polarized Light Imaging (3D-PLI), to interactively explore the scalar and vector datasets; it provides additional methods to transform data, thus revealing new insights that are not available in the raw representations. The high resolution provided by 3D-PLI produces massive, terabyte-scale datasets, which makes visualization challenging. The PLIViewer tackles this problem by providing functionality to select areas of interests from the dataset, and options for downscaling. It makes it possible to interactively compute and visualize Orientation Distribution Functions (ODFs) and polar plots from the vector field, which reveal mesoscopic and macroscopic scale information from the microscopic dataset without significant loss of detail. The PLIViewer equips the neuroscientist with specialized visualization tools needed to explore 3D-PLI datasets through direct and interactive visualization of the data.

Data analysis and visualisation

PoSCE

PoSCE (Population Shrinkage Covariance Embedding) is a functional connectivity estimator from rfMRI (resting-state functional Magnetic Resonance Images) timeseries. It relies on the Riemannian geometry of covariances and integrates prior knowledge of covariance distribution over a population. This is an implementation of the work introduced in: M. Rahim, B. Thirion and G. Varoquaux. Population shrinkage of covariance (PoSCE) for better individual brain functional-connectivity estimation, in Medical Image Analysis (2019).
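The shrinkage idea can be shown in a heavily simplified form (PoSCE itself shrinks in the tangent space of the covariance manifold, using Riemannian geometry; the plain Euclidean convex combination and the matrices below are our own illustration): a noisy individual covariance is pulled toward a population-level prior.

```python
# Simplified (Euclidean) covariance shrinkage: a convex combination of an
# individual estimate and a population prior. Not PoSCE's actual
# tangent-space computation.

def shrink(individual, prior, alpha):
    """Return (1 - alpha) * individual + alpha * prior, elementwise."""
    n = len(individual)
    return [[(1 - alpha) * individual[i][j] + alpha * prior[i][j]
             for j in range(n)]
            for i in range(n)]

subject_cov = [[1.0, 0.9], [0.9, 1.0]]   # noisy individual estimate
prior_cov = [[1.0, 0.3], [0.3, 1.0]]     # population-level prior

shrunk = shrink(subject_cov, prior_cov, alpha=0.5)
print(shrunk)  # off-diagonal shrinks from 0.9 halfway toward 0.3
```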

Data analysis and visualisation

PyCOMPSs

PyCOMPSs is the Python binding of COMPSs, a programming model and runtime which aims to ease the development of parallel applications for distributed infrastructures, such as Clusters and Clouds. The Programming model offers a sequential interface but at execution time the runtime system is able to exploit the inherent parallelism of applications at task level. The framework is complemented by a set of tools for facilitating the development, execution monitoring and post-mortem performance analysis. A PyCOMPSs application is composed of tasks, which are methods annotated with decorators following the PyCOMPSs syntax. At execution time, the runtime builds a task graph that takes into account the data dependencies between tasks, and from this graph schedules and executes the tasks in the distributed infrastructure, taking also care of the required data transfers between nodes.
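The task-graph mechanism can be emulated in miniature (this is not the PyCOMPSs API, which provides decorators such as @task from pycompss.api.task and runs tasks on a distributed runtime; the toy below only records the dependency graph that such a runtime would schedule): each decorated call becomes a graph node whose edges are the earlier results it consumes.

```python
# Toy emulation of task-graph construction: wrap functions so that each
# call registers its dependencies on the futures passed to it.

class Future:
    def __init__(self, tid, value):
        self.tid, self.value = tid, value

graph = {}      # task id -> list of task ids it depends on
counter = [0]

def task(fn):
    def wrapper(*args):
        tid = counter[0]; counter[0] += 1
        graph[tid] = [a.tid for a in args if isinstance(a, Future)]
        value = fn(*[a.value if isinstance(a, Future) else a for a in args])
        return Future(tid, value)
    return wrapper

@task
def increment(x):
    return x + 1

@task
def add(x, y):
    return x + y

a = increment(1)       # task 0, no dependencies
b = increment(2)       # task 1, no dependencies
c = add(a, b)          # task 2, depends on tasks 0 and 1
print(c.value, graph)  # 5 {0: [], 1: [], 2: [0, 1]}
```

Since tasks 0 and 1 share no edges, a runtime could execute them in parallel before task 2.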

Modelling and simulation

pyGAlib

GAlib works on adjacency matrices, represented as 2D NumPy arrays. This choice certainly limits the size of the networks that the library can handle, but it also allows it to exploit the power of NumPy to manipulate arrays and boost performance far beyond pure Python code. As a result, GAlib is simple to use and to extend; easy to read, access and modify. It has no hidden code, so you always know what every function actually does. GAlib includes I/O and statistics tools, and a large set of functions for the analysis of graphs, including clustering, distances and paths, matching index, assortativity, roles of nodes in modular networks, rich-club coefficients, K-core decomposition, etc. It also includes functions to generate random networks of different types and to randomize networks, as well as many examples and ready-to-use scripts useful even for complete beginners.
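The adjacency-matrix style of analysis is straightforward to picture (pure-Python sketch; GAlib itself operates on 2D NumPy arrays and has its own function names): a graph metric is just a computation over the matrix, here node degrees of an undirected graph.

```python
# Node degrees from a (0/1, symmetric) adjacency matrix: the degree of
# node i is the sum of row i.

def degrees(adj):
    return [sum(row) for row in adj]

# Triangle (nodes 0, 1, 2) plus one pendant node 3 attached to node 0
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
print(degrees(adj))  # [3, 2, 2, 1]
```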

Modelling and simulation

PyJuGex

PyJuGex finds a set of differentially expressed genes between two user-defined volumes of interest based on JuBrain maps. The tool downloads expression values of user-specified sets of genes from the Allen Brain API, then uses z-scores to determine which genes are expressed differentially between the user-specified regions of interest. The tool is available as a Python package.
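The z-score comparison can be sketched as follows (our own toy numbers and simplification, not Allen Brain API data or PyJuGex's statistical procedure): standardize a gene's expression values across all samples, then compare the mean z-score of the samples falling in each region of interest.

```python
import statistics

# Toy differential-expression check for one gene across 8 samples.

def zscores(values):
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

expression = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 5.1]  # one gene, 8 samples
roi_a = [0, 1, 2, 3]   # sample indices inside region A
roi_b = [4, 5, 6, 7]   # sample indices inside region B

z = zscores(expression)
mean_a = statistics.mean(z[i] for i in roi_a)
mean_b = statistics.mean(z[i] for i in roi_b)
print(mean_b - mean_a > 1.0)  # True: clearly higher expression in region B
```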

Data analysis and visualisation

