Functionalizer is a tool for filtering the output of a touch detector (the "touches") according to morphological models, given in the form of a recipe as described in the SONATA extension.
To process large quantities of data efficiently, this software uses PySpark.
The easiest way to install functionalizer is via:
```shell
pip install functionalizer
```
Due to a dependency on mpi4py, an MPI implementation needs to be installed on the system. On Ubuntu, this can be achieved with:
```shell
apt-get install -y libopenmpi-dev
```
For manual installation from sources via pip, a compiler supporting C++17 is necessary. Furthermore, all git submodules should be checked out:
```shell
gh repo clone BlueBrain/functionalizer -- --recursive --shallow-submodules
cd functionalizer
pip install .
```
Spark and Hadoop should be installed and set up as runtime dependencies.
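As a minimal sketch of the runtime setup, the environment can point at a local Spark installation; `/opt/spark` is a placeholder path, to be adjusted to wherever Spark is unpacked on your system:

```shell
# Hedged sketch: make a Spark installation visible to the runtime.
# /opt/spark is a placeholder; adjust to your actual installation path.
export SPARK_HOME=/opt/spark
export PATH="$SPARK_HOME/bin:$PATH"
echo "using Spark from $SPARK_HOME"
```

With `SPARK_HOME` set, PySpark can locate the Spark libraries without further configuration.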
Functionalizer is an integral part of building a brain circuit. It will take the connectome as established by
- appositionizer, in the form of detailed morphologies being in close proximity, or
- connectome-manipulator, which will approximate connectivity following probabilistic rules,
and transform it using any of the following filtering steps:
- trim appositions according to simple touch rules
- trim appositions to follow biological distributions, parametrized in connection rules
- add synaptic properties to convert any apposition into a proper synapse
If the input is in the binary format produced by appositionizer, one may use touch2parquet from parquet-converters to convert it into Parquet files that Functionalizer can read.
All circuit inputs need to be defined in a circuit_config.json according to the SONATA extension, containing pointers to the nodes in nodes.h5 and to the morphologies.
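As a rough illustration, a minimal circuit_config.json could look like the sketch below. The population name, manifest entry, and paths are placeholders; the authoritative schema is given by the SONATA extension documentation:

```json
{
  "manifest": {
    "$BASE_DIR": "."
  },
  "networks": {
    "nodes": [
      {
        "nodes_file": "$BASE_DIR/nodes.h5",
        "populations": {
          "neurons": {
            "type": "biophysical",
            "morphologies_dir": "$BASE_DIR/morphologies"
          }
        }
      }
    ],
    "edges": []
  }
}
```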
A recipe.json, defined in the same SONATA extension, is used to supply the parameters needed for the filters.
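To give a feel for the shape of such a file, a skeletal recipe might look like the following. The key names here are illustrative assumptions mirroring the filtering steps above, not a verified schema; consult the SONATA extension documentation for the actual field names and required sections:

```json
{
  "version": 1,
  "touch_rules": [
    {"src_mtype": "*", "dst_mtype": "*"}
  ]
}
```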
The output of Functionalizer should be converted to SONATA-conformant HDF5 via parquet2hdf5 from parquet-converters.
Basic usage follows the pattern:
```shell
functionalizer --s2f --circuit-config=circuit_config.json --recipe=recipe.json edges.h5
```
The final argument, edges.h5, may also be a directory of Parquet files. When running on a cluster with multiple nodes, care should be taken that every rank occupies a whole node; Spark will then spread out across the nodes.
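On a SLURM cluster, the one-rank-per-node layout can be sketched as below; `--ntasks-per-node=1` gives every rank a whole node, letting Spark spread its executors across each node. Node count and file names are placeholders:

```shell
# Hedged sketch: write a SLURM batch script that launches one rank per
# node (placeholder node count and file names; adjust to your setup).
cat > functionalizer.sbatch <<'EOF'
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

srun functionalizer --s2f \
    --circuit-config=circuit_config.json \
    --recipe=recipe.json \
    edges.h5
EOF
echo "wrote functionalizer.sbatch"
```

The script would then be submitted with `sbatch functionalizer.sbatch`.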
The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology.
Copyright (c) 2017-2024 Blue Brain Project/EPFL