Acoular is a Python module for acoustic beamforming that is distributed under the new BSD license.
It is aimed at applications in acoustic testing. Multichannel data recorded by a microphone array can be processed and analyzed in order to generate mappings of sound source distributions. The maps (acoustic photographs) can then be used to locate sources of interest and to characterize them using their spectra.
- frequency domain beamforming algorithms: delay & sum, Capon (adaptive), MUSIC, functional beamforming, eigenvalue beamforming
- frequency domain deconvolution algorithms: DAMAS, DAMAS+, Clean, CleanSC, orthogonal deconvolution
- frequency domain inverse methods: CMF (covariance matrix fitting), general inverse beamforming, SODIX
- time domain methods: delay & sum beamforming, CleanT deconvolution
- time domain methods for moving sources with arbitrary trajectories (linear, circular, or arbitrarily curved in 3D)
- frequency domain methods for rotating sources via virtual array rotation, applicable to arbitrary arrays and with different interpolation techniques
- 1D, 2D and 3D mapping grids for all methods
- gridless option for orthogonal deconvolution
- four different built-in steering vector formulations
- arbitrary stationary background flow can be considered for all methods
- efficient cross spectral matrix computation
- flexible, modular time domain processing: n-th octave band filters; fast, slow, and impulse time weighting; A-, C-, and Z-weighting; filter banks; zero delay filters
- time domain simulation of array microphone signals from fixed and arbitrarily moving sources in arbitrary flow
- fully object-oriented interface
- lazy evaluation: processing blocks can be set up at any time, but (expensive) computations are only performed when their results are actually needed
- intelligent and transparent caching: computed results are automatically saved and loaded on the next run to avoid unnecessary re-computation (see the short sketch after this list)
- parallel (multithreaded) implementation with Numba for most algorithms
- easily extendable with new algorithms
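As a minimal sketch of the caching control mentioned above (attribute names as in recent Acoular versions and therefore possibly subject to change), the module-level configuration object lets you choose where cached results are stored or switch caching off:

import acoular

# directory where cached results (HDF5 files) are stored
acoular.config.cache_dir = './cache'
# disable global caching, e.g. for one-off computations
acoular.config.global_caching = 'none'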
Acoular is licensed under the BSD 3-clause license. See the LICENSE file.
If you use Acoular for academic work, please consider citing both our publication:
Sarradj, E., & Herold, G. (2017).
A Python framework for microphone array data processing.
Applied Acoustics, 116, 50–58.
https://doi.org/10.1016/j.apacoust.2016.09.015
and our software:
Sarradj, E., Herold, G., Kujawski, A., Jekosch, S., Pelling, A. J. R., Czuchaj, M., Gensch, T., & Oertwig, S.
Acoular – Acoustic testing and source mapping software.
Zenodo. https://zenodo.org/doi/10.5281/zenodo.3690794
Acoular runs under Linux, Windows, and macOS and requires the NumPy, SciPy, Traits, scikit-learn, PyTables, and Numba packages. Matplotlib is needed for some of the examples.
If you want to use input from a sound card, you will also need to install the sounddevice package. Some solvers for the CMF method need PyLops.
Acoular can be installed via conda (conda is also part of the Anaconda Python distribution). It is recommended to install into a dedicated conda environment. After activating this environment, run
conda install -c acoular acoular
This will install Acoular into the active conda environment and make the Acoular library available from Python. In addition, all dependencies (the other packages mentioned above) will be installed if they are not already present on your system.
A second option is to install Acoular via pip. It is recommended to use a dedicated virtual environment and then run
pip install acoular
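A minimal sketch of this workflow, assuming a POSIX shell and an arbitrarily named environment:

python -m venv acoular-env
source acoular-env/bin/activate   # on Windows: acoular-env\Scripts\activate
pip install acoular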
For more detailed installation instructions, see the documentation.
Documentation, including a getting started section and examples, is available online.
The Acoular blog contains some tutorials.
If you discover problems with the Acoular software, please report them using the issue tracker on GitHub. Please use the Acoular discussions forum for practical questions, discussions, and demos.
We are always happy to welcome new contributors to the project. If you are interested in contributing, have a look at the CONTRIBUTING.md file.
The following example reads data from 64 microphone channels and computes a beamforming map for the 8 kHz third-octave band:
from os import path
import acoular
from matplotlib.pylab import imshow, colorbar, show
# this file contains the microphone coordinates
micgeofile = path.join(path.split(acoular.__file__)[0],'xml','array_64.xml')
# set up object managing the microphone coordinates
mg = acoular.MicGeom( file=micgeofile )
# set up object managing the microphone array data (usually from measurement)
ts = acoular.TimeSamples( file='three_sources.h5' )
# set up object managing the cross spectral matrix computation
ps = acoular.PowerSpectra( source=ts, block_size=128, window='Hanning' )
# set up object managing the mapping grid
rg = acoular.RectGrid( x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2, z=0.3,
                       increment=0.01 )
# set up the steering vector; this implicitly also contains the standard
# quiescent environment with the standard speed of sound
st = acoular.SteeringVector( grid = rg, mics=mg )
# set up the object managing the delay & sum beamformer
bb = acoular.BeamformerBase( freq_data=ps, steer=st )
# request the result in the 8 kHz third-octave band from the appropriate FFT lines
# this starts the actual computation (data intake, FFT, Welch CSM, beamforming)
pm = bb.synthetic( 8000, 3 )
# compute the sound pressure level
Lm = acoular.L_p( pm )
# plot the map
imshow( Lm.T, origin='lower', vmin=Lm.max()-10, extent=rg.extend(),
        interpolation='bicubic' )
colorbar()
show()
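The other frequency domain methods listed above follow the same pattern. As a brief, hedged sketch that reuses the ps and st objects from the example and assumes the BeamformerCleansc class of recent Acoular versions, a CLEAN-SC deconvolution map for the same band could be computed with:

# CLEAN-SC deconvolution, reusing the cross spectral matrix and steering vector
bc = acoular.BeamformerCleansc( freq_data=ps, steer=st )
Lc = acoular.L_p( bc.synthetic( 8000, 3 ) )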