BANZAI Pipeline

This repo contains the data reduction package for Las Cumbres Observatory (LCO).

BANZAI stands for Beautiful Algorithms to Normalize Zillions of Astronomical Images.

See also https://banzai.readthedocs.io for more information.

Please cite the following DOI if you are using processed LCOGT data.

Zenodo DOI

We have recently implemented a neural network model to detect cosmic rays in ground-based images. For more information, please see our paper on arXiv. If possible, please also cite Xu et al., 2021, arXiv:2106.14922.


Installation

BANZAI can be installed using pip by running the following command from the top-level directory (the one containing setup.py).

Note that pip>=19.3.1 is required to build and install BANZAI.

pip install .

This will automatically install the dependencies from PyPI, so it is recommended to install BANZAI in a virtual environment.
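For example, a minimal sketch using Python's built-in venv module (the environment name banzai-env is arbitrary):

python -m venv banzai-env              # create an isolated environment
source banzai-env/bin/activate         # activate it (Linux/macOS)
pip install --upgrade "pip>=19.3.1"    # meet the pip requirement noted above
pip install .                          # install BANZAI and its dependencies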

Usage

BANZAI has a variety of console entry points:

  • banzai_reduce_individual_frame: Process a single frame
  • banzai_reduce_directory: Process all frames in a directory
  • banzai_make_master_calibrations: Make a master calibration frame by stacking previously processed individual calibration frames
  • banzai_e2e_stack_calibrations: Convenience script for stacking calibration frames in the end-to-end tests
  • banzai_automate_stack_calibrations: Start the scheduler that sets when to create master calibration frames
  • banzai_run_realtime_pipeline: Start the listener to detect and process incoming frames
  • banzai_mark_frame_as_good: Mark a calibration frame as good in the database
  • banzai_mark_frame_as_bad: Mark a calibration frame as bad in the database
  • banzai_update_db: Update the instrument table by querying the ConfigDB
  • banzai_run_end_to_end_tests: A wrapper to run the end-to-end tests
  • banzai_migrate_db: Migrate data from a database from before 0.16.0 to the current database format
  • banzai_add_instrument: Add an instrument to the database
  • banzai_add_site: Add a site to the database
  • banzai_add_bpm: Add a BPM to the database
  • banzai_create_db: Initialize a database to be used when running the pipeline

You can see more about the parameters each command takes by adding --help to any command of interest.
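For example, to see the options for reducing a single frame:

banzai_reduce_individual_frame --help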

BANZAI can be deployed in two ways: as an active pipeline that processes data as it arrives, or as a manual pipeline that is run from the command line.

The main requirement to run BANZAI is that a database has been set up. BANZAI is database-type agnostic because it uses SQLAlchemy. To create a new database for BANZAI, run:

from banzai.dbs import create_db
create_db('.', db_address='sqlite:///banzai.db')

This will create an SQLite database file called banzai.db in your current directory.
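Because db_address is a standard SQLAlchemy connection URL, other backends should only require a different address string. For example, a hypothetical (untested here) PostgreSQL setup might look like:

from banzai.dbs import create_db
# illustrative PostgreSQL URL -- substitute your own credentials and host
create_db('.', db_address='postgresql://user:password@localhost/banzai')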

If you are not running this at LCO, you will have to add the instrument of interest to your database by running banzai_add_instrument before you can process any data.

By default, BANZAI requires a bad pixel mask (BPM). You can create one that BANZAI can use with the tool here. If the bad pixel mask is in the current directory when you create the database, it will be added automatically. Otherwise, run:

from banzai.dbs import populate_calibration_table_with_bpms
populate_calibration_table_with_bpms('/directory/with/bad/pixel/mask', db_address='sqlite:///banzai.db')

Generally, you have to reduce individual bias frames first using the banzai_reduce_individual_frame command. If the processing went well, you can mark them as good in the database using banzai_mark_frame_as_good. Once you have individually processed bias frames, you can create a master calibration using banzai_make_master_calibrations. This master calibration will then be available for future reductions of other observation types. Next, similarly reduce individual dark frames and stack them to create a master dark frame. Then do the same for sky flats. At this point, you can process science images with the banzai_reduce_individual_frame command.
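A sketch of that bootstrap sequence follows; the ... placeholders stand in for each command's real arguments, which you should look up with --help:

# 1. reduce each raw bias frame individually
banzai_reduce_individual_frame ...
# 2. mark the well-processed frames as good in the database
banzai_mark_frame_as_good ...
# 3. stack the good frames into a master bias
banzai_make_master_calibrations ...
# 4. repeat steps 1-3 for darks, then for sky flats
# 5. reduce science frames against the new master calibrations
banzai_reduce_individual_frame ...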

To run the pipeline in its active mode, you need to set up a task queue and a filename queue. See the docker-compose.yml file for details on this setup.
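Once the queues are running, active mode combines two of the entry points listed above; this pairing is a sketch of the intended division of labor, not a full deployment recipe:

banzai_automate_stack_calibrations   # scheduler: decides when to build master calibrations
banzai_run_realtime_pipeline         # listener: detects and processes incoming frames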

Tests

Unit tests can be run using the tox test automation tool, which will automatically build and install the required dependencies in a virtual environment and then run the tests. The end-to-end tests require more setup, so to run only the unit tests locally, run:

tox -e test -- -m 'not e2e'

The -m flag is short for marker and selects tests by pytest marker. The following markers are defined if you only want to run a subset of the tests; see the example after this list:

  • e2e: End-to-end tests; skip these if you only want to run unit tests
  • master_bias: Only test making a master bias
  • master_dark: Only test making a master dark; assumes a master bias frame already exists
  • master_flat: Only test making a master flat; assumes master bias and dark frames already exist
  • science_files: Only test processing science data; assumes master bias, dark, and flat frames already exist
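For example, following the same invocation pattern as above, you could select only the master-bias subset (note that these markers select end-to-end tests, which assume the setup described below):

tox -e test -- -m master_bias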

The end-to-end tests run on Jenkins at LCO automatically for every pull request.

To run the end-to-end tests locally, the easiest setup uses docker-compose. From the code directory, run:

export DOCKER_IMG=banzai
docker build -t $DOCKER_IMG .
docker-compose up

After all of the containers are up, run

docker exec banzai-listener pytest --pyargs banzai.tests -m e2e
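The narrower markers from the list above can be substituted to exercise a single stage, e.g.:

docker exec banzai-listener pytest --pyargs banzai.tests -m master_bias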

License

This project is Copyright (c) Las Cumbres Observatory and licensed under the terms of GPLv3. See the LICENSE file for more information.

Support

Create an issue

Powered by Astropy