ChRIS_ultron_backEnd

The core backend service for the ChRIS distributed software platform, also known by the acronym CUBE. Internally the service is implemented as a Django/PostgreSQL project offering a collection+json REST API. Important ancillary components include the pfcon and pman microservices, which handle file transfer and remote process management respectively.

ChRIS development, testing and deployment

Abstract

ChRIS Ultron Back End (sometimes also ChRIS Underlying Back End), or simply CUBE, is the core of the ChRIS system. CUBE provides the main REST API to the ChRIS system and maintains an internal database of users, files, pipelines, and plugins. CUBE currently has two separate compute paradigms depending on deployment context: in development, all components of CUBE use Docker and Docker Swarm; in production, technologies such as OpenShift and Kubernetes are also supported.

Consult this page for instructions on starting CUBE in either development or production contexts. For documentation, overview, and background, please see the documentation.

Preconditions

Operating system support -- please read

Linux

Linux is the first-class host platform for all things CUBE related. Linux distributions used by core developers include Ubuntu, Arch, and Fedora. The development team is happy to help folks trying (or struggling) to run CUBE on almost any Linux distribution.

macOS

macOS is fully supported as a host platform for CUBE, but you must make some adjustments first. Most importantly, macOS ships with a deprecated version of the bash shell that will not work with our Makefile. If you want to host CUBE on macOS, you must first update bash to a current version. Full instructions are out of scope for this document, but we recommend Homebrew as your friend here.
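With Homebrew installed, getting a current bash is typically a one-liner (a sketch; adjust paths and shell configuration to your setup):

# install a modern bash alongside the system one
brew install bash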

Windows

In a word: don't (OK, that's technically two words). CUBE is meant to be deployed on Linux/*nix systems; Windows is neither officially supported nor recommended as the host environment. If you insist on trying on Windows, you can consult some unmaintained documentation on attempts to deploy CUBE using the Windows Subsystem for Linux (WSL) here. This will probably break. Note that currently no one on the core development team uses Windows in any meaningful capacity, so the interest and knowledge needed to answer questions about Windows support is low. Nonetheless, we would welcome any brave soul who has the time and inclination to fully investigate deploying CUBE on Windows.

Install the latest Docker and Docker Compose.

Currently tested platforms:

  • Docker 18.06.0+
  • Docker Compose 1.27.0+
  • Ubuntu 18.04+ and macOS 10.14+
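You can quickly confirm your installed versions against the list above:

docker --version
docker-compose --version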

On a Linux machine, make sure to add your user to the docker group.

Consult this page: https://docs.docker.com/engine/install/linux-postinstall/
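Per the linked post-install docs, this typically amounts to the following (log out and back in, or run newgrp docker, for the group change to take effect):

sudo usermod -aG docker $USER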

TL;DR

If you read nothing else on this page, and just want to get an instance of the ChRIS backend services up and running with no mess, no fuss:

The real TL;DR

The all-in-one copy/paste line to drop into your terminal (assuming, of course, you are in the repo directory and have met the preconditions):

docker swarm leave --force && docker swarm init --advertise-addr 127.0.0.1 &&  \
./unmake.sh && sudo rm -fr CHRIS_REMOTE_FS && rm -fr CHRIS_REMOTE_FS &&        \
./make.sh -U -I -i

This will start a bare-bones CUBE. This CUBE will NOT have any plugins installed. To install a set of plugins, do

./postscript.sh
Slightly longer but still short TL;DR

Start a local Docker Swarm cluster if not already started:

docker swarm init --advertise-addr 127.0.0.1

Get the source code from CUBE repo:

git clone https://github.com/FNNDSC/ChRIS_ultron_backend
cd ChRIS_ultron_backend

Run full CUBE instantiation with tests:

./unmake.sh ; sudo rm -fr CHRIS_REMOTE_FS; rm -fr CHRIS_REMOTE_FS; ./make.sh

Or skip unit and integration tests and the intro:

./unmake.sh ; sudo rm -fr CHRIS_REMOTE_FS; rm -fr CHRIS_REMOTE_FS; ./make.sh -U -I -s

Once the system is "up" you can add more compute plugins to the ecosystem:

./postscript.sh

The resulting CUBE instance uses the default Django development server and is therefore not suitable for production.

Production deployments

For convenience, a deploy.sh bash script is provided as part of the GitHub repo's source code. Internally the script uses the docker stack or Kustomize tools to deploy on a Swarm or Kubernetes cluster respectively.

Fetch the repo's source code:

git clone https://github.com/FNNDSC/ChRIS_ultron_backend
cd ChRIS_ultron_backend

Deploy on a single-machine Docker Swarm cluster:

  • Create the appropriate secrets subdirectory:

mkdir swarm/prod/secrets

  • Copy all the required secret configuration files into the secrets directory. Please take a look at this wiki page to learn more about these files.

  • Deploy the CUBE backend containers:

./deploy.sh up

  • Tear down and remove the CUBE backend containers:

cd ChRIS_ultron_backend
./deploy.sh down

Deploy on a Kubernetes cluster:

  • Create the appropriate secrets subdirectory:

mkdir kubernetes/prod/secrets

  • Copy all the required secret configuration files into the secrets directory. Please take a look at this wiki page to learn more about these files.

Single-machine deployment:

  • Deploy the CUBE backend containers:

./deploy.sh -O kubernetes up

  • Tear down and remove the CUBE backend containers:

cd ChRIS_ultron_backend
./deploy.sh -O kubernetes down

Multi-machine deployment (with NFS-based persistent storage):

  • Deploy the CUBE backend containers:

./deploy.sh -O kubernetes -T nfs -P <nfs_server_ip> -S <storeBase> -D <storageBase> up

  • Both storeBase and storageBase are explained in the header documentation of the deploy.sh script.

  • Tear down and remove the CUBE backend containers:

cd ChRIS_ultron_backend
./deploy.sh -O kubernetes -T nfs -P <nfs_server_ip> down

Development

Docker Swarm-based development environment:

Start a local Docker Swarm cluster if not already started:

docker swarm init --advertise-addr 127.0.0.1

Start CUBE from the repository source directory by running the make bash script:

git clone https://github.com/FNNDSC/ChRIS_ultron_backEnd.git
cd ChRIS_ultron_backEnd
./make.sh

All the steps performed by the above script are documented in the script itself. After running it, all the automated tests should have run successfully and a Django development server should be running in interactive mode in this terminal.

Later you can stop and remove CUBE services and storage space by running the following bash script from the repository source directory:

./unmake.sh

Then remove the local Docker Swarm cluster if desired:

docker swarm leave --force

Kubernetes-based development environment:

Install a single-node Kubernetes cluster. On macOS, Docker Desktop includes a standalone Kubernetes server and client; consult this page: https://docs.docker.com/desktop/kubernetes/. On Linux there is a simple MicroK8s installation; consult this page: https://microk8s.io.
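With MicroK8s, for example, the install itself is a single snap command (a sketch; assumes snap is available on your distribution):

sudo snap install microk8s --classic

Then create the required alias and export the kubeconfig: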

snap alias microk8s.kubectl kubectl
microk8s.kubectl config view --raw > $HOME/.kube/config

Start the Kubernetes cluster:

microk8s start

Start CUBE from the repository source directory by running the make bash script:

git clone https://github.com/FNNDSC/ChRIS_ultron_backEnd.git
cd ChRIS_ultron_backEnd
export HOSTIP=<IP address of this machine>
./make.sh -O kubernetes

Later you can stop and remove CUBE services and storage space by running the following bash script from the repository source directory:

./unmake.sh -O kubernetes

Stop the Kubernetes cluster if desired:

microk8s stop

Rerun automated tests after modifying source code

Open another terminal and run the Unit and Integration tests within the container running the Django server:

To run only the Unit tests:

cd ChRIS_ultron_backEnd
docker-compose -f docker-compose_dev.yml exec chris_dev python manage.py test --exclude-tag integration

To run only the Integration tests:

docker-compose -f docker-compose_dev.yml exec chris_dev python manage.py test --tag integration

To run all the tests:

docker-compose -f docker-compose_dev.yml exec chris_dev python manage.py test 

After running the Integration tests, the ./CHRIS_REMOTE_FS directory must be empty; otherwise some error has occurred and you should manually empty it.
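A quick way to verify (and, if needed, clean up) from the repository root:

# should print nothing if the tests exited cleanly
ls -A CHRIS_REMOTE_FS
# only if leftovers remain:
sudo rm -fr CHRIS_REMOTE_FS/*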

Check code coverage of the automated tests

Make sure the chris_backend/ dir is world-writable.
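One way to do that from the repository root (adjust permissions to your own policy):

chmod o+w chris_backend/

Then type: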

docker-compose -f docker-compose_dev.yml exec chris_dev coverage run --source=feeds,plugins,uploadedfiles,users manage.py test
docker-compose -f docker-compose_dev.yml exec chris_dev coverage report

Using HTTPie client to play with the REST API
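If you do not have HTTPie installed, it is available on PyPI:

pip install httpie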

A simple GET request to retrieve the user-specific list of feeds:

http -a cube:cube1234 http://localhost:8000/api/v1/

A simple POST request to run the plugin with id 1:

http -a cube:cube1234 POST http://localhost:8000/api/v1/plugins/1/instances/ Content-Type:application/vnd.collection+json Accept:application/vnd.collection+json template:='{"data":[{"name":"dir","value":"cube/"}]}'

Then keep making the following GET request until the "status" descriptor in the response becomes "finishedSuccessfully":

http -a cube:cube1234 http://localhost:8000/api/v1/plugins/instances/1/
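If you prefer to poll from the shell, a minimal sketch (assuming plugin instance id 1 and the credentials above) is:

# loop until the response reports a finishedSuccessfully status
until http -a cube:cube1234 http://localhost:8000/api/v1/plugins/instances/1/ | grep -q finishedSuccessfully; do
  sleep 5
done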

Using the swift client to list files in the users bucket
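The swift command-line client ships with the python-swiftclient package on PyPI:

pip install python-swiftclient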

swift -A http://127.0.0.1:8080/auth/v1.0 -U chris:chris1234 -K testing list users

Documentation

REST API reference

Available here.

Install Sphinx and the http extension (useful to document the REST API)

pip install Sphinx
pip install sphinxcontrib-httpdomain

Build the html documentation

cd docs/
make html
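Assuming the default Sphinx build directory, the generated pages land under docs/_build/html; open them in a browser, e.g.:

xdg-open docs/_build/html/index.html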

ChRIS REST API design

Available here.

ChRIS backend database design

Available here.

Wiki

Available here.

Learn More

If you are interested in contributing or joining us, check here.
