Data Discovery AI

Environment variables

In the root directory of the project, create a .env file.

Open the .env file and add the following line to include your API key:

API_KEY=your_actual_api_key_here
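
For example, the file can be created from the project root in one step (the key value below is a placeholder; substitute your real key):

echo "API_KEY=your_actual_api_key_here" > .env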

Run the API server with Docker

Run ./startServer.sh to start the app. The script builds a Docker image and runs a container from it for you.

The server will be available at http://localhost:8000.
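
To check that the container is up, you can request the interactive API docs that FastAPI serves at /docs by default (assuming the default docs route has not been disabled in this app):

curl http://localhost:8000/docs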

Run the API server for development

Requirements

  • Conda (recommended for creating a virtual environment)

  1. Install Conda (if not already installed):

    Follow the instructions at Conda Installation.

  2. Create the Conda virtual environment:

    conda env create -f environment.yml

Dependencies management

Poetry is used for dependency management. The pyproject.toml file is the central configuration file: it defines the project and its dependencies.

To add or remove dependencies, update pyproject.toml using the following commands:

poetry add <pypi-dependency-name> # e.g. poetry add numpy
poetry remove <pypi-dependency-name> # e.g. poetry remove numpy

After manually modifying pyproject.toml, update the poetry.lock file with the poetry lock command. To update all dependencies, use the poetry update command.
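
For example, after editing pyproject.toml by hand:

poetry lock   # regenerate poetry.lock to match pyproject.toml
poetry update # update all dependencies within the declared version constraints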

Installation and Usage

  1. Activate Conda virtual environment:

    conda activate data-discovery-ai
  2. Install environment dependencies:

    # after cloning the repo with the git clone command
    cd data-discovery-ai
    poetry install
  3. Run the FastAPI server:

    poetry run uvicorn data_discovery_ai.server:app --reload --log-config=log_config.yaml
  4. Run the tests:

    poetry run pytest
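
Pytest's standard selection flags also work through Poetry if you only want to run a subset of the tests; for example (the keyword expression below is illustrative):

poetry run pytest -v -k "keyword"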

Code formatting

The command below is for manual checks; checks are also executed when you run git commit.

The configurations for pre-commit hooks are defined in .pre-commit-config.yaml.

pre-commit run --all-files
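
If the hooks are not yet running automatically on git commit, install them once with pre-commit's standard install command:

pre-commit install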