82 changes: 82 additions & 0 deletions .github/workflows/build_docker.yaml
@@ -0,0 +1,82 @@
name: Build Docker Images
on:
  workflow_dispatch:
  push:
    branches:
      - master
      - release

env:
  REGISTRY: ghcr.io
  PLATFORMS: linux/amd64,linux/arm64
jobs:
  build_and_publish_docker:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      # Clone the repository
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.ref }}
          fetch-depth: 1

      # Configure Docker Buildx for cross-platform builds
      - name: Setup QEMU
        uses: docker/setup-qemu-action@v2
      - name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v2

      # Log into ghcr
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # ml_api image
      - name: Extract metadata for ml_api
        uses: docker/metadata-action@v2
        id: ml_meta
        with:
          images: ${{ env.REGISTRY }}/${{ github.repository_owner }}/ml_api
          tags: |
            type=ref,event=branch
            type=raw,value=latest

      - name: Build and push ml_api Docker image
        uses: docker/build-push-action@v5
        with:
          context: ml_api
          push: true
          tags: ${{ steps.ml_meta.outputs.tags }}
          labels: ${{ steps.ml_meta.outputs.labels }}
          platforms: ${{ env.PLATFORMS }}
          file: ml_api/Dockerfile
          cache-from: type=gha
          cache-to: type=gha,mode=max

      # obico_web image
      - name: Extract metadata for obico_web
        uses: docker/metadata-action@v2
        id: web_meta
        with:
          images: ${{ env.REGISTRY }}/${{ github.repository_owner }}/obico_web
          tags: |
            type=ref,event=branch
            type=raw,value=latest

      - name: Build and push obico_web Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.web_meta.outputs.tags }}
          labels: ${{ steps.web_meta.outputs.labels }}
          platforms: ${{ env.PLATFORMS }}
          file: backend/Dockerfile
          cache-from: type=gha
          cache-to: type=gha,mode=max
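The two `tags` rules in the metadata steps expand to a branch-named tag plus `latest`. A rough sketch of that expansion (the registry and owner values here are illustrative, and this only models these two specific rules, not metadata-action's full behavior):

```python
def image_tags(registry: str, owner: str, image: str, branch: str) -> list[str]:
    """Approximate the two rules used above:
    type=ref,event=branch -> '<branch>' and type=raw,value=latest -> 'latest'."""
    # OCI image references must be lowercase, which is why a mixed-case
    # repository owner still yields a lowercase image name.
    base = f"{registry}/{owner}/{image}".lower()
    return [f"{base}:{branch}", f"{base}:latest"]

print(image_tags("ghcr.io", "TheSpaghettiDetective", "ml_api", "master"))
```

So a push to `master` publishes both `:master` and `:latest`, while a push to `release` publishes `:release` and moves `:latest` as well.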
10 changes: 7 additions & 3 deletions backend/Dockerfile
@@ -3,7 +3,11 @@ FROM thespaghettidetective/web:base-1.18
 WORKDIR /app
 EXPOSE 3334
 
-RUN pip install -U pip pipenv==2022.12.19
+RUN pip install --no-cache-dir -U pip pipenv==2022.12.19
 
-ADD ./ /app
-RUN pip install -r requirements.txt
+COPY backend /app
+COPY frontend /frontend
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Keep recordings in mounted config directory
+CMD ["/bin/sh", "/app/run.sh"]
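Note that the workflow builds this Dockerfile with the repository root as the build context (`context: .`, `file: backend/Dockerfile`), which is what makes the `COPY backend /app` and `COPY frontend /frontend` paths resolve. A local build would therefore look something like this (the tag name is illustrative):

```shell
# Run from the repository root so both backend/ and frontend/ are in the context
docker build -f backend/Dockerfile -t obico_web:dev .
```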
17 changes: 17 additions & 0 deletions backend/run.sh
@@ -0,0 +1,17 @@
#!/bin/sh
set -e

OBICO_CONTAINER=${OBICO_CONTAINER:-$1}
if [ "${OBICO_CONTAINER}" = "tasks" ]; then
  celery -A config worker --beat -l info -c 2 -Q realtime,celery
elif [ "${OBICO_CONTAINER}" = "web" ]; then
  python manage.py migrate
  python manage.py collectstatic -v 2 --noinput

  # Implementation from https://github.com/imagegenius/docker-obico
  [ -d /data/media ] || mkdir /data/media
  [ -d /app/static_build/media ] && rm -r /app/static_build/media
> **Contributor:** Remove everything here every time the server starts?

> **Author (@d-mcknight, Jul 16, 2025):** Yes, I updated the compose file to mount the `/data` directory so that a user can specify where on their host system to save recordings, and to keep user data separated from code. The line after this creates a symlink to `/data`, which is where I also put the sqlite db by default.

> **Contributor:** Not sure if I'm following. My question is whether everything in `/app/static_build/media` (which I believe will include timelapses etc.) will be erased every time `run.sh` runs.

> **Author (@d-mcknight):** Currently, the static_build files are built on container start; I just kept the existing logic to minimize changes. With this PR, I moved the timelapses to `/data/media`, which is linked to `/app/static_build/media` on line 14; the directory is removed on line 13 so that the link can be created. I made this change so that `/app` doesn't need to be mounted on the host FS. The files will not be erased, because they are mounted into the container at `/data/media`.
>
> Relatedly, I might be able to simplify container startup and drop some of this logic if `python manage.py collectstatic` can be run before `python manage.py migrate`, but I don't know enough about what those processes do to say whether that would work.

  ln -s /data/media /app/static_build/media

  daphne -b 0.0.0.0 -p 3334 config.routing:application
fi
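The mkdir/rm/ln sequence in `run.sh` can be exercised safely outside a container. This sketch substitutes throwaway temp directories for `/data` and `/app` to show that the persistent media directory survives and ends up linked into the static build tree:

```shell
# Reproduce run.sh's media relink logic against temporary paths
DATA="$(mktemp -d)"                          # stands in for the mounted /data volume
APP="$(mktemp -d)"                           # stands in for /app inside the container
mkdir -p "$APP/static_build/media"           # pre-existing build output to be replaced
[ -d "$DATA/media" ] || mkdir "$DATA/media"  # ensure the persistent dir exists
[ -d "$APP/static_build/media" ] && rm -r "$APP/static_build/media"
ln -s "$DATA/media" "$APP/static_build/media"
```

Only the throwaway `static_build/media` directory is removed; anything written through the symlink lands in `$DATA/media` and persists.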
22 changes: 10 additions & 12 deletions docker-compose.yml
@@ -2,12 +2,8 @@ version: '2.4'

 x-web-defaults: &web-defaults
   restart: unless-stopped
-  build:
-    context: backend
-    dockerfile: 'Dockerfile'
   volumes:
-    - ./backend:/app
-    - ./frontend:/frontend
+    - ./data:/data
   depends_on:
     - redis
   environment:
@@ -25,7 +21,7 @@ x-web-defaults: &web-defaults
     CSRF_TRUSTED_ORIGINS: '${CSRF_TRUSTED_ORIGINS-}'
     SOCIAL_LOGIN: '${SOCIAL_LOGIN-False}'
     REDIS_URL: '${REDIS_URL-redis://redis:6379}'
-    DATABASE_URL: '${DATABASE_URL-sqlite:////app/db.sqlite3}'
+    DATABASE_URL: '${DATABASE_URL-sqlite:////data/db.sqlite3}'
     INTERNAL_MEDIA_HOST: '${INTERNAL_MEDIA_HOST-http://web:3334}'
     ML_API_HOST: '${ML_API_HOST-http://ml_api:3333}'
     ACCOUNT_ALLOW_SIGN_UP: '${ACCOUNT_ALLOW_SIGN_UP-False}'
@@ -47,14 +43,12 @@ x-web-defaults: &web-defaults
   ml_api:
     hostname: ml_api
     restart: unless-stopped
-    build:
-      context: ml_api
+    # TODO: Update tag to `release` when there is a release-tagged image
+    image: ghcr.io/thespaghettidetective/ml_api:latest
     environment:
       DEBUG: 'True'
       FLASK_APP: 'server.py'
-      # ML_API_TOKEN:
     tty: true
-    command: bash -c "gunicorn --bind 0.0.0.0:3333 --workers 1 wsgi"
     healthcheck:
       test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider --no-check-certificate http://ml_api:3333/hc/"]
       start_period: 30s
@@ -65,11 +59,13 @@ services:
   web:
     <<: *web-defaults
     hostname: web
+    # TODO: Update tag to `release` when there is a release-tagged image
+    image: ghcr.io/thespaghettidetective/obico_web:latest
     ports:
       - "3334:3334"
     depends_on:
       - ml_api
-    command: sh -c 'python manage.py migrate && python manage.py collectstatic -v 2 --noinput && daphne -b 0.0.0.0 -p 3334 config.routing:application'
+    command: [ "/bin/sh", "/app/run.sh", "web" ]
     healthcheck:
       test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider --no-check-certificate http://web:3334/hc/"]
       start_period: 30s
@@ -80,7 +76,9 @@ services:
   tasks:
     <<: *web-defaults
     hostname: tasks
-    command: sh -c "celery -A config worker --beat -l info -c 2 -Q realtime,celery"
+    # TODO: Update tag to `release` when there is a release-tagged image
+    image: ghcr.io/thespaghettidetective/obico_web:latest
+    command: [ "/bin/sh", "/app/run.sh", "tasks" ]
     healthcheck:
       test: ["CMD-SHELL", "celery -A config inspect ping"]
       start_period: 15s
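With these compose changes the services pull prebuilt `ghcr.io` images instead of building locally. A contributor who still wants to build from source could restore the old behavior with an override file rather than editing `docker-compose.yml`; this is a sketch following standard Compose conventions (the override filename and its automatic merging are Compose features, not something defined in this PR):

```yaml
# docker-compose.override.yml -- merged automatically by `docker compose up`
version: '2.4'
services:
  web:
    build:
      context: .
      dockerfile: backend/Dockerfile
  ml_api:
    build:
      context: ml_api
```

When present, the `build:` keys take effect alongside the `image:` tags, so `docker compose build` produces local images under the same names.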
8 changes: 5 additions & 3 deletions ml_api/Dockerfile
@@ -2,11 +2,13 @@ FROM thespaghettidetective/ml_api_base:1.4
 WORKDIR /app
 EXPOSE 3333
 
-ADD . /app
-RUN pip install --upgrade pip
-RUN pip install -r requirements.txt
+COPY . /app
+RUN pip install --no-cache-dir --upgrade pip
+RUN pip install --no-cache-dir -r requirements.txt
 
 RUN echo 'Downloading the latest failure detection AI model in Darknet format...'
 RUN wget -O model/model-weights.darknet $(cat model/model-weights.darknet.url | tr -d '\r')
+RUN echo 'Downloading the latest failure detection AI model in ONNX format...'
+RUN wget -O model/model-weights.onnx $(cat model/model-weights.onnx.url | tr -d '\r')
 
 CMD [ "gunicorn", "--bind", "0.0.0.0:3333", "--workers", "1", "wsgi" ]
12 changes: 3 additions & 9 deletions website/docs/server-guides/platform-specific/unraid_guide.md
@@ -40,12 +40,12 @@ git clone -b release https://github.com/TheSpaghettiDetective/obico-server.git
 cd obico-server && docker-compose up -d
 ```
 
-This will install obico-server to your Unraid server! To update obico-server, open up the terminal, change directory to the install directory, and run docker compose again.
+This will install obico-server to your Unraid server! To update obico-server, open up the terminal, change directory to the install directory, and run `docker compose pull`.
 
 ```Bash
 cd /mnt/user/appdata/obico-server # or where you install obico-server to
-git pull
-docker-compose up -d --force-recreate --build
+docker compose pull
+docker compose up -d --force-recreate --build
 ```

## Configuring obico-server {#configuring-obico}
@@ -69,12 +69,6 @@ [email protected]
...
```

-Rebuild the container (Note - if you are going to limit the CPU usage you can also change that now before rebuilding the container, see the below section) -
-
-```bash
-docker-compose up -d --force-recreate --build
-```
-
## Issues with the Installation {#issues-with-the-installation}

Unlike most containers that you install to Unraid, containers installed with Docker-Compose are limited in what you can do with them through the GUI. You cannot update them, change their logo, description, or do anything except for stop and restart them through the GUI. When you update the containers, you must remove the old and outdated ones manually from the command line using `docker image rm`.
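The manual cleanup mentioned above might look like the following (the image ID is a placeholder; `docker image prune` removes only dangling, i.e. untagged, images):

```shell
docker image ls                  # list images and note outdated IDs
docker image rm <old-image-id>   # remove one specific outdated image
docker image prune -f            # or remove all dangling images at once
```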