117 changes: 63 additions & 54 deletions .devcontainer/Dockerfile
@@ -1,55 +1,64 @@
# Use an official Python 3.11 runtime as a base image
FROM python:3.11-slim
# -------------------------------------------------------------
# Stage 1: Base Python environment with build tools
# -------------------------------------------------------------
FROM python:3.11-slim

# Install system dependencies, including build tools, git, cmake, clang, libc++-dev, libc++abi-dev, libomp-dev, ninja-build, Python development headers, OpenBLAS, and pkg-config
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
git \
cmake \
clang \
libc++-dev \
libc++abi-dev \
libomp-dev \
ninja-build \
python3-dev \
libopenblas-dev \
pkg-config \
&& rm -rf /var/lib/apt/lists/*

# Upgrade pip, setuptools, and wheel, then install Poetry 2.0.1
RUN pip install --upgrade pip setuptools wheel && \
pip install poetry==2.0.1

# Set the working directory in the container
WORKDIR /app

# Copy dependency files first to leverage Docker cache
COPY pyproject.toml poetry.lock* /app/

# Install dependencies using Poetry without installing the root package
RUN poetry config virtualenvs.create false && \
poetry install --no-root --no-interaction --no-ansi

# (Optional) Copy and install additional dependencies from requirements_poetry.txt if present
COPY requirements_poetry.txt /app/
RUN if [ -f requirements_poetry.txt ]; then pip install --no-cache-dir -r requirements_poetry.txt; fi

# # Set CMake arguments for OpenBLAS support
# ENV CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"

# # Install llama_cpp_python with verbose output
# RUN pip install --no-cache-dir --verbose llama_cpp_python==0.3.7

# RUN CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

RUN CMAKE_ARGS="-DGGML_NATIVE=OFF -DGGML_CPU_ARM_ARCH=armv8-a" pip install llama-cpp-python


# Copy the rest of the project files
COPY . /app

# Expose the port if needed (or you can omit if not running the server automatically)
EXPOSE 8000

# Instead of running the app automatically, start a shell for interactive work
CMD [ "bash" ]
# System dependencies for building and for Ollama
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential git curl wget cmake clang libc++-dev libc++abi-dev \
libomp-dev ninja-build python3-dev libopenblas-dev pkg-config ca-certificates \
&& rm -rf /var/lib/apt/lists/*

# -------------------------------------------------------------
# Install Poetry and project dependencies
# -------------------------------------------------------------
RUN pip install --upgrade pip setuptools wheel && pip install poetry==2.0.1
WORKDIR /app

# Copy Poetry files
COPY pyproject.toml poetry.lock* /app/
RUN poetry config virtualenvs.create false && poetry install --no-root --no-interaction --no-ansi

# Optional extra dependencies
COPY requirements_poetry.txt /app/
RUN if [ -f requirements_poetry.txt ]; then pip install --no-cache-dir -r requirements_poetry.txt; fi

# -------------------------------------------------------------
# Install Ollama
# -------------------------------------------------------------
# Ollama official Linux install script:
RUN curl -fsSL https://ollama.com/install.sh | sh

# Add Ollama binary to PATH (just to be sure)
ENV PATH="/usr/local/bin:${PATH}"

# -------------------------------------------------------------
# (Optional) Install llama-cpp-python if you still want local GGUF support
# Comment out if you’ll use Ollama exclusively
# RUN CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python==0.3.7

# -------------------------------------------------------------
# Copy the rest of your app
# -------------------------------------------------------------
COPY . /app

# Environment variables
ENV OLLAMA_BASE_URL="http://localhost:11434"
ENV HF_HOME=/app/.cache/hf
ENV TRANSFORMERS_CACHE=/app/.cache/hf
ENV SENTENCE_TRANSFORMERS_HOME=/app/.cache/st

# Expose the FastAPI port
EXPOSE 8000

# -------------------------------------------------------------
# Entrypoint: start Ollama service + your API
# -------------------------------------------------------------
# Ollama runs as a background service; we then start FastAPI
CMD bash -c "\
echo 'Starting Ollama service...' && \
ollama serve & \
sleep 3 && \
echo 'Starting FastAPI app...' && \
uvicorn main:app --host 0.0.0.0 --port 8000 \
"
12 changes: 12 additions & 0 deletions README.md
@@ -261,6 +261,18 @@ The project includes several GPU optimizations:

---

## Recent Additions and Defaults (skill + backend)

- **Perception websocket**: Robot streams frames to `/ws/perception` (binary prefix `0x01` video JPEG, `0x02` audio). Text messages carry `hello`, `turn`, `name_update`. Server soft-fails if face/voice libs are missing; user IDs are created on-demand for stats.
- **Trivia flow**: Kotlin skill calls `/quiz/question`, `/trivia/turn` (LLM phrasing/feedback), and `/memory/trivia` (stats). Local cache + backend persistence (`trivia_stats` table).
- **Language handling**: Heuristic EN/NO detector client-side; explicit language pinning via `LanguageManager` with Polly voices (`Kendra-Neural` EN, `Ida-Neural` NO). Backend language hinting in prompts; placeholder server-side lang detect.
- **URLs/IPs**: Default `BACKEND_URL` in the skill points to laptop IP (override via `BACKEND_URL` env). Robot IP tracked in params. Perception WS URL derives from `BACKEND_URL` (`ws://<host>:8000/ws/perception`).
- **RAG**: Lightweight BM25 (no vector DB) over `DOCUMENTS_PATH` with preference for `qa_pairs.json` when present; falls back to PDFs/txt. Chunk size 800 / overlap 150.
- **LLM defaults**: Backend uses Ollama (`llama3.2:latest`) by default; HF/LlamaCpp paths are guarded behind torch/transformers availability. System prompt in `config/settings.py` defines the Kaia persona and strict language policy.
- **Storage/layout**: Backend defaults to local `.cache` for models/caches/docs/vector store/DB (SQLite `furhat_memory.db`). Tables: `users`, `conversations`, `turns`, `trivia_stats`.
- **Ingestion helper**: `ingestion/web_ingest.py` fetches PDFs via DuckDuckGo HTML + regex (best-effort), storing into a folder you can set as `DOCUMENTS_PATH`.
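The binary prefix convention for the perception websocket above can be sketched as follows. This is a minimal illustration, not the project's actual implementation; only the `0x01`/`0x02` prefixes come from the protocol description, and the helper names are hypothetical:

```python
# Binary frame prefixes for /ws/perception (from the protocol notes above):
# 0x01 = video JPEG frame, 0x02 = audio chunk. Helper names are illustrative.
VIDEO_PREFIX = b"\x01"
AUDIO_PREFIX = b"\x02"

def frame_video(jpeg: bytes) -> bytes:
    """Prepend the video prefix so the server can route the frame."""
    return VIDEO_PREFIX + jpeg

def frame_audio(audio: bytes) -> bytes:
    """Prepend the audio prefix for microphone chunks."""
    return AUDIO_PREFIX + audio

def parse_frame(payload: bytes) -> tuple[str, bytes]:
    """Server-side dispatch: split the one-byte prefix from the body."""
    kind = {0x01: "video", 0x02: "audio"}.get(payload[0], "unknown")
    return kind, payload[1:]
```

Text messages (`hello`, `turn`, `name_update`) travel on the same socket as plain (non-binary) frames, so the one-byte prefix is only needed on the binary path.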
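The RAG chunking defaults mentioned above (chunk size 800, overlap 150) can be illustrated with a minimal character-based splitter. This is a sketch of the stated parameters, not the repository's actual chunker:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 150) -> list[str]:
    """Split text into fixed-size chunks where each chunk shares
    `overlap` characters with the previous one (defaults 800/150)."""
    step = size - overlap  # advance 650 chars per chunk
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

With these defaults a 1000-character document yields two chunks, `text[0:800]` and `text[650:1000]`, sharing exactly 150 characters; BM25 then scores each chunk independently against the query.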


## Requirements & Installation

This project uses a hybrid dependency management approach: some dependencies are installed via pip into your environment, and others are managed by Poetry (tracked in the `poetry.lock` file).
10 changes: 6 additions & 4 deletions furhat_skills/Conversation/build.gradle
@@ -1,6 +1,6 @@
plugins {
id "org.jetbrains.kotlin.jvm" version "1.8.21"
id 'com.github.johnrengelman.shadow' version '2.0.4'
id "org.jetbrains.kotlin.jvm" version "1.9.22"
id "com.github.johnrengelman.shadow" version "8.1.1"
}

apply plugin: 'java'
@@ -48,6 +48,8 @@ dependencies {
implementation 'com.furhatrobotics.assets:StandardLibraryCollection:1.2.0'
// Additional dependencies for HTTP calls, JSON processing, and coroutines
implementation "com.squareup.okhttp3:okhttp:4.10.0"
implementation "com.fasterxml.jackson.core:jackson-annotations:2.15.2"
implementation "com.fasterxml.jackson.module:jackson-module-kotlin:2.15.2"
implementation "org.json:json:20210307"
implementation "org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4"
}
@@ -74,13 +76,13 @@ shadowJar {
properties.load(project.file('skill.properties').newDataInputStream())
def version = properties.getProperty('version')
def name = properties.getProperty('name')
archiveName = "${name}_${version}.skill"
archiveFileName.set("${name}_${version}.skill")
archiveExtension.set("skill")

manifest {
exclude '**/Log4j2Plugins.dat'
exclude '**/node_modules'
}
from "skill.properties"
from "assets"
extension 'skill'
}
Binary file modified furhat_skills/Conversation/gradle/wrapper/gradle-wrapper.jar
@@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.5-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-6.9.4-bin.zip