14 changes: 14 additions & 0 deletions .github/workflows/ci.yml
@@ -69,3 +69,17 @@ jobs:
      - run: pip install -e ".[dev]"
      - run: pip install mypy
      - run: mypy src/

  wheel-smoke:
    name: Wheel Smoke
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          lfs: true
      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: "3.12"
          cache: pip
      - run: PYTHON_BIN=python bash scripts/smoke_wheel_install.sh
17 changes: 16 additions & 1 deletion .github/workflows/release.yml
@@ -28,11 +28,26 @@ jobs:
      - run: pip install -e ".[dev,mcp]"
      - run: pytest tests/ -v -m "not integration"

  wheel-smoke:
    name: Wheel install smoke
    runs-on: ubuntu-latest
    timeout-minutes: 15
    needs: [test]
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          lfs: true
      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: "3.12"
          cache: pip
      - run: PYTHON_BIN=python bash scripts/smoke_wheel_install.sh

  publish:
    name: Publish to PyPI
    runs-on: ubuntu-latest
    timeout-minutes: 20
-    needs: [test]
+    needs: [test, wheel-smoke]
    environment: pypi
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
10 changes: 6 additions & 4 deletions .github/workflows/security.yml
@@ -62,7 +62,9 @@ jobs:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 0
-      - uses: gitleaks/gitleaks-action@v2
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        continue-on-error: false
+      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
+        with:
+          python-version: "3.12"
+          cache: pip
+      - run: pip install "gitleaks==8.24.2"
+      - run: gitleaks git --source . --redact --no-banner --exit-code 1
31 changes: 21 additions & 10 deletions README.md
@@ -1,10 +1,10 @@
# goop-face

-Privacy-preserving facial protection -- detect faces locally, protect them from recognition systems using neural adversarial perturbations.
+Facial protection tooling with local detection, on-device protection, and optional server-backed protection.

## Features

-- **Local face detection** using InsightFace (buffalo_sc model, never leaves your machine)
+- **Local face detection** using InsightFace (buffalo_sc model)
- **Local CPU protection** via ONNX perturbation generator (~30-60ms inference, no GPU required)
- **GPU backend protection** via goop-face-engine (diffusion-based, higher quality, multi-model evasion scoring)
- **MCP server** for AI agent integration (Claude Code, Claude Desktop, etc.)
@@ -24,13 +24,17 @@ For MCP server support (adds the `goop-face-mcp` entry point):
pip install "goop-face[mcp]"
```

If you install the base package only, `goop-face-mcp` exits with a short
install hint instead of a Python traceback. The MCP runtime is optional by
design.

For development:

```bash
pip install "goop-face[dev,mcp]"
```

-**Note:** On first run, InsightFace will download the `buffalo_sc` detection model (~300MB). This is a one-time download.
+**Note:** On first run, InsightFace will download the `buffalo_sc` detection model (~300MB). That one-time model download requires network access.

## Quick Start

@@ -135,6 +139,9 @@ for face in faces:

goop-face exposes tools via [MCP (Model Context Protocol)](https://modelcontextprotocol.io/) for AI agent integration. The `goop-face-mcp` entry point runs a FastMCP server over stdio transport.

The MCP server only exposes stdio transport. Keep it behind the MCP host
process rather than exposing it directly on the network.

### Claude Code

Add to your project (creates `.mcp.json` in the project root):
@@ -150,6 +157,11 @@ Add to your project (creates `.mcp.json` in the project root):
}
```

For production backends, prefer an `https://...` URL or a loopback endpoint
that you tunnel separately. `GOOP_FACE_API_KEY` is only forwarded to the
configured backend as an `Authorization: Bearer ...` header and is not used
during local-only operation.

Or add globally to `~/.claude/settings.local.json` with backend configuration:

```json
@@ -200,16 +212,15 @@ Use the same configuration in Claude Desktop's MCP settings.

### Privacy Model

-- Face detection **always** runs locally -- original images never leave your machine.
-- **Local tier**: Everything stays on-device. The ONNX generator runs via CPU, no network calls.
-- **Server tier**: Only the detected 112x112 face crop is sent to the GPU backend, not the full image.
+- `detect_faces`: Runs locally in this process. The detector itself does not make network requests after its model is installed.
+- **Local tier**: The full image is read locally, faces are detected locally, and the bundled ONNX generator runs on aligned face crops. No inference-time network calls are made.
+- **Server tier**: The current Python client and MCP server upload the full input image to `goop-face-engine` over HTTP. The backend returns the protected or analyzed result.
+- **Auto tier**: Chooses `server` when `GOOP_FACE_ENGINE_URL` is configured; otherwise it falls back to `local`.
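
The auto-tier rule is small enough to sketch; `choose_tier` is a hypothetical helper name, not the actual client API:

```python
import os

def choose_tier(requested: str = "auto") -> str:
    """Resolve 'auto' to 'server' only when a backend URL is configured."""
    if requested != "auto":
        return requested
    return "server" if os.environ.get("GOOP_FACE_ENGINE_URL") else "local"
```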

### Processing Pipeline

-1. **Detect** -- InsightFace (buffalo_sc) finds faces and produces bounding boxes
-2. **Align** -- Each face is cropped and resized to 112x112
-3. **Perturb** -- The ONNX generator (local) or diffusion model (server) produces adversarial perturbations
-4. **Paste back** -- Protected crops are resized and blended back into the original image with soft Gaussian-blurred edges
+1. **Local protect path** -- Detect with InsightFace, align to 112x112, run the ONNX generator, then paste protected crops back into the original image.
+2. **Server protect/analyze path** -- Base64-encode the full input image, send it to the configured backend, and write the backend response to disk.
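
A minimal numpy sketch of the paste-back blending described above, using a linear feather ramp in place of the Gaussian-blurred edge (function name and parameters are illustrative, not the actual pipeline code):

```python
import numpy as np

def paste_back(image: np.ndarray, crop: np.ndarray, x: int, y: int,
               feather: int = 4) -> np.ndarray:
    """Blend a protected crop into `image` at (x, y) with soft edges."""
    h, w = crop.shape[:2]
    # Distance of each crop pixel from its nearest crop border.
    dist_y = np.minimum(np.arange(h), np.arange(h)[::-1])
    dist_x = np.minimum(np.arange(w), np.arange(w)[::-1])
    # Weight ramps 0 -> 1 over `feather` pixels, so crop edges blend smoothly.
    mask = np.clip(np.minimum.outer(dist_y, dist_x) / feather, 0.0, 1.0)
    mask = mask[..., None]  # broadcast over color channels
    out = image.astype(np.float64)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask * crop + (1.0 - mask) * region
    return out.astype(image.dtype)
```

The real pipeline blurs the mask with a Gaussian kernel; the linear ramp keeps this example dependency-free.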

## Configuration

7 changes: 7 additions & 0 deletions SECURITY.md
@@ -22,6 +22,13 @@ Email: kobepaw@proton.me
- Local ONNX inference pipeline
- Face detection integration

## Privacy and Data Flow Notes

- `detect_faces` and the local protection path run in-process on the local machine.
- The first use of InsightFace downloads its model from upstream.
- Server-backed `protect` and `analyze` requests currently send the full input image to the configured `goop-face-engine` backend.
- `auto` mode prefers the configured backend over local inference.

Out of scope:
- goop-face-engine (separate project)
- InsightFace upstream vulnerabilities (report to their maintainers)
51 changes: 51 additions & 0 deletions scripts/smoke_wheel_install.sh
@@ -0,0 +1,51 @@
#!/usr/bin/env bash

set -euo pipefail

PYTHON_BIN="${PYTHON_BIN:-python3}"

"${PYTHON_BIN}" - <<'PY'
import sys

if sys.version_info < (3, 10):
    raise SystemExit("smoke_wheel_install.sh requires Python 3.10+")
PY

tmpdir="$(mktemp -d)"
trap 'rm -rf "${tmpdir}"' EXIT

"${PYTHON_BIN}" -m venv "${tmpdir}/build-venv"
build_bin="${tmpdir}/build-venv/bin"

"${build_bin}/pip" install --upgrade pip build
rm -rf dist build
"${build_bin}/python" -m build --wheel

wheel_path="$(find dist -maxdepth 1 -name '*.whl' | head -n 1)"
if [[ -z "${wheel_path}" ]]; then
  echo "wheel build did not produce a dist/*.whl artifact" >&2
  exit 1
fi

"${PYTHON_BIN}" -m venv "${tmpdir}/venv"
venv_bin="${tmpdir}/venv/bin"

"${venv_bin}/pip" install --upgrade pip
"${venv_bin}/pip" install "${wheel_path}"
"${venv_bin}/goop-face" --help >/dev/null

set +e
mcp_output="$("${venv_bin}/goop-face-mcp" 2>&1 >/dev/null)"
mcp_status=$?
set -e

if [[ "${mcp_status}" -eq 0 ]]; then
  echo "goop-face-mcp unexpectedly succeeded without optional MCP dependencies" >&2
  exit 1
fi

printf '%s' "${mcp_output}" | grep -F 'pip install "goop-face[mcp]"' >/dev/null

"${venv_bin}/pip" install "mcp>=1.2.0,<2"
"${venv_bin}/python" -c "from goop_face.mcp import mcp_server; print(mcp_server.name)" \
| grep -Fx "goop-face" >/dev/null
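
The `2>&1 >/dev/null` ordering in the capture above is deliberate: redirections apply left to right, so stderr is duplicated onto the command substitution's stdout before the command's own stdout is discarded, leaving only stderr in the variable. A standalone illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Capture stderr only: stderr joins the captured stream first,
# then the original stdout is sent to /dev/null.
captured="$(sh -c 'echo on-stdout; echo on-stderr >&2' 2>&1 >/dev/null)"
echo "${captured}"   # prints: on-stderr
```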
9 changes: 5 additions & 4 deletions src/goop_face/detector.py
@@ -1,7 +1,7 @@
"""Client-side face detection and alignment.

-Runs locally — no data leaves the machine. Uses InsightFace for detection
-and alignment before sending to the backend for protection.
+This module runs locally and returns detected/aligned face crops to the
+caller. It does not perform network I/O itself.
"""

from __future__ import annotations
@@ -26,8 +26,9 @@ class FaceDetector:
class FaceDetector:
    """Face detection and alignment using InsightFace.

-    Face detection runs client-side so raw images never leave the user's
-    machine. Only the detected/aligned face crop is sent to the backend.
+    Detection and alignment happen client-side. This class returns bounding
+    boxes and aligned crops to the caller; whether image data is later sent
+    elsewhere depends on the calling code.
    """

    def __init__(self, min_face_size: int = 30, det_thresh: float = 0.5):
47 changes: 45 additions & 2 deletions src/goop_face/mcp/__init__.py
@@ -1,5 +1,48 @@
-"""goop-face MCP server — facial privacy protection tools for AI agents."""
+"""goop-face MCP server entrypoints.
+
+The ``goop-face-mcp`` console script is always installed so the base package has
+a stable public interface. The actual MCP runtime remains optional and is
+loaded lazily so missing extras produce an actionable error instead of a raw
+import traceback.
+"""

from __future__ import annotations

import importlib
import sys

_MCP_INSTALL_HINT = (
    'goop-face-mcp requires the optional "mcp" dependency. '
    'Install it with: pip install "goop-face[mcp]"'
)


def _load_server_module():
    try:
        return importlib.import_module("goop_face.mcp.server")
    except ModuleNotFoundError as exc:
        missing_root = (exc.name or "").split(".", 1)[0]
        if missing_root == "mcp":
            raise RuntimeError(_MCP_INSTALL_HINT) from exc
        raise


def main() -> int:
    """Run the MCP server or print a dependency hint for base installs."""
    try:
        server_module = _load_server_module()
    except RuntimeError as exc:
        print(str(exc), file=sys.stderr)
        return 1

    server_module.main()
    return 0


def __getattr__(name: str):
    if name == "mcp_server":
        return _load_server_module().mcp_server
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

-from goop_face.mcp.server import main, mcp_server

__all__ = ["main", "mcp_server"]
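
The module-level `__getattr__` used here is the PEP 562 hook (Python 3.7+): attribute lookups that miss the module dict fall through to a `__getattr__` function defined in that dict. A toy, self-contained demonstration of the pattern (names are illustrative, not the goop-face modules):

```python
import sys
import types

# Fabricate a module in-memory so the example needs no package on disk.
demo = types.ModuleType("demo_pkg")

def _module_getattr(name):
    # Stand-in for a lazy importlib.import_module(...) call: the optional
    # object is only materialized on first attribute access.
    if name == "mcp_server":
        return "loaded-on-demand"
    raise AttributeError(f"module 'demo_pkg' has no attribute {name!r}")

demo.__getattr__ = _module_getattr  # PEP 562 hook lives in the module dict
sys.modules["demo_pkg"] = demo

import demo_pkg
print(demo_pkg.mcp_server)  # → loaded-on-demand
```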
4 changes: 2 additions & 2 deletions src/goop_face/mcp/__main__.py
@@ -1,5 +1,5 @@
"""Allow ``python -m goop_face.mcp`` to start the MCP server."""

-from goop_face.mcp.server import main
+from goop_face.mcp import main

-main()
+raise SystemExit(main())
28 changes: 26 additions & 2 deletions src/goop_face/mcp/config.py
@@ -6,7 +6,11 @@
from pathlib import Path
from typing import Literal

-from pydantic import BaseModel, ConfigDict, model_validator
+from pydantic import BaseModel, ConfigDict, Field, model_validator


def _split_path_list(value: str) -> tuple[Path, ...]:
    return tuple(Path(part) for part in value.split(os.pathsep) if part)


class MCPConfig(BaseModel):
@@ -25,6 +29,8 @@ class MCPConfig(BaseModel):
    api_key: str | None = None
    default_mode: Literal["stealth", "balanced", "maximum"] = "balanced"
    output_dir: Path = Path(".")
    input_roots: tuple[Path, ...] = Field(default_factory=tuple)
    output_roots: tuple[Path, ...] = Field(default_factory=tuple)

    @model_validator(mode="before")
    @classmethod
@@ -34,10 +40,28 @@ def _load_env_defaults(cls, values: dict) -> dict:
            "api_key": "GOOP_FACE_API_KEY",
            "default_mode": "GOOP_FACE_DEFAULT_MODE",
            "output_dir": "GOOP_FACE_OUTPUT_DIR",
            "input_roots": "GOOP_FACE_INPUT_ROOTS",
            "output_roots": "GOOP_FACE_OUTPUT_ROOTS",
        }
        for field, env_var in env_map.items():
            if field not in values:
                env_val = os.environ.get(env_var)
                if env_val:
-                    values[field] = env_val
+                    values[field] = (
+                        _split_path_list(env_val)
+                        if field in {"input_roots", "output_roots"}
+                        else env_val
+                    )
        return values

    @model_validator(mode="after")
    def _apply_default_roots(self) -> MCPConfig:
        cwd = Path.cwd()
        output_root = self.output_dir if self.output_dir != Path(".") else cwd

        if not self.input_roots:
            object.__setattr__(self, "input_roots", (cwd, output_root))
        if not self.output_roots:
            object.__setattr__(self, "output_roots", (output_root,))

        return self
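
For reference, the `os.pathsep` splitting used for `GOOP_FACE_INPUT_ROOTS` / `GOOP_FACE_OUTPUT_ROOTS` behaves like this standalone sketch (`:` on POSIX, `;` on Windows; empty entries are dropped):

```python
import os
from pathlib import Path

def split_path_list(value: str) -> tuple[Path, ...]:
    """Mirror of the helper above: split an env var into filesystem roots."""
    return tuple(Path(part) for part in value.split(os.pathsep) if part)

raw = os.pathsep.join(["/data/inputs", "", "/data/outputs"])
print(split_path_list(raw))  # two Paths; the empty entry is skipped
```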