Merged
25 commits
af46594 Add workspace system, SDK enhancement, jobs, visualizations, dashboar… (Jovonni, Mar 14, 2026)
0924766 Fix workspace JupyterLab iframe embedding for usable in-browser works… (Jovonni, Mar 14, 2026)
927cda4 Add Alembic migrations, SSE streaming, pipeline operator, enhanced CR… (Jovonni, Mar 14, 2026)
cbe036a Add e2e test for workspace JupyterLab notebook SDK flow and fix Docke… (Jovonni, Mar 14, 2026)
2bc05a2 Fix operator workspace handler, add SDK notebooks, fix frontend build (Jovonni, Mar 14, 2026)
4edfe1a Update dev environment setup for reliable workspace support (Jovonni, Mar 15, 2026)
aa43aa3 Fix 500 error from stale JWT tokens after database reset (Jovonni, Mar 15, 2026)
9617f54 Fix workspace operator resource limits and stale NodePort cleanup (Jovonni, Mar 15, 2026)
556729d Add workspace status auto-polling on list page (Jovonni, Mar 15, 2026)
9cc676d Proxy GraphQL through Next.js to eliminate PostGraphile port-forward (Jovonni, Mar 15, 2026)
b65dbaf Update workspace e2e test for real K8s operator flow (Jovonni, Mar 15, 2026)
2150adf Add e2e screenshots and stray dirs to gitignore (Jovonni, Mar 15, 2026)
4db1777 Fix CI failures: workspace build context, SDK dist conflict, frontend… (Jovonni, Mar 15, 2026)
32a3854 Address CI failures and code review feedback (Jovonni, Mar 15, 2026)
bfbb961 Add workspace auth token for seamless SDK authentication (Jovonni, Mar 15, 2026)
a9c1a12 Wire end-to-end visualization rendering pipeline (Jovonni, Mar 15, 2026)
f618963 Redesign visualization detail page with split code/preview layout (Jovonni, Mar 15, 2026)
b7c30a1 Fix visualization rendering: scroll, full-width scaling, and multi-ba… (Jovonni, Mar 15, 2026)
82e568d Address Copilot PR review feedback across 11 files (Jovonni, Mar 16, 2026)
7c02453 Bump SDK version to 0.0.2 for workspace feature release (Jovonni, Mar 16, 2026)
e6c8fb1 Address second round of Copilot review feedback (Jovonni, Mar 16, 2026)
42c608f Reconcile dual jobs systems and fix end-to-end model pipeline (Jovonni, Mar 17, 2026)
826f284 Address third round of Copilot review feedback (Jovonni, Mar 17, 2026)
cf0d4e9 Add Workspaces & SDK section to README (Jovonni, Mar 17, 2026)
52c5d14 Address fourth round of Copilot review feedback (Jovonni, Mar 17, 2026)
4 changes: 2 additions & 2 deletions .github/workflows/ci-frontend.yml
@@ -30,7 +30,7 @@ jobs:
run: npm install --legacy-peer-deps

- name: Run ESLint
run: npm run lint
run: npx next lint --quiet

build:
name: Build
@@ -55,7 +55,7 @@ jobs:
run: npm run build
env:
NEXT_PUBLIC_API_URL: http://localhost:8000
NEXT_PUBLIC_GRAPHQL_URL: http://localhost:5001/graphql
NEXT_PUBLIC_GRAPHQL_URL: /graphql

typecheck:
name: TypeScript check
4 changes: 3 additions & 1 deletion .github/workflows/ci-sdk.yml
@@ -28,7 +28,9 @@ jobs:

- name: Build package
working-directory: sdk
run: python -m build
run: |
rm -rf dist/
python -m build

- name: Install from built wheel
run: pip install sdk/dist/*.whl
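The `rm -rf dist/` added before `python -m build` guards against stale artifacts: successive builds leave old wheel versions in `dist/`, so the later `pip install sdk/dist/*.whl` step can match more than one file and fail. A minimal simulation of the problem and the fix (package names are illustrative, not the real SDK artifacts):

```python
import shutil
import tempfile
from pathlib import Path

# Simulate a dist/ directory that has accumulated wheels across two builds.
dist = Path(tempfile.mkdtemp()) / "dist"
dist.mkdir()
(dist / "openuba_sdk-0.0.1-py3-none-any.whl").touch()  # stale artifact
(dist / "openuba_sdk-0.0.2-py3-none-any.whl").touch()  # fresh artifact

# The *.whl glob now matches two files, so a `pip install dist/*.whl`
# would try to install two versions of the same package.
stale_count = len(list(dist.glob("*.whl")))

# Equivalent of the CI step's `rm -rf dist/` followed by a clean build:
shutil.rmtree(dist)
dist.mkdir()
(dist / "openuba_sdk-0.0.2-py3-none-any.whl").touch()
clean_count = len(list(dist.glob("*.whl")))  # exactly one wheel per build

print(stale_count, clean_count)
```

Cleaning first guarantees the glob resolves to a single, current wheel.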
70 changes: 70 additions & 0 deletions .github/workflows/ci-workspace.yml
@@ -0,0 +1,70 @@
name: CI - Workspace Image

on:
push:
branches: [master, dev/workspaces]
paths:
- 'docker/workspace/**'
- 'sdk/**'
- 'workspace/**'
pull_request:
branches: [master]
paths:
- 'docker/workspace/**'
- 'sdk/**'
- 'workspace/**'

jobs:
build-workspace:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Build workspace image
uses: docker/build-push-action@v5
with:
context: .
file: docker/workspace/Dockerfile
push: false
load: true
tags: openuba-workspace:latest
cache-from: type=gha
cache-to: type=gha,mode=max

- name: Test workspace image starts
run: |
docker run --rm -d --name ws-test -p 8888:8888 openuba-workspace:latest
sleep 10
curl -sf http://localhost:8888/api || echo "JupyterLab not ready yet (expected in CI)"
docker stop ws-test

publish-workspace:
needs: build-workspace
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/master' && github.event_name == 'push'
steps:
- uses: actions/checkout@v4

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
file: docker/workspace/Dockerfile
push: true
tags: |
gacwr/openuba-workspace:latest
gacwr/openuba-workspace:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
5 changes: 5 additions & 0 deletions .gitignore
@@ -36,12 +36,17 @@ interface/tsconfig.tsbuildinfo
.vscode/
*.swp

# E2E test screenshots (generated on each run)
core/tests/e2e/screenshots/
interface/core/

# Runtime
.openuba/

# Build Artifacts
*.tar
watch_*
sdk/dist/

# Deployment & Data
metastore/
69 changes: 63 additions & 6 deletions Makefile
@@ -96,9 +96,25 @@ build-runner-networkx:
docker build -f docker/model-runner/Dockerfile.networkx -t openuba-model-runner:networkx --build-arg BASE_IMAGE=openuba-model-runner:base .
build-operator:
docker build -f docker/operator.dockerfile -t openuba-operator:latest .
build-containers: build-backend build-frontend build-model-runner build-operator
build-workspace:
docker build -f docker/workspace/Dockerfile -t openuba-workspace:latest .

build-containers: build-backend build-frontend build-model-runner build-operator build-workspace
@echo "all containers built successfully"

# workspace targets
apply-crds:
kubectl apply -f k8s/crds/

# sdk
test-sdk:
@echo "Running SDK tests..."
cd sdk && python -m pytest tests/ -v --tb=short

# all tests including sdk (unit + integration)
test-ci: test test-sdk
@echo "All unit and integration tests passed"

# load images into kind cluster
load-images:
@echo "pulling and loading external images..."
@@ -112,6 +128,7 @@ load-images:
kind load docker-image openuba-model-runner:tensorflow --name openuba-cluster || echo "kind cluster not found or not using kind"
kind load docker-image openuba-model-runner:networkx --name openuba-cluster || echo "kind cluster not found or not using kind"
kind load docker-image openuba-operator:latest --name openuba-cluster || echo "kind cluster not found or not using kind"
kind load docker-image openuba-workspace:latest --name openuba-cluster || echo "kind cluster not found or not using kind"
@echo "images loaded successfully"

# kubernetes deployment - deploys all services
@@ -247,9 +264,13 @@ k8s-logs-postgraphile:
kubectl logs -f -l app=postgraphile -n openuba
k8s-init-data:
@echo "Triggering data ingestion via API..."
@kubectl exec -n openuba deploy/backend -- curl -X POST http://localhost:8000/api/v1/data/ingest \
-H "Content-Type: application/json" \
-d '{"dataset_name": "toy_1", "ingest_to_spark": true, "ingest_to_es": true}'
@kubectl exec -n openuba deploy/backend -- bash -c '\
TOKEN=$$(curl -s -X POST http://localhost:8000/api/v1/auth/login \
-d "username=openuba&password=password" | python3 -c "import sys,json; print(json.load(sys.stdin)[\"access_token\"])") && \
curl -X POST http://localhost:8000/api/v1/data/ingest \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $$TOKEN" \
-d "{\"dataset_name\": \"toy_1\", \"ingest_to_spark\": true, \"ingest_to_es\": true}"'
@echo "Data ingestion triggered. Check logs or UI for status."
k8s-logs-all:
@echo "Viewing logs for all services (use Ctrl+C to exit)..."
@@ -303,6 +324,42 @@ e2e-test-display:
@echo "Running E2E display/dashboard tests..."
pytest core/tests/e2e/test_display_flow.py -v --tb=short

e2e-test-workspaces:
@echo "Running E2E workspace tests..."
pytest core/tests/e2e/test_workspaces_flow.py -v --tb=short

e2e-test-jobs:
@echo "Running E2E jobs tests..."
pytest core/tests/e2e/test_jobs_flow.py -v --tb=short

e2e-test-visualizations:
@echo "Running E2E visualizations tests..."
pytest core/tests/e2e/test_visualizations_flow.py -v --tb=short

e2e-test-dashboards:
@echo "Running E2E dashboards tests..."
pytest core/tests/e2e/test_dashboards_flow.py -v --tb=short

e2e-test-experiments:
@echo "Running E2E experiments tests..."
pytest core/tests/e2e/test_experiments_flow.py -v --tb=short

e2e-test-features:
@echo "Running E2E features tests..."
pytest core/tests/e2e/test_features_flow.py -v --tb=short

e2e-test-pipelines:
@echo "Running E2E pipelines tests..."
pytest core/tests/e2e/test_pipelines_flow.py -v --tb=short

e2e-test-datasets:
@echo "Running E2E datasets tests..."
pytest core/tests/e2e/test_datasets_flow.py -v --tb=short

e2e-test-navigation:
@echo "Running E2E platform navigation tests..."
pytest core/tests/e2e/test_platform_navigation.py -v --tb=short

e2e-cleanup:
@echo "Cleaning up E2E deployment..."
$(MAKE) k8s-delete
@@ -312,8 +369,8 @@ e2e-full: e2e-setup e2e-deploy
$(MAKE) e2e-test || ($(MAKE) e2e-cleanup && exit 1)
$(MAKE) e2e-cleanup

# run all tests (unit + e2e)
test-all: test e2e-full
# run all tests (unit + integration + sdk + e2e)
test-all: test test-sdk e2e-full
@echo "All tests completed"

# local development
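The reworked `k8s-init-data` target now logs in first and extracts a bearer token with an inline `python3 -c` one-liner before calling the authenticated ingest endpoint. A standalone sketch of that extraction step (the response body below is hypothetical; only the `access_token` field is what the one-liner actually reads):

```python
import json

def extract_token(raw_response: str) -> str:
    # Same extraction the Make target performs with:
    #   python3 -c "import sys,json; print(json.load(sys.stdin)['access_token'])"
    return json.loads(raw_response)["access_token"]

# Hypothetical login response from /api/v1/auth/login:
resp = '{"access_token": "eyJhbGciOi...", "token_type": "bearer"}'
token = extract_token(resp)
print(f"Authorization: Bearer {token}")
```

Piping the login response through this extraction lets the subsequent `curl` attach the `Authorization: Bearer` header without hard-coding credentials in the ingest call.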
37 changes: 37 additions & 0 deletions alembic.ini
@@ -0,0 +1,37 @@
[alembic]
script_location = alembic
prepend_sys_path = .
sqlalchemy.url = driver://user:pass@localhost/dbname

[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
61 changes: 61 additions & 0 deletions alembic/env.py
@@ -0,0 +1,61 @@
'''
Copyright 2019-Present The OpenUBA Platform Authors
alembic environment configuration
'''

import os
from logging.config import fileConfig

from sqlalchemy import engine_from_config, pool
from alembic import context

# import models so autogenerate can detect them
from core.db.connection import Base
import core.db.models # noqa: F401

config = context.config

# override sqlalchemy.url from environment if available
db_url = os.getenv("DATABASE_URL")
if db_url:
config.set_main_option("sqlalchemy.url", db_url)

if config.config_file_name is not None:
fileConfig(config.config_file_name)

target_metadata = Base.metadata


def run_migrations_offline() -> None:
'''run migrations in offline mode'''
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()


def run_migrations_online() -> None:
'''run migrations in online mode'''
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
)
with context.begin_transaction():
context.run_migrations()


if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
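`env.py` resolves the connection string by letting a `DATABASE_URL` environment variable override the placeholder URL in `alembic.ini`. The same precedence in isolation (URL values are illustrative):

```python
# alembic.ini ships a non-functional placeholder; env.py replaces it at
# runtime via config.set_main_option when DATABASE_URL is set.
PLACEHOLDER = "driver://user:pass@localhost/dbname"

def resolve_url(environ: dict) -> str:
    # A set, non-empty DATABASE_URL wins over the ini placeholder.
    return environ.get("DATABASE_URL") or PLACEHOLDER

print(resolve_url({"DATABASE_URL": "postgresql://openuba@db/openuba"}))
print(resolve_url({}))  # falls back to the alembic.ini value
```

This keeps credentials out of the committed ini file while still letting `alembic upgrade` run unmodified in any environment that exports `DATABASE_URL`.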
25 changes: 25 additions & 0 deletions alembic/script.py.mako
@@ -0,0 +1,25 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
${upgrades if upgrades else "pass"}


def downgrade() -> None:
${downgrades if downgrades else "pass"}