The official whiteboard app for Nextcloud. Create and share whiteboards with real-time collaboration.
- 🎨 Drawing shapes, writing text, connecting elements
- 📝 Real-time collaboration with semi-offline support
- 💾 Client-first architecture with local storage
- 🔄 Automatic sync between local and server storage
- 🌐 Works semi-offline: changes are saved locally and synced when back online (requires a successfully configured WebSocket server)
- 💪 Built on Excalidraw
Nextcloud Whiteboard uses a client-first architecture that prioritizes browser-based functionality:
- Browser-First: All whiteboard functionality works directly in the browser
- Local Storage: Changes are immediately saved to browser storage (IndexedDB), as sketched below
- Real-time Collaboration: WebSocket server handles live collaboration sessions
- Simplified Connectivity: Only browsers need to connect to the websocket server
- Reduced Dependencies: Websocket server is only needed for real-time collaboration, not basic functionality
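The client-first flow described above can be illustrated with a minimal browser sketch: whiteboard changes are written straight to IndexedDB so they survive a lost connection, and syncing to Nextcloud happens separately. This is not the app's actual persistence code; the database and store names below are made up for the example.

```js
// Minimal sketch of browser-first persistence (illustrative only; names are hypothetical).
const DB_NAME = 'whiteboard-local'
const STORE = 'boards'

function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1)
    req.onupgradeneeded = () => req.result.createObjectStore(STORE)
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}

// Save the current scene immediately; syncing to the server can happen whenever it is reachable.
async function saveLocally(boardId, elements) {
  const db = await openDb()
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, 'readwrite')
    tx.objectStore(STORE).put({ elements, savedAt: Date.now() }, boardId)
    tx.oncomplete = () => resolve()
    tx.onerror = () => reject(tx.error)
  })
}
```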
The websocket server handles real-time collaboration sessions between users. Important: The websocket server is only needed for live collaboration - basic whiteboard functionality works without it.
Essential (for real-time collaboration):
- User browsers need HTTP(S) access to the websocket server
- Nextcloud and websocket server share a JWT secret for authentication
Configure Nextcloud with the websocket server details (these can also be set in the Nextcloud admin settings):
occ config:app:set whiteboard collabBackendUrl --value="https://nextcloud.local:3002"
occ config:app:set whiteboard jwt_secret_key --value="some-random-secret"
npm ci
JWT_SECRET_KEY="some-random-secret" NEXTCLOUD_URL=https://nextcloud.local npm run server:start
docker run -e JWT_SECRET_KEY=some-random-secret -e NEXTCLOUD_URL=https://nextcloud.local -p 3002:3002 --rm ghcr.io/nextcloud-releases/whiteboard:stable
Or using Docker Compose:
services:
  nextcloud-whiteboard-server:
    image: ghcr.io/nextcloud-releases/whiteboard:stable
    ports:
      - "3002:3002"
    environment:
      NEXTCLOUD_URL: https://nextcloud.local
      JWT_SECRET_KEY: some-random-secret
Environment Variables:
- JWT_SECRET_KEY: Must match the secret configured in Nextcloud (see the sketch below)
- NEXTCLOUD_URL: Used for JWT token validation (not for server-to-server communication)
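As a rough illustration of why these two values must match: Nextcloud signs a short-lived token with its jwt_secret_key, and the websocket server verifies it with JWT_SECRET_KEY before accepting the connection. The sketch below uses the common jsonwebtoken package and made-up payload fields; it is not the whiteboard server's actual verification code.

```js
// Sketch: a shared secret lets the websocket server trust tokens minted by Nextcloud.
// Uses the jsonwebtoken package; the payload fields are hypothetical.
import jwt from 'jsonwebtoken'

const secret = process.env.JWT_SECRET_KEY // must equal the value set via occ

// Roughly what Nextcloud does when a user opens a board:
const token = jwt.sign({ userid: 'alice', fileId: 42 }, secret, { expiresIn: '1h' })

// Roughly what the websocket server does on connection:
try {
  const claims = jwt.verify(token, secret)
  console.log('accepted', claims.userid)
} catch (err) {
  console.error('rejected: secrets differ or token expired:', err.message)
}
```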
If running the websocket server manually, configure your reverse proxy to expose it:
Apache Configuration
Apache >= 2.4.47:
ProxyPass /whiteboard/ http://localhost:3002/ upgrade=websocket
Apache < 2.4.47:
ProxyPass /whiteboard/ http://localhost:3002/
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/?whiteboard/(.*) "ws://localhost:3002/$1" [P,L]
Nginx Configuration
location /whiteboard/ {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://localhost:3002/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
Other Reverse Proxies
Caddy v2:
handle_path /whiteboard/* {
reverse_proxy http://127.0.0.1:3002
}
Traefik v3:
- traefik.http.services.whiteboard.loadbalancer.server.port=3002
- traefik.http.middlewares.strip-whiteboard.stripprefix.prefixes=/whiteboard
- traefik.http.routers.whiteboard.rule=Host(`nextcloud.example.com`) && PathPrefix(`/whiteboard`)
- traefik.http.routers.whiteboard.middlewares=strip-whiteboard
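To confirm that the proxy actually forwards WebSocket upgrades, a quick client-side check can help. The sketch below assumes the server is exposed under /whiteboard/ as in the examples above and that it answers on Socket.IO's default /socket.io suffix; adjust the URL and path to your setup. Even an authentication error reported by the server (rather than a network error) indicates that traffic is reaching the websocket backend.

```js
// Quick connectivity check through the reverse proxy (illustrative; URL and path are assumptions).
import { io } from 'socket.io-client'

const socket = io('https://nextcloud.example.com', {
  path: '/whiteboard/socket.io',
  transports: ['websocket'], // skip long-polling so a broken upgrade fails loudly
})

socket.on('connect', () => {
  console.log('WebSocket upgrade works, socket id:', socket.id)
  socket.close()
})
socket.on('connect_error', (err) => {
  console.error('Proxy or upgrade problem:', err.message)
})
```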
The websocket server handles real-time collaboration sessions (not critical whiteboard data):
LRU Cache (Default)
- In-memory session storage, simple setup
- Suitable for most deployments
- Session data cleared on restart (whiteboard data remains safe in Nextcloud/local storage)
STORAGE_STRATEGY=lru
Redis
- For multi-server setups or session persistence
- Enables horizontal scaling with Redis Streams
STORAGE_STRATEGY=redis
REDIS_URL=redis://[username:password@]host[:port][/database_number]
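Before pointing the whiteboard server at Redis, it can help to confirm that the REDIS_URL is reachable. The sketch below is a one-off check with the plain node-redis client (run it as an .mjs module); it is unrelated to the whiteboard server's own Redis handling.

```js
// One-off sanity check for REDIS_URL (sketch; uses the `redis` npm package).
import { createClient } from 'redis'

const client = createClient({ url: process.env.REDIS_URL })
client.on('error', (err) => console.error('Redis error:', err.message))

await client.connect()
console.log('PING ->', await client.ping()) // expect "PONG"
await client.quit()
```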
For high-traffic environments with multiple websocket servers:
- Use Redis for shared session state
- Configure load balancer with session stickiness
- Redis Streams handles WebSocket scaling automatically
To size dedicated collaboration servers we profiled the websocket backend with the synthetic load harness in tools/benchmarks/. The runBenchmarks.mjs script boots the production server (TLS disabled, LRU cache) and spawns JWT-authenticated Socket.IO clients from loadTest.mjs. Each run holds a room open for 60 seconds, with 10% of participants sending cursor and viewport updates at 2 Hz to mimic active sketching while the rest stay idle.
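For orientation, a stripped-down version of what such a load client does is sketched below. It is not loadTest.mjs itself, and the event name, payload, and auth shape are placeholders, but it shows the pattern of an active participant that authenticates with a JWT and broadcasts small updates at 2 Hz for the 60-second window.

```js
// Simplified synthetic collaborator (sketch; event name, payload and auth shape are hypothetical).
import jwt from 'jsonwebtoken'
import { io } from 'socket.io-client'

const token = jwt.sign({ userid: 'load-user-1', fileId: 1 }, process.env.JWT_SECRET_KEY)
const socket = io('http://localhost:3002', { auth: { token }, transports: ['websocket'] })

socket.on('connect', () => {
  // Active senders emit a tiny cursor/viewport update every 500 ms (2 Hz).
  const timer = setInterval(() => {
    socket.emit('cursor-update', { x: Math.random() * 1000, y: Math.random() * 1000 })
  }, 500)
  // Hold the room open for the 60-second measurement window, then disconnect.
  setTimeout(() => { clearInterval(timer); socket.close() }, 60_000)
})
```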
Test environment
- Apple M4 (10 logical cores, 16 GB RAM), Node v24.8.0
- Single websocket process, NODE_OPTIONS=--max-old-space-size=8192, STORAGE_STRATEGY=lru
- Aggregate ingress/egress recorded from client telemetry (nettop requires root on macOS)
- Full JSON results are stored in tools/benchmarks/results.json
Observed highlights
- Per-user CPU hovered around 0.2% for small teams and climbed to ~0.37% at 300 concurrent users.
- Memory footprint stayed near 5 MB/user at 50 users and ~10 MB/user at 300 users.
- Server egress reached ~3 Mbps (50 users), ~13 Mbps (100 users) and ~366 Mbps (300 users). Pushing to 500 synthetic users drove ~1.2 Gbps and the single process began dropping ~30% of sockets.
Key takeaways
- Expect roughly 0.2% CPU per connected collaborator; reserve additional headroom for presenters or rapid drawing.
- Budget ~5–10 MB of process RSS per user when running in a single-node, in-memory configuration.
- Throughput scales quickly with active senders; plan to over-provision outbound bandwidth (≥15 Mbps per 100 users) when presentations or screen follow are frequent.
| Concurrent users | Avg CPU (10-core test rig) | Avg RSS | Server egress (60 s run) | Recommended spec |
|---|---|---|---|---|
| 50 | ~10% (~0.21% per user) | ~0.24 GB | ~23.5 MB total (≈3.1 Mbps) | 2 vCPU / 1 GB RAM |
| 100 | ~20% (~0.20% per user) | ~0.36 GB | ~96.6 MB total (≈12.9 Mbps) | 4 vCPU / 2 GB RAM |
| 500* | ~203% (≈2 cores) | ~3.6–4.5 GB | ~9.2 GB total (≈1.2 Gbps) | ≥8 vCPU / ≥8 GB RAM per node + Redis + ≥2 nodes |
*The 500-user test saturated a single instance and dropped ~30% of simulated clients. Treat this as an upper bound and plan to run multiple websocket workers behind a sticky load balancer with STORAGE_STRATEGY=redis.
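As a rough planning aid, the per-user figures above can be turned into a back-of-the-envelope estimator. The sketch below simply applies the observed ratios (about 0.2–0.4% of a core and 5–10 MB RSS per user, at least 15 Mbps per 100 users); treat the output as a starting point, not a guarantee, since egress in particular grows much faster once many senders are active.

```js
// Back-of-the-envelope sizing from the per-user ratios above (sketch, not a guarantee).
function estimateCapacity(users) {
  return {
    cpuCores: [users * 0.002, users * 0.004],             // ~0.2–0.4% of a core per user
    memoryGb: [(users * 5) / 1024, (users * 10) / 1024],  // ~5–10 MB RSS per user
    minEgressMbps: (users / 100) * 15,                    // floor; heavy sessions can far exceed this
  }
}

console.log(estimateCapacity(300))
// cpuCores ≈ [0.6, 1.2], memoryGb ≈ [1.46, 2.93], minEgressMbps = 45
```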
To reproduce these numbers on your own hardware:
- Install dependencies: npm ci (and composer install if you have not bootstrapped the PHP side yet).
- Ensure the websocket server can start without TLS (set TLS=false or export TLS=false before running the script) and that JWT_SECRET_KEY / NEXTCLOUD_URL are configured for your environment.
- Execute node tools/benchmarks/runBenchmarks.mjs to run the default scenarios (50, 100, 300 concurrent users).
- Adjust load with environment variables as needed:
  - LOAD_TEST_CONCURRENCY=50,150,300 to pick specific cohorts (comma separated).
  - LOAD_TEST_ACTIVE_RATIO=0.15 to vary the percentage of active broadcasters.
  - LOAD_TEST_RATE=3 to control per-sender update frequency (messages/sec).
  - LOAD_TEST_DURATION=90 to lengthen each run.
- After each execution the summarized telemetry is printed to stdout and saved to tools/benchmarks/results.json; keep copies per hardware profile for future comparisons.
- When testing in prod-like environments, monitor OS-level CPU/RAM/network metrics in parallel (e.g., top, sar, cloud dashboards) to validate the Node-level sampling.
Recommendations
- Keep NODE_OPTIONS=--max-old-space-size=8192 (or higher) when targeting 300+ concurrent users to avoid heap exhaustion.
- For 300+ users, switch to Redis-backed storage and deploy at least two websocket instances to spread load.
- Budget at least 15 Mbps outbound bandwidth for every 100 concurrently connected users; unusually heavy sessions (live presenting, rapid drawing) can double that figure.
- Re-run node tools/benchmarks/runBenchmarks.mjs after feature changes or on target hardware to validate sizing before production rollout.
Real-time Collaboration Not Working
- Verify JWT secrets match between Nextcloud and websocket server
- Check that user browsers can access the websocket server URL
- Ensure reverse proxy correctly handles WebSocket upgrades
- Check browser console for connection errors
Legacy Integration App Conflict
If you previously had integration_whiteboard installed, remove any whiteboard entries from config/mimetypealiases.json and run:
occ maintenance:mimetype:update-db
occ maintenance:mimetype:update-js
Misleading Admin Errors
Admin connectivity checks may show false negatives in Docker/proxy environments. These errors don't affect actual functionality, since the architecture is client-first. Focus on browser-based connectivity tests instead.
To build the project locally:
npm ci
npm run build
For development with hot reload:
npm run watch
To run the websocket server in development:
npm run server:watch