
docker-examples single_port solution drops connection after some time #4236

Open · masenf opened this issue Oct 24, 2024 · 1 comment
Labels: bug (Something isn't working)

masenf (Collaborator) commented Oct 24, 2024

Describe the bug
https://discord.com/channels/1029853095527727165/1139658820533104791/1299009470218768436

Hi! I'm reporting an issue with the self-hosting single-port Dockerfile and Caddyfile.
I did my initial implementation about 6-8 months ago; since then, the GitHub repo https://github.com/reflex-dev/reflex/tree/main/docker-example/simple-one-port has switched to a different solution.

What's the difference for me? By changing only the Dockerfile and Caddyfile, I can reliably reproduce a bug: with the new Dockerfile and Caddyfile combination, the live deployment disconnects from its websocket, and then either stays disconnected or starts aggressively reconnecting and disconnecting again. This only occurs after 15 or so minutes of uptime, never before!

What is the result? Well, it's ugly. If the frontend loses the websocket connection to the backend, all State variables are reset for the duration of the disconnect. That means users see blinking forms, variables, and error messages if you have validation and invalid-input rules. As a result, the Reflex application is unusable, and the only quick fix was to redeploy the solution, which buys another 15 minutes or so during which the websocket connection stays alive.

This does not happen locally, only when building and deploying the single-port Dockerfile solution.

I found the workaround only by accident: "downgrading" the Dockerfile to the previous example, where the Caddyfile is written into the Dockerfile as an inline string.

To me it seems the reverse proxy setup dies, perhaps due to an internal bug with the formatting of the file or something else, which renders the whole application unusable.
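
If the reverse proxy is indeed cutting long-lived connections, one diagnostic is to make Caddy's streaming behavior explicit for the backend routes. This is only a hedged sketch, not the repo's configuration; flush_interval and the transport read/write timeouts are options documented for Caddy 2's reverse_proxy, but worth verifying against the Caddy version in the image:

@backend_routes path /_event/* /ping /_upload /_upload/*
handle @backend_routes {
    # Sketch: disable response buffering and raise per-connection
    # timeouts so long-lived websockets are not cut by the proxy.
    reverse_proxy localhost:8000 {
        flush_interval -1
        transport http {
            read_timeout 24h
            write_timeout 24h
        }
    }
}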

Hope this helps! Will provide more info in the thread of this message! ❤️

To Reproduce
The current new example in the GitHub repo is:
Caddyfile:

:{$PORT}

encode gzip

@backend_routes path /_event/* /ping /_upload /_upload/*
handle @backend_routes {
    reverse_proxy localhost:8000
}

root * /srv
route {
    try_files {path} {path}/ /404.html
    file_server
}
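
The new-style Dockerfile that pairs with this Caddyfile (the combination that exhibits the disconnects) copies the Caddyfile from the build context instead of generating it inline. I don't have the exact file in front of me, so the following is only a rough sketch of that approach, assuming the same ports and entrypoint as the old Dockerfile below:

FROM python:3.12

ARG PORT=8080
ENV PORT=$PORT

RUN apt-get update -y && apt-get install -y caddy && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Key difference from the old solution: the Caddyfile ships as a
# separate file in the build context instead of a heredoc in the image.
COPY Caddyfile .
COPY . .

RUN pip install -r requirements.txt
RUN reflex init
RUN reflex export --frontend-only --no-zip && mv .web/_static/* /srv/ && rm -rf .web

EXPOSE $PORT

CMD caddy start && reflex run --env prod --backend-only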

Old Dockerfile that works:

FROM python:3.12.2

ARG PORT=8080
ARG API_URL
ENV PORT=$PORT API_URL=${API_URL:-http://localhost:$PORT}

RUN apt-get update -y && apt-get install -y caddy && apt-get install -y wkhtmltopdf && rm -rf /var/lib/apt/lists/*

WORKDIR /app

RUN cat > Caddyfile <<EOF
:{\$PORT}

encode gzip

@backend_routes path /_event/* /ping /_upload /_upload/*
handle @backend_routes {
    reverse_proxy localhost:8000
}

root * /srv
route {
    try_files {path} {path}/ /404.html
    file_server
}
EOF

COPY . .

RUN pip install -r requirements.txt

RUN reflex init

RUN reflex export --frontend-only --no-zip && mv .web/_static/* /srv/ && rm -rf .web

STOPSIGNAL SIGKILL

EXPOSE $PORT

CMD caddy start && reflex run --env prod --backend-only --loglevel debug
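
To reproduce, build and run the image, then keep the app open in a browser for at least 15 minutes (the image name and host port here are arbitrary):

docker build -t reflex-one-port .
docker run -p 8080:8080 reflex-one-port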

Logs that might be relevant:

2024-10-24 12:30:28.045    
Successfully started Caddy (pid=332) - Caddy is running in the background
2024-10-24 12:30:28.042    
{
  "level": "info",
  "ts": 1729762228.0412948,
  "logger": "tls",
  "msg": "cleaning storage unit",
  "description": "FileStorage:/root/.local/share/caddy"
}
2024-10-24 12:30:28.040    
{
  "level": "info",
  "ts": 1729762228.0389044,
  "msg": "autosaved config (load with --resume flag)",
  "file": "/root/.config/caddy/autosave.json"
}
2024-10-24 12:30:28.035    
{
  "level": "warn",
  "ts": 1729762228.031918,
  "msg": "Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies",
  "adapter": "caddyfile",
  "file": "Caddyfile",
  "line": 14
4-10-24 09:49:49.536    
{
  "level": "info",
  "ts": 1729752589.536447,
  "logger": "tls",
  "msg": "cleaning storage unit",
  "description": "FileStorage:/root/.local/share/caddy"
}
(Log viewer fields: event.provider=app, fly.app.instance=1781350b42d738, fly.app.name=beamline-portal, fly.region=fra, log.level=info; the message field repeats the "cleaning storage unit" JSON entry above.)
2024-10-24 09:49:49.534    
{
  "level": "info",
  "ts": 1729752589.5346453,
  "msg": "autosaved config (load with --resume flag)",
  "file": "/root/.config/caddy/autosave.json"
}
2024-10-24 09:49:49.529    
{
  "level": "warn",
  "ts": 1729752589.5291812,
  "msg": "Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies",
  "adapter": "caddyfile",
  "file": "Caddyfile",
  "line": 14
}
2024-10-24 09:49:49.227    
 INFO Preparing to run: `/bin/sh -c [ -d alembic ] && reflex db migrate;     caddy start &&     redis-server --daemonize yes &&     exec reflex run --env prod --backend-only` as root
2024-10-24 09:48:53.255    
Successfully started Caddy (pid=333) - Caddy is running in the background
2024-10-24 09:48:53.250    
{
  "level": "info",
  "ts": 1729752533.2503912,
  "logger": "tls",
  "msg": "cleaning storage unit",
  "description": "FileStorage:/root/.local/share/caddy"
}

What I do see with the old solution is a lot of pings and pongs going on in the background; I assume that is what keeps the websocket connection alive, for some odd reason.
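
One way to tell whether the backend itself stays reachable while the frontend disconnects is to poll the /ping health route (proxied in the Caddyfile above) during the 15-minute window; the host and port here are hypothetical:

# Poll the health endpoint once a minute and print the HTTP status.
while true; do
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/ping
    sleep 60
done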

message-17.txt

Expected behavior
Frontend and backend remain connected.

Specifics (please complete the following information):

  • Python Version: 3.12
  • Reflex Version: unknown
  • OS: linux/docker
  • Browser (Optional): unknown
masenf added the bug (Something isn't working) label on Oct 24, 2024
linear bot commented Oct 24, 2024

linear bot assigned masenf on Oct 25, 2024