Upgrade Django #961
base: develop
Conversation
This looks good so far. I put a few questions and comments below. One thing I noticed was that when the update script runs migrations, it says:
Your models in app(s): 'pfb_analysis', 'users' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
My guess is that there's something like a change to how a field type works. I didn't go looking, but I would expect whatever it is to have an entry in the Django changelog with some description of why it changed and whether it requires any actual data changes or if the migration is just to keep the models in sync with the tables.
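For reference, the warning above can usually be cleared by generating the missing migrations and then applying them. A sketch of the commands, assuming they are run wherever the project's `manage.py` lives (inside the Django container in this setup):

```shell
# Generate migrations for the apps Django flagged, then apply them.
./manage.py makemigrations pfb_analysis users
./manage.py migrate
```

Reviewing the generated migration files before committing should show whether it's a real schema change or just a bookkeeping sync (e.g. a changed field definition that doesn't touch the data).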
src/django/Dockerfile
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
RUN apt-get purge -y --auto-remove $buildDeps
In the Dockerfile for the old base image, it strung these installation steps together into a single `RUN` command with `&&`s, so that everything gets installed and then the cruft gets removed in a single image layer, to avoid making the image bigger than it needs to be. I think it's worth keeping that in this inlined version of the package installation step.
I made the changes based on what's in the project template. I don't have a strong preference for either solution. Let me know what you think!
Interesting. I thought chaining them was standard practice and would have expected the project template to do it that way. It looks like it did, but it was changed in 024ba06 with the comment "Cleanup the dockerfile so it has less chained commands (easier to debug this way)".
I won't argue that chained is easier to debug, but I'm not sure Bryan realized how much of a difference it makes in the image size. Or maybe he did and thought it was worth it, but I think it makes more sense to keep it chained and let people break it up locally if they need to for debugging.
I built this container both ways, as three separate commands and with them chained into one: the image size was 1.56GB for the former and 901MB for the latter. That seems like a good enough argument for going with the chained command.
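The size difference comes from how layers work: each `RUN` produces its own layer, and files deleted in a later layer still take up space in the earlier ones, so the build dependencies only truly disappear if they are removed in the same `RUN` that installed them. A hedged sketch of the chained form (the `apt-get` steps and `$buildDeps` variable are assumptions based on the snippets above, not the exact contents of this Dockerfile):

```dockerfile
# Install build deps, install Python requirements, then purge the build
# deps again -- all in one RUN so the cleanup lands in the same layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends $buildDeps \
    && pip3 install --no-cache-dir -r /tmp/requirements.txt \
    && apt-get purge -y --auto-remove $buildDeps \
    && rm -rf /var/lib/apt/lists/*
```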
OK. So it sounds like we are chaining all three of them into one.
Changed in e0ea151
CMD ["-w", "1", \
     "-b", "0.0.0.0:9202", \
     "--reload", \
     "--log-level", "info", \
     "--error-logfile", "-", \
     "--forwarded-allow-ips", "*", \
     "-k", "gevent", \
     "pfb_network_connectivity.wsgi"]
Did you move this out because doing it this way is no longer supported, or because having it in a config file is just nicer?
It seems fine/better to me, but it seems like `-k gevent` didn't make it into the config file. Is that because it's the default, or did it get missed? (I can't say I have any memory of why we set that option, but my default assumption is that there was a reason and it should stay unless there's a reason for it to change.)
Similar to the above, this was to align with what the project template does.
I don't have context regarding the reason for `gevent` being in there, but we could add the following to `gunicorn.conf.py`. What do you think?
`worker_class = 'gevent'`
Some sources might be helpful:
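For context, gunicorn reads its settings from a `gunicorn.conf.py` module of plain assignments, so the flags from the old `CMD` map over directly. A sketch of what that could look like, with values taken from the `CMD` block above rather than from the actual config file in this PR:

```python
# gunicorn.conf.py -- sketch mirroring the options from the old CMD.
workers = 1                                  # -w 1
bind = '0.0.0.0:9202'                        # -b 0.0.0.0:9202
reload = True                                # --reload
loglevel = 'info'                            # --log-level info
errorlog = '-'                               # --error-logfile -
forwarded_allow_ips = '*'                    # --forwarded-allow-ips *
worker_class = 'gevent'                      # -k gevent
wsgi_app = 'pfb_network_connectivity.wsgi'   # positional WSGI app argument
```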
Yeah, adding `worker_class = 'gevent'` sounds good to me.
Changed in e0ea151
@KlaasH Thanks for the review! I made the following two changes:
I responded to two of your comments and asked for your thoughts. Please let me know and I will make the changes.
Update, re:
I applied the changes locally, and running the app seems fine to me.
I rebased on top of
I think this is likely fine (everything I've tested via the front end is working, and I don't know of any reason to think there are subtle issues waiting to bite us just beyond the reach of those tests), but I'd like to run at least one full analysis before calling it good, and I haven't managed to do so yet. My Vagrant environment is not working at the moment, which hasn't bothered me because docker-on-host has been working well for the app and we don't want to keep using Vagrant anyway, but I have not succeeded in running a docker-on-host analysis. I think it's likely a matter of just sorting out a few more environment variables, but I haven't gotten there yet.
We've talked about removing Vagrant but haven't made an issue for it yet. I'm not sure if it makes sense to make a self-contained issue for that then come back to this once the analysis-on-host is sorted, or if I should just work through the issues locally to get this issue to the finish line and fold that work into a "get rid of Vagrant" issue later on. Either way it'll have to be after the holidays.
Overview
This PR upgrades Django and some related dependencies.
Testing Instructions
- `./scripts/update` works fine
- `./scripts/test` runs fine
- `./scripts/server` spins up the app fine

Checklist
Resolves #957