Improve python dependency upgrade management for Tails workstations #1800
Comments
What about populating a wheel directory with all the dependencies? Maybe this has been discussed before :-) But that's what came to mind as an alternative to tarring up the virtualenv. Another advantage is that it can include manylinux wheels when/if they are not provided upstream on PyPI, so that there is never a need for compilation. There is a script within Ceph that populates a wheel directory for the purpose of creating packages on a machine that has no access to the network. And another script within python-crush that builds binary wheels.
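As a minimal sketch of the wheel-directory idea (not the actual Ceph or python-crush scripts), assuming a plain `requirements.txt` and an illustrative `./wheelhouse` directory: wheels are collected once on a networked machine, then installed offline without any compilation step.

```sh
# On a machine with network access: build/collect wheels for every dependency
# into a local directory.
pip wheel --wheel-dir ./wheelhouse -r requirements.txt

# Later, on the offline/amnesiac workstation: install from that directory only,
# never touching the network or a compiler.
pip install --no-index --find-links ./wheelhouse -r requirements.txt
```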
Thanks for bringing that up @dachary! This hasn't been specifically discussed yet -- this virtualenv installation management is a new problem in the project (we were previously relying on Tails/Debian's version of
I've been happy with wheels and manylinux so far.
Why wouldn't
It does. I mostly did everything listed in #1617 at one point (work now lost 😿), so I can confirm that. I'm not in favor of the tarball idea--that seems really hacky and not very transparent. Not to mention the overhead for us to maintain yet another software distribution mechanism (we already have apt repos, git tags, Docker images, etc.).
I'm not following here. Can you explain further? The wheel directory seems better, and is what we use on the app server, but there we have the problem of deprecated dependencies: #856. Maybe you have a way of dealing with this @dachary. Working out the persistence story to make sure our wheel directory is prepended to our PYTHONPATH shouldn't be difficult. I think the easiest and most surefire approach is to rebuild the virtualenv from scratch whenever the admin needs to run a playbook and the Python dependencies have changed since the last run. While this is not the most time- or bandwidth-efficient solution (the slowdown would be minimal if we could take advantage of pip's default caching, but alas, we're working with an amnesiac OS), we can at least agree that it should be reliable. One thing to remember about keeping the same virtualenv long-term is that python, pip, setuptools, etc. are copied into the virtualenv and then do not get updated.
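A rough sketch of that "rebuild from scratch when requirements change" idea, assuming an illustrative `.venv` path, a plain `requirements.txt`, and a stamp file; none of these are existing project conventions:

```sh
REQS=requirements.txt
STAMP=.venv/.requirements.sha256

# Rebuild the virtualenv only if it is missing or requirements.txt has changed
# since the last successful build.
if [ ! -d .venv ] || ! sha256sum -c "$STAMP" >/dev/null 2>&1; then
    rm -rf .venv
    virtualenv .venv
    .venv/bin/pip install --require-hashes -r "$REQS"
    sha256sum "$REQS" > "$STAMP"
fi
```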
Yes, deprecated as in deprecated for use by the SD project. Say we don't use a top-level dependency anymore, or one of our top-level dependencies no longer uses one of its dependencies. It's not urgent, but at the same time we'd rather remove that no-longer-used dependency.... Maybe deprecated is not the best word choice.
Well so you gotta install
Oh 💩 - I totally didn't realize there was a
If you were able to confirm that
I am less inclined towards this proposal for resolution because it requires us to maintain additional infrastructure (yes, even if it's "just an s3 bucket" it's still additional infrastructure). That said, "tar up the relevant virtualenv" might be more complicated than you think. I am slightly concerned there could be potential issues with cross-compiling dependencies that have binary components. I think, as @dachary suggested, wheels are probably your best bet here, since they were designed with this kind of use case in mind. All that said, is there any reason we wouldn't just use a pip wheel archive, like we already do for securedrop?
Oh yeah, duh. I'm amnesiac about Tails amnesia sometimes lol.... It is an issue though, right? How do we keep
See my concerns in #1800 (comment) re #856.
This is still valid, and may see some attention as part of packaging changes (moving from deployment from GitHub to deployment as a Debian package).
Feature request
Description
Once the PR for #1146 lands, we want to re-evaluate the current strategy of upgrading and managing the virtualenv on the Tails workstations. This was brought up in #1781 as part of the PR discussion.
There are two proposed solutions so far to improve this:
- it has been suggested that the `pip-sync` tool might be a better fit here to ensure we don't have dangling dependencies and that the latest Python dependencies are in place. The `pip-sync` command is part of `pip-tools`, which will have to be installed in the virtualenv first. There would have to be some conditional logic to use `pip install` on the first run and then `pip-sync` for subsequent runs. I haven't tested this, but we would also want to ensure that `pip-tools` plays nicely with the `--require-hashes` option currently being used (see the sketch after this list).
- another possible solution to investigate would be to tar up the relevant virtualenv and host it ourselves in a way that is pulled down, in an abstracted way, via the `securedrop-admin` script. This strategy provides a number of improvements: we can drop the apt installation of a slew of compilation tools (required during `pip install`), and as developers we can remove the logic concerning pip installation (which means not worrying that different dependencies get out of sync). Overall this means less time waiting for the setup portion of the installation on the Tails workstations. With this strategy, we also do not have to maintain the separate pip `requirements.txt` with the sha256 hashes, which currently breaks the workflow of using `pip-compile` as intended.
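As a rough illustration of the first option only, something like the following could work. The `.venv` path, file names, and the bootstrap check are assumptions rather than settled project conventions, and whether `pip-sync` fully respects hashed requirements is exactly the open question noted above.

```sh
# On a networked dev machine: pin all dependencies with sha256 hashes.
pip-compile --generate-hashes --output-file requirements.txt requirements.in

# On the admin workstation: plain pip install on the first run,
# pip-sync (from pip-tools) on every run after that.
if [ ! -x .venv/bin/pip-sync ]; then
    .venv/bin/pip install --require-hashes -r requirements.txt
    .venv/bin/pip install pip-tools
else
    .venv/bin/pip-sync requirements.txt
fi
```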
User Stories
As a user, I want to ensure that my SecureDrop workstation dependencies are in sync and are straightforward to upgrade.