refactor service configuration from Salt (at provisioning) to systemd (at boot) #1004
Comments
Instead of changing behavior based on hostname, we could use Qubes services instead. The only differences are:
Advantages:
Disadvantages:
This started being tackled a while ago via #840 and its cousins (freedomofpress/securedrop-builder#396 and freedomofpress/securedrop-client#1677). I can try to bring it back into a reviewable state after discussing with @zenmonkeykstop, but first we should converge on a strategy. Should we advance with the original proposal of forking on hostname, or via Qubes services? Whichever way we decide, we should at least be consistent and document the practice.
Switching to Qubes services makes sense, @deeplow. Arguably it extends #1001's configuration injection to enabling services by analogy, which I like.
I guess I take this for granted for as long as we're using Salt in this way at all. :-)
I think I lean in the direction of using ConditionHost, because I think it better fits our goal of keeping in-VM stuff in packages and using Salt for dom0 things. As a practical case: if we want to add a new "sd-log-whatever" service in a VM, with Qubes services we'd also have to do a corresponding workstation patch to enable the service through dom0, and gate the client release on the workstation one.
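For illustration, a minimal sketch of what that could look like in a unit shipped by one of our Debian packages (the unit, binary, and qube names here are hypothetical):

```ini
# sd-log-whatever.service -- hypothetical unit installed in the template by the package.
# It is enabled everywhere, but only actually runs in the qube whose hostname matches.
[Unit]
Description=Hypothetical SecureDrop logging helper
# Skipped silently unless Salt has named this qube sd-log.
ConditionHost=sd-log

[Service]
ExecStart=/usr/bin/sd-log-whatever

[Install]
WantedBy=multi-user.target
```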
Re:
What if we instead call them with
To clarify, that was just my suggestion if we wanted to work around the "qube rename side-effects" problem; I still prefer ConditionHost.
Agreed.
OK, I see now what you mean by an extra level of indirection. Even though this would be at most one service per qube, having that service specified across two repos adds unnecessary release overhead. The counter-argument is that the VM name is then set in two different repos. In theory, a particular qube should not care what it is called from the outside, but that's a wider discussion. So I am fine either way; other ideas may come up in the meeting we're having later.
Marek's point about wanting to set it in multiple VMs was pretty convincing to me. In theory we could do something like:
But I think that is less clean than a single ConditionPathExists. So I'm down to move forward with Qubes services, and if we end up running into problems, we can always revisit and adjust course.
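For reference, and as a sketch rather than the exact provisioning code we'd ship, enabling a Qubes service for a qube is a one-liner from dom0 (the service name below is made up; in Qubes 4.x per-VM services are stored as features prefixed with `service.`):

```sh
# dom0: enable the hypothetical sd-log-whatever service for the sd-log qube
qvm-service --enable sd-log sd-log-whatever
# equivalent, since per-VM services are just "service.*" features
qvm-features sd-log service.sd-log-whatever 1
```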
To summarize some of the (new) arguments for the use of services (as opposed to hostnames):
One important detail that Marek noted when implementing these services is to order them before qrexec. This ensures they run before the user's session and most other things.
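A sketch of what that ordering could look like in a unit file (the unit name, flag name, and binary are hypothetical; the qrexec agent unit is the one Qubes ships in VMs):

```ini
# sd-configure-whatever.service -- hypothetical boot-time configuration unit.
[Unit]
Description=Hypothetical boot-time configuration for a SecureDrop qube
# Enabled Qubes services appear as flag files under /var/run/qubes-service/,
# so this unit only runs when dom0 has turned the service on for this qube.
ConditionPathExists=/var/run/qubes-service/sd-configure-whatever
# Order before qrexec, i.e. before the user session and most other startup work.
Before=qubes-qrexec-agent.service

[Service]
Type=oneshot
ExecStart=/usr/bin/sd-configure-whatever

[Install]
WantedBy=multi-user.target
```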
From my calculations, the biggest bottleneck in provisioning is the need to provision files in app qubes. Breakdown of what Salt is provisioning (in VMs):
Proposal
Secret provisioning is what we can't avoid at the moment, but everything else can go into templates/packages and be provisioned on boot. So for now, my proposal would be to:
Make disposable + provision via systemd + Qubes services:
Provision via systemd + Qubes services:
Impact: four fewer qubes that need provisioning, with minor code changes.
Once #1035 lands, proxy is fully ready to be disposable! (I'm not sure why it has MIME handling enabled; nothing in that VM should be opening other files...)
Wasn't there MIME handling config added in sd-proxy specifically to avoid it opening files?
Sidebar: I seem to recall Marek mentioning a better way to deny this kind of functionality than trying to compete with all the places where MIME handling could be introduced, and than having to specify every filetype, which has been a source of errors for us in the past. But in any case, for the purposes of this PR, I think we could either use the systemd approach that we're planning for other VMs, or just create the symbolic link to the "default" MIME handling (which I think is only used for the proxy?) in the deb postinst and then override it in the other VMs.
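A rough sketch of the postinst variant, with a made-up package name and source path (the only non-hypothetical piece is /etc/xdg/mimeapps.list, the standard system-wide XDG MIME associations file):

```sh
#!/bin/sh
# Hypothetical maintainer-script fragment: point the system-wide MIME
# associations at a restrictive list shipped by the package, so nothing in
# the proxy qube ends up "opening" files.
set -e

if [ "$1" = "configure" ]; then
    ln -sf /usr/share/securedrop-proxy/mimeapps.list /etc/xdg/mimeapps.list
fi

#DEBHELPER#
exit 0
```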
I have moved the mime-handling conversation to its own separate issue to keep this one focused on how to approach systemd provisioning in general. I hope that's OK. (I should have created that issue anyway, as I did for the logging one.)
Duh, I was forgetting that sd-devices and sd-viewer were already disposable. So only sd-proxy can become disposable. |
Description
@zenmonkeykstop asked this morning whether #1001 is sufficient for all VM-level configuration, not just keys and values. I think we'll still want to use systemd units with ConditionHost conditions to enable individual services based on the hostname configured by Salt (and enforced by dom0 tests).
How will this impact SecureDrop/SecureDrop Workstation users?
No user implications.
How would this affect the SecureDrop Workstation threat model?
Along with #1001, this assumes we are comfortable with runtime (boot-time) configuration of VMs' roles and services, except for secrets.
Tasks: