Feature Request: rocketpool service prune and rocketpool service reset #323

Closed
jshufro opened this issue Mar 16, 2023 · 4 comments
@jshufro (Contributor)

jshufro commented Mar 16, 2023

Every now and then docker-ce screws up its persisted state and needs a light cleaning.

Typically this takes the form of `rocketpool service stop; docker system prune -af; rocketpool service start`.
Ideally we'd add this to smartnode, with `rocketpool service reset` doing the above.

Additionally, many NOs are not aware that old images are retained until `docker system prune -af` is run, so a command called `rocketpool service prune` would be nice, but probably only if it can be done in a way that doesn't delete the latest images/containers.
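A minimal sketch of what a `reset` wrapper could run, based on the sequence above (hypothetical; note docker-ce spells its flags `-a/--all` and `-f/--force`, so `-af` rather than `-ay`). The function emits the commands instead of executing them so the sequence can be inspected or dry-run first:

```shell
#!/bin/sh
set -e

# Hypothetical sketch of `rocketpool service reset`: stop the stack,
# prune docker's persisted state, then start again.
reset_cmds() {
    echo "rocketpool service stop"
    echo "docker system prune -a -f"   # -a: also remove unused images; -f: skip the y/N prompt
    echo "rocketpool service start"
}

reset_cmds
```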

@activescott (Contributor)

FYI: Starting on this...

@activescott (Contributor)

activescott commented Mar 6, 2024

@jshufro `docker system prune` is pretty aggressive and will remove all images. This is recoverable, but likely means GBs of downloads again for the user. That could be fine; after all, the user is asking for a pretty serious action, and maybe that's what y'all do in support now anyway.

Another option is to go through all the container names we know we need for the current smartnode stack (I believe they're in config) and tag them; as far as I knew, explicitly tagged images wouldn't be removed by prune, and after the prune we'd remove the extra tag so the images still get cleaned up when they rev their versions in the future. However, after reading more carefully and some experimentation to confirm: `docker system prune -a` will still end up removing tagged images, since prune first removes all stopped containers and then untags and removes all the images.

`docker system prune` (along with its cousins `docker container prune` and `docker image prune`) has a `--filter` option, but that option only allows filtering on labels or timestamps, and I don't see any existing label we could leverage. Potentially we could add a label to our compose templates indicating the image version each service was started with, so we could filter that way? But this seems to be getting overly complicated...
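As a concrete illustration of the label idea (the label name here is hypothetical, not something in the current templates), a compose service could carry a label, and the prune could then exclude anything holding it via the `label!=` form that docker's prune filters support:

```shell
#!/bin/sh
set -e

# Hypothetical: the compose template would attach a label to each service, e.g.
#   services:
#     eth1:
#       labels:
#         - "rocketpool.version=1.11.9"
#
# Build the matching prune command, keeping anything that carries the label.
build_prune_cmd() {
    label="$1"
    echo "docker system prune -a -f --filter label!=${label}"
}

build_prune_cmd "rocketpool.version=1.11.9"
```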

So with this all considered, what would you like to happen here with both rocketpool service reset and rocketpool service prune?

So I did figure out a way to preserve all of the latest images used by the Smartnode's compose stack that I like for both `reset` and `prune`. You can see the details in the PR, and the specific approach I'm talking about is in commit f16f09b.

Maybe ack this here just so we're all aligned, and we can use the PR to highlight any changes you want to see in the code.

@jshufro (Contributor, Author)

jshufro commented Mar 8, 2024

I think we'll want a --all option if we're going to preserve currently-used by default.

Mostly, this command will be used to rebuild rocketpool_net, which can only be deleted if all the containers within are removed. Hopefully your solution preserving currently-used images doesn't prevent their containers from being deleted?

Edit: I can see from the examples in the PR that this works perfectly
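The network-rebuild case above could be sketched like this (the network name comes from the comment; the exact ordering is the point, since a docker network can only be removed once every container attached to it is gone):

```shell
#!/bin/sh
set -e

# Sketch: removing rocketpool_net requires its containers to go first,
# so removal has to happen in this order.
rebuild_net_cmds() {
    net="$1"
    echo "rocketpool service stop"
    echo "docker container prune -f"   # stopped containers must be removed first
    echo "docker network rm ${net}"    # now the network can be deleted
    echo "rocketpool service start"    # recreates the network on startup
}

rebuild_net_cmds "rocketpool_net"
```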

@activescott (Contributor)

activescott commented Mar 9, 2024

I added an `--all` option in the PR, and also rebased onto current master. Careful with `--all`: it nukes your `rocketpool/smartnode:v1.11.9-dev` image on a dev machine, and `reset` won't be able to start the node after the prune until you rebuild the daemon container :)

@jshufro jshufro closed this as completed Apr 2, 2024