This repository has been archived by the owner on May 22, 2018. It is now read-only.

Dirty tree on deployed computers #48

Closed
roderik opened this issue Feb 16, 2013 · 7 comments
Comments

@roderik

roderik commented Feb 16, 2013

I'm trying out boxen on a clean VM, and every time I add something to my-boxen, I rerun boxen --debug on the VM.

I've run into a lot of dirty-tree errors, so it wouldn't update. Since I know for sure I didn't do anything on the VM except run boxen, this poses a problem.

I fixed it by removing a number of files from the git repo and ignoring them:

/vendor/cache
/vendor/puppet/cache
/.projects
/Puppetfile.lock

Is this the way to go, or am I just missing something?

@jbarnette
Contributor

With the exception of .projects, which should be in .gitignore (sorry about that, I probably missed it), changes to cached gems or modules and Puppetfile.lock should probably be committed.
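Putting that advice into a .gitignore sketch (an illustrative fragment based on this thread, not the canonical my-boxen file), the result would look something like:

```gitignore
# Per-machine project checkouts; should not be committed.
/.projects

# Note: /vendor/cache, /vendor/puppet/cache, and /Puppetfile.lock are
# deliberately NOT listed here. Per this thread, checking in cached gems,
# cached puppet modules, and the lockfile makes deploys faster and less
# error-prone.
```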

@roderik
Author

roderik commented Feb 16, 2013

What is the benefit of storing GitHub downloads in a git repo? The only benefit I see is that if a maintainer removes a tag or repo, your system keeps working. But does this outweigh the extra disk space in git?

@marsadle

I would also vote for ignoring the puppet cache. Storing the tar files in the repo makes little sense to me.

As for the Puppetfile.lock, you should run bundle exec librarian-puppet update when you add a module, to update the dependencies like you would with Bundler. This is a good way to ensure everything is found and working.

@jbarnette
Contributor

Ignore whatever you want, but look at some actual disk space usage before you do. We've found that checking in dependencies makes for faster, less error-prone deploys.

@mmuehlberger Don't bother with librarian-puppet update, just run boxen.

@jbarnette
Contributor

I'm closing this because it's a matter of template opinion, not a bug.

@wfarr
Contributor

wfarr commented Feb 16, 2013

@roderik To answer your query directly, it saves every person running your boxen from making N API calls to GitHub.com to download N tarballs any time N modules change in the Puppetfile. This turns out to be quite a savings in runtime of Boxen. Most of the tarballs are around 4-8K in size, and git actually does a solid job of not blowing up the repository size (GitHub's boxen, which has had numerous module updates, clocks in at less than 1.2MB on our file servers after around 6 months or so).

@roderik
Author

roderik commented Feb 16, 2013

@wfarr ok cool, unignored :)

Especially since I got hit with the API limit using boxen by myself, bypassing this by adding them to the repo makes it usable in an organisation as well. #50

@wfarr wfarr mentioned this issue Feb 16, 2013
toocheap pushed a commit to toocheap/my-boxen that referenced this issue Oct 31, 2013
.projects should be ignored by default based on boxen/our-boxen#48 (comment)
tarVolcano pushed a commit to tarVolcano/my-boxen that referenced this issue Nov 13, 2013
.projects should be ignored by default based on boxen/our-boxen#48 (comment)
tsphethean pushed a commit to tsphethean/my-boxen that referenced this issue Jun 11, 2014
.projects should be ignored by default based on boxen/our-boxen#48 (comment)