This repository has been archived by the owner on Dec 5, 2023. It is now read-only.

Build order for flux and pin versions for kustomize and sops #45

Closed
onedr0p opened this issue Mar 9, 2020 · 13 comments
Labels
enhancement New feature or request

Comments

@onedr0p
Contributor

onedr0p commented Mar 9, 2020

I wonder if there's a way to set a build order so that flux is built last, since it depends on kustomize and sops.

It might also be wise to pin the kustomize and sops versions in the flux Dockerfile instead of using latest.

@onedr0p onedr0p added the enhancement New feature or request label Mar 9, 2020
@onedr0p
Contributor Author

onedr0p commented Mar 9, 2020

For automation's sake, it may be better to pull the version file for each from

https://github.com/fluxcd/flux/tree/master/docker

We wouldn't be building the most recent versions of kustomize or sops, but this ensures compatibility.

The same could be applied to helm-operator too.
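For illustration, a minimal shell sketch of that idea, assuming the version files still live at docker/*.version upstream and contain plain version strings; the raw URLs, build-arg names, and image tag are assumptions:

# Fetch the versions flux pins upstream and pass them through as build args.
KUSTOMIZE_VERSION=$(curl -fsSL https://raw.githubusercontent.com/fluxcd/flux/master/docker/kustomize.version)
SOPS_VERSION=$(curl -fsSL https://raw.githubusercontent.com/fluxcd/flux/master/docker/sops.version)
docker build \
  --build-arg KUSTOMIZE_VERSION="${KUSTOMIZE_VERSION}" \
  --build-arg SOPS_VERSION="${SOPS_VERSION}" \
  -t raspbernetes/flux:arm .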

@xunholy
Member

xunholy commented Mar 9, 2020

We should be able to trigger separate workflows as dependencies of the Flux build: https://github.community/t5/GitHub-Actions/the-same-dependency-in-all-workflows-jobs/m-p/37367#M2904

I would keep them as they are, but for the specific builds we should publish the versions, and for latest also publish the dependent versions if possible.

@onedr0p
Contributor Author

onedr0p commented Mar 10, 2020

@onedr0p
Contributor Author

onedr0p commented Mar 11, 2020

There's a small issue with COPY --from: it doesn't allow you to use variables in that directive.

moby/moby#34482

@xunholy looks like we may need to host the binaries for kustomize and sops somewhere and pull those in with curl.

Any thoughts on what we could use for hosting static binaries?
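For context, a minimal Dockerfile sketch of the limitation and the usual workaround (build args are expanded in FROM, so the dependency image can be aliased as a named stage); the image names and binary path are purely illustrative, and this only helps if arm images of the dependencies are already published somewhere, which is why hosting raw binaries and pulling them with curl may be simpler:

ARG SOPS_VERSION=v3.5.0
# Works: the ARG is expanded in FROM, so alias the (hypothetical) dependency image as a stage.
FROM raspbernetes/sops:${SOPS_VERSION} AS sops

FROM alpine:3.10
# Fails: build args are not substituted inside COPY --from (moby/moby#34482).
# COPY --from=raspbernetes/sops:${SOPS_VERSION} /usr/local/bin/sops /usr/local/bin/sops
COPY --from=sops /usr/local/bin/sops /usr/local/bin/sops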

@ikaruswill
Member

Hey guys, I came from the fluxcd thread; good to finally meet some people who run Kubernetes on the Pi. I have a cluster of 7 Pi 3B+, 1 Pi 4B (4 GB), and 5 Pi 1B+ running K3s with fluxcd.

I have an automated image-builder CI/CD pipeline that builds the sops and kustomize binaries in a multi-stage build. This ensures ordering but of course lengthens the build.
If you're keen, I can submit a PR.

@ikaruswill
Member

As for hosting static binaries, I'm thinking we can set up one repository for each component instead of a monorepo like this one, which will allow us to use GitHub releases for hosting.

@xunholy
Member

xunholy commented Mar 12, 2020

@ikaruswill I'd love to see the repo where you've built your flux image. We've currently also got it building and tracking the upstream releases; however, we want to keep the version matrix pinned to the one upstream uses, without having to configure it manually each time they upgrade. Happy to take a pull request with improvements.

We're currently pushing all our images to docker.io, which has been fantastic for us. Are there any benefits to GitHub release hosting in this context? It might be worth capturing that in another issue; so far the monorepo has not been an issue.

@ikaruswill
Member

ikaruswill commented Mar 12, 2020

@xunholy This is how I handle pinning the version matrix to the upstream version matrix:

I currently keep a fork of fluxcd with my ARM Dockerfile in a branch. I run the GitHub App wei/pull to keep the fork synced with upstream by polling at an interval, opening a PR, auto-merging, and doing hard resets.

On premises, I run a Drone CI pipeline that listens for push webhooks from my flux fork (triggered by the hard resets), pulls the git tags from upstream, and pushes the new tags to origin to trigger tag webhooks (so we don't build on every commit).
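Roughly, that tag-sync step amounts to the following (the remote names are assumptions):

# Mirror upstream tags into the fork so tag webhooks fire only on releases.
git fetch upstream --tags
git push origin --tags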

This then triggers another Drone CI pipeline that checks out my ARM branch, merges the tagged commit in, and builds with the same version files (sops, kustomize, fluxd) as the ones used in the official Docker build process. This ensures that the dependency version matrix matches upstream automatically.

COPY ./kubectl.version ./kustomize.version ./sops.version /

The build is multi-stage: it builds fluxd, kustomize, and sops, downloads fluxctl and kubectl, and then copies the build artifacts into a final stage based on alpine:3.10, which is built and pushed to Docker Hub.

You can find my branch here. All in all, I'd love to join you guys on this.
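Roughly, the layout looks like this (the builder image, download URLs, and versions below are placeholders, not necessarily what the branch uses):

# Builder stage: fetch or build the ARM binaries (sources here are placeholders).
FROM golang:1.13-alpine AS builder
ARG SOPS_VERSION
ARG KUSTOMIZE_VERSION
RUN apk add --no-cache curl ca-certificates
RUN curl -fsSL -o /sops https://example.com/arm/sops-${SOPS_VERSION} && \
    curl -fsSL -o /kustomize https://example.com/arm/kustomize-${KUSTOMIZE_VERSION} && \
    chmod +x /sops /kustomize

# Final stage: only the built artifacts are copied in.
FROM alpine:3.10
COPY --from=builder /sops /usr/local/bin/sops
COPY --from=builder /kustomize /usr/local/bin/kustomize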

@ikaruswill
Member

ikaruswill commented Mar 12, 2020

@xunholy On the GitHub release point: we can push just the binaries to a GitHub release, and image building will be easier without having to deal with COPY --from variable substitution, the issue @onedr0p ran into.

So if we wanted sops version 3.5.0, in the Dockerfile we would just have to do:

RUN curl -L https://github.com/raspbernetes/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux.arm

Much akin to how Mozilla publishes the sops binaries.

A monorepo will make it challenging to organize all the built binaries and tag them neatly. Individual repos, however, allow clean tagging of binary versions without suffixes to differentiate each app.

What I suggest is that we definitely keep the Docker image hosting, but add GitHub release binary hosting on top of it.
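Spelled out a bit more, with the release repo, version, and install path as assumptions (and assuming curl is available in the image), the Dockerfile fragment could look like:

ARG SOPS_VERSION=v3.5.0
# Pull the prebuilt ARM binary from the (hypothetical) GitHub release and install it.
RUN curl -fsSL -o /usr/local/bin/sops \
      https://github.com/raspbernetes/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux.arm && \
    chmod +x /usr/local/bin/sops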

@xunholy
Member

xunholy commented Mar 12, 2020

Interesting, I hadn't seen the wei/pull GitHub App. We were able to achieve the same result through GitHub Actions; we only pull the latest releases (not building on each commit).

The main concept behind this particular repository was to have a place where the community could request support for different architectures, although ultimately the objective is to get these changes adopted by the upstream maintainers.

I don't think using GitHub releases would be an issue in regards to artifact management; as we're doing now, we can have custom workflows suited to specific directory structures and tagging. I do like the idea of storing the binaries because of the ability to then reference the version as a build arg.

We've got a great community of individuals; come join us on Discord: https://discord.gg/RGvKzVg

@onedr0p
Contributor Author

onedr0p commented Mar 12, 2020

@ikaruswill please feel free to open a PR with any proposed changes, we'd love the help :)

@RobReus
Member

RobReus commented Jun 21, 2020

I just created PR #115, which introduces a build stage for the flux image. We could also add the building of kustomize and sops into this same stage; it will, however, increase the build time.

I would prefer the solution @ikaruswill mentioned about releasing the binary builds to GitHub. We could then easily and dynamically fetch the correct versions during the flux image build using the version files inside the fluxcd repo.

@xunholy
Member

xunholy commented Nov 27, 2020

Flux is now in maintenance mode, and Flux v2 supports separate architectures via the --arch flag (although it doesn't support multi-arch images), so you can target a particular architecture. I will close this issue and recommend everyone consider moving to the new Flux, since the original is no longer receiving updates upstream.

@xunholy xunholy closed this as completed Nov 27, 2020