```yaml
VERSION: ${{ matrix.BUILD_OS }}-${{ matrix.BUILD_ARCH }}-${{ env.VERSION_SUFFIX }}
run: |
  docker images -a
  docker push ${{ env.IMAGE_ID }}:${{ env.VERSION }}
```
What sort of naming scheme are we going for? Looking at some of the containers on Docker Hub, it seems many of them use the format `<image name>:<program version>-<os>` (e.g. `redis:6.2.7-bullseye` or `redis:6.2.7-alpine`), with the arch information stored in a manifest that Docker Hub takes care of. I'm not sure how ghcr handles different architectures.
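For reference, ghcr.io does understand multi-arch manifest lists, so a single tag can cover several architectures. A minimal sketch with `docker buildx` (the image name and version tag below are placeholders, not a proposal for the final scheme):

```sh
# Build one tag for multiple architectures; buildx assembles the per-arch
# images into a manifest list that ghcr.io (like Docker Hub) serves, so
# clients pull the right arch automatically.
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ghcr.io/alcionai/corso:0.1.0 \
  --push \
  .
```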
Punt: we're not pinning down the naming structure as part of this PR; we're just trying to get anything out. Likely @gmatev and others will have future concerns about that design. I'm sure it'll be some variation/combination of version|schedule|platform.
On that note, we still need to set the foundation for Corso versioning semantics and release-control patterns. I expect that much of the version tagging will cascade as a result of that.
For image tag names, the docs PR https://github.com/alcionai/corso/pull/658/files#diff-289cef99004de6fb5ad1a419ad9b4c2d491be1c7558eebfacc4ca484fd0eb58a actually has some relevant details.
```dockerfile
COPY ./bin/corso ./

USER nonroot:nonroot
```
Let's make sure we can have everything mapped from /app/corso, since this is what we are recommending in the docs (https://github.com/alcionai/corso/pull/658/files#diff-289cef99004de6fb5ad1a419ad9b4c2d491be1c7558eebfacc4ca484fd0eb58a). The goal is to simplify mapping so that users only map a single folder and we keep everything in there nicely organized.
For example:
- /app/corso/config/corso.toml
- /app/corso/config/logs - we don't have logs yet, but soon
- Anything else that makes sense - maybe various Kopia directories if that is reasonable
I think the simplest way is to set environment variables that specify these locations (which Corso picks up), and to make sure the right folders exist.
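A rough sketch of what that could look like inside the image (the variable names are hypothetical, not Corso's actual configuration surface):

```sh
# Hypothetical layout: point all Corso state at /app/corso so users only
# bind-mount one host folder. CORSO_CONFIG_DIR / CORSO_LOG_DIR are
# illustrative names, not real Corso settings.
mkdir -p /app/corso/config/logs
export CORSO_CONFIG_DIR=/app/corso/config
export CORSO_LOG_DIR=/app/corso/config/logs
```

On the user side, a single mount would then cover everything:

```sh
docker run -v "$HOME/corso:/app/corso" ghcr.io/alcionai/corso:latest ...
```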
When you say `map`, in this case do you mean just to make sure we `COPY /app/corso /app/corso` in the Dockerfile?
Or is this a request outside of the Docker build: making sure that our system plays nicely (via defaults, etc.) with the /app/corso location?
@ryanfkeepers sorry, this notification did not pop up.
I mean the latter. Our docs will tell you to map a local folder to /app/corso inside the container. The environment in the container should then be set up so that Corso interactions go to /app/corso.
Made an issue to track it. Feel free to add/change the details there.
```yaml
# + Separates the golang build from the corso build.
# - Haven't figured out multiplatform builds yet.
# - Doesn't cache, always takes 10-15 minutes per build in the matrix.
# Dockerfile:
```
The `Dockerfile` seems reasonable. I do think we need to move the `corso` binary build out of the `Dockerfile`, though; that is likely the slow part here.
It's definitely the slow part. Can you explain more about moving the binary build out of the Dockerfile? I'm still trying to wrap my head around Docker, and I'm not seeing how we'd pull that part out without being left with, functionally, the scripted approach.
Scripted approach: `./build-container.sh`, which internally calls `./build.sh` to build `corso`. Uses `build/Dockerfile`.
"Dockerfile approach" (very similar): uses `docker/Dockerfile`, which is a variation of `build/Dockerfile` that builds `corso` as part of the Docker build. Uses `docker/build-push-action`.
I'm suggesting we always use `./build.sh` as a step in the deploy to build the `corso` binary, and then use `build/Dockerfile` with the `docker/build-push-action` action to build and push the image.
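In shell terms, the suggested flow is roughly the following (the image name is a placeholder; in the actual workflow, the last two steps would be handled by `docker/build-push-action`):

```sh
# 1. Build the corso binary outside of Docker.
./build.sh

# 2. Bake the prebuilt binary into an image using build/Dockerfile.
docker build -f build/Dockerfile -t ghcr.io/alcionai/corso:latest .

# 3. Push the image to the registry.
docker push ghcr.io/alcionai/corso:latest
```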
Okay, I made another branch to try out this combination:
Here's the diff.
The build result.
And the resulting image.
It mostly works. For some reason the tag isn't coming through on `docker pull`, but otherwise it seems to be handling things fine. Let me know if that's along the lines of what you were thinking.
That looks about right. `./build.sh` does currently build within a container, and I think for CI we want to invoke `go build ...` directly (and enable Go caching). That should hopefully speed things up.
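A sketch of that direct invocation with the cache directories made explicit, so CI can persist them between runs (the output path and package path are assumptions about the repo layout):

```sh
# Build the binary directly on the runner instead of inside a container.
# Point the Go caches at workspace-local directories the CI runner can
# save and restore between runs (e.g. with actions/cache).
export GOCACHE="$PWD/.cache/go-build"
export GOMODCACHE="$PWD/.cache/go-mod"
go build -o ./bin/corso ./cmd/corso
```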
Tried to invoke `go build ...` directly, but we're getting some cross-compilation failures with the Azure libraries. Error messages can be seen here. The source seems to be this issue with CGO and cross-compilation.
If you could kindly check out the rest of the changes before I fold them into this PR, they're here.
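If none of the dependencies genuinely need C bindings, the common workaround for that class of failure is to disable cgo so the pure-Go toolchain handles the cross-compile. A sketch (whether this is viable depends on what the Azure libraries actually require; the package path is an assumption):

```sh
# Disable cgo so cross-compilation doesn't need a target-platform C
# toolchain. Only works when no dependency truly requires cgo.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o ./bin/corso ./cmd/corso
```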
Georgi's comment needs to be addressed, but that can be done as a follow-up.
Adds the `rolling_container` GitHub Action to build rolling-release container images on each branch push, and deploy them to ghcr.io on push to main. Also amends the build scripts to properly store and locate cached Go packages.
/aviator merge

Aviator status: This PR was merged using Aviator.

Kudos, SonarCloud Quality Gate passed!
Introduces the production of Docker containers as a CI step. Currently this only provides a rolling-release version that builds on every push to main. Images are deployed to ghcr.io.

The PR includes two variations on building the images. We'll likely want to stick with only one or the other.