
[Question] Builds inside local SSDs on GitLab + GKE #1431

Open
Laski opened this issue Sep 23, 2020 · 5 comments
Labels
priority/p1 Basic need feature compatibility with docker build. we should be working on this next.

Comments

@Laski

Laski commented Sep 23, 2020

As the title says, we're using Kaniko on GitLab CI/CD with a Kubernetes runner in GKE (using the official GitLab Runner Helm Chart). This works great and allows us to parallelize builds, use a cache repo and disable privileged mode on the runner pods.

However, we build many React applications and they take a long time to build even with full caching and the redo snapshot mode. We've identified disk throughput as the bottleneck on those jobs, so we changed the nodes' boot disks to SSDs. This was a great improvement, but we're still hitting the network throughput limits (persistent disks in GCP are network-attached).

GCP provides local SSDs precisely to avoid those network limits, which seems ideal for Kaniko since we don't need any durability after the image is pushed. But local SSDs get mounted at a specific path on the node (/mnt/disks/ or something like that) and I wasn't able to find documentation on how to take advantage of them with Kaniko. My understanding is that I need to accomplish three, possibly unrelated, things:

  • mount the SSD as a volume inside the job pod

  • tell the runner to checkout the repository (and run the job) inside that dir

  • tell Kaniko to do all its work inside that dir

Any hints on that? Has anyone achieved something similar? (A rough sketch of what I have in mind follows.) Thanks in advance.
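
Roughly, here is what I imagine the runner side could look like, assuming GKE exposes the local SSD on the node under /mnt/disks/ (the volume name and paths below are just placeholders):

      [[runners]]
        # Check out the repository (and run the job) on the local SSD.
        builds_dir = "/mnt/disks/ssd0/builds"
        [runners.kubernetes]
          # Expose the node's local SSD inside the job pod.
          [[runners.kubernetes.volumes.host_path]]
            name = "local-ssd"
            mount_path = "/mnt/disks/ssd0"
            host_path = "/mnt/disks/ssd0"

The job itself would then run from that directory, but I'm not sure whether Kaniko also needs to be told to keep its own working files there.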

Disclaimer: I know this is not a Kaniko-specific issue, but this seems like a good place to find someone able to help. Feel free to close it if you disagree. I've asked the same question in the GitLab Runner repository, btw.

@tejal29
Contributor

tejal29 commented Oct 6, 2020

Thanks @Laski for the issue.
This seems like a reasonable ask. I will find time to go through the documentation links and ways to integrate it with kaniko.

Thanks
Tejal

@tejal29 tejal29 added the priority/p1 Basic need feature compatibility with docker build. we should be working on this next. label Oct 6, 2020
@Laski
Author

Laski commented Oct 8, 2020

Thanks @tejal29. FYI, while investigating this I found a bug in the GitLab Runner that makes this impossible (?) for the time being: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27011

@mitar

mitar commented Jan 27, 2021

I would suggest you use a ramdisk instead. Even faster than SSD. And already supported on GitLab CI. :-)

      [[runners]]
        # Do checkouts and caching on the ramdisk.
        builds_dir = "/ramdisk/builds"
        cache_dir = "/ramdisk/cache"
        [runners.kubernetes]
          [[runners.kubernetes.volumes.empty_dir]]
            # Mount ramdisk into build containers (8GB for n2d-highcpu-16).
            name = "ramdisk"
            mount_path = "/ramdisk"
            medium = "Memory"
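
With builds_dir on the ramdisk, the checkout (CI_PROJECT_DIR) already lives there, so a kaniko job can just use it as the build context. For illustration, a minimal sketch (the Dockerfile path and destination tag are placeholders):

      # Runs inside the gcr.io/kaniko-project/executor job container;
      # the context is the ramdisk-backed checkout.
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}" \
        --snapshotMode=redo \
        --cache=true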

@mitar

mitar commented Jan 27, 2021

I do not think this approach (SSD or ramdisk) will really help, though, because I believe Kaniko does not touch the GitLab build directory much (only for the initial context files, not for the layers it is building). See #1310. Or maybe I am mistaken? Where do the build files go: into /kaniko or into the context dir?

@mitar

mitar commented Feb 9, 2021

Sadly, it turns out GitLab CI does not yet properly support the ramdisk with the Kubernetes executor: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27545
