travis-ci.org is shutting down #3462
Yep, this is for sure going to be a problem, as travis-ci.com is a paid service with a free plan that is surely not going to cover our needs. I was looking into hosting a Jenkins build server for this myself (I already have a build server set up for "custom builds", which I'm working on). On my build server I can build an entire set in roughly 15-20 minutes, maybe even faster if I split the job over several build agents. (Right now the PIO envs are built in a single sequence, so half of the time is spent in linking.) N.B. the build server has been paid for with the Patreon money Jimmy and I receive.
GH build time is about as fast as Travis, queue and network speeds are superb though. For example:
As it turns out, none of the used scripts use TRAVIS_... variables, which I noticed via
Sidenote: Sidenote2: Re: the build server, it is possible to use a custom 'runner' for the Actions, and at least one on a free plan: But I'm out of my depth here. And one concern is:
Which is also true for any other self-hosted solution.
Building the docs is not needed for a normal test build, as we also don't do anything with its build output or return value. Caching is a bit tricky now that we concatenate files in some directories to overcome PlatformIO issues on Windows. The PUYA patches can be removed, as we also do not build older core library versions that do not include the PUYA patches. I will look into the concept of a 'runner' for the GH Actions. I will also look into the security concern you mention, as it is indeed potentially serious.
I noticed you're using a fork based on a very old commit...
So maybe you can also test it on a more recent commit?
Different tree! The GitHub default branch is indeed the very old mega, see the ghactions branch instead.
CI only has an Ubuntu build though? There are some syntax tricks that can check the $ENV contents and select a different cache, which I will try (see the sketch below). I guess … Note about the … Will put the docs & PUYA aside though (at least for now), thanks for the clarification.
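For reference, one way such a trick could look is a tiny helper that a workflow step runs with the matrix env name, emitting a cache-key suffix for actions/cache. The script name, the grouping rule, and the (pre-2022) ::set-output command below are illustrative assumptions, not the actual CI code:

```python
# cache_key.py (hypothetical): pick a cache-key suffix from the PIO env name.
import sys

def cache_suffix(env_name):
    # illustrative rule: keep esp32 and esp8266 toolchains in separate caches
    return "esp32" if "esp32" in env_name.lower() else "esp8266"

if __name__ == "__main__":
    env_name = sys.argv[1] if len(sys.argv) > 1 else ""
    # pre-2022 GitHub Actions workflow command for exposing a step output
    print("::set-output name=cache-suffix::{}".format(cache_suffix(env_name)))
```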
Thank you for testing/trying, as it removes a bit of the time-critical burden of moving the CI/CD due to the shutdown of travis-ci.org.
No problem 👍
https://github.com/mcspr/ESPEasy/runs/1695629152?check_suite_focus=true
Another thing, I missed the
Hmm, that would be nice if we could get the tag from the build environment.
With tags, this is what it looks like:

e.g.

```python
import os

Import("env")  # provided by PlatformIO / SCons when run as an extra script

def get_tag():
    # since "If neither a branch or tag is available for the event type, the variable will not exist."
    ref = os.environ.get("GITHUB_REF", "")
    if ref and ref.startswith("refs/tags/"):
        return ref[len("refs/tags/"):]
    return ""

env.Append(CPPDEFINES=[("BUILD_GIT", "\\\"{}\\\"".format(get_tag()))])
```

https://github.com/mcspr/ESPEasy/runs/1697632179?check_suite_focus=true#step:7:153
Please have a look at the Python script I made for setting compile-time defines in the C++ code.
Do you mean env vs. projenv in normal scripts, or something else? AFAIK, since I append in the pre: script, it gets cloned into both.
I think it has something to do with handling command-line arguments, so it is OS-dependent. Appending defines in the projenv scripts, indeed.
So the BUILD_FLAGS is basically either … CPPDEFINES, though, is something that gets generated some time into the build from the BUILD_FLAGS, which then gets injected into the env object variables, e.g.

```python
>>> env.ParseFlags("-DBLAH")["CPPDEFINES"]
["BLAH"]
```

FWIW, I have used this approach to inject the commit sha, which seems to work so far across all the platforms via

edit: Still TODO - modify the GitHub Action to upload the resulting files. Will try to do that some time today (without getting distracted with PIO internals research...)
I also noticed the jobs list is hard-coded into the CI config, and my previous approach to detect the cache via the env name does not always work. There could also be some kind of enable/disable list in place of the code, for when envs do not need to be built. I will go on generating artifacts per job, since I can't exactly generate a .zip from every env at once like the existing build does via build_ESPeasy.sh, and munge them together at the end to upload as a release asset. The general idea is to have:
idk if it is really worth generating a source package when being hosted on GitHub? It already allows downloading the 'tag' tree as an archive, i.e.:
This is the best moment to rethink what the best approach is for the build artifacts we would like to create. Also, ReadTheDocs does support tags (and thus versions) of the docs, so that's also something to look at, to generate new tags/versions in RTD when a new tag is set. Jimmy (@Grovkillen) is working hard on improving the flasher so it will also support ESP32, and it can also be adapted to download the separate bin files from GitHub if we have them as separate files. Maybe we can also create a link to a tagged version of the source, so you don't actually host the source ZIPs on GitHub, but rather let it generate them when needed. About the cache functionality.
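As a side note on linking to a tagged version of the source (an illustration, not a decision): GitHub already generates a source archive for every tag on demand at a predictable URL, e.g. https://github.com/letscontrolit/ESPEasy/archive/refs/tags/&lt;tag&gt;.zip (the &lt;tag&gt; part is a placeholder), so the flasher or the release notes could simply link there instead of us uploading source ZIPs ourselves.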
Having per-prefix builds - test_..., custom_..., hard_..., etc. - also came to mind; we don't necessarily need that many parallel jobs. And it may still generate artifacts from the build results that succeeded. Job artifacts are strictly temporary though, so a release asset is the best approach imo. But, for example, it allows PRs to have some runnable builds.
Cache right now only keeps the platform tools, toolchain and the framework files. Will add pip. Libs are another story, since PIO keeps them per env. But it's possible to install libs in the pre: script to a shared location, or globally, to avoid the library manager fetching them in the main step (see the sketch below): But I am not familiar with the flasher tool's usage of GitHub assets & where the source URL may be used / referred to. The approach right now is to build call
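A minimal sketch of that pre-install idea, assuming the libraries are fetched into PlatformIO's global storage before the main build step; the script and the library list are placeholders, not the project's actual lib_deps:

```python
# Hypothetical warm-up step run before "pio run", so the cached global storage
# (~/.platformio) already contains the libraries and no env has to fetch them.
import subprocess

# placeholder list; the real one would mirror lib_deps from platformio.ini
SHARED_LIBS = ["ArduinoJson", "IRremoteESP8266"]

for lib in SHARED_LIBS:
    # "-g" installs into the global library storage, visible to every env
    subprocess.check_call(["pio", "lib", "-g", "install", lib])
```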
So, at least the upload & download of artifacts can be detected in bulk:
The workflow right now is:
Some side notes:
Looking good.
Re. pygit2, the issue comes from: Line 25 in 4747dd8
https://pypi.org/project/pygit2/#files - the current 1.4.0 does have the 3.9 pre-built wheels. I can also switch to a PR after all and add more envs to the build list.
Yes, please do. Every now and then I just upgrade the packages to the then-current version and test if it still works. So I guess now is the time for a new upgrade of the Python packages. A PR will be much appreciated.
@mcspr What is the status of your tests?
The issue with the tag message was solved by a manual fetch + replace of the current tag, since the github checkout action uses minimal depth to fetch stuff (a rough sketch of the idea is below): Let me rebase this again so I don't have those tagged commits and leftover debug flags.
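For context, the workaround amounts to re-fetching the annotated tag over the lightweight one left by the shallow checkout; a sketch under those assumptions (the tag detection and git invocation here are illustrative, not the exact commit in the PR):

```python
# Re-fetch the annotated tag so tag messages / "git describe" work after a shallow checkout.
import os
import subprocess

ref = os.environ.get("GITHUB_REF", "")
if ref.startswith("refs/tags/"):
    tag = ref[len("refs/tags/"):]
    # --force overwrites the local (lightweight) tag with the annotated one from origin
    subprocess.check_call(
        ["git", "fetch", "--force", "origin",
         "refs/tags/{0}:refs/tags/{0}".format(tag)])
```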
This issue can be closed, as we have successfully moved to GitHub Actions 👍
Quoting the most recent CI job: https://travis-ci.org/github/letscontrolit/ESPEasy/builds/753527504
I have not found any issues even mentioning it, and idk if this was discussed internally or on the forums, but a simple "travis-ci.org" search did not find anything. Also, I assume this is the correct place for the issue / discussion, and not the forum.
Some more info:
Although it is apparent that this is not the case just yet, it very likely will be in the near future.
A very quick solution is to just use GitHub Actions, however the build scripts will need to be changed (e.g. anything depending on TRAVIS_... env variables or doing anything with its API). Which I could try in a PR, unless some other solution is in the works.
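For reference, the direct replacements would mostly be GitHub Actions' default environment variables; a hedged sketch of the kind of mapping a build script could use (the helper name is made up, the GITHUB_* names are the documented defaults):

```python
# Hypothetical helper showing Travis -> GitHub Actions environment variable equivalents.
import os

def ci_context():
    return {
        "is_ci": os.environ.get("GITHUB_ACTIONS") == "true",  # was: TRAVIS
        "commit": os.environ.get("GITHUB_SHA", ""),            # was: TRAVIS_COMMIT
        "ref": os.environ.get("GITHUB_REF", ""),               # was: TRAVIS_BRANCH / TRAVIS_TAG
        "is_pull_request": os.environ.get("GITHUB_EVENT_NAME") == "pull_request",  # was: TRAVIS_PULL_REQUEST
    }
```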