Upload packages as they are built. #2503
Currently, conda-build uploads packages only after all requested packages have been built. This is undesirable when the build runs on Travis, where a limit of roughly one hour applies to each job: building a single package across a matrix of 3 Python versions x 3 numpy versions can easily exceed that limit, so the packages that did finish building are never uploaded to anaconda.org. Could you change the logic to upload each package immediately after it is built?
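For context, a minimal sketch of the upload switch being discussed, assuming the standard `anaconda_upload` key in `.condarc`; the reporter's actual configuration is not shown in the thread:

```yaml
# ~/.condarc (sketch) -- turns on conda-build's automatic upload step;
# the request above concerns *when* this upload happens, not whether
anaconda_upload: true
```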
I don't think this change would really help you. Wouldn't you still have at least one build that exceeded the time limit and wouldn't be uploaded?

Building a large matrix of things on Travis is a problem: you need to find a way of breaking your build up into multiple Travis jobs. This is one of the main issues with cb3 vs. c-b-a. It's perfectly possible to use cb3 to split out jobs; it just takes some extra work. conda-concourse-ci does it by saving out recipes with separate conda_build_config.yaml files, each covering some subsection of the full matrix. For example, your matrix of 3 pythons x 3 numpys would become 9 output recipes, each with a conda_build_config.yaml that has only one python and one numpy. The question is where those intermediate recipes should be stored so that Travis jobs can be created for each of them. Our build system uses rsync to put the recipes on an intermediary server that provides them to build workers. You either need something similar, or perhaps conda-build 3 could support environment variables for these things with some pattern, and then your CI jobs would be broken up into several different env var combinations.

For now, I recommend cutting out your numpy version matrix. Why are you doing that? You really only need to build against one (old) numpy version, and your package will then be compatible with newer numpy versions. For example, we build against 1.9 for mac and linux, and 1.11 for windows. We then use a pinning function in our run requirements to generate a dependency whose lower bound comes from the numpy we built against and whose upper bound excludes the next major version.
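The code snippets in the comment above were not preserved in this copy of the thread. A minimal sketch of the kind of pin being described, assuming conda-build 3's `pin_compatible` Jinja function; the commenter's exact recipe may differ:

```yaml
# meta.yaml (sketch)
requirements:
  build:
    - python
    - numpy                       # resolves to the old version, e.g. 1.9
  run:
    - python
    # emits a spec derived from the numpy present at build time,
    # e.g. "numpy >=1.9.3,<2.0a0" when built against numpy 1.9.3
    - {{ pin_compatible('numpy') }}
```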
Thanks for the suggestion -- I am still a bit confused. I've already dropped the numpy axis, but building for just 3 Python versions still nearly times out on the OSX targets, and we haven't even reached the largest package yet. If I further split the 3 Python versions, then I am bypassing the entire matrix system.
I suppose I can set up a build matrix on Travis, then generate conda_build_config.yaml in the build script?
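A minimal sketch of that approach, assuming a hypothetical `PY_VER` environment variable and a recipe under `recipe/`; none of this comes from the thread itself:

```yaml
# .travis.yml (sketch)
matrix:
  include:
    - os: osx
      env: PY_VER=2.7
    - os: osx
      env: PY_VER=3.5
    - os: osx
      env: PY_VER=3.6
script:
  # write a one-entry variant file so each job builds a single Python
  - echo "python:"      > recipe/conda_build_config.yaml
  - echo "  - $PY_VER" >> recipe/conda_build_config.yaml
  - conda build recipe/
```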
Here's an example: https://github.com/AnacondaRecipes/pytables-feedstock/blob/master/recipe/meta.yaml#L39 It's less about bypassing the matrix system, and more about how to use the matrix system to generate CI jobs. If it is necessary to manually set up CI scripts, then something is wrong; this is a job for automated tools. We had some good discussion about this in the conda-forge meeting this morning. @jakirkham had the good idea that we should use the matrix at re-rendering time, so that we generate the different conda_build_config.yaml files and check them into the feedstock. @jjhelmus had another good idea: keep the main, full conda_build_config.yaml intact, but reduce it using environment variables or CLI arguments on a per-job basis.
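To illustrate the re-rendering idea (file names and contents here are hypothetical, not an actual feedstock layout): the full matrix stays in the recipe, and one reduced variant file is generated and checked in per CI job.

```yaml
# recipe/conda_build_config.yaml -- the full matrix
python:
  - 2.7
  - 3.5
  - 3.6
numpy:
  - 1.11

# one generated file per CI job, e.g. ci/python2.7.yaml -- same keys,
# a single value each
python:
  - 2.7
numpy:
  - 1.11
```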
Thanks for the recipe -- does this go with a special conda_build_config.yaml, or is the recipe alone enough? I don't see the familiar numpy pin.
Pinning matches packages by name. If there's a numpy key in conda_build_config.yaml, it is sufficient to have only a bare numpy entry in the requirements. If any constraint is in meta.yaml as well, it is combined with the value from conda_build_config.yaml.
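A sketch of that name matching; the file contents here are illustrative rather than taken from the linked recipes:

```yaml
# conda_build_config.yaml
numpy:
  - 1.9

# meta.yaml -- the bare name below is matched against the numpy key
# above, so the build is pinned to 1.9 with no version spec in the recipe
requirements:
  build:
    - numpy
  run:
    - {{ pin_compatible('numpy') }}
```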
PS: all of our recipes use this conda_build_config.yaml: https://github.com/AnacondaRecipes/aggregate/blob/master/conda_build_config.yaml
Thanks for the example. I ended up splitting the build into one job per Python version. I also realized I have to call conda-build on individual packages anyway, because there are dependencies between the packages I build, so I am still doing one upload per package. When conda-build becomes dependency-aware across recipes, upload-per-package will probably become more relevant.
I don't think it'll become more relevant, because conda-build uses locally built packages. If you need distributed build workers, then yes, you need to shuttle your build artifacts around somehow. With conda-concourse-ci we do that with rsync to an internal server rather than uploading to anaconda.org, because we don't want to upload anything to anaconda.org until we know that it works.
We do not have the funding to set up an internal server to rsync to, so we can only afford going directly to anaconda.org. I was hoping that conda-build has already asserted that a package works by the time it is built, especially if all local packages built earlier in the current session have already been uploaded to anaconda. BTW, Fedora stages packages to a separate testing repository before publishing them.
We do not have funding to support arbitrary use cases. If you want this done, submit a PR. Until then, I'm closing this issue.
fair point
Hi there, thank you for your contribution! This issue has been automatically locked because it has not had recent activity after being closed. Please open a new issue if needed. Thanks!