Missing optional parameters in experiment.yaml for 3dfgat_cycle #448
Conversation
Why have a fork of swell that I contribute to, rather than create a branch on the main repository? There's not really a reason other than that's how I've done things in the past. I've never really thought much about it; admittedly, I don't yet have the strongest confidence in how things are done on GitHub.
Forking to your own GitHub account and then submitting pull requests from that fork is generally the "default" open-source contribution workflow, because it doesn't require adding people as collaborators to a GitHub repository, and it liberates folks to experiment more with branch naming, repo structure, etc. (since you don't have to worry about repository-specific things like branch protection, CODEOWNERS, and so on). I think the main reason we prefer to work directly off the …
Tier 1 actions are the main reason, but my question is mostly practical: if we were to collaborate on a branch, I can't push changes directly to remote personal forks. I don't think I can even suggest changes as a PR(?). Huh, I was able to merge develop into this fork on the GitHub web UI. I didn't know I could do that; I thought it was going to be rejected.
Regarding forks and workflows: I'm not an expert on what can be done with forks, except to say that it makes sense that workflows are not going to launch once the repo is forked. I'm a little concerned that I have not adequately conveyed the workflow mechanism, however. There is no limit on the range of triggers for the workflows; we can change the workflows to run automatically no matter who opens the PR. Manual triggers are an option, but not at the exclusion of automatic triggers. Dan had the idea that we could make the workflows require a label assignment as part of the credentials for triggering a workflow. We can remove that requirement and give permission to all users, or to a large group of users, to execute the on-prem workflows. I don't think we want too many Tier 2+ workflows executing, however; just the cheaper Tier 1 workflows. I'm happy to discuss more. I will review the trigger configuration now.
Here are the triggers currently allowed under `on:` in the workflow file. The `workflow_dispatch` entry is the manual trigger. We can add `pull_request`, and we can add constraints on which branches are allowed for each trigger (or which branches to exclude). I can then add users to the NAMS check that we implemented. That will enable more users to execute the workflow.
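For concreteness, a trigger block of the kind described above might look like the sketch below. The branch filter and layout are assumptions for illustration; the actual trigger configuration in swell's workflow files is not shown in this thread.

```yaml
# Hypothetical GitHub Actions trigger block; branch names are assumptions.
on:
  workflow_dispatch:   # the existing manual trigger
  pull_request:        # would also run the workflow automatically on PRs
    branches:
      - develop        # constrain which target branches trigger the run
```

A `branches-ignore` filter could be used instead of `branches` to exclude branches rather than allow-list them.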
Description
This (maybe just partially) addresses issue #440. In the case of `3dfgat_cycle`, which seems to be the example mentioned in the issue, the inclusion of `cice6_domain` in the experiment file is conditional on "cice6" being present in `marine_models`, as specified in that suite's `flow.cylc` (swell/src/swell/suites/3dfgat_cycle/flow.cylc, lines 221 to 232 in 2b0f03b).
In setting up the experiment, two sweeps of `flow.cylc` are done using the function `template_string_jinja2`, which matches `flow.cylc` against a dictionary built from the suite and task questions. `marine_models` is a model-dependent variable, so it's actually addressed by the second, more exhaustive sweep (swell/src/swell/deployment/prepare_config_and_suite/prepare_config_and_suite.py, lines 431 to 439 in 2b0f03b).
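As a hedged, self-contained sketch of the mechanism (not swell's actual code): the hypothetical guard below stands in for the conditional in `flow.cylc`, and the nested dictionary stands in for the question dictionary. The top-level name `marine_models` that the template expects is never defined, so the guard cannot resolve.

```python
import jinja2

# Illustrative stand-in for the cice6 guard in flow.cylc; this is NOT the
# actual lines 221 to 232.
FLOW = '{% if "cice6" in marine_models %}cice6_domain: set{% endif %}'

# marine_models buried in a model sub-dictionary (key names are invented):
nested = {"models": {"geos_marine": {"marine_models": ["cice6", "mom6"]}}}

try:
    jinja2.Environment().from_string(FLOW).render(nested)
except jinja2.exceptions.UndefinedError as err:
    # With Jinja2's default Undefined, the unresolved name raises here;
    # swell's templating utility may instead leave the block unexpanded.
    print(err)  # 'marine_models' is undefined
```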
The issue here is that the `marine_models` entry specified in the question dictionary is actually part of a sub-dictionary, so it's not resolved in the Jinja2 sweep. This change just points `marine_models` to the right place, so `cice6_domains` is added to `experiment.yaml`. I think this issue is linked to a broader discussion of the configuration files, so I'm not sure it completely addresses the issue, but the behavior in this case seems to be fixed.
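A minimal sketch of the kind of fix described, assuming invented key names (swell's actual question-dictionary layout may differ): hoist `marine_models` out of its model sub-dictionary before the Jinja2 sweep so the template can resolve it.

```python
import jinja2

def hoist_marine_models(questions: dict) -> dict:
    """Copy marine_models from any model sub-dictionary to the top level
    so a Jinja2 sweep over flow.cylc can resolve it (illustrative only)."""
    out = dict(questions)
    for model_config in questions.get("models", {}).values():
        if "marine_models" in model_config:
            out["marine_models"] = model_config["marine_models"]
    return out

FLOW = '{% if "cice6" in marine_models %}cice6_domain: set{% endif %}'
questions = {"models": {"geos_marine": {"marine_models": ["cice6", "mom6"]}}}

rendered = jinja2.Environment().from_string(FLOW).render(hoist_marine_models(questions))
print(rendered)  # -> cice6_domain: set
```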