Testing/CI #12
Can an automated test be set up that runs a tiny example domain on NCI? Or do the tests normally get run locally / on GitHub? I don't know what the standard approach is for HPC tools. It would be ideal if there were just a single Python script that needs to be executed: it generates an example domain using the tools and then calls payu run at the end. If outputs are created, the build passes the test.
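For concreteness, here is a rough sketch of what that single-script smoke test could look like. The directory layout and the domain-generation step are placeholders rather than the package's actual API, and it assumes payu is available on the PATH:

```python
# smoke_test.py -- hypothetical end-to-end check; the directory layout and the
# domain-generation step are placeholders, not the package's actual API.
import subprocess
from pathlib import Path

EXPT_DIR = Path("tiny_domain_expt")


def main():
    # 1. Build a tiny example domain with the package tools.
    #    (The real call into regional_library goes here; omitted because
    #    the exact API depends on the package version.)

    # 2. Run the model via payu; assumes payu is on the PATH and EXPT_DIR is
    #    already laid out as a payu experiment.
    subprocess.run(["payu", "run"], cwd=EXPT_DIR, check=True)

    # 3. Pass if the run produced any archived output.
    outputs = list(EXPT_DIR.glob("archive/output*"))
    assert outputs, "payu run produced no output directories"


if __name__ == "__main__":
    main()
```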
NCI requires login access etc. -- why would you want or need to run something on NCI? I presume there is a lot of functionality that can be tested before reaching that stage. Most functions I see at https://github.com/COSIMA/mom6-regional-scripts/blob/master/regional_library.py can be tested via small functions that return true/false without NCI -- right?
It's certainly possible to test things on NCI, but it does require a lot of care to avoid security issues. I agree that a lot of things can be tested externally. Maybe it would be good to have a small domain to run the full way through as a proper shakedown. If it's small/short enough it mightn't need NCI either, however.
Yeah, very valid points! I was thinking that since the end goal is that the model runs, this would need to be part of testing. If we can make a bunch of boolean checks to ensure the files are legitimate, then that works too! I just know that so many unforeseen things crop up at run time that might not be immediately obvious when checking input files via ncdump. We'd have to be pretty thorough in designing checks that would guarantee a running model.
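As an illustration of the kind of boolean checks being discussed, here is a rough sketch using xarray instead of eyeballing ncdump; the variable and coordinate names are hypothetical:

```python
# Hypothetical sanity checks on a generated input file; the variable and
# coordinate names below are placeholders.
import numpy as np
import xarray as xr


def check_forcing_file(path, required_vars=("temp", "salt")):
    """Return True if the forcing file looks usable, False otherwise."""
    with xr.open_dataset(path) as ds:
        for var in required_vars:
            if var not in ds:
                return False
            if np.isnan(ds[var].values).all():
                return False  # an all-missing field would crash the run
        # Coordinates should be monotonic for the interpolation to make sense.
        for coord in ("lon", "lat"):
            if coord in ds and not np.all(np.diff(ds[coord].values) > 0):
                return False
    return True
```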
Start by designing tests that check what each of your functions is supposed to do. Keep the tests simple -- it's a delicate balance: you want them as simple as possible, but not trivial, so that they actually catch a bug when one is introduced.
You might want to see how @micaeljtoliveira has set things up for these
Is that set up using pytest? We should use pytest for automated CI.
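A minimal sketch of the pytest style being suggested here: one small but non-trivial test per function. `subsample` is a made-up stand-in, not an actual regional_library function:

```python
# test_example.py -- pytest-style unit tests; `subsample` is a made-up
# stand-in for one of the small functions in regional_library.
import numpy as np


def subsample(arr, stride):
    """Toy example of the kind of small, pure function that is easy to test."""
    return arr[::stride]


def test_subsample_keeps_first_point_and_spacing():
    out = subsample(np.arange(10), 2)
    assert out[0] == 0
    assert len(out) == 5
    np.testing.assert_array_equal(np.diff(out), 2)


def test_subsample_stride_one_is_identity():
    arr = np.linspace(0.0, 1.0, 5)
    np.testing.assert_array_equal(subsample(arr, 1), arr)
```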
#26 added a testing CI pipeline which automatically runs on every commit! We now have to populate the |
OK, just to get up to speed with this again: are we able to host some small test files on GitHub? E.g. a really tiny dummy domain over which to interpolate, given GitHub's limited computational resources? I guess at a minimum we would then need to upload:
If we chose a domain that was 50x50 gridpoints horizontally, with 10 vertical layers and 5 days of forcing, we'd keep all these files on the order of kilobytes, and could upload them directly to the 'tests' folder? The first test would just initialise the experiment and do the basic interpolation steps. Perhaps a second test could do the text editing / file renaming step, since this is also really easy to check for?

Or, I realise, we could alternatively generate dummy boundary data using non-random, reproducible functions, then somehow condense the 'solution' into a single floating-point number by, say, averaging over a certain value, saving us the need to store netCDFs on GitHub.

Other questions
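On the reproducible-dummy-data idea above, here is a rough sketch on the suggested 50x50, 10-layer, 5-day grid: build forcing from a deterministic analytic function and condense it to one float to compare against a stored reference (in this toy case the reference is just the field's analytic mean):

```python
# Sketch of the "reproducible dummy data" idea: deterministic forcing on a
# tiny grid, reduced to a single scalar for comparison against a reference.
import numpy as np
import xarray as xr


def make_dummy_forcing(nx=50, ny=50, nz=10, ndays=5):
    """Deterministic (non-random) dummy forcing on a tiny grid."""
    lon = np.linspace(0.0, 5.0, nx)
    lat = np.linspace(-5.0, 0.0, ny)
    depth = np.linspace(0.0, 100.0, nz)
    time = np.arange(ndays, dtype=float)
    # Simple analytic field: warm at the surface, cooling with depth, with a
    # small linear trend in time and latitude.
    temp = np.broadcast_to(
        20.0
        + 0.1 * time[:, None, None, None]
        - 0.05 * depth[None, :, None, None]
        + 0.01 * lat[None, None, :, None],
        (ndays, nz, ny, nx),
    )
    return xr.Dataset(
        {"temp": (("time", "depth", "lat", "lon"), temp)},
        coords={"time": time, "depth": depth, "lat": lat, "lon": lon},
    )


def scalar_digest(ds):
    """Condense a dataset into a single float to compare against a reference."""
    return float(ds["temp"].mean())


def test_dummy_forcing_digest():
    # Analytic mean: 20 + 0.1*2 - 0.05*50 + 0.01*(-2.5) = 17.675. For the real
    # pipeline, the reference would be computed once from a trusted run of the
    # interpolation and hard-coded here.
    assert np.isclose(scalar_digest(make_dummy_forcing()), 17.675)
```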
The status of this issue is that broader automated testing requires an easy way to integrate the FMS tools into the installation pipeline. Hopefully we can mostly replace them with Python tools. See issue #68.
If you only need this for CI, you can create a Docker image with the tools installed to use in GitHub Actions.
Interesting!
Ooh, that would be nice! Does that mean that the image will sit somewhere separate from the repo? I.e. users won't get it if they git clone on their machine?
Images go into something called the GitHub Container Registry, not the git repo. So no, users won't see it unless they explicitly download it from the registry.
I've published a container at https://ghcr.io/cosima/regional-test-env:latest, with the boundary forcing and cut-down RYF mentioned in #73 (comment), and all the FRE tools, as well as MOM6 and MOM6-SIS2 installed. That should be enough to figure out a test workflow for the moment.
Hi @angus-g, I was finally getting around to writing tests against the Docker image, but this link appears to have expired. Do you still have this image somewhere?
The image still exists, but you need to authenticate to download packages from the GitHub registry: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry

Having said that, the image is pretty old, so I'll update it with a newer build of MOM6. The idea is just to have an image that provides pre-compiled MOM6 and FRE tools, with the required forcing files. Are the files that are there the ones you need to run your test?
Thanks, Angus! Maybe we could incorporate the updated ninja tool to compile our frozen-in-time MOM6 executable? If we decide to maintain a set of versions of each component that work with our package, maybe the image could download and recompile from there to keep it up to date? That would of course be slow, though... The forcing files are still the same, yes.
I think we can close this issue now?
Agreed - there's always going to be more testing we can do (e.g. ensuring that MOM6 actually runs) but we can make separate issues for that.
We should have some tests and a CI pipeline that runs them on every commit. This helps during development too, as the tests will catch bugs introduced accidentally.