Sync container branch with main (#724)
* Container (#678)

* Create Dockerfile

works with synthetic example_multicase POD

* Update Dockerfile

* Update Dockerfile

* Create docker-build-and-push.yml

* Update docker-build-and-push.yml

* Update docker-build-and-push.yml

* Update docker-build-and-push.yml

* Update docker-build-and-push.yml

* Container Documentation (#687)

* Create container_config_demo.jsonc

* Create container_cat.csv

* Create container_cat.json

* Update container_config_demo.jsonc

* docs

* Update ref_container.rst

* Update ref_container.rst

* Update ref_container.rst

* Update ref_container.rst

* Update ref_container.rst

* Update dev_start.rst

* Update ref_container.rst

* Update dev_start.rst

* Update ref_container.rst

* Update doc/sphinx/dev_start.rst

Co-authored-by: Jess <[email protected]>

* Update doc/sphinx/ref_container.rst

Co-authored-by: Jess <[email protected]>

* Update doc/sphinx/ref_container.rst

Co-authored-by: Jess <[email protected]>

* Update doc/sphinx/ref_container.rst

Co-authored-by: Jess <[email protected]>

* Update doc/sphinx/dev_start.rst

Co-authored-by: Jess <[email protected]>

---------

Co-authored-by: Jess <[email protected]>

* Fix ci bugs (#688)

* fix unresolved conda_root ref in pod_setup
comment out no_translation setting for matching POD and runtime conventions for testing

* fix coord_name def in translate_coord

* define var_id separately in pp query

* change new_coord definition to obtain ordered dict instead of generator object in translation.create_scalar_name so that deepcopy can pickle it
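
A rough illustration of why the dict matters (hypothetical variable names; the real logic lives in translation.create_scalar_name): deepcopy falls back to pickling, which cannot handle generator objects, while a plain dict copies cleanly and preserves insertion order.

    import copy

    coords = {"plev": 500.0, "lev": 1.0}

    # a generator expression cannot be deep-copied (deepcopy falls back to pickling it)
    gen = ((name, val) for name, val in coords.items())
    try:
        copy.deepcopy(gen)
    except TypeError as exc:
        print(f"deepcopy fails on a generator: {exc}")

    # materializing into a dict keeps insertion order and deep-copies without error
    new_coord = {name: val for name, val in coords.items()}
    print(copy.deepcopy(new_coord))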

* change logic in pod_setup to set translation object to no_translation only if translate_data is false in runtime config file

* uncomment more set1 PODs that pass initial in-house testing

* add checks to the preprocessor for the no_translation data source, and assign query atts using the var object instead of the var.translation object if True

* remove old comment from preprocessor

* change value for hourly data search in datelabel get_timedelta_kwargs to return 1hr instead of hr so that the frequency for hourly data matches the required catalog specification

* comment out some set1 tests, since they are timing out on CI

* rename github actions test config files
split group 1 CI tests into 2 runs to avoid timeout issues

* update mdtf_tests.yml to reference new config file names and clean up deprecated calls

* update mdtf_tests.yml

* update matrix refs in mdtf_tests.yml

* revert changes to datelabel and move hr --> 1hr freq conversion to preprocessor
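
A minimal sketch of the frequency normalization described above, assuming the catalog expects the string "1hr" for hourly data (the helper name is illustrative, not the framework's actual function):

    def normalize_frequency(freq: str) -> str:
        """Map the bare hourly frequency string to the catalog convention ('hr' -> '1hr')."""
        return "1hr" if freq == "hr" else freq

    assert normalize_frequency("hr") == "1hr"
    assert normalize_frequency("day") == "day"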

* delete old test files
just run 1 POD in set1 tests
try adding timeouts to mdtf_tests.yml

* fix typo in timeout call in mdtf_tests

* fix GFDL entries in test catalogs

* fix varid entries for wvp in test catalogs

* change atmosphere_mass_content_of_water_vapor id from prw to wvp in gfdl field table

* comment out long_name check in translation.py

* define src_unit for coords if available in preprocessor.ConvertUnitsFunction
redefine dest_unit using var.units.units so that the param is a string instead of a Units.units object in the call to units.convert_dataarray

* log warning instead of raising error if attr name doesn't match in xr_parser.compare_attr so that values can be converted later
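
Roughly the shape of that change (a sketch; the actual xr_parser.compare_attr signature and messages differ):

    import logging

    _log = logging.getLogger(__name__)

    def compare_attr(expected: str, found: str) -> None:
        """Warn on a mismatched attribute name instead of raising, so the value can still be converted later."""
        if expected != found:
            _log.warning("attribute mismatch: expected %r, found %r", expected, found)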

* fix variable refs in xarray datasets in units.convert_dataarray
add check to convert_dataarray that converts mb to hPa
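
A sketch of the unit-alias check, assuming "mb" should simply be treated as a synonym for "hPa" before the unit strings are handed to the converter (illustrative only):

    def normalize_pressure_units(unit_str: str) -> str:
        """Treat the legacy 'mb' abbreviation as 'hPa' so the unit conversion succeeds."""
        return "hPa" if unit_str.strip() == "mb" else unit_str

    assert normalize_pressure_units("mb") == "hPa"
    assert normalize_pressure_units("Pa") == "Pa"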

* fix frequency entries for static vars in test catalogs

* remove duplicate realm entries from stc_eddy_heat_fluxes settings file

* remove non-alphanumeric chars from atts in xr_parser check_metadata
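
The attribute cleanup could look roughly like this (the exact character set kept by check_metadata is an assumption; underscores and spaces are preserved here):

    import re

    def strip_non_alnum(value: str) -> str:
        """Remove characters other than letters, digits, underscores, and spaces."""
        return re.sub(r"[^A-Za-z0-9_ ]", "", value)

    print(strip_non_alnum("sea_ice_area_fraction (%)"))  # -> 'sea_ice_area_fraction '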

* comment out non-working PODs in set 3 tests

* Remove timeout lines and comment unused test tarballs in mdtf_tests.yml

* infer 'start_time' and 'end_time' from 'time_range' due to type issues (#691)

* infer 'start_time' and 'end_time' from 'time_range' due to type issues
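
As a sketch, assuming time_range is stored as a "YYYYMMDD-YYYYMMDD" style string (the catalog's actual encoding may differ):

    def infer_start_end(time_range: str) -> tuple[str, str]:
        """Split a 'start-end' time_range string into start_time and end_time."""
        start_time, end_time = time_range.split("-", 1)
        return start_time, end_time

    print(infer_start_end("19750101-19811231"))  # -> ('19750101', '19811231')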

* add warning

* fix ci issue

* move line setting date_range in query_catalog() (#693)

* move line setting date_range in query_catalog()

* cleanup print

* Remove modifier entry from areacello in trop_pac_sea_lev POD settings file

* Fix issues in pp query (#692)

* fix hr -> 1hr freq conversion in pp query
try using a regex 'string contains' match on standard_name in the query

* add check for parameter type to xr_parser approximate_attribute_value

* remove regex from pp query standard_name

* add check that bounds are populated in the cf accessor, then check coord attrs and only run the coord bounds check if bounds are not None in xr_parser

* add escape brackets to command-line commands (#694)

* Fix convective_transition_diag POD (#695)

* fix ctd file formatting and typos

* more formatting and typo fixes in ctd POD

* uncomment convective transition diag POD in 1a CI test config files

* try moving convective_transition_pod to ubuntu suite 2 tests

* add wkdir cleanup between each test run step and separate obs data fetching for set 1 tests in ci config file

* move convective_transition_diag POD to set 1b tests

* just run 1 POD in set 1a and 2 PODs in set 1b to avoid runner timeouts

* reorganize 1b tests

* add ua200-850 and va200-850 to gfdl-cmor-tables (#696)

* add ice/ocean precip entries to GFDL fieldlist (#697)

* Add alternate standard names entry to fieldlists and varlistEntry objects (#699)

* add alternate_standard_names entries to precipitation_flux vars in CMIP and GFDL fieldlists
add list of applicable realms to precipitation_flux

* add alternate_standard_names attributes and property setters to the DMDependentVariable class, the VarlistEntry parent class
define realm param as string or list

* extend realm search in fieldlist lookup tables to use a realm list in the translation
add list to realm type hints in translation module

* extend standard_name query to list that includes alternate_standard_names if present in the translation object

* break up rainfall_flux and precipitation_flux entries in CMIP and GFDL field tables since translator can't parse realm list correctly

* revert realm type hints defined as string or list and casting realm strings to lists in translation module

* change assertion to log error if translation is None in varlist_util

* define new standard_name for pp xarray vars using the translation standard_name if the query standard name is a list with alternates instead of a string
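
A sketch of how the query list could be assembled; the attribute names mirror those mentioned above, but the actual translation object's interface may differ:

    from types import SimpleNamespace

    def build_standard_name_query(translation) -> list:
        """Return the primary standard_name plus any alternates to query the catalog with."""
        names = [translation.standard_name]
        names.extend(getattr(translation, "alternate_standard_names", []) or [])
        return names

    t = SimpleNamespace(standard_name="precipitation_flux",
                        alternate_standard_names=["rainfall_flux"])
    print(build_standard_name_query(t))  # -> ['precipitation_flux', 'rainfall_flux']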

* add function check_multichunk to fix issue with chunk_freqs (#701)

* add function check_multichunk to fix issue with chunk_freqs

* fix function comment

grammar grammar grammar

* move log warning

* add plots link to pod_error_snippet.html (#705)

* add plots link to pod_error_snippet.html

* remove empty line

* add variable table tool and put output into docs (#706)

* add variable table script to docs

* move file

* Delete tools/get_POD_varname/MDTF_Variable_Lists.html

* rework ref_vartable.rst to link directly to html file of the table (#707)

* rework ref_vartable.rst to link directly to html file of the table

* Delete doc/sphinx/MDTF_Variable_Lists.html

* Update MDTF_Variable_Lists.html

* remove example_pp_script.py from user_pp_scripts list in multirun_config_template.jsonc

* remove .nc files found in OUTPUT_DIR depending on config file (#710)

* fix formatting issues in output reference documentation (#711)

* fix forcing_feedback settings.jsonc formatting and remove extra freq entries

* Add check for user_pp_scripts attribute in config object to DaskMultifilePP init method

* add check for user_pp_scripts attr to execute_pp_functions
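
The guard might look roughly like this (attribute and helper names are assumptions for illustration):

    from types import SimpleNamespace

    def get_user_pp_scripts(config) -> list:
        """Return the user post-processing scripts, or an empty list if none are configured."""
        return list(getattr(config, "user_pp_scripts", []) or [])

    config = SimpleNamespace()  # runtime config with no user_pp_scripts attribute
    print(get_user_pp_scripts(config))  # -> []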

* update 'standard_name' for each var in write_pp_catalog (#713)

* Update docs about --env_dir flag (#715)

* Update README.md

* Update start_install.rst

* fix logic when defining log messages in pod_setup

* Fix dummy translation method in NoTranslationFieldlist (#717)

* define missing entries in dummy translation object returned by NoTranslationFieldlist.translate
add logic to determine alternate_standard_names attribute to NoTranslationFieldlist.translate

* set translate_data to false for testing

* edit logging message for no translation setting in pod_setup

* add todo to translation translate_coord and cleanup comments

* remove checks for no_translation from preprocessor

* define TranslatedVarlistEntry name attribute using data convention field table variable id

* revert debugging changes from test config file

* update docs for translate_data flag in the runtime config file

* fix variable_id and var_id refs in dummy translate method

* Reimplement crop date range capability (#718)

* add placeholder functions for date range cropping

* refine crop_date_range function. Need to figure out how to pass calendar from subset df

* continue reworking crop_date_range

* revert changes to check_group_daterange, and add check that input files overlap start and end times
add option aggregate=false to to_dataset_dict call
look into replacing check_time_bounds with a crop_date_range call before the xarray merge

* reorder crop_date_range call
add calls to parse xr time coord and define start and end times for dataset

* finalize logic in crop_date_range

* remove start_time and end_time from, and add a time_range column to, the catalog generated by define_pp_catalog_assets

* replace start_time and end_time entries with time_range entries populated from information in processed xarray dataset in write_pp_catalog

* remove unused dask import from preprocessor

* replace hard coded time dimension name with var.T.name in call to xarray concatenate

* add check_time_bounds call back to query and fix definitions for modified start and end points so that they use the dataset information

* fix hour, min, sec defs in crop_date_range for new start and end times

* strip non-numeric chars from strings passed to _coerce_to_datetime
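
A sketch of that string cleanup, assuming only the digits are needed for the datetime coercion (helper name is illustrative):

    import re

    def strip_non_numeric(date_str: str) -> str:
        """Keep only digits so strings like '1975-01-01 00:00:00' can be coerced to a datetime."""
        return re.sub(r"[^0-9]", "", date_str)

    print(strip_non_numeric("1975-01-01 00:00:00"))  # -> '19750101000000'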

* add logic to define start and end points for situation where desired date range is contained by xarray dataset to crop_date_range
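
The core of the cropping can be sketched as below; the framework's actual crop_date_range also handles calendars, bounds checks, and partial overlaps, so this is only a simplified illustration:

    import pandas as pd
    import xarray as xr

    def crop_date_range(ds: xr.Dataset, start: str, end: str, time_dim: str = "time") -> xr.Dataset:
        """Subset ds to [start, end], clipped to the dates actually present in the data."""
        t = pd.to_datetime(ds[time_dim].values)
        start = max(pd.to_datetime(start), t.min())
        end = min(pd.to_datetime(end), t.max())
        return ds.sel({time_dim: slice(start, end)})

    times = pd.date_range("1975-01-01", "1981-12-31", freq="MS")
    ds = xr.Dataset({"tas": ("time", range(len(times)))}, coords={"time": times})
    print(crop_date_range(ds, "1976-01-01", "1980-12-31").time.size)  # -> 60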

* Create drop attributes func (#720)

* fix forcing_feedback settings formatting

* add check for user_pp_scripts attribute before looping through list to multifilepreprocessor add_user_pp_scripts method

* add snakeviz to env_dev.yml

* move the drop_atts loop to a separate function that is called by crop_date_range and before merging datasets in query_catalog in the preprocessor
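
A sketch of the attribute-dropping helper, assuming the goal is to strip non-essential variable attributes so xarray's merge does not trip over mismatched metadata (the attribute whitelist is illustrative):

    import xarray as xr

    def drop_conflicting_attrs(datasets, keep=("units", "standard_name", "long_name")):
        """Return shallow copies of the datasets with variable attributes not in `keep` removed."""
        cleaned = []
        for ds in datasets:
            ds = ds.copy()
            for var in ds.variables.values():
                var.attrs = {k: v for k, v in var.attrs.items() if k in keep}
            cleaned.append(ds)
        return cleaned

    ds1 = xr.Dataset({"tas": ("time", [1.0], {"units": "K", "history": "run A"})})
    ds2 = xr.Dataset({"pr": ("time", [2.0], {"units": "kg m-2 s-1"})})
    merged = xr.merge(drop_conflicting_attrs([ds1, ds2]))
    print(merged["tas"].attrs)  # -> {'units': 'K'}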

* Update mdtf dev env file (#722)

* add snakeviz, gprof2dot, and intake-esgf packages to env_dev file

* add viztracer to dev environment file

* add kerchunk package to dev environment

* Fix various pp issues related to running seaice_suite (#721)

* fix pp issues for seaice_suite

* fix arg issue

* rename functions

* add default return for conversion function

---------

Co-authored-by: Aparna Radhakrishnan <[email protected]>
Co-authored-by: Jess <[email protected]>
3 people authored Dec 19, 2024
1 parent 4e10d2a commit 7b82c00
Showing 55 changed files with 5,446 additions and 1,702 deletions.
110 changes: 60 additions & 50 deletions .github/workflows/mdtf_tests.yml
@@ -19,30 +19,29 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macos-13]
json-file: ["tests/github_actions_test_ubuntu_set1.jsonc","tests/github_actions_test_macos_set1.jsonc"]
json-file-set2: ["tests/github_actions_test_ubuntu_set2.jsonc", "tests/github_actions_test_macos_set2.jsonc"]
json-file-set3: ["tests/github_actions_test_ubuntu_set3.jsonc", "tests/github_actions_test_macos_set3.jsonc"]
json-file-1a: ["tests/github_actions_test_ubuntu_1a.jsonc","tests/github_actions_test_macos_1a.jsonc"]
json-file-1b: ["tests/github_actions_test_ubuntu_1b.jsonc","tests/github_actions_test_macos_1b.jsonc"]
json-file-2: ["tests/github_actions_test_ubuntu_2.jsonc", "tests/github_actions_test_macos_2.jsonc"]
json-file-3: ["tests/github_actions_test_ubuntu_3.jsonc", "tests/github_actions_test_macos_3.jsonc"]
# if experimental is true, other jobs to run if one fails
experimental: [false]
exclude:
- os: ubuntu-latest
json-file: "tests/github_actions_test_macos_set1.jsonc"
json-file-1a: "tests/github_actions_test_macos_1a.jsonc"
- os: ubuntu-latest
json-file-set2: "tests/github_actions_test_macos_set2.jsonc"
json-file-1b: "tests/github_actions_test_macos_1b.jsonc"
- os: ubuntu-latest
json-file-set3: "tests/github_actions_test_macos_set3.jsonc"
- os: macos-12
json-file: "tests/github_actions_test_ubuntu_set1.jsonc"
- os: macos-12
json-file-set2: "tests/github_actions_test_ubuntu_set2.jsonc"
- os: macos-12
json-file-set3: "tests/github_actions_test_ubuntu_set3.jsonc"
json-file-2: "tests/github_actions_test_macos_2.jsonc"
- os: ubuntu-latest
json-file-3: "tests/github_actions_test_macos_3.jsonc"
- os: macos-13
json-file-1a: "tests/github_actions_test_ubuntu_1a.jsonc"
- os: macos-13
json-file: "tests/github_actions_test_ubuntu_set1.jsonc"
json-file-1b: "tests/github_actions_test_ubuntu_1b.jsonc"
- os: macos-13
json-file-set2: "tests/github_actions_test_ubuntu_set2.jsonc"
json-file-2: "tests/github_actions_test_ubuntu_2.jsonc"
- os: macos-13
json-file-set3: "tests/github_actions_test_ubuntu_set3.jsonc"
json-file-3: "tests/github_actions_test_ubuntu_3.jsonc"
max-parallel: 3
steps:
- uses: actions/checkout@v3
@@ -62,19 +61,13 @@ jobs:
condarc: |
channels:
- conda-forge
- name: Install XQuartz if macOS
if: ${{ matrix.os == 'macos-12' || matrix.os == 'macos-13'}}
- name: Set conda environment variables for macOS
if: ${{ matrix.os == 'macos-13' }}
run: |
echo "Installing XQuartz"
brew install --cask xquartz
echo "CONDA_ROOT=$(echo /Users/runner/micromamba)" >> $GITHUB_ENV
echo "MICROMAMBA_EXE=$(echo /Users/runner/micromamba-bin/micromamba)" >> $GITHUB_ENV
echo "CONDA_ENV_DIR=$(echo /Users/runner/micromamba/envs)" >> $GITHUB_ENV
- name: Set environment variables
run: |
echo "POD_OUTPUT=$(echo $PWD/../wkdir)" >> $GITHUB_ENV
- name: Set conda vars
- name: Set conda environment variables for ubuntu
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
echo "MICROMAMBA_EXE=$(echo /home/runner/micromamba-bin/micromamba)" >> $GITHUB_ENV
@@ -84,7 +77,7 @@
run: |
echo "Installing Conda Environments"
echo "conda root ${CONDA_ROOT}"
echo "env dir ${CONDA_ENV_DIR}"
echo "env dir ${CONDA_ENV_DIR}"
# MDTF-specific setup: install all conda envs
./src/conda/micromamba_env_setup.sh --all --micromamba_root ${CONDA_ROOT} --micromamba_exe ${MICROMAMBA_EXE} --env_dir ${CONDA_ENV_DIR}
echo "Creating the _MDTF_synthetic_data environment"
Expand All @@ -104,7 +97,7 @@ jobs:
mkdir wkdir
## make input data directories
mkdir -p inputdata/obs_data
- name: Get Observational Data for Set 1
- name: Get Observational Data for Set 1a
run: |
echo "${PWD}"
cd ../
@@ -113,39 +106,56 @@
# attempt FTP data fetch
# allow 20 min for transfer before timeout; Github actions allows 6 hours for individual
# jobs, but we don't want to max out resources that are shared by the NOAA-GFDL repos.
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/convective_transition_diag_obs_data.tar --output convective_transition_diag_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/EOF_500hPa_obs_data.tar --output EOF_500hPa_obs_data.tar
# curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/EOF_500hPa_obs_data.tar --output EOF_500hPa_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/Wheeler_Kiladis_obs_data.tar --output Wheeler_Kiladis_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/MJO_teleconnection_obs_data.tar --output MJO_teleconnection_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/MJO_suite_obs_data.tar --output MJO_suite_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/precip_diurnal_cycle_obs_data.tar --output precip_diurnal_cycle_obs_data.tar
echo "Untarring set 1 NCAR/CESM standard test files"
tar -xvf convective_transition_diag_obs_data.tar
tar -xvf EOF_500hPa_obs_data.tar
echo "Untarring set 1a NCAR/CESM standard test files"
# tar -xvf EOF_500hPa_obs_data.tar
tar -xvf precip_diurnal_cycle_obs_data.tar
tar -xvf MJO_teleconnection_obs_data.tar
tar -xvf MJO_suite_obs_data.tar
tar -xvf Wheeler_Kiladis_obs_data.tar
# clean up tarballs
rm -f *.tar
- name: Run diagnostic tests set 1
- name: Run diagnostic tests set 1a
run: |
echo "POD_OUTPUT is: "
echo "POD_OUTPUT=$(echo $PWD/../wkdir)" >> $GITHUB_ENV
echo "POD_OUTPUT is "
echo "${POD_OUTPUT}"
micromamba activate _MDTF_base
# trivial check that install script worked
./mdtf_framework.py --help
# run the test PODs
./mdtf -f ${{matrix.json-file}}
./mdtf -f ${{matrix.json-file-1a}}
# Debug POD log(s)
# cat ${POD_OUTPUT}/MDTF_NCAR.Synthetic_1975_1981/Wheeler_Kiladis/Wheeler_Kiladis.log
- name: Get observational data for set 1b
run: |
# clean up data from previous runs
echo "deleting data from set 1a"
cd ../wkdir
rm -rf *
cd ../inputdata/obs_data
rm -rf *
cd ../../
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/convective_transition_diag_obs_data.tar --output convective_transition_diag_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/MJO_teleconnection_obs_data.tar --output MJO_teleconnection_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/MJO_suite_obs_data.tar --output MJO_suite_obs_data.tar
tar -xvf MJO_teleconnection_obs_data.tar
tar -xvf MJO_suite_obs_data.tar
tar -xvf convective_transition_diag_obs_data.tar
# clean up tarballs
rm -f *.tar
- name: Run diagnostic tests set 1b
run: |
./mdtf -f ${{matrix.json-file-1b}}
- name: Get observational data for set 2
run: |
echo "${PWD}"
# remove data from previous run
# Actions moves you to the root repo directory in every step, so need to cd again
echo "deleting data from set 1b"
cd ../wkdir
rm -rf *
cd ../inputdata/obs_data
echo "deleting obs data from set 1"
rm -rf *
cd ../../
echo "Available Space"
@@ -160,18 +170,18 @@
rm -f *.tar
- name: Run diagnostic tests set 2
run: |
micromamba activate _MDTF_base
# run the test PODs
./mdtf -f ${{matrix.json-file-set2}}
./mdtf -f ${{matrix.json-file-2}}
# Uncomment the following line for debugging
#cat ../wkdir/MDTF_GFDL.Synthetic_1_10/MJO_prop_amp/MJO_prop_amp.log
- name: Get observational data for set 3
run: |
echo "${PWD}"
# remove data from previous run
# Actions moves you to the root repo directory in every step, so need to cd again
echo "deleting data from set 2"
cd ../wkdir
rm -rf *
cd ../inputdata/obs_data
echo "deleting obs data from set 2"
rm -rf *
cd ../../
echo "Available Space"
@@ -181,27 +191,27 @@
# jobs, but we don't want to max out resources that are shared by the NOAA-GFDL repos.
#curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/temp_extremes_distshape_obs_data.tar --output temp_extremes_distshape_obs_data.tar
#curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/tropical_pacific_sea_level_obs_data.tar.gz --output tropical_pacific_sea_level_obs_data.tar.gz
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/mixed_layer_depth_obs_data.tar --output mixed_layer_depth_obs_data.tar
#curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/mixed_layer_depth_obs_data.tar --output mixed_layer_depth_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/ocn_surf_flux_diag_obs_data.tar --output ocn_surf_flux_diag_obs_data.tar
# curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/albedofb_obs_data.tar --output albedofb_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/seaice_suite_obs_data.tar --output seaice_suite_obs_data.tar
curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/stc_eddy_heat_fluxes_obs_data.tar --output stc_eddy_heat_fluxes_obs_data.tar
#curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/seaice_suite_obs_data.tar --output seaice_suite_obs_data.tar
#curl --verbose --ipv4 --connect-timeout 8 --max-time 1200 --retry 128 --ftp-ssl --ftp-pasv -u "anonymous:anonymous" ftp://ftp.gfdl.noaa.gov/perm/oar.gfdl.mdtf/stc_eddy_heat_fluxes_obs_data.tar --output stc_eddy_heat_fluxes_obs_data.tar
echo "Untarring set 3 CMIP standard test files"
#tar -xvf temp_extremes_distshape_obs_data.tar
#tar -zxvf tropical_pacific_sea_level_obs_data.tar.gz
tar -xvf mixed_layer_depth_obs_data.tar
#tar -xvf mixed_layer_depth_obs_data.tar
tar -xvf ocn_surf_flux_diag_obs_data.tar
# tar -xvf albedofb_obs_data.tar
tar -xvf seaice_suite_obs_data.tar
tar -xvf stc_eddy_heat_fluxes_obs_data.tar
# tar -xvf seaice_suite_obs_data.tar
# tar -xvf stc_eddy_heat_fluxes_obs_data.tar
# clean up tarballs
rm -f *.tar
rm -f *.tar.gz
- name: Run CMIP diagnostic tests set 3
run: |
micromamba activate _MDTF_base
# run the test PODs
./mdtf -f ${{matrix.json-file-set3}}
./mdtf -f ${{matrix.json-file-3}}
#- name: Run unit tests
# run: |
# micromamba activate _MDTF_base
4 changes: 1 addition & 3 deletions README.md
@@ -107,9 +107,7 @@ for, the Windows Subsystem for Linux.
when micromamba is installed
- `$MICROMAMBA_EXE` is full path to the micromamba executable on your system
(e.g., /home/${USER}/.local/bin/micromamba). This is defined by the `MAMBA_EXE` environment variable on your system
- The `--env_dir` flag allows you to put the program files in a designated location `$CONDA_ENV_DIR`
(for space reasons, or if you don’t have write access).
You can omit this flag, and the environments will be installed within `$CONDA_ROOT/envs/` by default.
- All flags noted for your system above must be supplied for the script to work.

#### NOTE: The micromamba environments may differ from the conda environments because of package compatibility discrepancies between solvers
`% ./src/conda/micromamba_env_setup.sh --all --micromamba_root $MICROMAMBA_ROOT --micromamba_exe $MICROMAMBA_EXE --env_dir $CONDA_ENV_DIR` builds
8 changes: 8 additions & 0 deletions data/fieldlist_CMIP.jsonc
@@ -181,6 +181,14 @@
"standard_name": "precipitation_flux",
"realm": "atmos",
"units": "kg m-2 s-1",
"alternate_standard_names": ["rainfall_flux"],
"ndim": 3
},
"rainfall_flux": {
"standard_name": "rainfall_flux",
"realm": "seaIce",
"units": "kg m-2 s-1",
"alternate_standard_names": ["precipitation_flux"],
"ndim": 3
},
"prc": {
18 changes: 16 additions & 2 deletions data/fieldlist_GFDL.jsonc
@@ -163,7 +163,13 @@
"realm": "atmos",
"units": "1",
"ndim": 3
},
},
"siconc": {
"standard_name": "sea_ice_area_fraction",
"realm": "seaIce",
"units": "0-1",
"ndim": 3
},
"IWP": {
"standard_name": "atmosphere_mass_content_of_cloud_ice",
"long_name": "Ice water path",
@@ -191,6 +197,14 @@
"long_name":"",
"realm": "atmos",
"units": "kg m-2 s-1",
"alternate_standard_names": ["rainfall_flux"],
"ndim": 3
},
"rainfall_flux": {
"standard_name": "rainfall_flux",
"realm": "seaIce",
"units": "kg m-2 s-1",
"alternate_standard_names": ["precipitation_flux"],
"ndim": 3
},
"prec_conv": {
Expand All @@ -214,7 +228,7 @@
"units": "kg m-2 s-1",
"ndim": 3
},
"prw": {
"wvp": {
"standard_name": "atmosphere_mass_content_of_water_vapor",
"long_name": "Water Vapor Path",
"realm": "atmos",
4 changes: 4 additions & 0 deletions data/gfdl-cmor-tables/gfdl_to_cmip5_vars.csv
@@ -202,6 +202,8 @@ dfe,dfe,mole_concentration_of_dissolved_iron_in_sea_water,Dissolved Iron Concent
cfadDbze94,cfadDbze94,histogram_of_equivalent_reflectivity_factor_over_height_above_reference_ellipsoid,CloudSat Radar Reflectivity CFAD,atmos,1
dissic,dissic,mole_concentration_of_dissolved_inorganic_carbon_in_sea_water,Dissolved Inorganic Carbon Concentration,ocean_biochem,mol m-3
ua,ua,eastward_wind,Eastward Wind,atmos,m s-1
ua200,ua200,eastward_wind,Eastward Wind,atmos,m s-1
ua850,ua850,eastward_wind,Eastward Wind,atmos,m s-1
clhcalipso_sat,clhcalipso,cloud_area_fraction_in_atmosphere_layer,CALIPSO High Level Cloud Fraction,atmos,%
qo3v,tro3,mole_fraction_of_ozone_in_air,Mole Fraction of O3,atmos,1e-9
om_emis_col,emibb,tendency_of_atmosphere_mass_content_of_primary_particulate_organic_matter_dry_aerosol_due_to_emission,Total Emission of Primary Aerosol from Biomass Burning,aerosol,kg m-2 s-1
Expand Down Expand Up @@ -462,6 +464,8 @@ bddtdip,bddtdip,tendency_of_mole_concentration_of_dissolved_inorganic_phosphate_
hus,hus,specific_humidity,Specific Humidity,atmos,1
parasol_refl_sat,parasolRefl,toa_bidirectional_reflectance,PARASOL Reflectance,atmos,1
va,va,northward_wind,Northward Wind,atmos,m s-1
va200,va200,northward_wind,Northward Wind,atmos,m s-1
va850,va850,northward_wind,Northward Wind,atmos,m s-1
fl_ccsnow,prsnc,convective_snowfall_flux,Convective Snowfall Flux,atmos,kg m-2 s-1
zostoga,zostoga,global_average_thermosteric_sea_level_change,Global Average Thermosteric Sea Level Change,ocean,m
evap,evs,water_evaporation_flux,Water Evaporation Flux Where Ice Free Ocean over Sea,ocean,kg m-2 s-1
13 changes: 7 additions & 6 deletions diagnostics/convective_transition_diag/convecTransBasic.py
@@ -64,6 +64,7 @@
from convecTransBasic_util import convecTransBasic_calc_model
from convecTransBasic_util import convecTransBasic_loadAnalyzedData
from convecTransBasic_util import convecTransBasic_plot

print("**************************************************")
print("Excuting Convective Transition Basic Statistics (convecTransBasic.py)......")
print("**************************************************")
@@ -77,8 +78,8 @@
print("Load user-specified binning parameters..."),

# Create and read user-specified parameters
os.system("python "+ os.environ["POD_HOME"]+ "/" + "convecTransBasic_usp_calc.py")
with open(os.environ["WORK_DIR"]+"/" + "convecTransBasic_calc_parameters.json") as outfile:
os.system("python " + os.environ["POD_HOME"] + "/" + "convecTransBasic_usp_calc.py")
with open(os.environ["WORK_DIR"] + "/" + "convecTransBasic_calc_parameters.json") as outfile:
bin_data = json.load(outfile)
print("...Loaded!")

@@ -108,15 +109,15 @@
+ ") will be saved to " + bin_data["PREPROCESSING_OUTPUT_DIR"] + "/")

# Load & pre-process region mask
REGION=generate_region_mask(bin_data["REGION_MASK_DIR"] + "/" + bin_data["REGION_MASK_FILENAME"],
bin_data["pr_list"][0], bin_data["LAT_VAR"], bin_data["LON_VAR"])
REGION = generate_region_mask(bin_data["REGION_MASK_DIR"] + "/" + bin_data["REGION_MASK_FILENAME"],
bin_data["pr_list"][0], bin_data["LAT_VAR"], bin_data["LON_VAR"])

# Pre-process temperature (if necessary) & bin & save binned results
binned_output=convecTransBasic_calc_model(REGION, bin_data["args1"])
binned_output = convecTransBasic_calc_model(REGION, bin_data["args1"])

else: # Binned data file exists & BIN_ANYWAY=False
print("Binned output detected..."),
binned_output=convecTransBasic_loadAnalyzedData(bin_data["args2"])
binned_output = convecTransBasic_loadAnalyzedData(bin_data["args2"])
print("...Loaded!")

# ======================================================================
@@ -14,7 +14,7 @@
import glob

with open(os.environ["WORK_DIR"] + "/" + "convecTransBasic_calc_parameters.json") as outfile:
bin_data=json.load(outfile)
bin_data = json.load(outfile)

# ======================================================================
# START USER SPECIFIED SECTION
@@ -174,7 +174,7 @@
bin_data["BULK_TROPOSPHERIC_TEMPERATURE_MEASURE"]
]

data["args4"] = [ bin_data["CWV_BIN_WIDTH"], PDF_THRESHOLD, CWV_RANGE_THRESHOLD,
data["args4"] = [bin_data["CWV_BIN_WIDTH"], PDF_THRESHOLD, CWV_RANGE_THRESHOLD,
CP_THRESHOLD, bin_data["MODEL"], bin_data["REGION_STR"], bin_data["NUMBER_OF_REGIONS"],
bin_data["BULK_TROPOSPHERIC_TEMPERATURE_MEASURE"], bin_data["PRECIP_THRESHOLD"],
FIG_OUTPUT_DIR, FIG_OUTPUT_FILENAME,