
Port to Hercules #2196

Merged 65 commits on Jan 8, 2024
5a65ac5
Determine IC structure based on start type #1783
DavidHuber-NOAA Sep 22, 2023
5622b79
Fix spacing for pycodestyle tests #1783
DavidHuber-NOAA Sep 22, 2023
9761692
Added a space after 'if'
DavidHuber-NOAA Sep 22, 2023
df2fa1d
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Sep 26, 2023
078d16b
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Oct 13, 2023
90efb17
Point to new gw spack-stack.
DavidHuber-NOAA Oct 17, 2023
69513ea
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Oct 17, 2023
c6bbb32
Merge remote-tracking branch 'origin/develop' into feature/spack-stack
DavidHuber-NOAA Oct 24, 2023
e565fba
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Oct 31, 2023
67fa16f
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 7, 2023
f86f0d9
Merge branch 'feature/spack-stack' of github.com:DavidHuber-NOAA/glob…
DavidHuber-NOAA Nov 14, 2023
b860e45
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 14, 2023
678ce48
Upgrade to spack-stack/1.5.1 on Hera #1868
DavidHuber-NOAA Nov 16, 2023
7039de6
Update version files for all systems on spack-stack. #1868
DavidHuber-NOAA Nov 16, 2023
94981b6
Update Orion and Jet modulefiles. #1868
DavidHuber-NOAA Nov 16, 2023
8c0b50e
Fix AWIPS variable names.
DavidHuber-NOAA Nov 16, 2023
0af18c4
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 16, 2023
58c917a
Disable METplus jobs. #1868
DavidHuber-NOAA Nov 17, 2023
5b614d1
Implement single spack version build/run files, update modulefiles fo…
DavidHuber-NOAA Nov 17, 2023
6093a08
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 17, 2023
14931d7
Update modulefiles, reinstate (pared down) machine version files.
DavidHuber-NOAA Nov 20, 2023
004b564
Merge branch 'feature/spack-stack' of github.com:DavidHuber-NOAA/glob…
DavidHuber-NOAA Nov 20, 2023
123c8ff
Fix Orion modules. #1868
DavidHuber-NOAA Nov 20, 2023
856f7cc
Fix S4 modules. #1868
DavidHuber-NOAA Nov 20, 2023
755a5e8
Fix gwci module files.
DavidHuber-NOAA Nov 20, 2023
e817c85
Add comment about metplus support. #1868
DavidHuber-NOAA Nov 21, 2023
6faca55
Address shellcheck warnings.
DavidHuber-NOAA Nov 21, 2023
1cdde69
Update gfs-utils hash. #1868
DavidHuber-NOAA Nov 21, 2023
55e8985
Remove post hack. #1868
DavidHuber-NOAA Nov 21, 2023
6b71128
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 27, 2023
906a9fc
Update GSI-utils, GSI-mon hashes #1868
DavidHuber-NOAA Nov 30, 2023
e1dbf68
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Nov 30, 2023
bb2fc24
Address lint warnings.
DavidHuber-NOAA Nov 30, 2023
87fd614
Reinstated hacks for WCOSS2 in efcs, fcst, and post jobs. #1868
DavidHuber-NOAA Nov 30, 2023
a3cb8ef
Address lint warnings.
DavidHuber-NOAA Nov 30, 2023
7181fb2
Update GSI hash #1868
DavidHuber-NOAA Nov 30, 2023
e041132
Cleanup Orion module files. #1868
DavidHuber-NOAA Nov 30, 2023
6e5ec63
Correct WCOSS2 modulefile comment.
DavidHuber-NOAA Nov 30, 2023
f87c09a
Merge branch 'feature/spack-stack' of github.com:DavidHuber-NOAA/glob…
DavidHuber-NOAA Nov 30, 2023
06f52bd
Initial Hercules port. #1588
DavidHuber-NOAA Dec 6, 2023
3ff0325
Merge remote-tracking branch 'emc/develop' into feature/spack-stack
DavidHuber-NOAA Dec 6, 2023
e181fc3
Remove debug print.
DavidHuber-NOAA Dec 6, 2023
6b1c0af
Correct Hercules modules. #1588
DavidHuber-NOAA Dec 6, 2023
7796272
Enable MPMD for atmos_products on Hercules. #1588
DavidHuber-NOAA Dec 6, 2023
5a3e8b5
Merge remote-tracking branch 'emc/develop' into feature/hercules
DavidHuber-NOAA Dec 6, 2023
b7c179d
Fix tabbing.
DavidHuber-NOAA Dec 6, 2023
301a76d
Separate Orion and Hercules in gefs config to match gfs.
DavidHuber-NOAA Dec 6, 2023
bd6ce5f
Improve logic in config.aero
DavidHuber-NOAA Dec 7, 2023
1c042ce
Use current hercules hostname in detect_machine.sh.
DavidHuber-NOAA Dec 7, 2023
1f5a3d1
Convert HERCULES.env to use cases. Update Hercules URL. #1588
DavidHuber-NOAA Dec 12, 2023
b3f15c5
Update Hercules environment to include cycled tasks. #1588
DavidHuber-NOAA Dec 18, 2023
8348b1f
Merge remote-tracking branch 'emc/develop' into feature/hercules
DavidHuber-NOAA Dec 18, 2023
f175952
Update ufs_utils hash.
DavidHuber-NOAA Dec 20, 2023
11cb070
Merge remote-tracking branch 'emc/develop' into feature/hercules
DavidHuber-NOAA Dec 22, 2023
219b7e5
Initial cycled support on Hercules. #1588
DavidHuber-NOAA Jan 3, 2024
2329036
Merge remote-tracking branch 'emc/develop' into feature/hercules
DavidHuber-NOAA Jan 3, 2024
16abb48
Load UFS modules from modulefiles/ instead of test/.
DavidHuber-NOAA Jan 3, 2024
5519d1c
Added comment for monitor jobs on Hercules. #1588
DavidHuber-NOAA Jan 3, 2024
9bc0c8d
Minor fixes.
DavidHuber-NOAA Jan 3, 2024
cdf7461
Address lint errors.
DavidHuber-NOAA Jan 3, 2024
7d4ef7f
Enable cycled CI testing on Hercules. #1588
DavidHuber-NOAA Jan 4, 2024
2d6551f
Disable cycled testing on Hercules. #1588
DavidHuber-NOAA Jan 5, 2024
044323e
Update gsi_utils hash. #1588
DavidHuber-NOAA Jan 5, 2024
5cab43e
Move npe_node_eomg to capture HPC-specific resources.
DavidHuber-NOAA Jan 5, 2024
836391c
Disable gsi-mon, gdas builds, enable cycled CI on hercules. #1588
DavidHuber-NOAA Jan 8, 2024
3 changes: 0 additions & 3 deletions ci/cases/pr/C96C48_hybatmDA.yaml
@@ -16,6 +16,3 @@ arguments:
gfs_cyc: 1
start: cold
yaml: {{ HOMEgfs }}/ci/platforms/gfs_defaults_ci.yaml

skip_ci_on_hosts:
- hercules
3 changes: 0 additions & 3 deletions ci/cases/pr/C96_atm3DVar.yaml
@@ -15,6 +15,3 @@ arguments:
gfs_cyc: 1
start: cold
yaml: {{ HOMEgfs }}/ci/platforms/gfs_defaults_ci.yaml

skip_ci_on_hosts:
- hercules
256 changes: 247 additions & 9 deletions env/HERCULES.env
@@ -12,7 +12,7 @@ fi

step=$1

export npe_node_max=40
export npe_node_max=80
export launcher="srun -l --export=ALL"
export mpmd_opt="--multi-prog --output=mpmd.%j.%t.out"

@@ -26,19 +26,164 @@ export KMP_AFFINITY=scatter
export OMP_STACKSIZE=2048000
export NTHSTACK=1024000000
#export LD_BIND_NOW=1
export I_MPI_EXTRA_FILESYSTEM=1
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre

ulimit -s unlimited
ulimit -a

if [[ "${step}" = "waveinit" ]] || [[ "${step}" = "waveprep" ]] || [[ "${step}" = "wavepostsbs" ]] || \
[[ "${step}" = "wavepostbndpnt" ]] || [[ "${step}" = "wavepostpnt" ]] || [[ "${step}" == "wavepostbndpntbll" ]]; then
case ${step} in
"prep" | "prepbufr")

nth_max=$((npe_node_max / npe_node_prep))

export POE="NO"
export BACK=${BACK:-"YES"}
export sys_tp="HERCULES"
export launcher_PREP="srun"
;;
"preplandobs")

export APRUN_CALCFIMS="${launcher} -n 1"
;;
"waveinit" | "waveprep" | "wavepostsbs" | "wavepostbndpnt" | "wavepostpnt" | "wavepostbndpntbll")

export CFP_MP="YES"
if [[ "${step}" = "waveprep" ]]; then export MP_PULSE=0 ; fi
[[ "${step}" = "waveprep" ]] && export MP_PULSE=0
export wavempexec=${launcher}
export wave_mpmd=${mpmd_opt}

elif [[ "${step}" = "fcst" ]]; then
;;
"atmanlrun")

nth_max=$((npe_node_max / npe_node_atmanlrun))

export NTHREADS_ATMANL=${nth_atmanlrun:-${nth_max}}
[[ ${NTHREADS_ATMANL} -gt ${nth_max} ]] && export NTHREADS_ATMANL=${nth_max}
export APRUN_ATMANL="${launcher} -n ${npe_atmanlrun} --cpus-per-task=${NTHREADS_ATMANL}"
;;
"atmensanlrun")

nth_max=$((npe_node_max / npe_node_atmensanlrun))

export NTHREADS_ATMENSANL=${nth_atmensanlrun:-${nth_max}}
[[ ${NTHREADS_ATMENSANL} -gt ${nth_max} ]] && export NTHREADS_ATMENSANL=${nth_max}
export APRUN_ATMENSANL="${launcher} -n ${npe_atmensanlrun} --cpus-per-task=${NTHREADS_ATMENSANL}"
;;
"aeroanlrun")

export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_aeroanlrun))

export NTHREADS_AEROANL=${nth_aeroanlrun:-${nth_max}}
[[ ${NTHREADS_AEROANL} -gt ${nth_max} ]] && export NTHREADS_AEROANL=${nth_max}
export APRUN_AEROANL="${launcher} -n ${npe_aeroanlrun} --cpus-per-task=${NTHREADS_AEROANL}"
;;
"landanl")

nth_max=$((npe_node_max / npe_node_landanl))

export NTHREADS_LANDANL=${nth_landanl:-${nth_max}}
[[ ${NTHREADS_LANDANL} -gt ${nth_max} ]] && export NTHREADS_LANDANL=${nth_max}
export APRUN_LANDANL="${launcher} -n ${npe_landanl} --cpus-per-task=${NTHREADS_LANDANL}"

export APRUN_APPLY_INCR="${launcher} -n 6"
;;
"ocnanalbmat")

export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_ocnanalbmat))

export NTHREADS_OCNANAL=${nth_ocnanalbmat:-${nth_max}}
[[ ${NTHREADS_OCNANAL} -gt ${nth_max} ]] && export NTHREADS_OCNANAL=${nth_max}
export APRUN_OCNANAL="${launcher} -n ${npe_ocnanalbmat} --cpus-per-task=${NTHREADS_OCNANAL}"
;;
"ocnanalrun")

export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_ocnanalrun))

export NTHREADS_OCNANAL=${nth_ocnanalrun:-${nth_max}}
[[ ${NTHREADS_OCNANAL} -gt ${nth_max} ]] && export NTHREADS_OCNANAL=${nth_max}
export APRUN_OCNANAL="${launcher} -n ${npe_ocnanalrun} --cpus-per-task=${NTHREADS_OCNANAL}"
;;
"ocnanalchkpt")

export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_ocnanalchkpt))

export NTHREADS_OCNANAL=${nth_ocnanalchkpt:-${nth_max}}
[[ ${NTHREADS_OCNANAL} -gt ${nth_max} ]] && export NTHREADS_OCNANAL=${nth_max}
export APRUN_OCNANAL="${launcher} -n ${npe_ocnanalchkpt} --cpus-per-task=${NTHREADS_OCNANAL}"
;;
"anal" | "analcalc")

export MKL_NUM_THREADS=4
export MKL_CBWR=AUTO

export CFP_MP=${CFP_MP:-"YES"}
export USE_CFP=${USE_CFP:-"YES"}
export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_anal))

export NTHREADS_GSI=${nth_anal:-${nth_max}}
[[ ${NTHREADS_GSI} -gt ${nth_max} ]] && export NTHREADS_GSI=${nth_max}
export APRUN_GSI="${launcher} -n ${npe_gsi:-${npe_anal}} --cpus-per-task=${NTHREADS_GSI}"

export NTHREADS_CALCINC=${nth_calcinc:-1}
[[ ${NTHREADS_CALCINC} -gt ${nth_max} ]] && export NTHREADS_CALCINC=${nth_max}
export APRUN_CALCINC="${launcher} \$ncmd --cpus-per-task=${NTHREADS_CALCINC}"

export NTHREADS_CYCLE=${nth_cycle:-12}
[[ ${NTHREADS_CYCLE} -gt ${npe_node_max} ]] && export NTHREADS_CYCLE=${npe_node_max}
npe_cycle=${ntiles:-6}
export APRUN_CYCLE="${launcher} -n ${npe_cycle} --cpus-per-task=${NTHREADS_CYCLE}"

export NTHREADS_GAUSFCANL=1
npe_gausfcanl=${npe_gausfcanl:-1}
export APRUN_GAUSFCANL="${launcher} -n ${npe_gausfcanl} --cpus-per-task=${NTHREADS_GAUSFCANL}"
;;
"sfcanl")
nth_max=$((npe_node_max / npe_node_sfcanl))

export NTHREADS_CYCLE=${nth_sfcanl:-14}
[[ ${NTHREADS_CYCLE} -gt ${npe_node_max} ]] && export NTHREADS_CYCLE=${npe_node_max}
npe_sfcanl=${ntiles:-6}
export APRUN_CYCLE="${launcher} -n ${npe_sfcanl} --cpus-per-task=${NTHREADS_CYCLE}"
;;
"eobs")

export MKL_NUM_THREADS=4
export MKL_CBWR=AUTO

export CFP_MP=${CFP_MP:-"YES"}
export USE_CFP=${USE_CFP:-"YES"}
export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_eobs))

export NTHREADS_GSI=${nth_eobs:-${nth_max}}
[[ ${NTHREADS_GSI} -gt ${nth_max} ]] && export NTHREADS_GSI=${nth_max}
export APRUN_GSI="${launcher} -n ${npe_gsi:-${npe_eobs}} --cpus-per-task=${NTHREADS_GSI}"
;;
"eupd")

export CFP_MP=${CFP_MP:-"YES"}
export USE_CFP=${USE_CFP:-"YES"}
export APRUNCFP="${launcher} -n \$ncmd ${mpmd_opt}"

nth_max=$((npe_node_max / npe_node_eupd))

export NTHREADS_ENKF=${nth_eupd:-${nth_max}}
[[ ${NTHREADS_ENKF} -gt ${nth_max} ]] && export NTHREADS_ENKF=${nth_max}
export APRUN_ENKF="${launcher} -n ${npe_enkf:-${npe_eupd}} --cpus-per-task=${NTHREADS_ENKF}"
;;
"fcst" | "efcs")

export OMP_STACKSIZE=512M
if [[ "${CDUMP}" =~ "gfs" ]]; then
@@ -53,17 +198,110 @@ elif [[ "${step}" = "fcst" ]]; then
# With ESMF threading, the model wants to use the full node
export APRUN_UFS="${launcher} -n ${ntasks}"
unset nprocs ppn nnodes ntasks
;;

elif [[ "${step}" = "upp" ]]; then
"upp")

nth_max=$((npe_node_max / npe_node_upp))

export NTHREADS_UPP=${nth_upp:-1}
[[ ${NTHREADS_UPP} -gt ${nth_max} ]] && export NTHREADS_UPP=${nth_max}
export APRUN_UPP="${launcher} -n ${npe_upp} --cpus-per-task=${NTHREADS_UPP}"

elif [[ "${step}" = "atmos_products" ]]; then
;;
"atmos_products")

export USE_CFP="YES" # Use MPMD for downstream product generation
;;
"ecen")

fi
nth_max=$((npe_node_max / npe_node_ecen))

export NTHREADS_ECEN=${nth_ecen:-${nth_max}}
[[ ${NTHREADS_ECEN} -gt ${nth_max} ]] && export NTHREADS_ECEN=${nth_max}
export APRUN_ECEN="${launcher} -n ${npe_ecen} --cpus-per-task=${NTHREADS_ECEN}"

export NTHREADS_CHGRES=${nth_chgres:-12}
[[ ${NTHREADS_CHGRES} -gt ${npe_node_max} ]] && export NTHREADS_CHGRES=${npe_node_max}
export APRUN_CHGRES="time"

export NTHREADS_CALCINC=${nth_calcinc:-1}
[[ ${NTHREADS_CALCINC} -gt ${nth_max} ]] && export NTHREADS_CALCINC=${nth_max}
export APRUN_CALCINC="${launcher} -n ${npe_ecen} --cpus-per-task=${NTHREADS_CALCINC}"

;;
"esfc")

nth_max=$((npe_node_max / npe_node_esfc))

export NTHREADS_ESFC=${nth_esfc:-${nth_max}}
[[ ${NTHREADS_ESFC} -gt ${nth_max} ]] && export NTHREADS_ESFC=${nth_max}
export APRUN_ESFC="${launcher} -n ${npe_esfc} --cpus-per-task=${NTHREADS_ESFC}"

export NTHREADS_CYCLE=${nth_cycle:-14}
[[ ${NTHREADS_CYCLE} -gt ${npe_node_max} ]] && export NTHREADS_CYCLE=${npe_node_max}
export APRUN_CYCLE="${launcher} -n ${npe_esfc} --cpus-per-task=${NTHREADS_CYCLE}"

;;
"epos")

nth_max=$((npe_node_max / npe_node_epos))

export NTHREADS_EPOS=${nth_epos:-${nth_max}}
[[ ${NTHREADS_EPOS} -gt ${nth_max} ]] && export NTHREADS_EPOS=${nth_max}
export APRUN_EPOS="${launcher} -n ${npe_epos} --cpus-per-task=${NTHREADS_EPOS}"

;;
"postsnd")

export CFP_MP="YES"

nth_max=$((npe_node_max / npe_node_postsnd))

export NTHREADS_POSTSND=${nth_postsnd:-1}
[[ ${NTHREADS_POSTSND} -gt ${nth_max} ]] && export NTHREADS_POSTSND=${nth_max}
export APRUN_POSTSND="${launcher} -n ${npe_postsnd} --cpus-per-task=${NTHREADS_POSTSND}"

export NTHREADS_POSTSNDCFP=${nth_postsndcfp:-1}
[[ ${NTHREADS_POSTSNDCFP} -gt ${nth_max} ]] && export NTHREADS_POSTSNDCFP=${nth_max}
export APRUN_POSTSNDCFP="${launcher} -n ${npe_postsndcfp} ${mpmd_opt}"

;;
"awips")

nth_max=$((npe_node_max / npe_node_awips))

export NTHREADS_AWIPS=${nth_awips:-2}
[[ ${NTHREADS_AWIPS} -gt ${nth_max} ]] && export NTHREADS_AWIPS=${nth_max}
export APRUN_AWIPSCFP="${launcher} -n ${npe_awips} ${mpmd_opt}"

;;
"gempak")

export CFP_MP="YES"

if [[ ${CDUMP} == "gfs" ]]; then
npe_gempak=${npe_gempak_gfs}
npe_node_gempak=${npe_node_gempak_gfs}
fi

nth_max=$((npe_node_max / npe_node_gempak))

export NTHREADS_GEMPAK=${nth_gempak:-1}
[[ ${NTHREADS_GEMPAK} -gt ${nth_max} ]] && export NTHREADS_GEMPAK=${nth_max}
export APRUN="${launcher} -n ${npe_gempak} ${mpmd_opt}"

;;
"fit2obs")

nth_max=$((npe_node_max / npe_node_fit2obs))

export NTHREADS_FIT2OBS=${nth_fit2obs:-1}
[[ ${NTHREADS_FIT2OBS} -gt ${nth_max} ]] && export NTHREADS_FIT2OBS=${nth_max}
export MPIRUN="${launcher} -n ${npe_fit2obs} --cpus-per-task=${NTHREADS_FIT2OBS}"

;;
*)
# Some other job not yet defined here
echo "WARNING: The job step ${step} does not specify Hercules-specific resources"
;;
esac
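Nearly every branch of the new HERCULES.env case statement repeats the same resource-clamping idiom: derive the per-task thread ceiling from cores-per-node and tasks-per-node, then cap the requested thread count so tasks times threads never oversubscribes a node. A minimal standalone sketch of that pattern (the variable names mirror the real `upp` branch, but the values here are illustrative, not taken from any actual configuration):

```shell
# Sketch of the thread-clamping pattern used throughout env/HERCULES.env.
# Values below are illustrative only.
npe_node_max=80        # physical cores per Hercules node
npe_node_upp=20        # MPI tasks per node requested for the upp step
nth_upp=8              # threads per task requested by the user

# Most threads a single task can use without oversubscribing the node
nth_max=$((npe_node_max / npe_node_upp))

# Default to the requested value, then clamp to the ceiling
NTHREADS_UPP=${nth_upp:-1}
if [ "${NTHREADS_UPP}" -gt "${nth_max}" ]; then
  NTHREADS_UPP=${nth_max}
fi
echo "${NTHREADS_UPP}"   # prints 4: 8 was requested, clamped to 80/20
```

With this invariant in place, the subsequent `srun --cpus-per-task=${NTHREADS_UPP}` launch line in each branch is guaranteed a feasible task/thread layout.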
2 changes: 2 additions & 0 deletions modulefiles/module_base.hercules.lua
@@ -8,7 +8,9 @@ prepend_path("MODULEPATH", "/work/noaa/epic/role-epic/spack-stack/hercules/spack

load(pathJoin("stack-intel", os.getenv("stack_intel_ver")))
load(pathJoin("stack-intel-oneapi-mpi", os.getenv("stack_impi_ver")))
load(pathJoin("intel-oneapi-mkl", os.getenv("intel_mkl_ver")))
load(pathJoin("python", os.getenv("python_ver")))
load(pathJoin("perl", os.getenv("perl_ver")))

-- TODO load NCL once the SAs remove the 'depends_on' statements within it
-- NCL is a static installation and does not depend on any libraries
15 changes: 11 additions & 4 deletions parm/config/gfs/config.base.emc.dyn
@@ -63,13 +63,20 @@ export DO_GOES="NO" # GOES products
export DO_BUFRSND="NO" # BUFR sounding products
export DO_GEMPAK="NO" # GEMPAK products
export DO_AWIPS="NO" # AWIPS products
export DO_NPOESS="NO" # NPOESS products
export DO_NPOESS="NO" # NPOESS products
export DO_TRACKER="YES" # Hurricane track verification
export DO_GENESIS="YES" # Cyclone genesis verification
export DO_GENESIS_FSU="NO" # Cyclone genesis verification (FSU)
export DO_VERFOZN="YES" # Ozone data assimilation monitoring
export DO_VERFRAD="YES" # Radiance data assimilation monitoring
export DO_VMINMON="YES" # GSI minimization monitoring
# The monitor is not yet supported on Hercules
if [[ "${machine}" == "HERCULES" ]]; then
export DO_VERFOZN="NO" # Ozone data assimilation monitoring
export DO_VERFRAD="NO" # Radiance data assimilation monitoring
export DO_VMINMON="NO" # GSI minimization monitoring
else
export DO_VERFOZN="YES" # Ozone data assimilation monitoring
export DO_VERFRAD="YES" # Radiance data assimilation monitoring
export DO_VMINMON="YES" # GSI minimization monitoring
fi
export DO_MOS="NO" # GFS Model Output Statistics - Only supported on WCOSS2

# NO for retrospective parallel; YES for real-time parallel
14 changes: 10 additions & 4 deletions parm/config/gfs/config.resources
@@ -56,7 +56,7 @@ elif [[ "${machine}" = "AWSPW" ]]; then
elif [[ ${machine} = "ORION" ]]; then
export npe_node_max=40
elif [[ ${machine} = "HERCULES" ]]; then
export npe_node_max=40
export npe_node_max=80
fi

if [[ ${step} = "prep" ]]; then
@@ -905,13 +905,19 @@ elif [[ ${step} = "eobs" || ${step} = "eomg" ]]; then
export nth_eomg=${nth_eobs}
npe_node_eobs=$(echo "${npe_node_max} / ${nth_eobs}" | bc)
export npe_node_eobs
export npe_node_eomg=${npe_node_eobs}
export is_exclusive=True
#The number of tasks and cores used must be the same for eobs
#For S4, this is accomplished by running 10 tasks/node
# The number of tasks and cores used must be the same for eobs
# See https://github.com/NOAA-EMC/global-workflow/issues/2092 for details
# For S4, this is accomplished by running 10 tasks/node
if [[ ${machine} = "S4" ]]; then
export npe_node_eobs=10
elif [[ ${machine} = "HERCULES" ]]; then
# For Hercules, this is only an issue at C384; use 20 tasks/node
if [[ ${CASE} = "C384" ]]; then
export npe_node_eobs=20
fi
fi
export npe_node_eomg=${npe_node_eobs}

elif [[ ${step} = "ediag" ]]; then

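The eobs hunk above encodes a machine-specific workaround: the number of tasks and cores must match (see issue #2092), which S4 handles with 10 tasks/node and Hercules, only at C384, with 20 tasks/node. The selection logic can be sketched as a hypothetical standalone function — the function name and argument handling are mine for illustration, only the machine/CASE overrides come from the diff:

```shell
# Hypothetical helper illustrating the npe_node_eobs selection logic
# from parm/config/gfs/config.resources (function itself is a sketch).
pick_npe_node_eobs() {
  local machine=$1 CASE=$2 npe_node_max=$3 nth_eobs=$4
  # Default: fill the node given the thread count
  local npe_node_eobs=$((npe_node_max / nth_eobs))
  if [ "${machine}" = "S4" ]; then
    npe_node_eobs=10                                  # S4 always runs 10 tasks/node
  elif [ "${machine}" = "HERCULES" ] && [ "${CASE}" = "C384" ]; then
    npe_node_eobs=20                                  # only C384 hits the issue on Hercules
  fi
  echo "${npe_node_eobs}"
}

pick_npe_node_eobs HERCULES C384 80 8   # -> 20 (override applies)
pick_npe_node_eobs HERCULES C768 80 8   # -> 10 (default 80/8)
```

Note the diff also moves `export npe_node_eomg=${npe_node_eobs}` below the overrides so eomg inherits the corrected value.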
5 changes: 4 additions & 1 deletion parm/config/gfs/config.ufs
@@ -72,9 +72,12 @@ case "${machine}" in
"WCOSS2")
npe_node_max=128
;;
"HERA" | "ORION" | "HERCULES")
"HERA" | "ORION" )
npe_node_max=40
;;
"HERCULES" )
npe_node_max=80
;;
"JET")
case "${PARTITION_BATCH}" in
"xjet")
4 changes: 3 additions & 1 deletion scripts/exglobal_archive.sh
@@ -33,7 +33,9 @@ source "${HOMEgfs}/ush/file_utils.sh"

[[ ! -d ${ARCDIR} ]] && mkdir -p "${ARCDIR}"
nb_copy "${COM_ATMOS_ANALYSIS}/${APREFIX}gsistat" "${ARCDIR}/gsistat.${RUN}.${PDY}${cyc}"
nb_copy "${COM_CHEM_ANALYSIS}/${APREFIX}aerostat" "${ARCDIR}/aerostat.${RUN}.${PDY}${cyc}"
if [[ ${DO_AERO} = "YES" ]]; then
nb_copy "${COM_CHEM_ANALYSIS}/${APREFIX}aerostat" "${ARCDIR}/aerostat.${RUN}.${PDY}${cyc}"
fi
nb_copy "${COM_ATMOS_GRIB_1p00}/${APREFIX}pgrb2.1p00.anl" "${ARCDIR}/pgbanl.${RUN}.${PDY}${cyc}.grib2"

# Archive 1 degree forecast GRIB2 files for verification
Expand Down