Merged

31 commits
8ea4771  add bit of code for building on head node (aerorahul, Dec 23, 2025)
6c7eef3  Merge branch 'NOAA-EMC:develop' into feature/build_compute_or_head (aerorahul, Dec 23, 2025)
7f4f666  make building on compute an option (aerorahul, Dec 23, 2025)
250ac15  keep default behavior of build_compute.sh (aerorahul, Dec 23, 2025)
a6ad226  Merge branch 'develop' into feature/build_compute_or_head (DavidHuber-NOAA, Dec 26, 2025)
439eedb  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 5, 2026)
733337a  update build_compute.sh to fix errors (aerorahul, Jan 6, 2026)
62f8719  loop over names (aerorahul, Jan 6, 2026)
2ee389b  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 7, 2026)
d077349  remove build_all.sh (aerorahul, Jan 7, 2026)
81b6481  update build scripts and add an option in generate_workflows.sh (aerorahul, Jan 7, 2026)
8084876  respect max_cores when on head node (aerorahul, Jan 7, 2026)
4fb8148  update dox (aerorahul, Jan 7, 2026)
8f926a4  Empty push since gh is having issues (aerorahul, Jan 7, 2026)
7a182e8  print build status consistently between head node and compute node op… (aerorahul, Jan 8, 2026)
57f71da  update ci_utils.sh for build_all.sh update (aerorahul, Jan 8, 2026)
0e5250e  build_command was getting long on the stdout (aerorahul, Jan 8, 2026)
f85e563  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 9, 2026)
ff63a72  Update docs/source/clone.rst (aerorahul, Jan 9, 2026)
2371682  Update sorc/build_all.sh (aerorahul, Jan 9, 2026)
4dbaa1b  Update sorc/build_all.sh (aerorahul, Jan 9, 2026)
fa42916  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 13, 2026)
d45be6d  fix bug (aerorahul, Jan 13, 2026)
1071af9  squash bugs (aerorahul, Jan 14, 2026)
ebaed95  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 14, 2026)
dd1c144  apply corrections from shellcheck; (aerorahul, Jan 14, 2026)
01f5b8f  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 14, 2026)
5553776  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 14, 2026)
039ee18  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 14, 2026)
c4962f6  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 15, 2026)
0d10f51  Merge branch 'develop' into feature/build_compute_or_head (aerorahul, Jan 16, 2026)
21 changes: 10 additions & 11 deletions .github/copilot-instructions.md
@@ -23,7 +23,7 @@ This document provides comprehensive guidance for AI agents working on the NOAA
```
jobs/ # Production Job Control Language (JCL) scripts (89 files)
├── JGDAS_* # GDAS (Global Data Assimilation System) jobs
├── JGFS_* # GFS (Global Forecast System) jobs
├── JGFS_* # GFS (Global Forecast System) jobs
├── JGLOBAL_* # Cross-system global jobs
├── Analysis Jobs (41) # Data assimilation and analysis
├── Forecast Jobs (13) # Model forecast execution
@@ -107,7 +107,7 @@ dev/workflow/rocoto/ # Rocoto-specific implementations
├── tasks.py # Base Tasks class with common task functionality
├── workflow_tasks.py # Task orchestration and dependency management
├── gfs_*.py # GFS-specific implementations
├── gefs_*.py # GEFS-specific implementations
├── gefs_*.py # GEFS-specific implementations
├── sfs_*.py # SFS-specific implementations
└── gcafs_*.py # GCAFS-specific implementations

@@ -121,15 +121,14 @@ ush/ # Utility scripts and environment setup
### Build System Commands
```bash
# Build all components (from sorc/)
./build_all.sh # Default build
./build_all.sh -d # Debug mode
./build_all.sh -f # Fast build with -DFASTER=ON
./build_all.sh -v # Verbose output
./build_all.sh -k # Kill all builds if any fails
./build_all.sh # Default build
./build_all.sh -d # Debug mode
./build_all.sh -v # Verbose output
./build_all.sh -c -A <HPC_ACCOUNT> # Compute node build with HPC account

# Build specific systems
./build_all.sh gfs # GFS forecast system
./build_all.sh gefs # GEFS ensemble system
./build_all.sh gefs # GEFS ensemble system
./build_all.sh sfs # Seasonal forecast system
./build_all.sh gcafs # Climate analysis system
./build_all.sh gsi # GSI data assimilation
@@ -161,7 +160,7 @@ python setup_xml.py /path/to/experiment rocoto
```bash
# Supported platforms (use detect_machine.sh)
WCOSS2 # Tier 1 - Full operational support
Hercules # Tier 1 - MSU, no TC Tracker
Hercules # Tier 1 - MSU, no TC Tracker
Hera # Tier 2 - NOAA RDHPCS
Orion # Tier 2 - MSU, GSI runs slowly
Gaea-C6 # Tier 1 - Fully supported platform capable of running retrospectives
@@ -427,7 +426,7 @@ def test_task_creation():

### New Hosts
1. Add machine detection in `detect_machine.sh`
2. Create host configuration in `hosts/` directory
2. Create host configuration in `hosts/` directory
3. Create modulefiles for environment setup
4. Update environment configurations in `env/` directory

@@ -634,7 +633,7 @@ For remote MCP clients (e.g., LangFlow) without filesystem access, tools support
analyze_ee2_compliance({ content: "#!/bin/bash\nset -x\n..." })

// Batch file analysis:
scan_repository_compliance({
scan_repository_compliance({
files: [
{ name: "JGFS_FORECAST", content: "..." },
{ name: "exgfs_fcst.sh", content: "..." }
2 changes: 1 addition & 1 deletion dev/ci/scripts/utils/ci_utils.sh
@@ -244,7 +244,7 @@ function build() {
echo "Creating logs folder"
mkdir -p "${logs_dir}" || exit 1
fi
"${HOMEgfs_}/sorc/build_compute.sh" -A "${HPC_ACCOUNT}" all
"${HOMEgfs_}/sorc/build_all.sh" -c -A "${HPC_ACCOUNT}" all

}
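The change above reroutes the CI build through `build_all.sh -c`. A minimal standalone sketch of that wrapper logic, assuming `HOMEgfs_` and `HPC_ACCOUNT` are supplied by the CI environment (the function name `build_sketch` is illustrative, and the command is echoed rather than executed):

```bash
# Sketch only: mirrors the updated build() wrapper from ci_utils.sh.
# build_sketch and its two positional arguments are illustrative.
build_sketch() {
  local homegfs_="$1" hpc_account="$2"
  local logs_dir="${homegfs_}/sorc/logs"
  if [[ ! -d "${logs_dir}" ]]; then
    echo "Creating logs folder"
    mkdir -p "${logs_dir}" || return 1
  fi
  # The compute-node build now goes through build_all.sh with -c;
  # print the command here instead of launching a real build.
  echo "${homegfs_}/sorc/build_all.sh -c -A ${hpc_account} all"
}
```

Calling `build_sketch /path/to/HOMEgfs myaccount` creates `sorc/logs` if needed and reports the `build_all.sh` invocation it would run.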

14 changes: 11 additions & 3 deletions dev/workflow/generate_workflows.sh
@@ -17,7 +17,10 @@ function _usage() {
directory up from this script's residing directory.

-b Run build_all.sh with default flags
(build the UFS, UPP, UFS_Utils, and GFS-utils only)
(build the UFS, UPP, UFS_Utils, and GFS-utils only on login nodes)

-B Run build_all.sh -c with default flags [-c triggers build on compute nodes]
(build the UFS, UPP, UFS_Utils, and GFS-utils only on compute nodes)

-u Update submodules before building and/or generating experiments.

@@ -84,6 +87,7 @@ set -eu
HOMEgfs=""
_specified_home=false
_build=false
_compute_build=false
_build_flags=""
_update_submods=false
declare -a _yaml_list=("C48_ATM")
@@ -110,7 +114,7 @@ _auto_del=false
_nonflag_option_count=0

while [[ $# -gt 0 && "$1" != "--" ]]; do
while getopts ":H:bDuy:Y:GESCA:ce:t:vVdh" option; do
while getopts ":H:bBDuy:Y:GESCA:ce:t:vVdh" option; do
case "${option}" in
H)
HOMEgfs="${OPTARG}"
@@ -121,6 +125,7 @@ while [[ $# -gt 0 && "$1" != "--" ]]; do
fi
;;
b) _build=true ;;
B) _build=true && _compute_build=true ;;
D) _auto_del=true ;;
u) _update_submods=true ;;
y) # Start over with an empty _yaml_list
@@ -442,8 +447,11 @@ fi
if [[ "${_build}" == "true" ]]; then
printf "Building via build_all.sh %s\n\n" "${_build_flags}"
# Let the output of build_all.sh go to stdout regardless of verbose options
if [[ "${_compute_build}" == true ]]; then
_compute_build_flag="-c -A ${HPC_ACCOUNT}"
fi
#shellcheck disable=SC2086,SC2248
${HOMEgfs}/sorc/build_all.sh ${_verbose_flag} ${_build_flags}
${HOMEgfs}/sorc/build_all.sh ${_compute_build_flag:-} ${_verbose_flag} ${_build_flags}
fi

# Link the workflow silently unless there's an error
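The `-b`/`-B` plumbing above boils down to a small pattern: `-B` sets both `_build` and `_compute_build`, and the compute flags are only assembled when requested. A self-contained sketch (the function name and the reduced `getopts` string are illustrative; the real script parses many more options and takes `HPC_ACCOUNT` from the environment):

```bash
# Sketch of the -b/-B handling added to generate_workflows.sh.
parse_build_flags() {
  local _build=false _compute_build=false _compute_build_flag=""
  local OPTIND=1 option
  while getopts ":bBA:" option "$@"; do
    case "${option}" in
      b) _build=true ;;
      B) _build=true && _compute_build=true ;;  # -B implies -b
      A) HPC_ACCOUNT="${OPTARG}" ;;
      *) ;;
    esac
  done
  # Only a compute-node build needs -c/-A passed down to build_all.sh
  if [[ "${_compute_build}" == true ]]; then
    _compute_build_flag="-c -A ${HPC_ACCOUNT:-}"
  fi
  echo "build=${_build} flags=${_compute_build_flag}"
}
```

With `-b` the flags stay empty (head-node build); with `-B -A myaccount` the function emits `-c -A myaccount` for forwarding to `build_all.sh`.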
@@ -1,7 +1,7 @@
#!/usr/bin/env python3

"""
Entry point for setting up a compute-node build with node exclusion support.
Entry point for setting up builds of global-workflow programs.
"""

import os
@@ -20,11 +20,11 @@

def input_args(*argv):
"""
Method to collect user arguments for `compute_build.py`
Method to collect user arguments for `setup_buildxml.py`
"""

description = """
Setup files and directories to start a compute build.
Setup buildXML to compile global-workflow programs.
"""

parser = ArgumentParser(description=description,
24 changes: 17 additions & 7 deletions docs/source/clone.rst
@@ -20,7 +20,15 @@ Clone the `global-workflow` and `cd` into the `sorc` directory:

.. _build_examples:

The build_all.sh script can be used to build all required components of the global workflow. The accepted arguments is a list of systems to be built. This includes builds for GFS, GEFS, and SFS forecast-only experiments, GSI and GDASApp-based DA for cycled GFS experiments. See `feature availability <hpc.html#feature-availability-by-hpc>`__ to see which system(s) are available on each supported system.
The `build_all.sh` script can be used to build all required components of the global workflow.
`build_all.sh` allows for optional flags to modify the build behavior:

- ``-c``: Build on compute nodes. The default behavior is to build on the head node.
- ``-A HPC_ACCOUNT``: Specify the HPC account to be used when building on compute nodes.
- ``-v``: Execute all build scripts with -v option to turn on verbose where supported
- ``-h``: Print help message and exit

The accepted arguments are a list of systems to be built. This includes builds for GFS, GEFS, and SFS forecast-only experiments, and GSI- and GDASApp-based DA for cycled GFS experiments. See `feature availability <hpc.html#feature-availability-by-hpc>`__ to see which system(s) are available on each supported platform.
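For example, a compute-node build of the GFS system might look like `./build_all.sh -c -A <HPC_ACCOUNT> gfs`. Since `-A` is described as the account used for compute-node builds, a caller could pre-validate the pairing; the sketch below is a hypothetical pre-flight check, not part of `build_all.sh` itself, and the rule that `-c` requires `-A` is an assumption drawn from this description:

```bash
# Hypothetical pre-flight check for build_all.sh arguments (not in the repo).
# Assumption: a compute-node build (-c) needs an HPC account (-A).
check_build_args() {
  local compute=false account=""
  local OPTIND=1 opt
  while getopts ":cA:vh" opt "$@"; do
    case "${opt}" in
      c) compute=true ;;
      A) account="${OPTARG}" ;;
      *) ;;
    esac
  done
  if [[ "${compute}" == true && -z "${account}" ]]; then
    echo "error: -c (compute-node build) requires -A HPC_ACCOUNT" >&2
    return 1
  fi
  echo "ok"
}
```

`check_build_args -c -A myaccount gfs` passes, while `check_build_args -c gfs` fails before any build is attempted.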

::

@@ -125,17 +133,19 @@ Under the ``/sorc`` folder is a script to build all components called ``build_al

::

./build_all.sh [-a UFS_app][-k][-h][-v] [list of system(s) to build]
-a UFS_app:
Build a specific UFS app instead of the default
-k:
Kill all builds immediately if one fails
./build_all.sh [-c][-A HPC_ACCOUNT][-h][-v] [list of system(s) to build]
-c:
Build on compute nodes. The default behaviour is to build on the head node.
-A HPC_ACCOUNT:
Specify the HPC account to be used when building on compute nodes.
-h:
Print this help message and exit
-v:
Execute all build scripts with -v option to turn on verbose where supported

Lastly, pass to build_all.sh a list of systems to build. This includes `gfs`, `gefs`, `sfs`, `gcafs`, `gsi`, `gdas`, and `all`.
Lastly, pass to `build_all.sh` a list of systems to build. This includes `gfs`, `gefs`, `sfs`, `gcafs`, `gsi`, `gdas`, and `all`.

To configure the build with specific flags or options for the various components, you can update the respective build command in the `build_opts.yaml` file.

For examples of how to use this script, see :ref:`build examples <build_examples>`.

2 changes: 1 addition & 1 deletion docs/source/development.rst
@@ -88,7 +88,7 @@ The commonly run tests are written in YAML format and can be found in the ``dev/
where:

* ``-A`` is used to specify the HPC (slurm or PBS) account to use
* ``-b`` indicates that the workflow should be built fresh
* ``-b|B`` indicates that the workflow should be built fresh (`-B` uses compute nodes for the build)
* ``-GESC`` specifies that all of the GFS, GEFS, SFS, GCAFS cases should be run (this also influences the build flags to use)
* ``-c`` tells the tool to append the rocotorun commands for each experiment to your crontab
