2.2. HAFS Extended Quick Start Guide
Currently, the HAFS application works on these NOAA HPC platforms:
wcoss_dell_p3
wcoss_cray
hera
jet
orion
2.2.1. Clone and Checkout the Repository
git clone <-b BRANCH> --recursive https://github.com/hafs-community/HAFS.git
Select the branch to clone by replacing BRANCH in the <-b BRANCH> option with the desired branch name.
Note: develop is the default branch.
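For example, to clone the develop branch explicitly:
git clone -b develop --recursive https://github.com/hafs-community/HAFS.git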
2.2.2. Build and Install HAFS
cd /path/to/HAFS/sorc
./install_hafs.sh > install_hafs.log 2>&1
Hint: Got errors? Look in the HAFS/sorc/logs directory.
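If the build fails, a quick way to locate the problem is to search the build logs for errors (log file names vary by component and release):
cd /path/to/HAFS/sorc/logs
grep -i error *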
2.2.2.1. Parts of install_hafs.sh
machine-setup.sh: Determine shell, identify machine, and load modules
build_all.sh: Compile components: forecast, post, tracker, utils, tools, hycom, ww3, and gsi
install_all.sh: Copy executables to the exec directory
link_fix.sh: Link fix files (fix files are available on disk on supported platforms)
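To confirm that the installation step succeeded, list the contents of the exec directory; it should contain the compiled executables (the exact names vary by component):
ls /path/to/HAFS/exec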
2.2.3. Run the HAFS System
2.2.3.1. Edit system.conf
To configure an experiment, run:
cd /path/to/HAFS/parm
cp system.conf.<system> system.conf
vi system.conf
where <system> is replaced by the name of one of the supported platforms listed above.
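For example, following the naming pattern above, the Hera template would be copied as:
cd /path/to/HAFS/parm
cp system.conf.hera system.conf
vi system.conf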
Edit the following:
disk_project: Project name for disk space
tape_project: HPSS project name
cpu_account: CPU account name for submitting jobs to the batch system (may be the same as disk_project)
archive=disk: Archive location (make sure you have write permission)
CDSAVE: HAFS parent directory
CDNOSCRUB: Track files will be copied to this location; contents will not be scrubbed (user must have write permission)
CDSCRUB: If scrub is set to yes, this directory will be removed (user must have write permission)
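A minimal sketch of the edited entries with placeholder values (follow the layout of the system.conf.<system> template; the project names and paths below are illustrative and must be valid for your account and platform):
disk_project=myproject
tape_project=myproject
cpu_account=myproject
archive=disk
CDSAVE=/path/to/save
CDNOSCRUB=/path/to/noscrub
CDSCRUB=/path/to/scrub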
2.2.3.2. HAFS Physics Configuration
Look in HAFS/parm/hafs.conf to determine which physics suites are running.
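One way to list the configured suites is to search the configuration file for the suite settings (this assumes the relevant keys contain the string ccpp_suite, as in recent HAFS versions):
grep ccpp_suite /path/to/HAFS/parm/hafs.conf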

To determine what physics schemes are included in the suites mentioned above, run:
more HAFS/sorc/hafs_forecast.fd/FV3/ccpp/suites/suite_FV3_HAFS_v1_gfdlmp_tedmf_nonsst.xml
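Each CCPP suite definition file lists its physics schemes in <scheme> elements, so the schemes can also be extracted directly:
grep "<scheme>" HAFS/sorc/hafs_forecast.fd/FV3/ccpp/suites/suite_FV3_HAFS_v1_gfdlmp_tedmf_nonsst.xml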
2.2.3.3. HAFS Nesting Configuration
Two types of nesting configurations are available: (i) regional* and (ii) globnest.
Two namelist files (templates) for regional configuration are:
HAFS/parm/forecast/regional/input.nml.tmp
HAFS/parm/forecast/regional/input_nest.nml.tmp
One namelist file (template) for globnest configuration is:
HAFS/parm/forecast/globnest/input.nml.tmp

* operational implementation
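To inspect the nesting-related settings, view the templates directly, for example:
more HAFS/parm/forecast/regional/input.nml.tmp
more HAFS/parm/forecast/globnest/input.nml.tmp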
2.2.3.4. XML File to Run the Workflow
cd /path/to/HAFS/rocoto
vi hafs_workflow.xml.in
In HAFS/rocoto/hafs_workflow.xml.in, the following entities can be modified to control the number of active cycles and tasks and the maximum number of tries.
<!ENTITY CYCLE_THROTTLE "5">: The number of cycles that can be activated at one time
<!ENTITY TASK_THROTTLE "120">: The number of tasks that can be activated at one time
<!ENTITY MAX_TRIES "1">: The maximum number of tries for all tasks
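For example, to allow more concurrent tasks and an extra retry for failed tasks, the entities could be edited as follows (values are illustrative only):
<!ENTITY TASK_THROTTLE "200">
<!ENTITY MAX_TRIES "2">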
2.2.3.5. Edit the Cron Job Driver Script
Change the cron job driver script to set up the experiment and storm.
cd /path/to/HAFS/rocoto
vi cronjob_hafs.sh
Make sure to check HOMEhafs and edit as appropriate.
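A typical edit is to point HOMEhafs at your local clone, for example (the exact form of the assignment in cronjob_hafs.sh may differ by release):
HOMEhafs=/path/to/HAFS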
2.2.3.6. Run HAFS
Add the driver script to cron or simply run the driver script.
./cronjob_hafs.sh
To add the script to cron:
crontab -e
*/5 * * * * <path-to-HAFS>/rocoto/cronjob_hafs.sh
Note: On Orion, cron is only available on the orion-login-1 node.
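Once the workflow has been launched, its progress can be monitored with Rocoto, for example (the generated workflow XML and database file names depend on the experiment and storm; check the run directory set up by cronjob_hafs.sh, the paths below are placeholders):
rocotostat -w /path/to/workflow.xml -d /path/to/workflow.db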