New User Guide

This section describes how to use BenchPRO and its features to automate your benchmarking process. To get started quickly, refer to the Quick Start guide.

Note

This guide uses long-format input arguments for clarity; the corresponding short-format arguments are described here.

Terminology

Application: a program or set of programs compiled and used to execute benchmark workloads.
Benchmark: a specific workload/simulation/dataset used to produce a figure of merit. Typically has an application dependency.
Task: an execution instance (via the scheduler or locally on the node) of a compilation or benchmark run.
Template file: a shell script with some variables declared.
Config file: contains a set of variable key-value pairs used to populate the template.
Profile: an application or benchmark available within BenchPRO (i.e. a config file and its corresponding template file).
Overload: replacing a default setting or variable with another one.
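
As an example of an overload, a default setting can be overridden for a single invocation on the command line. The sketch below is illustrative: the dry_run setting appears later in this guide, and the --overload flag is assumed here.

```shell
# Illustrative overload: override the dry_run default for this one
# invocation only, rather than changing the persistent setting.
benchpro --build lammps --overload dry_run=False
```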

Setup BenchPRO

Add BenchPRO to your MODULEPATH and load it:

ml use /scratch1/hpc_tools/benchpro/modulefiles
ml benchpro
ml save (optional)

Run the initial setup, set your SLURM allocation, then print some helpful info:

bp --validate
bps allocation=A-ccsc
bp --version
bp --help
bp --defaults
bp --notices

Compile an Application

This section walks you through installing the LAMMPS application on Frontera. The same process applies to the other example applications and to other TACC systems.

First, print all the pre-configured example applications and benchmark profiles currently provided by BenchPRO with

benchpro --avail

Install the LAMMPS application with

benchpro --build lammps

List the currently installed applications with

benchpro --listApps

You should see that the status of LAMMPS is DRY RUN because dry-run mode is enabled by default (dry_run=True): BenchPRO generated a LAMMPS compilation script but did not submit the job to the scheduler. This is useful for testing and troubleshooting a workflow without impacting the system scheduler. You can obtain more information about your LAMMPS build with:

benchpro --queryApp lammps

Pertinent information is shown here; you can also examine the build script (named job.qsub by default) located in the reported path directory. You can submit this LAMMPS compilation script to the scheduler manually, or proceed as follows.
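
If you choose to submit the generated script yourself, it is a normal SLURM job script. The directory below is hypothetical; use the path reported by --queryApp.

```shell
# Illustrative manual submission; the real build directory is shown
# in the output of `benchpro --queryApp lammps`.
cd $BP_APPS/frontera/.../lammps   # hypothetical path, replace with yours
sbatch job.qsub                   # job.qsub is the default script name
squeue -u $USER                   # confirm the job is queued
```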

Remove the existing dry_run version of LAMMPS with

benchpro --delApp lammps

Overload the default ‘dry_run’ value and rebuild LAMMPS with

benchpro --build lammps --overload dry_run=False

Check the details and status of your LAMMPS compilation again with

benchpro --queryApp lammps

In this example, parameters in $BPS_INC/build/config/frontera/lammps.cfg were used to populate the template script $BPS_INC/build/template/lammps.template and produce a job script within a hierarchical directory structure under $BP_APPS ($SCRATCH/benchpro by default). Parameters for the scheduler, the system architecture and compile-time optimizations, as well as a module file, were all generated automatically. You can load your LAMMPS module manually with ml frontera/.../lammps. Each application built with BenchPRO has a build report, generated to preserve compilation metadata. BenchPRO uses the module file and build report whenever the application is used to execute a benchmark. You can examine LAMMPS’s build report in the build directory or with the --queryApp argument.
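
The config-to-template population step can be illustrated with a simplified, stand-alone sketch. The placeholder names and values below are invented for illustration; BenchPRO's actual templating is richer than a plain substitution.

```shell
# Simplified illustration of populating a template from config key-value pairs.
# A template script with placeholder variables (hypothetical placeholders):
echo 'ibrun -n <ranks> lmp -in <input_file>' > example.template

# "Config" values substituted into the template to produce a job script:
sed -e 's/<ranks>/56/' -e 's/<input_file>/in.lj/' example.template > job.qsub

cat job.qsub
```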

Execute a Benchmark

We can now run a benchmark with our LAMMPS installation.

Note

There is no need to wait for the LAMMPS compilation job to complete; BenchPRO creates scheduler job dependencies between tasks as required (i.e. the benchmark job will depend on the successful completion of the compilation job). In fact, with the setting build_if_missing=True, BenchPRO would detect that LAMMPS is not available on the current system when a benchmark is requested and build it automatically, without the steps above.

The process to run a benchmark is similar to application compilation: a configuration file is used to populate a template script. A benchmark run is specified with --bench / -B. Once again, you can check for available benchmarks with the --avail argument.
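
Following the bps syntax used for the allocation setting earlier, build_if_missing could be enabled persistently as sketched below; confirm the setting name against the output of bp --defaults.

```shell
# Enable automatic builds of missing applications (persistent setting).
bps build_if_missing=True

# With this set, requesting a benchmark builds LAMMPS first if it is absent:
benchpro --bench ljmelt
```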

Permanently disable dry-run mode with bps dry_run=False so that you don’t have to overload the setting manually on the command line. Refer to the Changing settings section for more information.

Execute the Lennard-Jones benchmark for LAMMPS with

benchpro --bench ljmelt

Check the benchmark report with

benchpro --queryResult ljmelt

As this benchmark was the most recent BenchPRO job executed, you can use a shortcut to check its report

benchpro --last

Note

In this example, parameters in $BPS_INC/bench/config/lammps_ljmelt.cfg were used to populate the template $BPS_INC/bench/template/lammps.template. Much like the application build process, a benchmark report was generated to store metadata associated with this run. It is stored in the benchmark working directory and will be used in the next step to capture the result to the database.

Capture Benchmark Result

Note

A BenchPRO result is considered to be in one of four states: ‘pending’, ‘complete’, ‘failed’ or ‘captured’. The benchmark result remains on the local system until it has been captured to the database, at which time its state is updated to captured or failed.

Once the benchmark job has been completed, capture results to the database with:

benchpro --capture

Note

Because your unique instance of LAMMPS was recently compiled and is not yet present in the database, it is also captured to the database automatically.

Display the status of all benchmark runs with

benchpro --listResults

Query the results database with

benchpro --dbList

You can print an abridged report of your benchmark with

benchpro --dbResult [JOBID]

You can also query your LAMMPS application entry in the database using the [APPID] from above

benchpro --dbApp [APPID]

Once you are satisfied that the benchmark result is valid and its associated files have been uploaded to the database, you can remove the local files with

benchpro --delResult captured
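
The complete workflow from this guide can be summarized as:

```shell
# End-to-end BenchPRO workflow covered in this guide
# (assumes dry-run mode was disabled with `bps dry_run=False`).
benchpro --build lammps        # compile the application
benchpro --bench ljmelt        # run the benchmark; depends on the build job if still queued
benchpro --capture             # capture completed results (and the new app) to the database
benchpro --listResults         # check the state of benchmark runs
benchpro --delResult captured  # remove local copies of captured results
```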

Web frontend

The captured applications and benchmark results for the TACC site are available through a web portal at http://benchpro.tacc.utexas.edu/