Using SPEC CPU®2017: The 'runcpu' Command
Latest version: www.spec.org/cpu2017/Docs/
Contents

1.1 Defaults
1.2 Syntax
1.3 Benchmarks and suites
1.4 Run order
1.5 Disk Usage
    1.5.1 Directory tree
    1.5.2 Hey! Where did all my disk space go?
1.6 Multi-user support and limitations

--action --check_version --config --copies --flagsurl --help --ignore_errors --iterations --loose
--output_format --rawformat --rebuild --reportable --threads (new) --tune

--baseonly --basepeak --nobuild --comment --define --delay --deletework --expid --fake --fakereport
--fakereportable --[no]feedback --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout
--info_wrap_column --keeptmp --label (new) --log_timestamp --make_no_clobber --notes_wrap_column
--output_root --parallel_test --parallel_test_workloads (new) --[no]power (new) --preenv --reportonly
--review --[no]setprocgroup --size --[no]table --test --undef --update --use_submit_for_compare
--use_submit_for_speed --username --verbose --version

4.1 No longer needed: --rate --speed --parallel_setup
4.2 Feature removed: --machine --maxcompares
4.3 Unsupported: --make_bundle --unpack_bundle --use_bundle
What is runcpu? runcpu is the primary tool for SPEC CPU2017. You use it from a Unix shell or the Microsoft Windows command line to build and run benchmarks, with commands such as these:
runcpu --config=eniac.cfg --action=build 519.lbm_r
runcpu --config=colossus.cfg --threads=16 628.pop2_s
runcpu --config=z3.cfg --copies=64 fprate
The first command compiles the benchmark named 519.lbm_r. The second runs the OpenMP benchmark 628.pop2_s using 16 threads. The third runs 64 copies of all the SPECrate Floating Point benchmarks.
New with CPU2017: The former runspec utility is renamed runcpu in SPEC CPU2017. [Why?]
Before reading this document: If you have not already done so, please install and test your SPEC CPU2017 distribution (ISO image). This document assumes that you have already installed the suite and verified that it works on your system.
If you have not done so, please see the brief instructions in the Quick Start guide, or the more detailed section "Testing Your Installation" (Unix, Windows).
The SPEC CPU default settings described in this document may be adjusted by config files.
The order of precedence for settings is:
Highest precedence: the runcpu command
Middle:             the config file
Lowest:             the tools as shipped by SPEC
Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so.
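For instance (a hypothetical illustration; the config file name and values below are made up), a config file might set iterations = 5, but the command line value wins:

   runcpu --config=myconf --noreportable --iterations=1 519.lbm_r

Here only one iteration is run, regardless of what the config file requests.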
The syntax for the runcpu command is:
runcpu [options] [list of benchmarks to run]
Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:
runcpu --config=dianne_july25a --debug=99 fprate
runcpu --config dianne_july25a --debug 99 fprate
runcpu --conf dianne_july25a --deb 99 fprate
runcpu -c dianne_july25a -v 99 fprate
In the list of benchmarks to run, you can use one or more individual benchmarks, such as 500.perlbench_r, or you can run entire suites, using one of the Short Tags below.
Short Tag | Suite | Contents | Metrics | How many copies? What do higher scores mean?
intspeed | SPECspeed 2017 Integer | 10 integer benchmarks | SPECspeed2017_int_base, SPECspeed2017_int_peak | SPECspeed suites always run one copy of each benchmark. Higher scores indicate that less time is needed.
fpspeed | SPECspeed 2017 Floating Point | 10 floating point benchmarks | SPECspeed2017_fp_base, SPECspeed2017_fp_peak | SPECspeed suites always run one copy of each benchmark. Higher scores indicate that less time is needed.
intrate | SPECrate 2017 Integer | 10 integer benchmarks | SPECrate2017_int_base, SPECrate2017_int_peak | SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many. Higher scores indicate more throughput (work per unit of time).
fprate | SPECrate 2017 Floating Point | 13 floating point benchmarks | SPECrate2017_fp_base, SPECrate2017_fp_peak | SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many. Higher scores indicate more throughput (work per unit of time).
The "Short Tag" is the canonical abbreviation for use with runcpu, where context
is defined by the tools. In a published document, context may not be clear.
To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above. |
Supersets: There are several supersets that run more than one of the above (for example, all).
Synonyms: Suite selection is done with the short tags intrate, fprate, intspeed, and fpspeed.
You can also use full metric names; for example, you can say:
runcpu SPECspeed2017_int_base
Some alternates (such as int_rate or CPU2017) may provoke runcpu to say that it is trying to DWIM ("do what I mean"), but these are not recommended.
Benchmark names: Individual benchmarks can be named, numbered, or both; separate them with a space.
Names can be abbreviated, as long as you enter enough characters for uniqueness.
Each of the following commands does the same thing:
runcpu -c jason_july09d --noreportable 503.bwaves_r 510.parest 603.bwaves_s
runcpu -c jason_july09d --noreportable 503 510 603
runcpu -c jason_july09d --noreportable parest bwaves_r bwaves_s
runcpu -c jason_july09d --noreportable pare bwaves_r bwaves_s
To exclude a benchmark: Use a hat (^, also known as caret, typically found as shift-6). Note that if the hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude.
bash-n.n.n$ runcpu --noreportable -c kathy_sep14c fprate ^503 ^pare
pickyShell% runcpu --noreportable -c kathy_sep14c fprate '^503' '^pare'
E:\cpu2017> runcpu --noreportable -c kathy_sep14c fprate "^503" "^pare"
Turning off reportable: If your config file sets reportable=yes then you cannot run a subset unless you turn that option off.
[/usr/cathy/cpu2017]$ runcpu --config cathy_apr21b --noreportable fprate ^parest
A reportable run does these steps:

Test: Set up all of the benchmarks using the test workload. Run them. Verify that they get correct answers. The test workloads are run merely as an additional verification of correct operation of the generated executables; their times are not reported and do not contribute to overall metrics. Therefore multiple benchmarks can be run simultaneously, as in the example below, where the tester has set --parallel_test to allow up to 20 simultaneous tests.

Train: Do the same steps for the train workload, for the same reasons, with the same verification, non-reporting, and parallelism.

Ref: Run the refrate (5xx benchmarks) or the refspeed (6xx benchmarks) workload two or three times (*). If running refspeed, multiple --threads are optionally allowed. If running refrate, multiple --copies are optionally allowed, as in the example below, which uses 256 copies in base.
(*) For reportable runs, --iterations must be 2 or 3.

Report: Generate the reports.
Summarizing reportable run order: The order can be summarized as:
setup for test
test (*)
setup for train
train (*)
setup for ref
ref1, ref2 [, ref3] (**)

(*) Multiple benchmarks may overlap if --parallel_test > 1
(**) One benchmark at a time. Third run only if --iterations=3.
Reportable order when more than one tuning is present: If you run both base and peak tuning, base is always run first.
setup for test
test base and peak (*)
setup for train
train base and peak (*)
setup for ref
base ref1, base ref2 [, base ref3] (**)
peak ref1, peak ref2 [, peak ref3] (**)

(*) Multiple benchmarks may overlap if --parallel_test > 1. Peak and base may also overlap.
(**) One benchmark at a time. Third run only if --iterations=3.
Reportable order when more than one suite is present: If you start a reportable using more than one suite, all the work is done for one suite before proceeding to the next.
For example runcpu --iterations=3 --reportable intspeed fprate would cause:
intspeed setup test
intspeed test
intspeed setup train
intspeed train
intspeed setup refspeed
intspeed refspeed #1
intspeed refspeed #2
intspeed refspeed #3
fprate setup test
fprate test
fprate setup train
fprate train
fprate setup refrate
fprate refrate #1
fprate refrate #2
fprate refrate #3
(This is a change with CPU2017; the prior suite would run int test, fp test, int train, fp train, int ref, fp ref.)
If you request more than one suite (for example, by using all) then a table is printed to show you the run order:
Action    Run Mode  Workload  Report Type        Benchmarks
--------  --------  --------  -----------------  ----------
validate  rate      refrate   SPECrate2017_fp    fprate
validate  speed     refspeed  SPECspeed2017_fp   fpspeed
validate  rate      refrate   SPECrate2017_int   intrate
validate  speed     refspeed  SPECspeed2017_int  intspeed
Reportable example: A log from a published reportable run is excerpted below. The Unix grep command picks out lines that match one of the quoted strings; Microsoft Windows users could try findstr instead.
$ grep -e 'Running B' -e 'Starting' -e '#' CPU2017.052.log
Running Benchmarks (up to 20 concurrent processes)
Starting runcpu for 500.perlbench_r test base oct12a-rate
Starting runcpu for 502.gcc_r test base oct12a-rate
Starting runcpu for 505.mcf_r test base oct12a-rate
Starting runcpu for 520.omnetpp_r test base oct12a-rate
Starting runcpu for 523.xalancbmk_r test base oct12a-rate
Starting runcpu for 525.x264_r test base oct12a-rate
Starting runcpu for 531.deepsjeng_r test base oct12a-rate
Starting runcpu for 541.leela_r test base oct12a-rate
Starting runcpu for 548.exchange2_r test base oct12a-rate
Starting runcpu for 557.xz_r test base oct12a-rate
Starting runcpu for 999.specrand_ir test base oct12a-rate
Starting runcpu for 500.perlbench_r test peak oct12a-rate
Starting runcpu for 502.gcc_r test peak oct12a-rate
Starting runcpu for 505.mcf_r test peak oct12a-rate
Starting runcpu for 520.omnetpp_r test peak oct12a-rate
Starting runcpu for 523.xalancbmk_r test peak oct12a-rate
Starting runcpu for 525.x264_r test peak oct12a-rate
Starting runcpu for 531.deepsjeng_r test peak oct12a-rate
Starting runcpu for 541.leela_r test peak oct12a-rate
Starting runcpu for 548.exchange2_r test peak oct12a-rate
Starting runcpu for 557.xz_r test peak oct12a-rate
Starting runcpu for 999.specrand_ir test peak oct12a-rate
Running Benchmarks (up to 20 concurrent processes)
Starting runcpu for 500.perlbench_r train base oct12a-rate
Starting runcpu for 502.gcc_r train base oct12a-rate
Starting runcpu for 505.mcf_r train base oct12a-rate
Starting runcpu for 520.omnetpp_r train base oct12a-rate
Starting runcpu for 523.xalancbmk_r train base oct12a-rate
Starting runcpu for 525.x264_r train base oct12a-rate
Starting runcpu for 531.deepsjeng_r train base oct12a-rate
Starting runcpu for 541.leela_r train base oct12a-rate
Starting runcpu for 548.exchange2_r train base oct12a-rate
Starting runcpu for 557.xz_r train base oct12a-rate
Starting runcpu for 999.specrand_ir train base oct12a-rate
Starting runcpu for 500.perlbench_r train peak oct12a-rate
Starting runcpu for 502.gcc_r train peak oct12a-rate
Starting runcpu for 505.mcf_r train peak oct12a-rate
Starting runcpu for 520.omnetpp_r train peak oct12a-rate
Starting runcpu for 523.xalancbmk_r train peak oct12a-rate
Starting runcpu for 525.x264_r train peak oct12a-rate
Starting runcpu for 531.deepsjeng_r train peak oct12a-rate
Starting runcpu for 541.leela_r train peak oct12a-rate
Starting runcpu for 548.exchange2_r train peak oct12a-rate
Starting runcpu for 557.xz_r train peak oct12a-rate
Starting runcpu for 999.specrand_ir train peak oct12a-rate
Running Benchmarks
Running (#1) 500.perlbench_r refrate (ref) base oct12a-rate (256 copies) [2016-10-12 22:18:24]
Running (#1) 502.gcc_r refrate (ref) base oct12a-rate (256 copies) [2016-10-12 23:10:43]
Running (#1) 505.mcf_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 00:11:01]
Running (#1) 520.omnetpp_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 01:53:35]
Running (#1) 523.xalancbmk_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 02:40:23]
Running (#1) 525.x264_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 03:21:31]
Running (#1) 531.deepsjeng_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 04:36:07]
Running (#1) 541.leela_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 05:12:11]
Running (#1) 548.exchange2_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 05:59:16]
Running (#1) 557.xz_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 07:28:53]
Running (#1) 999.specrand_ir refrate (ref) base oct12a-rate (256 copies) [2016-10-13 08:08:23]
Running (#2) 500.perlbench_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 08:11:14]
Running (#2) 502.gcc_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 09:03:38]
Running (#2) 505.mcf_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 10:03:57]
Running (#2) 520.omnetpp_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 11:46:36]
Running (#2) 523.xalancbmk_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 12:33:11]
Running (#2) 525.x264_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 13:14:07]
Running (#2) 531.deepsjeng_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 14:28:47]
Running (#2) 541.leela_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 15:04:49]
Running (#2) 548.exchange2_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 15:51:53]
Running (#2) 557.xz_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 17:21:33]
Running (#2) 999.specrand_ir refrate (ref) base oct12a-rate (256 copies) [2016-10-13 18:01:04]
Running (#1) 500.perlbench_r refrate (ref) peak oct12a-rate (224 copies) [2016-10-13 18:03:29]
Running (#1) 502.gcc_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 18:49:17]
Running (#1) 505.mcf_r refrate (ref) peak oct12a-rate (64 copies) [2016-10-13 19:44:21]
Running (#1) 520.omnetpp_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 20:06:29]
Running (#1) 523.xalancbmk_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 20:54:49]
Running (#1) 525.x264_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 21:28:24]
Running (#1) 531.deepsjeng_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 22:41:43]
Running (#1) 541.leela_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 23:16:40]
Running (#1) 548.exchange2_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 00:01:53]
Running (#1) 557.xz_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 01:11:23]
Running (#1) 999.specrand_ir refrate (ref) peak oct12a-rate (1 copy) [2016-10-14 01:50:51]
Running (#2) 500.perlbench_r refrate (ref) peak oct12a-rate (224 copies) [2016-10-14 01:53:13]
Running (#2) 502.gcc_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 02:39:03]
Running (#2) 505.mcf_r refrate (ref) peak oct12a-rate (64 copies) [2016-10-14 03:33:57]
Running (#2) 520.omnetpp_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 03:56:04]
Running (#2) 523.xalancbmk_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 04:44:33]
Running (#2) 525.x264_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 05:18:13]
Running (#2) 531.deepsjeng_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 06:31:34]
Running (#2) 541.leela_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 07:06:33]
Running (#2) 548.exchange2_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 07:51:48]
Running (#2) 557.xz_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 09:01:43]
Running (#2) 999.specrand_ir refrate (ref) peak oct12a-rate (1 copy) [2016-10-14 09:41:13]
$ (white space adjusted for readability)
The structure of the CPU2017 directory tree is:
$SPEC or %SPEC%  - the root directory
   benchspec     - Some suite-wide files
      CPU        - The benchmarks
   bin           - Tools to run and report on the suite
   config        - Config files
   Docs          - HTML documentation
   Docs.txt      - plaintext documentation
   result        - Log files and reports
   tmp           - Temporary files
   tools         - Sources for the CPU2017 tools
Within each of the individual benchmarks, the structure is:
nnn.benchmark  - root for this benchmark
   build       - Benchmark binaries are built here
   data
      all      - Data used by all runs (if needed by the benchmark)
      ref      - The timed data set
      test     - Data for a simple test that an executable is functional
      train    - Data for feedback-directed optimization
   Docs        - Documentation for this benchmark
   exe         - Compiled versions of the benchmark
   run         - Benchmarks are run here
   Spec        - SPEC metadata about the benchmark
   src         - The sources for the benchmark
Most SPECspeed benchmarks (6nn.benchmark_s) share content that is located under a corresponding SPECrate benchmark (5nn.benchmark_r). Shared source files may be compiled differently for SPECspeed vs. SPECrate. For example, the sources for 619.lbm_s can be found at 519.lbm_r/src/, and only 619.lbm_s can be compiled with OpenMP.
Look for the output of your runcpu command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in the Config Files document.
The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.
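For example (the result file name below is hypothetical), to generate a PDF report from a raw file produced by an earlier run, you might enter:

   runcpu --rawformat --output_format=pdf $SPEC/result/CPU2017.012.intrate.rsf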
When you find yourself wondering "Where did all my disk space go?", the answer is usually "The run directories." Most activity takes place in automatically created subdirectories of $SPEC/benchspec/CPU/*/run/ (Unix) or %SPEC%\benchspec\CPU\*\run\ (Windows). Other consumers of disk space underneath individual nnn.benchmark directories include the build/ and exe/ directories.
At the top of the directory tree, space is used by your config/ and result/ directories, and by the temporary directories $SPEC/tmp and output_root/tmp.
Usually, the largest amount of space is in the run directories. For example, the tester who generated the result excerpted above is lazy about cleaning, and at the moment this paragraph is written, there are many SPECrate run directories on the system:
---------------------------------------
One lazy user's space. Yours will vary.
---------------------------------------
Directories                          GB
-------------------------------   -----
Top-level (config, result, tmp)     0.1
Benchmarks
  $SPEC/benchspec/CPU/*/exe         2
  $SPEC/benchspec/CPU/*/build       9
  $SPEC/benchspec/CPU/*/run         198
---------------------------------------
If you use the config file label feature, then directories are named to try to make it easy for you to hunt them down. For example, suppose Bob has a config file that he is using to test some new memory optimizations using SPECrate (multi-copy) mode. He has set
   label = BobMemoryOpt
in his config file. In that case, the tools would create directories such as these:
$ pwd
/Users/bob/cpu2017/rc4/benchspec/CPU/505.mcf_r
$ ls -d */*Bob*
build/build_base_BobMemoryOpt.0000
exe/mcf_r_base.BobMemoryOpt
run/run_base_refrate_BobMemoryOpt.0000
run/run_base_refrate_BobMemoryOpt.0001
run/run_base_refrate_BobMemoryOpt.0002
run/run_base_refrate_BobMemoryOpt.0003
run/run_base_refrate_BobMemoryOpt.0004
run/run_base_refrate_BobMemoryOpt.0005
run/run_base_refrate_BobMemoryOpt.0006
run/run_base_refrate_BobMemoryOpt.0007
run/run_base_refrate_BobMemoryOpt.0008
run/run_base_refrate_BobMemoryOpt.0009
run/run_base_refrate_BobMemoryOpt.0010
run/run_base_refrate_BobMemoryOpt.0011
run/run_base_refrate_BobMemoryOpt.0012
run/run_base_test_BobMemoryOpt.0000
run/run_base_train_BobMemoryOpt.0000
$
To get your disk space back, see the documentation of the various cleaning options, below.
SPEC CPU2017 supports multiple users sharing an installation; however you must choose carefully regarding file protections. This section describes the multi-user features and protection options.
Features that are always enabled:
Limitations: The default methods impose two key limitations, which will not be safe in some environments:
Partial solution(?) expid+conventions:
You can deal with limitation #2 if users adopt certain habits. For example, Darryl could name all his config files darryl-something.cfg. He could use runcpu --expid=darryl or the corresponding config file line expid=darryl to cause his results to be placed under $SPEC/result/darryl (or %SPEC%\result\darryl\) and binaries under nnn.benchmark/exe/darryl/. Unfortunately, this alleged solution still requires that the tree be writeable by all users, and will not help Darryl at all when John comes along and blithely does one of the alternate cleaning methods.
Solution(?) Give up: You could just choose to spend the disk space to give each person their own tree. For SPEC CPU2017 V1.0, this may increase disk space requirements by about 3 GB per user.
Recommended Solution: output_root. The recommended method uses 4 steps:

Step                                               | Example (Unix)
(1) Protect most of the SPEC tree read-only        | chmod -R ugo-w $SPEC
(2) Allow shared access to the config directory    | chmod 1777 $SPEC/config
                                                   | chmod u+w $SPEC/config/*cfg
(3) Keep your own config files                     | cp config/assignment1.cfg config/alan1.cfg
(4) Use the --output_root switch, or               | runcpu --output_root=~/cpu2017
    add an output_root to your config file         | output_root = /home/${username}/cpu2017
More detail:
Most of the CPU2017 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:
chmod -R ugo-w $SPEC
The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users, and config files must be writeable. On most Unix systems, chmod 1777 is very useful: it lets anyone create files, which they own, control, and protect. (1777 is commonly used for /tmp for this very reason.)
chmod 1777 $SPEC/config
chmod u+w $SPEC/config/*cfg
Config files usually would not be shared between users. For example, students might create their own copies of a config file:
Alan enters:
   cd /cs403/cpu2017
   . ./shrc
   cd config
   cp assignment1.cfg alan1.cfg
   chmod u+w alan1.cfg
   runcpu --config=alan1 --action=build 557.xz_r

Venkatesh enters:
   cd /cs403/cpu2017
   . ./shrc
   cd config
   cp assignment1.cfg venkatesh1.cfg
   chmod u+w venkatesh1.cfg
   runcpu --config=venkatesh1 --action=build 557.xz_r
Set output_root in the config files to change the destinations of the outputs. For example, if config files include (near the top):
output_root = /home/${username}/spec
label       = feb27a
then these directories will be used for the above runcpu command:
Alan's directories:
   build: /home/alan/spec/benchspec/CPU/557.xz_r/build/build_base_feb27a.0001
   Logs:  /home/alan/spec/result
Venkatesh's directories:
   build: /home/venkatesh/spec/benchspec/CPU/557.xz_r/build/build_base_feb27a.0000
   Logs:  /home/venkatesh/spec/result
Navigation: Unix users can easily navigate an output_root tree using ogo.
Most runcpu commands perform an action on a set of benchmarks. The default action is validate. The actions are described in two tables below: first, actions that relate to building and running; then, actions regarding cleanup.
--action build | Compile the benchmarks, using the config file specmake options. |
--action buildsetup | Set up build directories for the benchmarks. This option may be useful when debugging a build: you can set up a directory and play with it as a private sandbox. |
--action onlyrun | Run the benchmarks but do not verify that they got the correct answers. This option may be useful when applying CPU2017 to some other purpose, such as tracing instructions for a hardware simulator, or generating a system load while debugging an operating system feature. |
--action report | Synonym for --fakereport; see also --fakereportable. |
--action run | Synonym for --action validate. |
--action runsetup | Set up the run directory (or directories). This option may be useful when debugging a run. |
--action setup | Synonym for --action runsetup. |
--action validate | Build (if needed), set up directories, run, check for correct answers, and generate reports. This is the default action. |
Cleaning actions are listed in order from least thorough to most:
--action clean | Empty run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to jeff and this command is entered:
   D:\cpu2017\> runcpu --action clean --config may12a fprate
then the tools will remove build and run directories with username jeff for fprate benchmarks generated by config file may12a.cfg. |
--action clobber | Clean + remove the corresponding executables. |
--action trash | Remove run and build directories for all users and all labels for the specified benchmarks. |
--action realclean | A synonym for --action trash |
--action scrub | Trash + remove the corresponding executables. |
Caution | Fake mode is not implemented for the cleaning actions. For example, if you say runcpu --fake --action=clean the cleaning really happens. |
Clean by hand:
If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):
rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/build
rm -Rf $SPEC/benchspec/C*/*/exe
The above commands not only empty the contents of the run, build, and exe directories; they also delete the directories themselves. That's fine; the tools will re-create them if they are needed again later on.
result directories can be cleaned or renamed. Don't worry about creating a new directory; runcpu will do so automatically. You should be careful to ensure no surprises for any currently-running users. If you move result directories, it is a good idea to also clean temporary directories at the same time.
Example:
cd $SPEC
mv result old-result
rm -Rf tmp/
cd output_root # (If you use an output_root)
rm -Rf tmp/
Windows users: Windows users can achieve similar effects using the rename command to move directories, and the rd command to remove directories.
I have so much disk space, I'll never use all of it:
Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:
SPEC_CPU2017_NO_RUNDIR_DEL
In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.
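For example, in a POSIX shell you might set the variable before starting a run (the config file name is hypothetical, and the value 1 is arbitrary; what matters is that the variable is set):

   export SPEC_CPU2017_NO_RUNDIR_DEL=1
   runcpu --config=myconf --noreportable 519.lbm_r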
Most users of runcpu will want to become familiar with the following options.
This section is organized alphabetically, without regard to upper/lower case, and without regard to the presence or absence of "no" at the start of the switch.
--check_version
Meaning: Check whether an updated version of SPEC CPU2017 is available. For example:

   runcpu --check_version --http_proxy http://webcache.tom.spokewrenchdad.com:8080

or, equivalently, for those who prefer to abbreviate to the shortest possible amount of typing:

   runcpu --ch --http_p http://webcache.tom.spokewrenchdad.com:8080

The command downloads a small file (~15 bytes) from www.spec.org which contains information about the most recent release, and compares that to your release. If your version is out of date, a warning is printed.
--copies=N
Meaning: Use N copies for a SPECrate run.
Note that specifying the number of copies on the command line will override any config file setting of copies. This behavior is new with CPU2017: in the previous suite, there were some circumstances where the config file would win, and some where the command line would win. For CPU2017, it's simple: the command line wins.
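A hypothetical illustration (the config file name is made up): even if myconf.cfg contains copies = 8, the following command runs 64 copies:

   runcpu --config=myconf --noreportable --copies=64 fprate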
Meaning: A "flags file" tells runcpu -- and the reader -- how to interpret tuning options, for
example -O3 or -Ofast.
If you want more than one, separate them with commas, or repeat the --flagsurl
switch.
These are equivalent:
runcpu --flagsurl=$SPEC/compiler.xml,$SPEC/platform.xml
runcpu --flagsurl=$SPEC/compiler.xml --flagsurl=$SPEC/platform.xml
You can use either a file path or an http:// address. If needed, add an --http_proxy (or use the corresponding config file option).
The special value noflags may be used to cause rawformat to remove a stored flags file when re-formatting a previously run result.
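For example (the raw file name is hypothetical), one way this might look when reformatting a result while removing its stored flags file:

   rawformat --flagsurl=noflags CPU2017.077.fpspeed.rsf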
Help, I got an error message about INVALID RUN:
############################################################################
# INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN #
#                                                                          #
# Your run was marked invalid because it has one or more flags in the     #
# "unknown" category.  You might be able to resolve this problem without  #
# re-running your test; see                                               #
#      https://www.spec.org/cpu2017/Docs/runcpu.html#flagsurl              #
# for more information.                                                   #
#                                                                          #
# INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN #
############################################################################
Flags files are required by rule 4.6. If you don't have one, or if your flags file is obsolete, you will see the above error.
To fix it:
Find the sections of the report marked "Unknown".
You can ask your compiler vendor for help, or you can adapt a flags file from other results, or you can fix it yourself.
Once your new flags files are available, make a copy of your rawfile.
Then, insert the flags files, using either of these two equivalent commands:
rawformat --flagsurl=...
runcpu --rawformat --flagsurl=...
Unix:
   cp CPU2017.138.intrate.rsf retry.rsf
   rawformat --flagsurl $SPEC/new.compiler.xml,$SPEC/new.platform.xml retry.rsf
Windows:
   copy CPU2017.138.intrate.rsf retry.rsf
   rawformat --flagsurl %SPEC%\new.compiler.xml,%SPEC%\new.platform.xml retry.rsf
--loose (synonym: --noreportable)
Meaning: Do not produce a reportable result. For example, if your config file requests a reportable run and you enter:

   [/usr/mwong/cpu2017]$ runcpu --config golden --iterations 1 xalancbmk_r

the SPEC tools will inform you that you cannot change the number of iterations on a reportable run. But either of the following commands will override the config file and just run 523.xalancbmk_r once:
[/usr/mwong/cpu2017]$ runcpu --config golden --iterations 1 --loose xalancbmk_r
[/usr/mwong/cpu2017]$ runcpu --config golden --iterations 1 --noreportable xalancbmk_r
--output_format format[,format...]
Meaning: Generate one or more of the report formats listed below.

Name (synonyms) | Meaning
all | Implies all of the following except screen, check, and mail.
config (cfg, cfgfile, configfile, conffile) | The config file used for this run, written as a numbered file in the result directory, for example $SPEC/result/CPU2017.030.fprate.refrate.cfg.
check (subcheck, reportcheck, reportable, reportablecheck, chk, sub, subtest, test) | Reportable syntax check (automatically enabled for reportable runs).
csv (spreadsheet) | Comma-separated values. If you populate spreadsheets from your runs, you probably should not cut and paste data from text files; you will get more accurate data by using --output_format csv. The csv report includes all runs, more decimal places, system information, and even the compiler flags.
default | Implies HTML and text.
flags (flag) | Flag report. Will also be produced when formats that use it are requested (PDF, HTML).
html (xhtml, www, web) | Web page.
mail (mailto, email) | All generated reports will be sent to an address specified in the config file.
pdf (adobe) | Portable Document Format. This format is the design center for SPEC CPU2017 reporting. Other formats contain less information: text lacks graphs, PostScript lacks hyperlinks, and HTML is less structured. (PDF does not appear as part of "default" only because some systems may lack the ability to read it.)
postscript (ps, printer, print) | PostScript.
raw (rsf) | The unformatted raw results, written to a numbered file in the result directory that ends with .rsf (e.g. /spec/cpu2017/rc4/result/CPU2017.042.fpspeed.rsf). Your raw result files are your most important files, because the other formats are generated from them.
screen (scr, disp, display, terminal, term) | ASCII text output to stdout.
text (txt, ASCII, asc) | Plain ASCII text file.
--rawformat
Meaning: Do not attempt to do a run; instead, just generate reports from an existing rawfile.
Output will always include the results of format check unless you add nocheck to your list of output_formats.
Using this option will cause any specified --actions to be ignored. The runcpu program is actually exited and rawformat is executed instead. These commands do the same thing:
runcpu --rawformat something
rawformat something
The rawformat utility or the --rawformat switch can be useful if (for example) you are just doing ASCII output during most of your runs, but now you would like to create additional reports for one or more especially interesting runs. To create the HTML and PostScript files for experiment number 77, you could say either of these:
runcpu --rawformat --output_format html,ps $SPEC/result/CPU2017.077.fpspeed.rsf
rawformat --output_format html,ps $SPEC/result/CPU2017.077.fpspeed.rsf
For more information about rawformat, please see utility.html.
--threads=N
Meaning: When the benchmarks are run, set the environment variable OMP_NUM_THREADS=N.
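For example (the config file name is hypothetical), to run the SPECspeed floating point suite with 16 OpenMP threads:

   runcpu --config=myconf --threads=16 fpspeed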
Notes
This section is organized alphabetically, without regard to upper/lower case, and without regard to the presence or absence of "no" at the start of the switch.
--nobuild
Meaning: Do not build binaries, even if they don't exist or checksums don't match.
The --nobuild feature can be very handy if, for example, you have a script with multiple invocations of runcpu, and you would like to ensure that the build is only attempted once. (Perhaps your thought process might be, "If it fails the first time, fine, just forget about it until I come in Monday and look things over.") By adding --nobuild --ignore_errors to all runs after the first one, no attempt will be made to build the failed benchmarks after the first attempt.
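A minimal sketch of such a script (the config file name and suite choice are hypothetical):

   #!/bin/sh
   # First invocation: build (if needed) and run; keep going past failures.
   runcpu --config=myconf --noreportable --ignore_errors intrate
   # Later invocations: never retry the builds, just run whatever was built.
   runcpu --config=myconf --noreportable --ignore_errors --nobuild intrate
   runcpu --config=myconf --noreportable --ignore_errors --nobuild intrate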
The --nobuild feature also comes in handy when testing whether proposed config file options would potentially force an automatic rebuild.
--define SYMBOL[=VALUE]
Meaning: Define a config file preprocessor macro named SYMBOL, and optionally give it the value VALUE. If no value is specified, the macro is defined with no value. SYMBOL may not contain equals signs ("=") or colons (":"). This option may be used multiple times.
Many of the Example config files in your config/ directory have sections similar to this:
%ifndef %{build_ncpus}
%   define build_ncpus 8
%endif
 . . .
makeflags = --jobs=%{build_ncpus}
If you have a large server and want compiles to complete more quickly, you could say runcpu --define build_ncpus=99 and specmake will create up to 99 compile jobs at a time.
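For example (config file name hypothetical), to build the integer SPECrate benchmarks with up to 99 parallel compile jobs:

   runcpu --config=myconf --define build_ncpus=99 --action=build intrate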
--[no]feedback
Meaning: Enable or disable FDO options in the config file. Normally, when Feedback-Directed Optimization (FDO) options are set in the config file, multiple-pass compilation is done, along with training runs. Using --nofeedback will cause the config file FDO settings to be ignored, and a single-pass compilation will occur. Explicitly specifying --feedback will have an effect only if there are appropriate FDO options in the configuration file.
New with CPU2017: The command line wins unconditionally over the config file.
See FDO Example 7 in SPEC CPU2017 Config Files.
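For instance (config file name hypothetical), to build a single benchmark while ignoring any FDO settings in the config file:

   runcpu --config=myconf --nofeedback --action=build 500.perlbench_r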
--http_proxy=url
Meaning: In some cases, such as when doing version checks and loading flag description files, runcpu will attempt to fetch a file using http. If your web browser needs a proxy server in order to access the outside world, then runcpu will probably want to use the same proxy server. The proxy server can be set on the runcpu command line, in the config file, or via the http_proxy environment variable.
For example, a failure of this form:
$ runcpu --rawformat --output_format txt \
      --flagsurl http://portlandcyclers.net/evan.xml CPU2017.007.fprate.rsf
...
Retrieving flags file (http://portlandcyclers.net/evan.xml)...
ERROR: Specified flags URL (http://portlandcyclers.net/evan.xml) could not be retrieved.
       The error returned was:
       500 Can't connect to portlandcyclers.net:80 (Bad hostname 'portlandcyclers.net')
improves when a proxy is provided:
$ runcpu --rawformat --output_format txt \
      --flagsurl http://portlandcyclers.net/evan.xml \
      --http_proxy=http://webcache.tom.spokewrenchdad.com:8080 CPU2017.007.fprate.rsf
Note that this setting will override the value of the http_proxy environment variable, as well as any setting in the config file.
By default, no proxy is used. The special value none may be used to unset any proxies set in the environment or via config file.
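For example, to do a version check with no proxy, even if http_proxy is set in the environment or the config file, one might enter:

   runcpu --http_proxy=none --check_version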
--make_no_clobber
Meaning: Do not delete existing object files before attempting to build. This option should only be used when troubleshooting a problematic compile. It cannot be used for a reportable run. Rather than using this option, it would probably be easier to just go to the build directory and use specmake.
--output_root=dir
Meaning: If set to a non-empty value, all output files will be rooted under the named directory, instead of under $SPEC (or %SPEC%). If the directory is not an absolute path (one that begins with "/" on Unix, or a device name on Windows), the path will be created under $SPEC. This option can be useful for sharing an installation. It can also be useful if you want to optimize your I/O, as discussed in the corresponding SPEC CPU2017 Config Files section on output_root.
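For example (the directory and config file name below are hypothetical):

   runcpu --config=myconf --output_root=/scratch/$USER/cpu2017 --noreportable 519.lbm_r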
--[no]power
Meaning: Enable or disable the suite's optional power measurement mode.
--size=size
Meaning: Selects the size of input data to run: test, train, or ref.
The reference workload ("ref") is the only size whose time appears in reports.
You might choose to use runcpu --size=test while debugging a new set of compilation options.
Reportable runs automatically invoke all three sizes: they ensure that your binaries can produce correct results with the test and train workloads and then run the ref workload either 2 or 3 times for the actual measurements.
Caution: When requesting workloads, it is best to stick with the above three: test, train, and ref. Other options (or synonyms) may be useful to benchmark developers or with other suites that use this toolset; they are not documented here because it is not possible to generate SPEC CPU2017 metrics using workloads other than the ones that correspond to these three.
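For example (config file name hypothetical), to try new compilation options quickly against the smallest workload:

   runcpu --config=myconf --noreportable --size=test 519.lbm_r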
--test
Meaning: Run the Perl test suite to verify correct operation of specperl, the SPEC CPU pre-compiled version of Perl. When this option is used, runcpu will not perform any other actions. specperl is added when you run install.sh or install.bat. If something goes wrong while installing and you want support, the output of runcpu --test may be needed.
--use_submit_for_compare
Meaning: Use submit commands during the comparison phase of the run, if submit was used for the measurement phase of the run.
--use_submit_for_speed
Meaning: Use submit commands for SPECspeed runs. The submit facility is by default only used for SPECrate runs.
--version
Meaning: Print detailed version information, including versions of:
specdiff
specinvoke
specmake
specperl
specpp
specrxp
specxz
When this option is used, runcpu will not perform any other actions. If something goes wrong and you want support, the output of runcpu --version may be needed.
Rate and Speed
The CPU2006 features --rate [link goes to CPU2006] and --speed [link goes to CPU2006] are not needed in SPEC CPU2017 because of a change in how benchmarks are defined. In SPEC CPU2006, a given benchmark had a single source code version and a single workload version; the workload could be run in two ways, either single-copy (SPECspeed) or multi-copy (SPECrate). For SPEC CPU2017, the SPECrate and SPECspeed versions of a benchmark:
For example:
For more information, see the System Requirements discussion of Using Multiple CPUs.
Parallel setup
The SPEC CPU2006 features --parallel_setup [link goes to CPU2006], --parallel_setup_prefork [link goes to CPU2006], and --parallel_setup_type [link goes to CPU2006] are not needed in SPEC CPU2017 because of a change in how benchmarks are set up.
For SPEC CPU2006, every SPECrate copy was set up with its own unique copy of the input data. For large SPECrate runs, large amounts of space were needed, and a lot of time (in some cases, hours).
For SPEC CPU2017, file system hard links are used to avoid copying such large amounts of data, and the features for parallel setup are no longer needed.
For example, on one particular system, the hard-linked setup took far less space than the full copies that CPU2006 required. (One particular system; your space may vary.)
The SPEC CPU2006 feature --machine [link goes to CPU2006] was removed because it was rarely used; the additional complexity and confusion that it caused was deemed not worthwhile.
The CPU2006 feature --maxcompares [link goes to CPU2006] was removed due to complexity considerations when implementing the new parallel setup methods.
The SPEC CPU2006 features --make_bundle [link goes to CPU2006], --unpack_bundle [link goes to CPU2006], and --use_bundle [link goes to CPU2006] have not been tested in the CPU2017 environment. It is not known whether anyone uses these features, and they were deemed not a priority for V1. It is possible that you might be able to get them to work by following the CPU2006 instructions linked above, but no promises are made.
(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").
-a | Same as --action |
--action action | Do: build|buildsetup|clean|clobber|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate |
--basepeak | Copy base results to peak (use with --rawformat) |
--nobuild | Do not attempt to build binaries |
-c | Same as --config |
-C | Same as --copies |
--check_version | Check whether an updated version of CPU2017 is available |
--comment "text" | Add a comment to the log and the stored configfile. |
--config file | Set config file for runcpu to use |
--copies | Set the number of copies for a SPECrate run |
-D | Same as --rebuild |
-d | Same as --deletework |
--debug | Same as --verbose |
--define SYMBOL[=VALUE] | Define a config preprocessor macro |
--delay secs | Add delay before and after benchmark invocation |
--deletework | Force work directories to be rebuilt |
--dryrun | Same as --fake |
--dry-run | Same as --fake |
--expid=dir | Experiment id, a subdirectory to use for results/runs/exe |
-F | Same as --flagsurl |
--fake | Show what commands would be executed. |
--fakereport | Generate a report without compiling codes or doing a run. |
--fakereportable | Generate a fake report as if "--reportable" were set. |
--[no]feedback | Control whether builds use feedback directed optimization |
--flagsurl url | Location (url or filespec) where to find your flags file |
--graph_auto | Let the tools pick minimum and maximum for the graph |
--graph_min N | Set the minimum for the graph |
--graph_max N | Set the maximum for the graph |
-h | Same as --help |
--help | Print usage message |
--http_proxy | Specify the proxy for internet access |
--http_timeout | Timeout when attempting http access |
-I | Same as --ignore_errors |
-i | Same as --size |
--ignore_errors | Continue with benchmark runs even if some fail |
--ignoreerror | Same as --ignore_errors |
--info_wrap_column N | Set wrap width for non-notes informational items |
--infowrap | Same as --info_wrap_column |
--input | Same as --size |
--iterations N | Run each benchmark N times |
--keeptmp | Keep temporary files |
-L | Same as --label |
-l | Same as --loose |
--label label | Set the label for executables, build directories, and run directories |
--loose | Do not produce a reportable result |
--noloose | Same as --reportable |
-M | Same as --make_no_clobber |
--make_no_clobber | Do not delete existing object files before building. |
--mockup | Same as --fakereportable |
-n | Same as --iterations |
-N | Same as --nobuild |
--notes_wrap_column N | Set wrap width for notes lines |
--noteswrap | Same as --notes_wrap_column |
-o | Same as --output_format |
--output_format format[,format...] | Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text |
--output_root=dir | Write all files here instead of under $SPEC |
--parallel_test | Number of test/train workloads to run in parallel |
--[no]power | Control power measurement during run |
--preenv | Allow environment settings in config file to be applied |
-R | Same as --rawformat |
--rawformat | Format raw file |
--rebuild | Force a rebuild of binaries |
--reportable | Produce a reportable result |
--noreportable | Same as --loose |
--reportonly | Same as --fakereport |
--[no]review | Format results for review |
-s | Same as --reportable |
-S SYMBOL[=VALUE] | Same as --define |
-S SYMBOL:VALUE | Same as --define |
--[no]setprocgroup | [Don't] try to create all processes in one group. |
--size size[,size...] | Select data set(s): test|train|ref |
--strict | Same as --reportable |
--nostrict | Same as --loose |
-T | Same as --tune |
--[no]table | Do [not] include a detailed table of results |
--threads=N | Set number of OpenMP threads for a SPECspeed run |
--test | Run various perl validation tests on specperl |
--train_with | Change the training workload |
--tune | Set the tuning levels to one of: base|peak|all |
--tuning | Same as --tune |
--undef SYMBOL | Remove any definition of this config preprocessor macro |
-U | Same as --username |
--update | Check www.spec.org for updates to benchmark and example flag files, and config files |
--username | Name of user to tag as owner for run directories |
--use_submit_for_compare | If submit was used for the run, use it for comparisons too. |
--use_submit_for_speed | Use submit commands for SPECspeed (default is only for SPECrate). |
-v | Same as --verbose |
--verbose N | Set verbosity level for messages to N |
-V | Same as --version |
--version | Output lots of version information |
-? | Same as --help |
Using SPEC CPU®2017: The 'runcpu' Command: Copyright © 2017 Standard Performance Evaluation Corporation (SPEC)