HPC/Benchmarks/Generation 1 vs 2
Introduction
Earlier this year, we received 200 additional nodes with two E5540 processors each. The processors have 4 cores each and support hyperthreading, a feature that allows 2 threads per core. This benchmark investigates the benefit of hyperthreading (HT) and suggests optimal values for the nodes and ppn (processors per node) parameters in PBS. The choice is not trivial and involves trade-offs between metrics such as execution time and compute-hours charged.
Test description
This test runs /opt/soft/vasp-4.6.35-mkl-8/bin/vasp.
- Full input data (Credit: D. Shin, Northwestern University).
Explanation of data columns
Parameters
- napps
- The number of applications run in parallel within the job. Typically, a user runs only one (MPI) application at a time. This benchmark allows more than one application to run in parallel, equally subdividing the available cores on each participating node via the OpenMPI --npernode n flag (see the job-script sketch after this list). Motivation: as a hypothesis, I considered it possible that running several (related or unrelated) applications on a processor, but not within the same MPI job, could be beneficial; different workloads would reduce the chance of congestion in the processors' pipelining architecture.
- cores/app
- The number of cores that a single application workload is executed on. Typically, this value is used in studies of the parallel scaling (parallelization efficiency) of an application.
- gen
- The node hardware generation
- gen1 = Intel Xeon X5355, 2.66 GHz, 8 cores per node, 16 GB RAM per node (2 GB/core)
- gen2 = Intel Xeon E5540, 2.53 GHz, 8 cores per node, 24 GB RAM per node (3 GB/core), hyperthreading enabled in hardware (BIOS)
- nodes
- Number of nodes requested from the queuing system.
- ppn
- Processors per node requested from the queuing system.
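To make these parameters concrete, the sketch below shows how such a job might be submitted. It is illustrative only: the working directories, output file names, and the exact launch logic are assumptions, not the benchmark's actual script.

#PBS -l nodes=2:ppn=8
#PBS -l naccesspolicy=singlejob
cd $PBS_O_WORKDIR
# napps=2: two independent VASP runs, each limited to 4 of the 8 cores on every node,
# so cores/app = 2 nodes x 4 = 8, while charge_cores = nodes x ppn = 16 (cf. run=26 / run=27)
(cd app1 && mpirun --npernode 4 /opt/soft/vasp-4.6.35-mkl-8/bin/vasp > vasp.out) &
(cd app2 && mpirun --npernode 4 /opt/soft/vasp-4.6.35-mkl-8/bin/vasp > vasp.out) &
wait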
Objective variables
- tmax → min
- The wallclock runtime of a job in seconds. When multiple applications are run, the longest runtime is used. This is the time metric used for calculating the job charge.
- charge → min
- The charge for the job, expressed in core-hours per application. Since the present test may use more than one application in a job, the value is scaled by the number of applications run within the job. This reflects that a user is typically interested in getting the work done for the minimal charge.
- Note: In the near future, a charge factor will be introduced for gen2 nodes which will scale actual node-hours to effective node-hours, in a manner that levels the performance difference between the node generations. At present, no charge factor is applied.
- combined objective tmax × charge → min
- An empirical objective to minimize both time and charge. Since the charge is proportional to time, the time effectively enters quadratically.
- perf → max
- A measure of the performance per core. It should be proportional to the FLoating point Operations Per Second (FLOPS) achieved. Since the actual number of operations for the chosen workload is unknown, the value is calculated using an arbitrary constant to produce convenient values:
perf := 100000 * napps / tmax / charge_cores
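For illustration, the values tabulated for run=01 in the gen1 table below (napps=1, tmax=1138.83 s, charge_cores=8) can be reproduced as follows. The charge formula shown here is inferred from the tabulated values rather than stated explicitly, and the combined objective appears to carry an additional factor of 1/100:

charge = charge_cores * tmax / 3600 / napps = 8 * 1138.83 / 3600 / 1 ≈ 2.53 core-h/app
perf = 100000 * napps / tmax / charge_cores = 100000 * 1 / 1138.83 / 8 ≈ 11.0
combined objective ≈ tmax * charge / 100 = 1138.83 * 2.53 / 100 ≈ 28.8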
Miscellaneous
- run
- A sequence number used to identify the run and its various files and directories.
- charge_cores
- The number of cores requested from the queuing system, i.e., those blocked from use by other users. The tests within the benchmark ran with the qsub option (lowercase "ell") -l naccesspolicy=singlejob.
- HT
- Hyperthreading is in effect for this run.
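As a usage note, the resources described above could be requested directly on the qsub command line as shown here; the script name is a placeholder:

qsub -l nodes=1:ppn=8 -l naccesspolicy=singlejob vasp_job.pbs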
Results
Data
Files
- Raw data, grep-able
- Raw data, tab-separated CSV
- Extensive analysis (PDF), with comparisons made by tmax, charge, and both. The last page contains a direct comparison of the performance of gen1 vs. gen2 nodes.
Gen1 nodes
cores/app | run | nodes | ppn | napps | tmax (s) | charge_cores | charge (core-h/app) | perf (ops/s/core) | combined objective | HT
---|---|---|---|---|---|---|---|---|---|---
4 | 1 | 1 | 4 | 1 | 1138.83 | 8 | 2.53 | 11.0 | 28.82 | |
4 | 3 | 1 | 8 | 2 | 2488.86 | 8 | 2.77 | 10.0 | 68.83 | |
6 | 11 | 1 | 6 | 1 | 1566.07 | 8 | 3.48 | 8.0 | 54.50 | |
6 | 13 | 2 | 3 | 1 | 816.61 | 16 | 3.63 | 7.7 | 29.64 | |
6 | 16 | 2 | 6 | 2 | 1401.48 | 16 | 3.11 | 8.9 | 43.65 | |
8 | 21 | 1 | 8 | 1 | 1488.72 | 8 | 3.31 | 8.4 | 49.25 | |
8 | 23 | 2 | 4 | 1 | 791.37 | 16 | 3.52 | 7.9 | 27.83 | |
8 | 26 | 2 | 8 | 2 | 1494.81 | 16 | 3.32 | 8.4 | 49.65 | |
12 | 31 | 2 | 6 | 1 | 838.08 | 16 | 3.72 | 7.5 | 31.22 | |
12 | 33 | 3 | 4 | 1 | 544.74 | 24 | 3.63 | 7.6 | 19.78 | |
12 | 35 | 4 | 3 | 1 | 503.05 | 32 | 4.47 | 6.2 | 22.49 | |
12 | 41 | 3 | 8 | 2 | 1117.20 | 24 | 3.72 | 7.5 | 41.60 | |
12 | 43 | 4 | 6 | 2 | 838.51 | 32 | 3.73 | 7.5 | 31.25 | |
16 | 51 | 2 | 8 | 1 | 998.73 | 16 | 4.44 | 6.3 | 44.33 | |
16 | 53 | 4 | 4 | 1 | 522.50 | 32 | 4.64 | 6.0 | 24.27 | |
16 | 56 | 4 | 8 | 2 | 1626.13 | 32 | 7.23 | 3.8 | 117.52 |
Gen2 nodes
cores/app | run | nodes | ppn | napps | tmax (s) | charge_cores | charge (core-h/app) | perf (ops/s/core) | combined objective | HT
---|---|---|---|---|---|---|---|---|---|---
4 | 2 | 1 | 4 | 1 | 516.69 | 8 | 1.15 | 24.2 | 5.93 | |
4 | 4 | 1 | 8 | 2 | 767.26 | 8 | 0.85 | 32.6 | 6.54 | |
6 | 12 | 1 | 6 | 1 | 470.10 | 8 | 1.04 | 26.6 | 4.91 | |
6 | 14 | 2 | 3 | 1 | 447.01 | 16 | 1.99 | 14.0 | 8.88 | |
6 | 15 | 1 | 12 | 2 | 867.12 | 8 | 0.96 | 28.8 | 8.35 | HT |
6 | 17 | 2 | 6 | 2 | 587.55 | 16 | 1.31 | 21.3 | 7.67 | |
8 | 22 | 1 | 8 | 1 | 472.16 | 8 | 1.05 | 26.5 | 4.95 | |
8 | 24 | 2 | 4 | 1 | 426.55 | 16 | 1.90 | 14.7 | 8.09 | |
8 | 25 | 1 | 16 | 2 | 927.22 | 8 | 1.03 | 27.0 | 9.55 | HT |
8 | 27 | 2 | 8 | 2 | 596.30 | 16 | 1.33 | 21.0 | 7.90 | |
12 | 30 | 1 | 12 | 1 | 565.44 | 8 | 1.26 | 22.1 | 7.10 | HT |
12 | 32 | 2 | 6 | 1 | 330.88 | 16 | 1.47 | 18.9 | 4.87 | |
12 | 34 | 3 | 4 | 1 | 270.06 | 24 | 1.80 | 15.4 | 4.86 | |
12 | 36 | 4 | 3 | 1 | 267.81 | 32 | 2.38 | 11.7 | 6.38 | |
12 | 40 | 2 | 12 | 2 | 582.08 | 16 | 1.29 | 21.5 | 7.53 | HT |
12 | 42 | 3 | 8 | 2 | 383.44 | 24 | 1.28 | 21.7 | 4.90 | |
12 | 44 | 4 | 6 | 2 | 321.16 | 32 | 1.43 | 19.5 | 4.58 | |
16 | 50 | 1 | 16 | 1 | 592.26 | 8 | 1.32 | 21.1 | 7.79 | HT |
16 | 52 | 2 | 8 | 1 | 329.10 | 16 | 1.46 | 19.0 | 4.81 | |
16 | 54 | 4 | 4 | 1 | 237.59 | 32 | 2.11 | 13.2 | 5.02 | |
16 | 55 | 2 | 16 | 2 | 601.78 | 16 | 1.34 | 20.8 | 8.05 | HT |
16 | 57 | 4 | 8 | 2 | 327.90 | 32 | 1.46 | 19.1 | 4.78 |
Observations
- For the same core configuration in a workload, gen2 nodes are 2...3 times faster than gen1 nodes (see the last page of the analysis PDF).
- 4-core runs give the highest numerical throughput in each node type (run=01 to 04).
- gen2 nodes are fine for VASP with nodes=1:ppn=8; gen1 nodes are not (run=22 vs. 21).
Hyperthreading and node-sharing
- Naïve application of hyperthreading (HT) is detrimental: it leads to increased runtimes and thus increased charges.
- run=50 (ppn=16), charge 26% higher than run=22 (ppn=8)
- run=30 (ppn=12), charge 20% higher than run=22 (ppn=8)
- However, when non-MPI jobs or two unsynced MPI jobs are running (run=15, 25, 17), HT yields 10...20% lower charges. This saving is modest and makes HT largely unattractive on its own. Running unsynced MPI jobs (i.e., sharing nodes) is mildly beneficial in all cases, whether hyperthreading is used or not (runs=04, 15, 25; see pg. 4 in the PDF). The best case gives a charge savings of -24% (run=15 vs. 30, at nodes=1:ppn=12); this holds even on gen1 nodes (e.g., run=26 vs. 51, at nodes=2:ppn=8, charge savings = -25%). A worked example of these percentages follows this list.
- Running two apps in a single job is mostly not worth the effort of managing them.
- Sharing nodes confers a mild benefit and should be rewarded with a charge discount.
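As a check on the quoted savings, and assuming the percentages are taken relative to the charge of the corresponding non-shared run, the two cases work out as:

run=15 vs. 30 (gen2, nodes=1:ppn=12): (0.96 - 1.26) / 1.26 ≈ -24%
run=26 vs. 51 (gen1, nodes=2:ppn=8): (3.32 - 4.44) / 4.44 ≈ -25%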
Recommendations
For the given workload, the following settings are recommended for optimal performance with respect to each objective:
Node type | time → min | charge → min | time × charge → min
---|---|---|---
gen1 | nodes=4:ppn=3 (run=35, tmax=503.05 s, charge=4.47) | nodes=1:ppn=4 (run=01, tmax=1138.83 s, charge=2.53) | nodes=3:ppn=4 (run=33, tmax=544.74 s, charge=3.63)
gen2 | nodes=4:ppn=4 (run=54, tmax=237.59 s, charge=2.11) | nodes=1:ppn=8 (run=22, tmax=472.16 s, charge=1.05) | nodes=2:ppn=8 (run=52, tmax=329.10 s, charge=1.46)
The last column is an empirically formulated objective to minimize both time and charge. Compared to the minimum-charge runs (the charge → min column; runs 01 and 22, respectively), adding nodes reduces the runtime and only slightly increases the charge. The fastest runs (the time → min column) use the same number of cores for the calculation as the combined-objective runs, but since the number of nodes is higher, so is charge_cores, and thus the job charge is also higher.
--stern