Scalasca  (Scalasca 2.6, revision 748ac9e9)
Scalable Performance Analysis of Large-Scale Applications
Summary measurement & examination

The filter file prepared in Section Optimizing the measurement configuration can now be applied to produce a new summary measurement, ideally with reduced measurement overhead and therefore improved accuracy. This is accomplished by passing the filter file name to scalasca -analyze via the -f option.
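
As a reminder, a Score-P filter file of this kind simply lists the source-code regions to be excluded from (or included in) measurement. The sketch below shows the general structure; the region names are only illustrative examples of the kind of small, frequently executed computational routines identified by the scoring step, and the actual names must be taken from your own scoring output.

  # Illustrative filter file (e.g. npb-bt.filt); region names are examples only
  SCOREP_REGION_NAMES_BEGIN
    EXCLUDE
      binvcrhs*
      matmul_sub*
      matvec_sub*
      binvrhs*
      exact_solution*
  SCOREP_REGION_NAMES_END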

Attention
Before re-analyzing the application, the unfiltered summary experiment should be renamed (or removed): by default, scalasca -analyze will not overwrite an existing experiment directory and will abort immediately instead.
  $ mv scorep_bt_144_sum scorep_bt_144_sum.nofilt
  $ scalasca -analyze -f npb-bt.filt mpiexec -n 144 ./bt.D.144
  S=C=A=N: Scalasca 2.5 runtime summarization
  S=C=A=N: ./scorep_bt_144_sum experiment archive
  S=C=A=N: Mon Mar 18 13:52:32 2019: Collect start
  mpiexec -n 144 ./bt.D.144


   NAS Parallel Benchmarks 3.3 -- BT Benchmark

   No input file inputbt.data. Using compiled defaults
   Size:  408x 408x 408
   Iterations:  250    dt:   0.0000200
   Number of active processes:   144

   Time step    1
   Time step   20
   Time step   40
   Time step   60
   Time step   80
   Time step  100
   Time step  120
   Time step  140
   Time step  160
   Time step  180
   Time step  200
   Time step  220
   Time step  240
   Time step  250
   Verification being performed for class D
   accuracy setting for epsilon =  0.1000000000000E-07
   Comparison of RMS-norms of residual
             1 0.2533188551738E+05 0.2533188551738E+05 0.1499315900507E-12
             2 0.2346393716980E+04 0.2346393716980E+04 0.8546885387975E-13
             3 0.6294554366904E+04 0.6294554366904E+04 0.2745293523008E-14
             4 0.5352565376030E+04 0.5352565376030E+04 0.8376934357159E-13
             5 0.3905864038618E+05 0.3905864038618E+05 0.6650300273080E-13
   Comparison of RMS-norms of solution error
             1 0.3100009377557E+03 0.3100009377557E+03 0.1373406191445E-12
             2 0.2424086324913E+02 0.2424086324913E+02 0.1600422929406E-12
             3 0.7782212022645E+02 0.7782212022645E+02 0.4090394153928E-13
             4 0.6835623860116E+02 0.6835623860116E+02 0.3596566920650E-13
             5 0.6065737200368E+03 0.6065737200368E+03 0.2605201960010E-13
   Verification Successful


   BT Benchmark Completed.
   Class           =                        D
   Size            =            408x 408x 408
   Iterations      =                      250
   Time in seconds =                   228.02
   Total processes =                      144
   Compiled procs  =                      144
   Mop/s total     =                255839.13
   Mop/s/process   =                  1776.66
   Operation type  =           floating point
   Verification    =               SUCCESSFUL
   Version         =                    3.3.1
   Compile date    =              18 Mar 2019

   Compile options:
      MPIF77       = scorep mpifort
      FLINK        = $(MPIF77)
      FMPI_LIB     = (none)
      FMPI_INC     = (none)
      FFLAGS       = -O2
      FLINKFLAGS   = -O2
      RAND         = (none)


   Please send feedbacks and/or the results of this run to:

   NPB Development Team
   Internet: npb@nas.nasa.gov


  S=C=A=N: Mon Mar 18 13:56:24 2019: Collect done (status=0) 232s
  S=C=A=N: ./scorep_bt_144_sum complete.

  $ ls scorep_bt_144_sum
  MANIFEST.md    scorep.cfg     scorep.log
  profile.cubex  scorep.filter  summary.cubex

This new measurement produced an experiment directory containing one additional file compared to the initial run: a copy of the measurement filter in scorep.filter. Notice that applying the runtime filter reduced the measurement overhead significantly, to only about 5.5% (228.02 seconds vs. 216.00 seconds for the reference run). This measurement with the optimized configuration should therefore represent the real runtime behavior of the BT application quite accurately, and can now be post-processed and interactively explored using the Cube result browser. These two steps can be conveniently initiated using the scalasca -examine command:

  $ scalasca -examine scorep_bt_144_sum
  INFO: Post-processing runtime summarization report (profile.cubex)...
  INFO: Displaying ./scorep_bt_144_sum/summary.cubex...
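
If no GUI is available, or only a textual breakdown is of interest, the post-processing step can also be run on its own: the -s option of scalasca -examine skips launching the Cube browser and instead generates a textual score report (via scorep-score) from the post-processed summary. A sketch of the invocation, without reproducing its output here:

  $ scalasca -examine -s scorep_bt_144_sum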

Examination of the summary result (see Figure SummaryExperiment for a screenshot and Section Using the Cube browser for a brief summary of how to use the Cube browser) shows that 96.5% of the overall CPU allocation time is spent in computation, while 3% is spent in MPI point-to-point communication functions and the remainder is scattered across other activities. The point-to-point time is almost entirely spent in MPI_Wait calls inside the three solver functions x_solve, y_solve and z_solve, as well as in an MPI_Waitall in the boundary exchange routine copy_faces. Computation time is likewise mostly spent in the solver routines and the boundary exchange, but there inside the various solve_cell, backsubstitute and compute_rhs functions. While the aggregated time spent in the computational routines appears relatively well balanced across the different MPI ranks (as can be determined using the box plot view in the right pane), there is considerable variation in the MPI_Wait / MPI_Waitall times.

Figure "SummaryExperiment" (start_summary.png): Screenshot of a summary experiment result in the Cube report browser.

Using the Cube browser

The following paragraphs provide a very brief introduction to the usage of the Cube analysis report browser. To make effective use of the GUI, however, please also consult the CubeGUI User Guide [4].
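
Once the experiment has been post-processed by scalasca -examine, the resulting report can also be opened in the GUI directly, assuming the cube launcher installed with CubeGUI is in the PATH:

  $ cube scorep_bt_144_sum/summary.cubex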

Cube is a generic user interface for presenting and browsing performance and debugging information from parallel applications. The underlying data model is independent of the particular performance properties to be displayed. The Cube main window (see Figure SummaryExperiment) consists of three panels containing tree displays or alternate graphical views of analysis reports. The left panel shows performance properties of the execution, such as time or the number of visits. The middle panel shows the call tree or a flat profile of the application. The right panel either shows the system hierarchy consisting of, for example, machines, compute nodes, processes, and threads, or various graphical representations such as a topological view of the application's processes and threads (if available), or a box/violin plot view showing the statistical distribution of values across the system. All tree nodes are labeled with a metric value and a color-coded box which helps in identifying hotspots. The metric value color is determined from the proportion of the total (root) value or some other specified reference value, using the color scale at the bottom of the window.

A click on a performance property or a call path selects the corresponding node. As a result, the metric value held by this node (such as execution time) is further broken down into its constituents in the panels to the right of the selected node. For example, after selecting a performance property, the middle panel shows its distribution across the call tree. After selecting a call path (i.e., a node in the call tree), the system tree shows the distribution of the performance property in that call path across the system locations. A click on the icon to the left of a node in each tree expands or collapses that node. By expanding or collapsing nodes in each of the three trees, the analysis results can be viewed at different levels of granularity: a collapsed node shows its inclusive value (the node itself plus all of its children), whereas an expanded node shows only its exclusive value, with the children's contributions listed separately.

All tree displays support a context menu, which is accessible using the right mouse button and provides further options. For example, to obtain the exact definition of a performance property, select "Documentation" in the context menu associated with each performance property. A brief description can also be obtained from the menu option "Info".


