Getting Started

To quickly introduce the user to the Score-P system, we explain how to build and install the tool and then go through a simple example in full detail.

As mentioned above, the three core steps of a typical work cycle in the investigation of the behavior of a software package can be described as follows:

  1. Instrument the target application so that measurement calls are inserted at the relevant points.
  2. Perform one or more measurement runs with the instrumented executable, recording profile and/or trace data.
  3. Examine and analyze the recorded data to identify parts of the code with optimization potential.

After building and installing the tool, we shall go through these three steps one after the other in the next sections. This will be followed by a full workflow example. For a detailed presentation of the available features, see Section 'Application Instrumentation' for the instrumentation step and Section 'Application Measurement' for the measurement step.

Score-P Quick Installation

The Score-P performance analysis tool uses the GNU Autotools (Autoconf, Automake, Libtool and M4) build system. The use of Autotools allows Score-P to be built on many different systems with varying combinations of compilers, libraries and MPI implementations.

Autotools-based projects are built as follows:

  1. The available compilers and tools are detected from the environment by the configure script.
  2. Makefiles are generated based on the detected compilers and tools.
  3. The generated Makefile project is then built and installed.

Score-P will have features enabled or disabled based on the detection made by the Autotools-generated configure script. The following two subsections cover the mandatory prerequisites as well as the optional features that are enabled depending on what is available on the configured platform.

Prerequisites

To build Score-P, C, C++ and Fortran compilers and related tools are required. These can be available as modules (typically on supercomputer environments) or as packages (on most Linux or BSD distributions).

For Debian-based Linux systems using the APT package manager, the following command (as root) is sufficient to build Score-P with minimal features enabled:

apt-get install gcc g++ gfortran mpich2

On Red Hat and derivative Linux systems using the YUM package manager, the equivalent command is:

yum install gcc gcc-c++ gcc-gfortran mpich2

For users of the SuperMUC, it is recommended to load the following modules:

module load ccomp/intel/12.1 fortran/intel/12.1 \
mpi.ibm/5.2_PMR-fixes papi/4.9

General Autotools Build Options

System administrators can build Score-P with the familiar:

mkdir _build
cd _build
../configure && make && make install

The previous sequence of commands will detect compilers, libraries and headers, and then build and install Score-P in the following system directories:

/opt/scorep/bin
/opt/scorep/lib
/opt/scorep/include
/opt/scorep/share

Users who are not administrators on the target machine may need to install the tool in a different location (due to permissions). In that case, the --prefix flag should be used to specify the target directory:

../configure --prefix=<installation directory>

For example, to install into the install/scorep directory in the user's home folder:

../configure --prefix=$HOME/install/scorep

In this case, the user's PATH variable needs to be updated to include the bin directory of Score-P, and the appropriate library and include folders specified (with -L and -I) when instrumenting and building applications.
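
For example, assuming the $HOME/install/scorep prefix used above (a minimal sketch for a POSIX shell; the exact flags depend on how the application is built):

export PATH=$HOME/install/scorep/bin:$PATH
# pass the include and library directories explicitly where needed, e.g.
# -I$HOME/install/scorep/include and -L$HOME/install/scorep/lib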

Users of the SuperMUC (after loading the required modules mentioned previously), can issue the following command to configure Score-P:

../configure --prefix=$HOME/install/scorep --enable-static \
--disable-shared --with-nocross-compiler-suite=intel \
--with-mpi=openmpi --with-papi-header=$PAPI_BASE/include \
--with-papi-lib=$PAPI_BASE/lib

Score-P Specific Build Options

In addition to the general options available in all Autotools-based build systems, there are Score-P-specific configuration flags. They are usually self-explanatory; the complete list, each with a short explanation, can be printed by passing the --help flag to the configure script.
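
For example, from within the build directory created earlier:

../configure --help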

Instrumentation

Various analysis tools are supported by the Score-P infrastructure. Most of these tools focus on certain special aspects that are significant in the code optimization process, but none of them provides the full picture. In the traditional workflow, each tool used to have its own measurement system, and hence its own instrumenter, so users were forced to instrument their code more than once if more than one class of features of the application was to be investigated. One of the key advantages of Score-P is that it provides a single instrumentation system that can be used with all the supported performance measurement and analysis tools, so that the instrumentation work only needs to be done once.

Internally, the instrumentation itself will insert special measurement calls into the application code at specific important points (events). This can be done in an almost automatic way using corresponding features of typical compilers, but also semi-automatically or in a fully manual way, thus giving the user complete control of the process. In general, an automatic instrumentation is most convenient for the user. However, this approach may lead to too many and/or too disruptive measurements, and for such cases it is then advisable to use selective manual instrumentation and measurement instead. For the moment, we shall however start the procedure in an automatic way to keep things simple for novice users.

To this end, we need to ask the Score-P instrumenter to take care of all the necessary instrumentation of user and MPI functions. This is done by using the scorep command that needs to be prefixed to all the compile and link commands usually employed to build the application. Thus, an application executable app that is normally generated from the two source files app1.f90 and app2.f90 via the command:

mpif90 app1.f90 app2.f90 -o app

will now be built by:

scorep mpif90 app1.f90 app2.f90 -o app

using the Score-P instrumenter.

In practice one will usually perform compilation and linking in separate steps, and it is not necessary to compile all source files at the same time (e.g., if makefiles are used). The Score-P instrumenter can be used in such a case too, which actually gives the user more flexibility. Specifically, it is often sufficient to use the instrumenter only for those compilations that involve source files containing MPI code, rather than for all of them. However, when invoking the linker, the instrumenter must always be used.
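
As a sketch of such a mixed build (the file names are hypothetical, and only comm.c is assumed to contain MPI calls):

scorep mpicc -c comm.c                 # contains MPI calls: compile with the instrumenter
mpicc -c compute.c                     # no MPI code: plain compilation is sufficient
scorep mpicc -o app comm.o compute.o   # the link step always uses the instrumenter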

When makefiles are employed to build the application, it is convenient to define a placeholder variable to indicate whether a "preparation" step like an instrumentation is desired or only the pure compilation and linking. For example, if this variable is called PREP then the lines defining the C compiler in the makefile can be changed from:

MPICC = mpicc

to

MPICC = $(PREP) mpicc

(and analogously for linkers and other compilers). One can then use the same makefile to either build an instrumented version with the

make PREP="scorep"

command or a fully optimized and not instrumented default build by simply using:

make

in the standard way, i.e. without specifying PREP on the command line. Of course it is also possible to define the same compiler twice in the makefile, once with and once without the PREP variable, as in:

MPICC = $(PREP) mpicc
MPICC_NO_INSTR = mpicc

and to assign the former to those source files that must be instrumented and the latter to those files that do not need this.
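
A minimal makefile sketch along these lines (the source files solver.c and util.c are hypothetical, and only solver.c is assumed to contain MPI calls; recall that recipe lines must start with a tab character):

MPICC          = $(PREP) mpicc
MPICC_NO_INSTR = mpicc

solver.o: solver.c
        $(MPICC) -c solver.c

util.o: util.c
        $(MPICC_NO_INSTR) -c util.c

app: solver.o util.o
        $(MPICC) -o app solver.o util.o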

Measurement and Analysis

Once the code has been instrumented, the user can initiate a measurement run using this executable. To this end, it is sufficient to simply execute the target application in the usual way, i.e.:

mpiexec $MPIFLAGS app [app_args]

in the case of an MPI or hybrid code, or simply:

app [app_args]

for a serial or pure OpenMP program. In the former case, depending on the details of the local MPI installation, the mpiexec command may have to be replaced by an appropriate equivalent.
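
For example, on systems that use the SLURM batch system, the launcher is typically srun (a site-specific assumption; consult the local documentation for the correct launcher and options):

srun -n 64 ./app [app_args]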

When running the instrumented executable, the measurement system will create a directory called scorep-YYYYMMDD_HHMM_XXXXXXXX where its measurement data will be stored. Here YYYYMMDD and HHMM are the date (in year-month-day format) and time, respectively, when the measurement run was started, whereas XXXXXXXX is an additional identification number. Thus, repeated measurements, as required by the optimization work cycle, can easily be performed without the danger of accidentally overwriting results of earlier measurements. The environment variables SCOREP_ENABLE_TRACING and SCOREP_ENABLE_PROFILING control whether event trace data or profiles are stored in this directory. By setting either variable to true, the corresponding data will be written to the directory. The default values are true for SCOREP_ENABLE_PROFILING and false for SCOREP_ENABLE_TRACING.
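
For example, to record both a profile and an event trace in the next measurement run (shown for a POSIX shell, using the environment variables named above):

export SCOREP_ENABLE_PROFILING=true
export SCOREP_ENABLE_TRACING=true
mpiexec $MPIFLAGS app [app_args]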

Report Examination

After the completion of the execution of the instrumented code, the requested data (traces or profiles) is available in the indicated locations. Appropriate tools can then be used to visualize this information and to generate reports, and thus to identify the weaknesses of the code that need to be addressed in order to obtain better performance. A number of tools are already available for this purpose. These include, in particular, the CUBE4 performance report explorer for viewing and analyzing profile data, Vampir for the investigation of trace information, and the corresponding components of the TAU toolsuite.

Alternatively, the Periscope system may be used to analyze the behavior of the code online during run time, i.e. (in contrast to the approaches mentioned above) before the program run has ended.

Simple Example

As a specific example, we look at a short C code for the solution of a Poisson equation in a hybrid (MPI and OpenMP) environment. The corresponding source code comes as part of the Score-P distribution under the scorep/test/jacobi/ folder. Various other versions are also available: not only hybrid, but also purely MPI-parallel, purely OpenMP-parallel, and serial; and, in each case, not only in C but also in C++ and Fortran.

As indicated above, the standard call sequence:

mpicc -std=c99 -g -O2 -fopenmp -c jacobi.c
mpicc -std=c99 -g -O2 -fopenmp -c main.c
mpicc -std=c99 -g -O2 -fopenmp -o jacobi jacobi.o main.o -lm

that would first compile the two C source files and then link everything to form the final executable needs to be modified by prepending scorep to each of the three commands, i.e. we now have to write:

scorep mpicc -std=c99 -g -O2 -fopenmp -c jacobi.c
scorep mpicc -std=c99 -g -O2 -fopenmp -c main.c
scorep mpicc -std=c99 -g -O2 -fopenmp -o jacobi jacobi.o \
main.o -lm

This call sequence will create a number of auxiliary C source files containing the original source code together with additional calls inserted by the measurement system, which enable it to take the required measurements when the code is actually run. These modified source files are then compiled and linked, thus producing the desired executable named jacobi.

The actual measurement process is then initiated, e.g., by the call:

mpiexec -n 2 ./jacobi

The output data of this process will be stored in a newly created experiment directory scorep-YYYYMMDD_HHMM_XXXXXXXX whose name is built up from the date and time when the measurement was started and an identification number.

As we had not explicitly set any Score-P related environment variables, the profiling mode was active by default. We obtain a file called profile.cubex containing profiling data in the experiment directory as the result of the measurement run. This file can be visually analyzed with the help of CUBE.
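
Assuming the CUBE graphical browser is installed and its cube command is on the PATH, the profile could be opened with:

cube scorep-YYYYMMDD_HHMM_XXXXXXXX/profile.cubex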

If we had set the variable SCOREP_ENABLE_TRACING to true, we would additionally have obtained trace data, namely the so-called anchor file traces.otf2 and the global definitions file traces.def, as well as a subdirectory traces that contains the actual trace data. This trace data is written in the Open Trace Format 2 (OTF2). OTF2 is the joint successor of the classical formats OTF (used, e.g., by Vampir) and Epilog (used by Scalasca). A tool like Vampir can then be used to give a visual representation of the information contained in these files.
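
For instance, assuming Vampir is available on the system, the trace could be opened via its anchor file:

vampir scorep-YYYYMMDD_HHMM_XXXXXXXX/traces.otf2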