A first example

This tutorial guides you through your first SeisSol simulation, using the SCEC TPV33 benchmark as an example. We assume that you have successfully compiled SeisSol.

This tutorial is aimed at people who have compiled SeisSol and want to test their installation for the first time. If you are completely new to SeisSol and want to explore its features, we recommend our training material, which bundles a pre-compiled version of SeisSol with some scenarios.

Setup

  • Clone our examples repository: https://github.com/SeisSol/Examples/ (see the command sketch at the end of this list).

  • Navigate to the folder Examples/tpv33. We will refer to this directory as the working directory in the following.

  • Download the mesh files from https://zenodo.org/record/8042664 and store them in the working directory (see the command sketch at the end of this list).

  • You can visualize the mesh file with ParaView, e.g. with paraview tpv33_half_sym.xdmf. The mesh is described by two files: tpv33_half_sym and tpv33_half_sym.xdmf. The former is a binary file that contains the actual data (e.g. coordinates, connectivity); the latter is a plain-text XML file that tells visualization software such as ParaView how to read that data. You can read more about this mesh format here: PUML Mesh format.

  • Create the output directory: mkdir output.

  • Optional: To create the mesh on your own, execute ./generating_the_mesh.sh. To do so, you need to install gmsh, PUMGen, and mirrorMesh.

  • Optional: For performance reasons, we suggest that you store large files (mesh, output) on a scratch file system (if one is available on your cluster) and create symbolic links in your working directory:

    ln -s <path/to/tpv33_half_sym> tpv33_half_sym
    ln -s <path/to/output/directory> output
    

    You may not see a huge difference in this small test case, but for larger meshes this is the recommended strategy.
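
For reference, the setup steps above can be scripted as follows. This is only a sketch: the Zenodo download links assume that the file names on the record are exactly tpv33_half_sym and tpv33_half_sym.xdmf.

    # clone the examples repository and enter the working directory
    git clone https://github.com/SeisSol/Examples.git
    cd Examples/tpv33

    # download the mesh files from Zenodo (file names assumed from the record)
    wget https://zenodo.org/record/8042664/files/tpv33_half_sym
    wget https://zenodo.org/record/8042664/files/tpv33_half_sym.xdmf

    # the xdmf file is plain XML; a quick look shows how it references the binary data
    head tpv33_half_sym.xdmf

    # create the output directory
    mkdir output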

Execution

  • Link the SeisSol binary to your working directory (Examples/tpv33).

  • Set the number of OpenMP threads: export OMP_NUM_THREADS=<threads>, where <threads> is the number of threads. If you are on a cluster or want to run with multiple MPI ranks, set the number of OpenMP threads to the number of available threads minus one (the last thread is used for communication).

  • Now run: mpiexec -np <n> ./SeisSol_<configuration> parameters.par (a complete example follows after the hint below), where:

    • <n> is the number of MPI ranks / the number of compute nodes used.

    • <configuration> depends on your compilation settings (e.g. SeisSol_Release_dhsw_4_elastic for a Haswell architecture and order 4 accuracy in space and time).

    • When running on your local desktop computer, you may also run with only one MPI rank and leave out mpiexec, i.e. just type ./SeisSol_<configuration> parameters.par. Then, you can use all available threads.

Hint: Depending on the system you are using, the MPI launcher might be different from mpiexec (e.g. mpiexec.hydra, mpirun, srun). For more information about how to get optimal performance, have a look at the Optimal environment variables on SuperMUC-NG.
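
As a concrete sketch, the launch might look as follows. The binary name, thread counts, and node counts are only examples; adapt them to your build configuration and machine.

    # desktop run: a single process that uses all available threads
    export OMP_NUM_THREADS=4
    ./SeisSol_Release_dhsw_4_elastic parameters.par

    # cluster run, e.g. 2 nodes with 28 cores each:
    # 27 compute threads plus 1 communication thread per rank
    export OMP_NUM_THREADS=27
    mpiexec -np 2 ./SeisSol_Release_dhsw_4_elastic parameters.par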

Result verification

SeisSol produces various output files, for example the wave field and fault output (xdmf files with accompanying binary data files) and receiver time series (dat files).

The xdmf files can be visualized with ParaView. For the dat files, you can use viewrec.
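
For example, assuming the output prefix in parameters.par is set to output/tpv33 (an assumption; check the output settings in your parameter file), the free-surface output could be opened with:

    # the file name depends on the output prefix configured in parameters.par
    paraview output/tpv33-surface.xdmf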

The outputs of your simulation can be compared with our outputs (computed with SeisSol) and with the outputs of other codes by checking the files uploaded for this SCEC benchmark on the SCEC Code Verification Project website.