Debugging


This page collects useful hints on how to debug a hybrid (host + CUDA device), MPI-parallel application.

MPI + Valgrind

Use the OpenMPI suppressions list:

mpiexec <mpi flags> valgrind --suppressions=$MPI_ROOT/share/openmpi/openmpi-valgrind.supp picongpu ...
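A minimal sketch of a full invocation (the rank count, the extra valgrind options, and the PIConGPU arguments are illustrative; $MPI_ROOT must point to your OpenMPI installation):

mpiexec -n 2 valgrind \
  --suppressions=$MPI_ROOT/share/openmpi/openmpi-valgrind.supp \
  --leak-check=full --track-origins=yes \
  picongpu -d 1 1 2 -g 128 128 128 -s 100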

MPI + GDB

Multi-Node Host-Side

Log into an interactive shell/batch session with X forwarding (ssh -X). Launch PIConGPU under gdb, one xterm per rank; the -ex options start the run (r) and print a backtrace (bt) automatically:

mpiexec <mpi flags> xterm -e gdb -ex r -ex bt --args picongpu ...
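If no X server is available, gdb can also run non-interactively per rank; a sketch (the output of all ranks will be interleaved on stdout):

mpiexec <mpi flags> gdb -batch -ex r -ex bt --args picongpu ...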

MPI + CUDA-MEMCHECK

mpiexec <mpi flags> cuda-memcheck --tool <memcheck|racecheck> picongpu ...
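The memcheck tool reports out-of-bounds and misaligned memory accesses, while racecheck reports shared-memory data hazards. A sketch of a concrete invocation (rank count and PIConGPU arguments are illustrative):

mpiexec -n 2 cuda-memcheck --tool racecheck picongpu -d 1 1 2 -g 128 128 128 -s 100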

CUDA-GDB

Single-Node Device-Side

Manual approach:

(!) Compile with nvcc -g -G <...> if you want to set device-side breakpoints.

cd <path>/simOutput
cuda-gdb --args <path2picongpu> -d 1 1 1 -g <...> -s 100 <...> 

In cuda-gdb, set a breakpoint and start the run:

b <FileName>:<LineNumber>
r
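Once a device-side breakpoint hits, cuda-gdb offers CUDA-specific inspection commands in addition to the usual gdb ones; a short sketch (the block/thread coordinates and the variable name are placeholders):

info cuda kernels
# list the kernels currently running on the device
cuda block (0,0,0) thread (1,0,0)
# switch the focus to a specific block/thread
print <variableName>
# inspect a variable in the currently focused thread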

Another convenient way to set the debug flags for PIConGPU is available after configuring an example: run ccmake . in your build directory and a GUI with a list of flags pops up. Press t to toggle the advanced mode. Useful flags to switch on are SHOW_CODELINES and CUDA_BLOCKING_KERNEL; most importantly, set the value of CUDA_NVCC_DEBUG_FLAGS to -g;-G. After adjusting the flags, press c to configure and g to generate the makefiles.
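The same flags can also be set non-interactively when (re)configuring; a sketch, assuming the cache-variable names above match your PIConGPU version (run from the build directory):

cmake -DSHOW_CODELINES=ON -DCUDA_BLOCKING_KERNEL=ON -DCUDA_NVCC_DEBUG_FLAGS="-g;-G" .
make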