MPI Poisson solvers
The first step is to make the examples work with the MPI Fortran interface of FFTW.
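HYSOP drives FFTW's MPI interface from its Fortran layer, but the overall algorithm is easy to sketch in pure Python. Below is a minimal, self-contained mpi4py/numpy illustration (our own sketch, not HYSOP code) of a slab-decomposed 2D spectral Poisson solve: FFT along the locally complete axis, an `Alltoall` transpose, FFT along the other axis, division by -|k|^2, then the inverse path. It assumes a periodic box of size 2*pi and a grid size divisible by the number of processes.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
P, r = comm.Get_size(), comm.Get_rank()
N = 64                # global grid is N x N; N must be divisible by P
n = N // P            # rows (then columns) owned by this rank
L = 2.0 * np.pi

def fwd_transpose(a):  # (n, N) x-slab -> (N, n) y-slab
    send = np.ascontiguousarray(a.reshape(n, P, n).transpose(1, 0, 2))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)
    return recv.reshape(N, n)

def bwd_transpose(b):  # (N, n) y-slab -> (n, N) x-slab
    send = np.ascontiguousarray(b.reshape(P, n, n))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)
    return recv.transpose(1, 0, 2).reshape(n, N)

# Right-hand side f = -2 sin(x) sin(y); exact solution u = sin(x) sin(y).
x = (np.arange(r * n, (r + 1) * n) * L / N)[:, None]
y = (np.arange(N) * L / N)[None, :]
f = -2.0 * np.sin(x) * np.sin(y)

fh = np.fft.fft(f, axis=1)                   # FFT along y (fully local)
fh = np.fft.fft(fwd_transpose(fh), axis=0)   # transpose, then FFT along x

kx = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)[:, None]
ky = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)[r * n:(r + 1) * n][None, :]
k2 = kx**2 + ky**2
k2[k2 == 0.0] = 1.0                          # avoid 0/0; mean mode zeroed below
uh = fh / -k2
if r == 0:
    uh[0, 0] = 0.0                           # zero-mean gauge for the periodic problem

u = np.fft.ifft(bwd_transpose(np.fft.ifft(uh, axis=0)), axis=1).real
err = comm.allreduce(np.abs(u - np.sin(x) * np.sin(y)).max(), op=MPI.MAX)
if r == 0:
    print(f"max error: {err:.2e}")
```

Run with e.g. `mpirun -np 4 python poisson_sketch.py`; since the exact solution is a single Fourier mode, the reported error should be at machine precision.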
Working MPI examples:
- `analytic/analytic.py`
- `scalar_diffusion/scalar_diffusion.py`
- `scalar_advection/scalar_advection.py`
- `multiresolution/scalar_advection.py`: this example should run with either the `SUBGRID` or the `REMESH` method.
- `shear_layer/shear_layer.py`
- `taylor_green/taylor_green.py`
Known bugs:
- I/O problems: logs are sometimes deleted.
- Redistribute does not work with more than one process when the topology state imposes a non-default transposition state.
- Bug in field requirements: a wrong check on `can_split` resulted in the creation of useless topologies.
- The output of the graph builder may not be deterministic: operator ordering can differ between nodes, causing a deadlock on blocking MPI calls. This should be resolved now as a side effect of making the topology builder deterministic.
- Critical bug in redistribute for Fortran-ordered arrays: `MPI.C_ORDER`, aliased as `HYSOP_MPI_ORDER`, was used in `TopoTools.create_subarray`, resulting in wrong MPI types for n-dimensional subarrays used in conjunction with Fortran-ordered buffers (a minimal illustration follows this list).
- Possible bug in the redistribute operator when `backend.empty_like` is used to allocate temporary buffers without enforcing memory order.
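On the `create_subarray` bug above: the order flag passed to MPI's subarray constructor must match the memory layout of the buffer the datatype is used with. A minimal mpi4py illustration (not the actual `TopoTools` code) of the right and wrong way to build such a type for a Fortran-ordered buffer:

```python
from mpi4py import MPI
import numpy as np

sizes    = [4, 6]  # shape of the full local array
subsizes = [4, 3]  # shape of the exchanged sub-block
starts   = [0, 0]  # offset of the sub-block

# Buffer with Fortran (column-major) memory layout:
buf = np.asfortranarray(np.arange(24, dtype=np.float64).reshape(sizes))

# Wrong: MPI.ORDER_C describes row-major strides, so communicating `buf`
# with this type would transfer the wrong set of elements.
t_wrong = MPI.DOUBLE.Create_subarray(sizes, subsizes, starts,
                                     order=MPI.ORDER_C).Commit()

# Right: the order flag matches the buffer layout.
t_right = MPI.DOUBLE.Create_subarray(sizes, subsizes, starts,
                                     order=MPI.ORDER_FORTRAN).Commit()

t_wrong.Free(); t_right.Free()
```

In other words, the order flag has to be derived from the buffer's memory layout instead of being hardcoded to a C-order alias.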
Testing:
- Analytic example:
  ```
  for i in 1 2 4; do mpirun -np $i python -u hysop_examples/examples/analytic/analytic.py -VNC --tee -1; done;
  ```
  This test now passes: a diff of the process logs only shows a difference in the process coordinates, and the result is the same in ParaView. The scalar diffusion and scalar advection examples pass as well.
- Multiresolution example:
  ```
  mpirun -np 2 python hysop_examples/examples/multiresolution/scalar_advection.py -VNC -d32 --tee -1 --restriction-filter subgrid
  mpirun -np 4 python hysop_examples/examples/multiresolution/scalar_advection.py -VNC -d64 --tee -1 --restriction-filter remesh
  ```
- Shear layer example:
  ```
  mpirun -np 1 python -u hysop_examples/examples/shear_layer/shear_layer.py -NC -d32 -maxit 1 --implementation fortran --compute-precision double --debug-dump-target p1
  mpirun -np 2 python -u hysop_examples/examples/shear_layer/shear_layer.py -NC -d32 -maxit 1 --implementation fortran --compute-precision double --debug-dump-target p2
  vimdiff <(sed 's/[+-]\(0\.0*\s\)/+\1/g' /tmp/hysop/debug/p1/run.txt) <(sed 's/[+-]\(0\.0*\s\)/+\1/g' /tmp/hysop/debug/p2/run.txt)
  ```
  The `sed` calls normalize the sign of zero values so that `-0.0` versus `+0.0` does not show up as a spurious difference between the two dumps.
- Taylor-Green example: does not work at this time.
  ```
  python -u hysop_examples/examples/taylor_green/taylor_green.py -VNC --implementation fortran --compute-precision double
  ```
Debugging commands:
- Debug only one process with gdb:
  ```
  mpirun -np 2 gdb -ex run --args python -u hysop_examples/examples/scalar_diffusion/scalar_diffusion.py -VNC --tee 0
  ```
- Debug multiple processes with gdb, each in its own window:
  ```
  mpirun -np 2 xterm -hold -e gdb -ex run --args python -u hysop_examples/examples/scalar_diffusion/scalar_diffusion.py -VNC --tee 0,1
  ```
New features:
- Added support for selecting the filtering method through the example interface.
- Improved global Cartesian topology shape (cutdirs) selection.
- DebugDumper now supports parallel statistics dumps (min, max, mean, variance), but no data dumps; a sketch of the underlying reductions follows this list.
- HDFWriter now supports parallel compressed writes (and sequential compressed split writes with up to 16 processes); support is detected at project configuration time in CMakeLists.txt. See the parallel HDF5 sketch below.
- New utility functions defined in `hysop.tools.mpi_utils` and `hysop.core.mpi.topo_tools`.
- Discrete Cartesian fields now support collecting their data on `main_rank` 0 for debugging purposes (unconfirmed); see the gather sketch below.
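The parallel statistics in DebugDumper boil down to a handful of MPI reductions. A minimal mpi4py sketch of how global min/max/mean/variance can be computed (our own illustration, not the actual DebugDumper code):

```python
from mpi4py import MPI
import numpy as np

def parallel_stats(local_data, comm=MPI.COMM_WORLD):
    """Global min/max/mean/variance of an array distributed over all ranks."""
    n  = comm.allreduce(local_data.size,                         op=MPI.SUM)
    s  = comm.allreduce(local_data.sum(dtype=np.float64),        op=MPI.SUM)
    s2 = comm.allreduce(np.square(local_data,
                                  dtype=np.float64).sum(),       op=MPI.SUM)
    gmin = comm.allreduce(local_data.min(), op=MPI.MIN)
    gmax = comm.allreduce(local_data.max(), op=MPI.MAX)
    mean = s / n
    var  = s2 / n - mean**2  # E[x^2] - E[x]^2
    return gmin, gmax, mean, var
```

Note that the single-pass E[x^2] - E[x]^2 form can lose precision when the mean is large compared to the spread; a two-pass or Welford-style reduction is more robust if that matters.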
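For the parallel compressed HDF5 writes, the h5py-level pattern looks roughly like the following. This is an assumed sketch (HYSOP's HDFWriter wraps this differently); filtered parallel writes additionally require an MPI-enabled HDF5 build, version 1.10.2 or newer, which is presumably what the CMake check detects:

```python
import h5py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
n = 1024                                   # elements written by each rank
with h5py.File('out.h5', 'w', driver='mpio', comm=comm) as f:
    dset = f.create_dataset('data', shape=(comm.size * n,),
                            dtype='f8', compression='gzip')
    start = comm.rank * n
    with dset.collective:                  # filtered writes must be collective
        dset[start:start + n] = np.full(n, comm.rank, dtype='f8')
```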
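Collecting a discrete field on `main_rank` 0 is, at its core, a variable-count gather. A simplified mpi4py sketch that only gathers the flattened local blocks; names here are ours, and reassembling the global array additionally needs each rank's block shape and offset, which the Cartesian topology knows:

```python
from mpi4py import MPI
import numpy as np

def collect_on_root(local_block, comm=MPI.COMM_WORLD, root=0):
    """Gather the flattened local blocks of every rank on `root`."""
    counts = comm.gather(local_block.size, root=root)
    sendbuf = np.ascontiguousarray(local_block)
    if comm.Get_rank() == root:
        recvbuf = np.empty(sum(counts), dtype=local_block.dtype)
        comm.Gatherv(sendbuf, (recvbuf, counts), root=root)
        return recvbuf
    comm.Gatherv(sendbuf, None, root=root)
    return None
```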