HySoP (Hybrid Simulation with Particles) is a library dedicated to high-performance direct numerical simulation of fluid-related problems based on semi-Lagrangian particle methods, targeting hybrid architectures that provide multiple compute devices (CPUs, GPUs or MICs). See https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc for more information.

Basics

Download and install the latest hysop:

git clone git@gricad-gitlab.univ-grenoble-alpes.fr:particle_methods/hysop.git
cd hysop
mkdir build
cd build
cmake ..
make -j8
make install

By default, cmake will try to find your most up-to-date Python3 installation. The minimum required version is Python 3.8. You can force the python version with the following trick during the cmake configuration step. For example, to force cmake to detect python3.9:

PYTHON_EXECUTABLE="$(which python3.9)"
PYTHON_INCLUDE_DIR=$(${PYTHON_EXECUTABLE} -c "import sysconfig as sc; print(sc.get_paths()['include'])")
PYTHON_LIBRARY=$(${PYTHON_EXECUTABLE} -c "import sysconfig as sc, os; print(os.path.normpath(os.path.sep.join(sc.get_config_vars('LIBDIR', 'INSTSONAME'))))")
cmake -DPython3_EXECUTABLE="${PYTHON_EXECUTABLE}" -DPython3_INCLUDE_DIR="${PYTHON_INCLUDE_DIR}" -DPython3_LIBRARY="${PYTHON_LIBRARY}" ..

Run Taylor-Green with 8 mpi processes:

mpirun -np 8 python3 hysop_examples/examples/taylor_green/taylor_green.py -impl fortran -V -cp fp64 -d 64

Check the examples directory for complete simulation cases. Hysop is currently developed and tested with python3.10. It also works well with python3.8 and python3.9. You may want to export PYTHON_EXECUTABLE=python3.9 to run some of the hysop scripts that look for python3 by default, as sketched below.
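
For instance (a minimal sketch, assuming python3.9 is installed and hysop was built against it):

# make the scripts use python3.9 instead of the default python3
export PYTHON_EXECUTABLE=python3.9
${PYTHON_EXECUTABLE} hysop_examples/examples/taylor_green/taylor_green.py -V -d 32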

For each example you can get help with the -h option:

 python3 hysop_examples/examples/taylor_green/taylor_green.py -h

Dependencies

To install hysop dependencies locally on an Ubuntu machine (22.04 LTS):

sudo apt-get install -y expat unzip xz-utils pkg-config cmake rsync git ssh curl wget ca-certificates gcc g++ lsb-core cpio libnuma1 libpciaccess0 libreadline-dev libgcc-11-dev libcairo-dev libcairomm-1.0-dev python3.10-dev openmpi-bin libopenmpi-dev hdf5-tools
python3 -m pip install --upgrade -r requirements.txt

Additionally you may want to provide a working OpenCL platform, HPTT, llvm/llvmlite/numba, clFFT/gpyFFT, flint/arb/python-flint and tbb/mklfft. See the docker files for instructions on how to configure and install these packages (hysop/ci/docker_images/ubuntu_jammy/Dockerfile). Alternatively you can run hysop in an isolated environment by using either singularity or docker containers (see next sections).
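
As a quick sanity check that an OpenCL platform is actually visible from Python (a minimal sketch, assuming the pyopencl package is installed):

python3 - <<'EOF'
# list the OpenCL platforms and devices available to the OpenCL backend
import pyopencl as cl
for platform in cl.get_platforms():
    print(platform.name)
    for device in platform.get_devices():
        print("   ", device.name)
EOF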

Running in isolation with singularity

Singularity support relies on the hysop docker images used for continuous integration and thus does not directly include hysop. Singularity is easier to use than docker because it bind mounts $PWD, $TMP and your $HOME directory by default. Getting started with singularity is the way to go if singularity is installed on your compute cluster. To install singularity on your personal machine, please follow the singularity documentation.

Steps to setup hysop in singularity:

  1. Put the hysop source code somewhere in your home.

    export HYSOP_ROOT="${HOME}/hysop"
    git clone https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop.git "${HYSOP_ROOT}"
  2. Pull the singularity image. In this example we use the jammy_cuda image, which includes GPU support. You can use jammy to use the CPU OpenCL backend instead.

    singularity pull --docker-login docker://gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop:jammy_cuda
  3. Build and install hysop directly in your home from within the container. This has to be run on a node with an OpenCL library available, most likely a compute node if you are on a cluster.

    singularity run --nv --bind /etc/OpenCL hysop_jammy_cuda.sif bash -c "cd ${HYSOP_ROOT} && cmake -S . -B ./build && cmake --install ./build"

Now the container is ready and you can:

  • Run the container in interactive mode:
    singularity shell --nv --bind /etc/OpenCL hysop_jammy_cuda.sif
    python3 ${HYSOP_ROOT}/hysop_examples/examples/taylor_green/taylor_green.py -V
  • Directly run a simulation inside the container:
    singularity run --nv --bind /etc/OpenCL hysop_jammy_cuda.sif python3 "${HYSOP_ROOT}/hysop_examples/examples/taylor_green/taylor_green.py" -V
  • Directly run a simulation with MPI:
    mpirun -np 4 -- singularity run --nv --bind /etc/OpenCL hysop_jammy_cuda.sif python3 "${HYSOP_ROOT}/hysop_examples/examples/taylor_green/taylor_green.py" -V

Please note that --nv --bind /etc/OpenCL is only required to enable the NVidia OpenCL backend. See the singularity documentation for other OpenCL backends. Environment variables are also forwarded by default unless --cleanenv has been specified.
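
For instance, to drop the host environment while still forwarding a single variable explicitly (a sketch; OMP_NUM_THREADS is only an illustrative choice):

# --cleanenv drops host environment variables; --env sets just the ones you list
singularity run --cleanenv --env OMP_NUM_THREADS=8 --nv --bind /etc/OpenCL \
    hysop_jammy_cuda.sif python3 "${HYSOP_ROOT}/hysop_examples/examples/taylor_green/taylor_green.py" -V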

Known problems within singularity containers

  1. You may get warnings about file locking. If this is a problem, consider disabling the file locking mechanism by passing --env HYSOP_ENABLE_FILELOCKS=0 to singularity or by passing --disable-file-locks directly to your hysop script (see the sketch after this list).

  2. As singularity bind mounts your $HOME by default, your $HOME python environment and .bashrc may alter the runtime. You can disable this by passing --no-home, but this may break python packages that cache data inside your $HOME by default. An alternative solution is to bind mount a custom home directory inside the image by passing --cleanenv --home cache_dir_on_host:/home, as in the sketch after this list. To get rid of this problem entirely, you can use the singularity utility scripts described in the next section.
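
Both workarounds can be combined as follows (a minimal sketch; the cache home path is a placeholder, and ${HYSOP_ROOT} is bound explicitly because the host home is no longer mounted):

# disable hysop file locking (item 1) and isolate home and environment (item 2)
mkdir -p "${HOME}/hysop_singularity_home"
singularity run --nv --bind /etc/OpenCL --bind "${HYSOP_ROOT}" \
    --env HYSOP_ENABLE_FILELOCKS=0 \
    --cleanenv --home "${HOME}/hysop_singularity_home":/home \
    hysop_jammy_cuda.sif \
    python3 "${HYSOP_ROOT}/hysop_examples/examples/taylor_green/taylor_green.py" -V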

Singularity utility scripts

The singularity utility scripts take care of isolating your home and environment variables on the host. They also disable hysop file locking by default. The host hysop directory is mounted read/write to /hysop and ./ci/singularity_images/hysop_[imgname].home is mounted to /home. You can use git from within the container because ${HOME}/.ssh is also bound to the fake home.

./ci/utils/pull_singularity_image.sh jammy_cuda
./ci/utils/run_singularity_image.sh jammy_cuda bash -c 'cd /hysop && cmake -S . -B ./build && cmake --install ./build'
./ci/utils/run_singularity_image.sh jammy_cuda python3 /hysop/hysop_examples/examples/taylor_green/taylor_green.py -V

You can also run an interactive shell with ./ci/utils/run_singularity_image.sh jammy_cuda.

You may also bind additional files and directories for I/O by using the SINGULARITY_BIND environment variable on the host, with a comma separated list of mounts in the format src[:dest[:opts]]. The current working directory $PWD is mounted read-write by default and this cannot be disabled.
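
For example, to expose an input directory read-only and an output directory read-write inside the container (a sketch; both host paths are placeholders):

# comma separated list of src[:dest[:opts]] mounts
export SINGULARITY_BIND="${HOME}/simulation_data:/data:ro,${HOME}/simulation_results:/results:rw"
./ci/utils/run_singularity_image.sh jammy_cuda python3 /hysop/hysop_examples/examples/taylor_green/taylor_green.py -V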

Running in isolation with docker

Docker images can be pulled (resp. pushed) with ./ci/utils/pull_docker_image.sh [imgname] and ./ci/utils/push_docker_image.sh [imgname]. The docker images do not contain the hysop library and can be run with ./ci/utils/run_docker_image.sh [imgname]. This script mounts your local hysop directory (read only) to /hysop inside the docker container and prompts a shell. Images have to be downloaded (pulled) prior to being run with this script.
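
A typical sequence looks like this (a sketch, using the default ubuntu_jammy image described below):

# download the continuous integration image, then open a shell with ./hysop mounted read-only at /hysop
./ci/utils/pull_docker_image.sh ubuntu_jammy
./ci/utils/run_docker_image.sh ubuntu_jammy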

By default, [imgname] corresponds to the docker image used for gitlab continuous integration (currently ubuntu_jammy, which corresponds to Ubuntu 22.04 running python3.10). Docker images without the _cuda postfix ship an Intel OpenCL platform that is compatible with Intel CPUs. Try out the jammy_cuda docker image to run on GPUs (this requires a host system driver compatible with CUDA 11.7).

To quickly test and/or debug hysop inside docker you can run ./ci/utils/run_debug.sh [imgname], which will build and install hysop inside the container and prompt a shell (the read-only bind-mounted /hysop directory is copied to a read-write /tmp/hysop within the container). Alternatively, if you just want to run the tests inside docker, you can use the ./ci/utils/run_ci.sh [imgname] script instead.
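
For example (a sketch, again with the default image name):

# build and install hysop inside the container, then prompt a shell for interactive debugging
./ci/utils/run_debug.sh ubuntu_jammy
# or just run the test suite
./ci/utils/run_ci.sh ubuntu_jammy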

Docker images can be built locally by using the ./ci/utils/build_docker_image.sh script. It is advised to build a docker image only if the pull fails or if you want to add or update dependencies. Each build takes around one hour (36 cores @ 2GHz). By default, the build script will use all of your cores. At least 16GB of RAM is recommended.
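
A local build might look like this (a sketch, assuming the build script takes the same [imgname] argument as the other scripts):

# rebuild the continuous integration image locally; expect roughly one hour of build time
./ci/utils/build_docker_image.sh ubuntu_jammy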

There are equivalent *.bat scripts to run on Windows. Please note that the cuda-enabled images currently do not work on Windows due to this bug.

Known problems within docker containers

  1. OpenMPI complains about the program being run as root: You can solve this by defining two extra variables:

    export OMPI_ALLOW_RUN_AS_ROOT=1
    export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1

    These variables are already defined during tests and in ./ci/utils/run_debug.sh.

  2. OpenMPI complains about not having enough slots: This happens when you request more processes than you have physical cpu cores. To be able to run more processes you can define the following variable: export OMPI_MCA_rmaps_base_oversubscribe=1. This variable is already defined during tests and in ./ci/utils/run_debug.sh.

  3. The program crashes with Bus error (Non-existant physical address): By default, docker does not enable ptrace, which makes the vader BTL crash (vader is OpenMPI's shared memory interprocess communicator). There are three independent solutions to this problem, combined in the sketch after this list:

    • Enable ptrace when running docker (best solution): use docker run --cap-add=SYS_PTRACE to run an interactive container or docker create --cap-add=SYS_PTRACE to create a persistent container. This is already done in our scripts in ./ci/utils*.
    • Disable vader CMA (Cross Memory Attach) with OMPI_MCA_btl_vader_single_copy_mechanism=none: works but interprocess communication performance drops ~50%.
    • Disable vader completely: mpirun --mca btl ^vader: works but leads to really poor performance.
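
The three items above can be addressed together when starting a container by hand (a minimal sketch; the CI scripts in ./ci/utils already set most of this up, and the image reference simply reuses the registry path from the singularity section):

# --cap-add=SYS_PTRACE             : lets OpenMPI's vader shared memory transport work (item 3)
# OMPI_ALLOW_RUN_AS_ROOT*          : OpenMPI refuses to run as root inside the container otherwise (item 1)
# OMPI_MCA_rmaps_base_oversubscribe: allow more MPI ranks than physical cores (item 2)
docker run --rm -it --cap-add=SYS_PTRACE \
    -e OMPI_ALLOW_RUN_AS_ROOT=1 -e OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1 \
    -e OMPI_MCA_rmaps_base_oversubscribe=1 \
    gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop:jammy_cuda bash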