Commit b8d07912 authored by Franck Pérignon

[skip CI] Cleanup. Tests on yaml files to fix doc generation

parent eaef0818
Pipeline #201872 skipped
@@ -52,13 +52,13 @@
As a jupyter notebook with the following command:
```
docker run -p 8888:8888 --rm -ti gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/hysoplab-cpu-intel-master
```
Or to start a terminal session:
```
docker run -it --rm --entrypoint="/bin/bash" gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/hysoplab-cpu-intel-master:latest
```
In both cases, you will end up with a fully functional installation of the HySoP software (in a [Docker Container](https://www.docker.com/resources/what-container/)) and will be able to quickly test HySoP using [examples in the documentation](https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/getting_started/index.html).
@@ -72,7 +72,7 @@
* *By default, the Docker container is completely isolated from the host machine's disk space! As a result, it is impossible to write or read data on your hard disk. To share a directory between your host and the docker container, update the docker command like this:*
```
docker run -v HOST_DIRECTORY:/home/hysop-user/shared -p 8888:8888 --rm -ti gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/hysoplab-gpu-nvidia-master
```
HOST_DIRECTORY (use the full path!) will be available in the container in the 'shared' directory.
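Since docker rejects relative host paths in `-v`, one way to build the option safely is the sketch below (the `data` subdirectory name is a hypothetical example, not part of the project):

```shell
# Build an absolute host path for docker's -v option (relative host paths are rejected).
HOST_DIRECTORY="$(pwd)/data"          # 'data' is a hypothetical example directory
mkdir -p "${HOST_DIRECTORY}"
# Volume option to splice into the docker run command above:
DOCKER_VOLUME_OPT="-v ${HOST_DIRECTORY}:/home/hysop-user/shared"
echo "${DOCKER_VOLUME_OPT}"
```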
##
## Copyright (c) HySoP 2011-2024
##
## This file is part of HySoP software.
## See "https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/"
## for further info.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
# Docker multi-stage builds or buildkit (parallel stages)
# https://docs.docker.com/build/building/multi-stage/
#
FROM mambaorg/micromamba AS mambahysop
LABEL authors="hysop_team@univ-xxx.fr"
USER root
ARG PYTHON_VERSION=3.12.4
ENV PYTHON_EXECUTABLE=python3.12
#
# Install some Linux tools, compilers and libs
#
RUN apt update && apt upgrade -y && apt install -y -qq \
wget \
git-core \
make \
pkg-config \
g++ \
gfortran \
libopenmpi-dev \
libclfft-dev && \
apt autoclean -y && \
apt autoremove -y && \
rm -rf /var/lib/apt/lists/* && \
rm -rf $HOME/.cache/pip/*
#
# Install conda/mamba packages listed in ci/hysopenv_cpu_intel.yaml file
#
COPY --chown=$MAMBA_USER:$MAMBA_USER ci/hysopenv_cpu_intel.yaml /home/hysopenv_cpu_intel.yaml
RUN micromamba config prepend channels conda-forge && \
micromamba self-update && \
micromamba config set channel_priority strict && \
micromamba install -n base python=${PYTHON_VERSION} -c conda-forge && \
micromamba install -y -f /home/hysopenv_cpu_intel.yaml && \
micromamba clean --all --yes
# (otherwise python will not be found)
ARG MAMBA_DOCKERFILE_ACTIVATE=1
########
# HPTT # (longest time compilation)
########
FROM mambahysop AS hptt
RUN cd /tmp && \
git clone https://gitlab.com/keckj/hptt.git && \
cd hptt && \
sed -i "s#-mavx##g" CMakeLists.txt && \
sed -i "s#-march=native##g" CMakeLists.txt && \
sed -i "s#-mtune=native##g" CMakeLists.txt && \
mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release .. && \
make -j$(nproc) && \
make install && \
cd ../pythonAPI && \
pip install --upgrade . && \
cd /tmp && \
rm -rf /tmp/hptt
#########
# FLINT #
#########
FROM mambahysop AS flint
#
# python flint (FLINT2 + ARB + python-flint)
#
# flint 3.0.1 is the latest version (16 Nov 2023)
# (ARB has now been merged into flint 3)
#
RUN cd /tmp && \
wget -q https://github.com/flintlib/flint/archive/refs/tags/v3.0.1.tar.gz && \
tar -xzf v*.tar.gz && \
rm -f v*.tar.gz && \
cd flint* && \
mkdir build && \
cd build/ && \
cmake .. -DBUILD_SHARED_LIBS=ON && \
make -j$(nproc) && \
make install && \
cd /tmp && \
rm -rf /tmp/flint*
#####################
# HDF5_mpi and H5PY #
#####################
FROM mambahysop AS hdf5_mpi_h5py
ENV MPICC="mpicc"
ARG pip_install_opts='--upgrade --no-binary=h5py --no-deps --no-build-isolation'
RUN cd /tmp && \
wget -q https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.14/hdf5-1.14.3/src/hdf5-1.14.3.tar.gz && \
tar -xzf hdf5-*.tar.gz && \
rm -f hdf5-*.tar.gz && \
cd hdf5-* && \
CC="${MPICC}" ./configure --prefix=/usr/local --enable-parallel \
--enable-shared=yes --enable-static=no && \
make -j$(nproc) && \
make install && \
cd /tmp && \
rm -rf /tmp/hdf5-* && \
CC="${MPICC}" HDF5_MPI="ON" HDF5_VERSION="1.14.3" \
HDF5_DIR=/usr/local H5PY_SETUP_REQUIRES=0 \
pip install ${pip_install_opts} h5py==3.11.0
##########
# Stage #
##########
FROM mambahysop AS stage2
# Some python packages are not available in micromamba!
RUN pip install --upgrade \
tee colors.py primefac \
argparse_color_formatter \
tox memory-tempfile
# gpyFFT
# WARNING: Old package!
RUN cd /tmp && \
git clone https://github.com/geggo/gpyfft.git && \
cd gpyfft && \
sed 's#finalize(self, _destroy_plan, self.plan)##' -i gpyfft/gpyfftlib.pyx && \
pip install . && \
cd - && \
rm -rf /tmp/gpyfft
#
# FFTW latest version nov 2023
# Compilation is necessary because there is no mpi version
# with the float precisions requested by hysop (single, double, long)
#
ENV FFTW_ROOT="/usr/local"
RUN cd /tmp && \
wget -q http://www.fftw.org/fftw-3.3.10.tar.gz && \
tar -xzf fftw-*.tar.gz && \
rm -f fftw-*.tar.gz && \
cd fftw-* && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" --enable-single && \
make -j$(nproc) && make install && make clean && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" && \
make -j$(nproc) && make install && make clean && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" --enable-long-double && \
make -j$(nproc) && make install && make clean && \
cd /tmp && \
rm -rf /tmp/fftw-*
#
# PYFFTW git version (last test: july 2024)
# pyfftw is linked to fftw3 compiled above
#ARG pip_install_opts='--no-binary=pyfftw --no-deps --no-build-isolation'
ARG pip_install_opts='--no-binary=pyfftw --no-deps'
RUN cd /tmp && \
git clone https://github.com/pyFFTW/pyFFTW.git && \
cd pyFFTW && \
PYFFTW_INCLUDE="/include" \
pip install ${pip_install_opts} . && \
cd /tmp && \
rm -rf /tmp/pyFFTW
###############
# Final-Stage #
###############
FROM stage2 AS final-stage
# Copy only necessary files from previous stages
COPY --from=hptt /usr/local/include /usr/local/include
COPY --from=hptt /usr/local/lib /usr/local/lib/
COPY --from=flint /usr/local/include/flint/ /usr/local/include/flint/
COPY --from=flint /usr/local/lib/libflint.so /usr/local/lib/
COPY --from=hdf5_mpi_h5py /usr/local/bin /usr/local/bin/
COPY --from=hdf5_mpi_h5py /usr/local/lib /usr/local/lib
COPY --from=hdf5_mpi_h5py /usr/local/include /usr/local/include
# Copy only necessary parts of /opt/conda
COPY --from=hptt /opt/conda/lib/python3.12/site-packages/hptt \
/opt/conda/lib/python3.12/site-packages/hptt
COPY --from=hdf5_mpi_h5py /opt/conda/lib/python3.12/site-packages/h5py \
/opt/conda/lib/python3.12/site-packages/h5py
# Important for the f2py command run by the meson build system!
ENV FC="mpif90"
ENV MPICC="mpicc"
ARG MPI_HOME=/usr
ENV MPIRUN_EXECUTABLE="${MPI_HOME}/bin/mpirun.openmpi"
ENV MPIEXEC_EXECUTABLE="${MPI_HOME}/bin/mpiexec.openmpi"
# To run MPI test as root on docker image
ENV OMPI_ALLOW_RUN_AS_ROOT=1
ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
# ensure all libraries are known by the runtime linker
# clean cached packages
ENV LD_LIBRARY_PATH=/opt/conda/lib:/usr/local/lib:${LD_LIBRARY_PATH}
RUN ldconfig && \
rm -rf /var/lib/apt/lists/* && \
rm -rf $HOME/.cache/pip/* && \
rm -rf /tmp/*
WORKDIR /home
ENV CI_PROJECT_DIR=/home
##
## Copyright (c) HySoP 2011-2024
##
## This file is part of HySoP software.
## See "https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/"
## for further info.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
# Docker multi-stage builds or buildkit (parallel stages)
# https://docs.docker.com/build/building/multi-stage/
#
FROM mambaorg/micromamba AS mambahysop
LABEL authors="hysop_team@univ-xxx.fr"
USER root
ARG PYTHON_VERSION=3.12.4
ENV PYTHON_EXECUTABLE=python3.12
#
# Install some Linux tools, compilers and libs
#
RUN apt update && apt upgrade -y && apt install -y -qq \
wget \
git-core \
make \
pkg-config \
g++ \
gfortran \
libopenmpi-dev \
libclfft-dev && \
apt autoclean -y && \
apt autoremove -y && \
rm -rf /var/lib/apt/lists/* && \
rm -rf $HOME/.cache/pip/*
#
# Install conda/mamba packages listed in ci/hysopenv_gpu_nvidia.yaml file
#
COPY --chown=$MAMBA_USER:$MAMBA_USER \
ci/hysopenv_gpu_nvidia.yaml \
/home/hysopenv_gpu_nvidia.yaml
RUN micromamba config prepend channels conda-forge && \
micromamba self-update && \
micromamba config set channel_priority strict && \
micromamba install -n base python=${PYTHON_VERSION} -c conda-forge && \
micromamba install -y -f /home/hysopenv_gpu_nvidia.yaml && \
micromamba clean --all --yes
# (otherwise python will not be found)
ARG MAMBA_DOCKERFILE_ACTIVATE=1
# NVIDIA library path.
# The NVIDIA libraries are installed/copied when the docker container is created!
RUN echo "/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.1" > \
/opt/conda/etc/OpenCL/vendors/nvidia.icd
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
########
# HPTT # (longest time compilation)
########
FROM mambahysop AS hptt
RUN cd /tmp && \
git clone https://gitlab.com/keckj/hptt.git && \
cd hptt && \
sed -i "s#-mavx##g" CMakeLists.txt && \
sed -i "s#-march=native##g" CMakeLists.txt && \
sed -i "s#-mtune=native##g" CMakeLists.txt && \
mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release .. && \
make -j$(nproc) && \
make install && \
cd ../pythonAPI && \
pip install --upgrade . && \
cd /tmp && \
rm -rf /tmp/hptt
#########
# FLINT #
#########
FROM mambahysop AS flint
#
# python flint (FLINT2 + ARB + python-flint)
#
# flint 3.0.1 is the latest version (16 Nov 2023)
# (ARB has now been merged into flint 3)
#
RUN cd /tmp && \
wget -q https://github.com/flintlib/flint/archive/refs/tags/v3.0.1.tar.gz && \
tar -xzf v*.tar.gz && \
rm -f v*.tar.gz && \
cd flint* && \
mkdir build && \
cd build/ && \
cmake .. -DBUILD_SHARED_LIBS=ON && \
make -j$(nproc) && \
make install && \
cd /tmp && \
rm -rf /tmp/flint*
#####################
# HDF5_mpi and H5PY #
#####################
FROM mambahysop AS hdf5_mpi_h5py
ENV MPICC="mpicc"
ARG pip_install_opts='--upgrade --no-binary=h5py --no-deps --no-build-isolation'
RUN cd /tmp && \
wget -q https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.14/hdf5-1.14.3/src/hdf5-1.14.3.tar.gz && \
tar -xzf hdf5-*.tar.gz && \
rm -f hdf5-*.tar.gz && \
cd hdf5-* && \
CC="${MPICC}" ./configure --prefix=/usr/local --enable-parallel \
--enable-shared=yes --enable-static=no && \
make -j$(nproc) && \
make install && \
cd /tmp && \
rm -rf /tmp/hdf5-* && \
CC="${MPICC}" HDF5_MPI="ON" HDF5_VERSION="1.14.3" \
HDF5_DIR=/usr/local H5PY_SETUP_REQUIRES=0 \
pip install ${pip_install_opts} h5py==3.11.0
##########
# Stage #
##########
FROM mambahysop AS stage2
# Some python packages are not available in micromamba!
RUN pip install --upgrade \
tee colors.py primefac \
argparse_color_formatter \
tox memory-tempfile
# gpyFFT
# WARNING: Old package!
RUN cd /tmp && \
git clone https://github.com/geggo/gpyfft.git && \
cd gpyfft && \
sed 's#finalize(self, _destroy_plan, self.plan)##' -i gpyfft/gpyfftlib.pyx && \
pip install . && \
cd - && \
rm -rf /tmp/gpyfft
#
# FFTW latest version nov 2023
# Compilation is necessary because there is no mpi version
# with the float precisions requested by hysop (single, double, long)
#
ENV FFTW_ROOT="/usr/local"
RUN cd /tmp && \
wget -q http://www.fftw.org/fftw-3.3.10.tar.gz && \
tar -xzf fftw-*.tar.gz && \
rm -f fftw-*.tar.gz && \
cd fftw-* && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" --enable-single && \
make -j$(nproc) && make install && make clean && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" && \
make -j$(nproc) && make install && make clean && \
./configure --enable-openmp --enable-threads --enable-mpi --enable-shared --with-pic --prefix="${FFTW_ROOT}" --enable-long-double && \
make -j$(nproc) && make install && make clean && \
cd /tmp && \
rm -rf /tmp/fftw-*
#
# PYFFTW git version (last test: july 2024)
# pyfftw is linked to fftw3 compiled above
#ARG pip_install_opts='--no-binary=pyfftw --no-deps --no-build-isolation'
ARG pip_install_opts='--no-binary=pyfftw --no-deps'
RUN cd /tmp && \
git clone https://github.com/pyFFTW/pyFFTW.git && \
cd pyFFTW && \
PYFFTW_INCLUDE="/include" \
pip install ${pip_install_opts} . && \
cd /tmp && \
rm -rf /tmp/pyFFTW
###############
# Final-Stage #
###############
FROM stage2 AS final-stage
# Copy only necessary files from previous stages
COPY --from=hptt /usr/local/include /usr/local/include
COPY --from=hptt /usr/local/lib /usr/local/lib/
COPY --from=flint /usr/local/include/flint/ /usr/local/include/flint/
COPY --from=flint /usr/local/lib/libflint.so /usr/local/lib/
COPY --from=hdf5_mpi_h5py /usr/local/bin /usr/local/bin/
COPY --from=hdf5_mpi_h5py /usr/local/lib /usr/local/lib
COPY --from=hdf5_mpi_h5py /usr/local/include /usr/local/include
# Copy only necessary parts of /opt/conda
COPY --from=hptt /opt/conda/lib/python3.12/site-packages/hptt \
/opt/conda/lib/python3.12/site-packages/hptt
COPY --from=hdf5_mpi_h5py /opt/conda/lib/python3.12/site-packages/h5py \
/opt/conda/lib/python3.12/site-packages/h5py
# Important for the f2py command run by the meson build system!
ENV FC="mpif90"
ENV MPICC="mpicc"
ARG MPI_HOME=/usr
ENV MPIRUN_EXECUTABLE="${MPI_HOME}/bin/mpirun.openmpi"
ENV MPIEXEC_EXECUTABLE="${MPI_HOME}/bin/mpiexec.openmpi"
# To run MPI test as root on docker image
ENV OMPI_ALLOW_RUN_AS_ROOT=1
ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
ENV PYOPENCL_COMPILER_OUTPUT=1
# ensure all libraries are known by the runtime linker
# clean cached packages
ENV LD_LIBRARY_PATH=/opt/conda/lib:/usr/local/lib:${LD_LIBRARY_PATH}
RUN ldconfig && \
rm -rf /var/lib/apt/lists/* && \
rm -rf $HOME/.cache/pip/* && \
rm -rf /tmp/*
WORKDIR /home
ENV CI_PROJECT_DIR=/home
@@ -192,7 +192,7 @@ ENV PYOPENCL_COMPILER_OUTPUT=1
# --- Image with deps. required to build hysop-doc ---
FROM hysop-final-cpu AS hysop-doc
# Bibtex !!!
USER root
@@ -204,6 +204,7 @@
rm -rf $HOME/.cache/pip/*
USER hysop-user
ARG MAMBA_DOCKERFILE_ACTIVATE=1
# Install conda/mamba packages listed in ./hysop_doc.yaml file
COPY --chown=hysop-user:hysop-user ci/hysop-doc-env.yaml /home/hysop-doc-env.yaml
RUN micromamba install -y -f /home/hysop-doc-env.yaml && \
ARG REGISTRY=gricad-registry.univ-grenoble-alpes.fr
ARG PROJECT=particle_methods/hysop
ARG DEFAULT_IMAGE=ci_cpu_intel
ARG IMAGENAME=$REGISTRY/$PROJECT/$DEFAULT_IMAGE
FROM $IMAGENAME AS hysop_cpu_intel
# (otherwise meson will not be found)
ARG MAMBA_DOCKERFILE_ACTIVATE=1
# TODO: generic path to hysop!
RUN cd /builds/particle_methods/hysop && \
pip install --no-build-isolation --no-deps .
ARG REGISTRY=gricad-registry.univ-grenoble-alpes.fr
ARG PROJECT=particle_methods/hysop
ARG DEFAULT_IMAGE=ci_gpu_nvidia
ARG IMAGENAME=$REGISTRY/$PROJECT/$DEFAULT_IMAGE
FROM $IMAGENAME AS hysop_gpu_nvidia
# (otherwise meson will not be found)
ARG MAMBA_DOCKERFILE_ACTIVATE=1
# TODO: generic path to hysop!
RUN cd /builds/particle_methods/hysop && \
pip install --no-build-isolation --no-deps .
ARG REGISTRY=gricad-registry.univ-grenoble-alpes.fr
ARG PROJECT=particle_methods/hysop
ARG DEFAULT_IMAGE=hysop_cpu_intel
ARG IMAGENAME=$REGISTRY/$PROJECT/$DEFAULT_IMAGE
FROM $IMAGENAME
ARG UID
ARG GID
ARG LOGIN
ARG GROUP
# Create a non-root user and grant password-less sudo permissions
RUN addgroup --force-badname --gid $GID $GROUP && \
adduser --uid $UID --gid $GID --disabled-password --gecos "" $LOGIN && \
echo "$LOGIN ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Set the non-root user as the default user
USER $LOGIN
# Set the working directory
ARG WKD=/home/$LOGIN
WORKDIR $WKD
# End
ARG REGISTRY=gricad-registry.univ-grenoble-alpes.fr
ARG PROJECT=particle_methods/hysop
ARG DEFAULT_IMAGE=hysop_gpu_nvidia
ARG IMAGENAME=$REGISTRY/$PROJECT/$DEFAULT_IMAGE
FROM $IMAGENAME
ARG UID
ARG GID
ARG LOGIN
ARG GROUP
# Create a non-root user and grant password-less sudo permissions
RUN addgroup --force-badname --gid $GID $GROUP && \
adduser --uid $UID --gid $GID --disabled-password --gecos "" $LOGIN && \
echo "$LOGIN ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Set the non-root user as the default user
USER $LOGIN
# Set the working directory
ARG WKD=/home/$LOGIN
WORKDIR $WKD
# End
@@ -21,6 +21,41 @@ name: base
channels:
- conda-forge
dependencies:
- python=3.12
- clinfo
- cmake
- cython
- editdistance
- gmpy2
- jsonpickle
- matplotlib
- meson
- meson-python # needed for pip install --no-build-isolation --no-deps .
- mpi4py
- networkx
- ninja
- numba
- numcodecs
- numpy
- openmpi # no hack for mpi, use conda
- portalocker
- psutil
- py-cpuinfo
- pyopencl==2024.1
- pytest
- python-flint
- pyvis
- scipy
- sympy
- wheel
- zarr
- hdf5=*=mpi*
- h5py=*=mpi*
- fftw=*=mpi*
- tox
- pyfftw
- jupyterlab
- intel-opencl-rt
- doxygen
- graphviz
- sphinx
@@ -30,7 +65,17 @@ dependencies:
- sphinx-rtd-theme
- strip-hints
- pandoc
- antlr4-python3-runtime
#- ipykernel
- pip
- pip:
- nbsphinx
- tee
- colors.py
- primefac
- argparse_color_formatter
- memory-tempfile
channels:
- numba
dependencies:
- llvmlite
@@ -71,6 +71,7 @@ dependencies:
- tox
- pyfftw
- jupyterlab
- pip
- pip:
- tee
- colors.py
##
## Copyright (c) HySoP 2011-2024
##
## This file is part of HySoP software.
## See "https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/"
## for further info.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
name: base
channels:
- conda-forge
dependencies:
- python=3.12
- clinfo
- cmake
- cython
- editdistance
- gmpy2
- intel-opencl-rt
- jsonpickle
- matplotlib
- meson
- meson-python # needed for pip install --no-build-isolation --no-deps .
- mpi4py
- networkx
- ninja
- numba
- numcodecs
- numpy
- openmpi=4.1.6=external_* # more info at the end of this file
- portalocker
- psutil
- py-cpuinfo
- pyopencl==2024.1
- pytest
- python-flint
- pyvis
- scipy
- sympy
- wheel
- zarr
channels:
- numba
dependencies:
- llvmlite
# openmpi=4.1.6=external_* :
# Forces conda to use system openmpi already installed!
# (https://conda-forge.org/docs/user/tipsandtricks.html#
# using-external-message-passing-interface-mpi-libraries)
##
## Copyright (c) HySoP 2011-2024
##
## This file is part of HySoP software.
## See "https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/"
## for further info.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
name: base
channels:
- conda-forge
dependencies:
- clinfo
- cmake
- cython
- editdistance
- gmpy2
- jsonpickle
- matplotlib
- meson
- meson-python # needed for pip install --no-build-isolation --no-deps .
- mpi4py
- networkx
- ninja
- numba
- numcodecs
- numpy
- openmpi=4.1.6=external_* # more info at the end of this file
- portalocker
- psutil
- py-cpuinfo
- pyopencl==2024.1
- pytest
- python-flint
- pyvis
- scipy
- sympy
- wheel
- zarr
channels:
- numba
dependencies:
- llvmlite
# openmpi=4.1.6=external_* :
# Forces conda to use system openmpi already installed!
# (https://conda-forge.org/docs/user/tipsandtricks.html#
# using-external-message-passing-interface-mpi-libraries)
# CI configuration for Hysop project
Requirements:
- keep only the latest tags
- exception: releases
Steps:
[[_TOC_]]
## Create and save docker images
**What:**
- build docker images with all the dependencies required to configure and build hysop
- save those images in hysop registries
**Images:**
- ci_cpu_intel: OpenCL/intel, for a host without GPUs --> [saved in hysop registry as ci_cpu_intel](gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/ci_cpu_intel)
- ci_gpu_nvidia: OpenCL/NVIDIA, for a host with GPUs --> [saved in hysop registry as ci_gpu_nvidia](gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/ci_gpu_nvidia)
- ci_cpu_intel_doc: same as ci_cpu_intel but with the extra deps required to build the documentation (project [hysop-doc](https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop-doc)) --> [saved in hysop-doc registry](gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop-doc/ci_cpu_intel_doc)
**When:** only if the last commit message contains [docker-build]
**Requirement:** the same docker images must be generated whatever the branch is (i.e. the ci directory and .gitlab-ci.yml must be kept synchronized between all branches)
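In .gitlab-ci.yml terms, that commit-message trigger is typically written as a `rules` clause on the build jobs (a sketch only; the job name is an assumption, not the project's actual configuration):

```yaml
docker-build:
  rules:
    # run only when the last commit message contains [docker-build]
    - if: '$CI_COMMIT_MESSAGE =~ /\[docker-build\]/'
```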
## Configure hysop
- Run meson setup on the OpenCL/intel and OpenCL/nvidia images
- Keep artifacts (build-dir) for the build job
## Build hysop
- Run meson compile on the OpenCL/intel and OpenCL/nvidia images
- Keep artifacts (build-dir) for the tests, install and examples jobs
## Install hysop
- Run meson install on the OpenCL/intel and OpenCL/nvidia images
- Try to import the hysop and hysop fortran packages
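The configure/build/install steps above chain together roughly as follows (a sketch only: job names, `BUILD_DIR` and the imported module name are assumptions, not the project's actual .gitlab-ci.yml):

```yaml
configure:
  script:
    - meson setup "${BUILD_DIR}"
  artifacts:
    paths:
      - "${BUILD_DIR}"       # build-dir kept for the next jobs

build:
  script:
    - meson compile -C "${BUILD_DIR}"
  artifacts:
    paths:
      - "${BUILD_DIR}"

install:
  script:
    - meson install -C "${BUILD_DIR}"
    - python -c "import hysop"   # smoke-test the install
```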
## Tests
Two jobs to:
- run meson tests on the OpenCL/intel and OpenCL/nvidia images
- generate artifacts only if the tests fail.
Two more jobs to:
- run the examples (run_examples.sh) on the OpenCL/intel and OpenCL/nvidia images
- generate artifacts only if the examples fail.
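The "artifacts only if tests fail" behaviour maps to GitLab's `when: on_failure` artifacts rule; a sketch (job name and log path are assumptions):

```yaml
test:
  script:
    - meson test -C "${BUILD_DIR}"
  artifacts:
    when: on_failure            # keep the logs only when tests fail
    paths:
      - "${BUILD_DIR}/meson-logs/"
```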
## Create 'ready-to-use' images
Build and save docker images with a fully functional hysop install:
- hysopbinderlab, based on 'ci_cpu_intel', source image used to create the binder repo.
Saved in [project hysop-binder](https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop_binder)
- hysoplab-cpu-intel-<branch-name>, based on 'ci_cpu_intel', jupyter lab for hysop (host without GPUs)
[Saved in hysop registry](https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop/container_registry)
usage:
```
docker run -p 8888:8888 --rm -ti gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/hysoplab-cpu-intel-master
```
- hysoplab-gpu-nvidia-<branch-name>, based on 'ci_gpu_nvidia', jupyter lab for hysop (host with GPUs)
[Saved in hysop registry](https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop/container_registry)
usage:
```
docker run -p 8888:8888 --runtime=nvidia --gpus all --rm -ti gricad-registry.univ-grenoble-alpes.fr/particle_methods/hysop/hysoplab-gpu-nvidia-master
```
## Build doc
A job to trigger documentation generation in the project [hysop-doc](https://gricad-gitlab.univ-grenoble-alpes.fr/particle_methods/hysop-doc)
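Such a cross-project trigger can be sketched as follows (the job name and target branch are assumptions):

```yaml
trigger-doc:
  trigger:
    project: particle_methods/hysop-doc
    branch: master
```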
## More
- A job to automatically create a release when a tag is pushed.
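A tag-driven release job of this kind typically uses GitLab's release-cli image; a minimal sketch (job name and description text are assumptions):

```yaml
release-job:
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG          # run only when a tag is pushed
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: "$CI_COMMIT_TAG"
    description: "Release $CI_COMMIT_TAG"
```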
#!/usr/bin/env bash
##
## Copyright (c) HySoP 2011-2024
##
## This file is part of HySoP software.
## See "https://particle_methods.gricad-pages.univ-grenoble-alpes.fr/hysop-doc/"
## for further info.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
set -feu -o pipefail
# Ensure that required variables are set.
#: ${CI_PROJECT_DIR:?"Please set environment variable CI_PROJECT_DIR with 'hysop' repository (absolute) path."}
: ${BUILD_DIR:?"Please set environment variable BUILD_DIR to the required build path."}
#: ${INSTALL_DIR:?"Please set environment variable INSTALL_DIR to the expected install path."}
meson setup ${BUILD_DIR} --python.install-env prefix --prefix=${BUILD_DIR}/install
exit 0