Commit ca386533 authored by Franck Pérignon

Update odesolvers. Work array + fix tests + doc + pep8 things

parent 5ffe076f
Figures added in docs/sphinx/figures/: cluster_general.jpg, comm_split.jpg, data_discr.jpg, decomp_domain.jpg, ghost_points.jpg, periodic_topo.jpg
.. _domains:

.. currentmodule:: hysop.domain

Domains
=======
A :class:`~domain.Domain` is the physical space where fields and operators will be defined, associated with a coordinate system.
A domain is defined by its dimension (1, 2 or 3 dimensional domains are allowed in HySoP), some geometrical properties and some boundary conditions. Each point in the domain is identified by its space coordinates, denoted :math:`x, y, z`.
At the moment, only box-shaped domains are available in HySoP, through the :class:`~box.Box` object::
from hysop import Box
from hysop.constants import PERIODIC
# 1D
b = Box(length=[2.], origin=[-1.], bc=PERIODIC)
# 3D
dom = Box(length=[1., 3., 5.], origin=[-1., 2., 3.])
# Default
dom = Box()
print dom.length
# coordinates of origin and 'highest' point
print dom.origin, dom.end
Origin denotes the coordinates of the 'lowest' point in the box.
The default dimension is 3, the default side length is 1.0 in each direction, and the default origin is 0 in each direction::
dom = Box()
Boundary conditions are set to periodic by default.
Parallelisation of the simulation
=================================
The simulation can be parallelised in two different ways:

* by tasks: several 'tasks' are defined and a group of processes is assigned to each task. Each group is autonomous and works asynchronously.
* by splitting the data: the domain, and the fields defined on it, are split into sub-domains, and each sub-domain is assigned to a process. Each process then works on its own subset of the data, asynchronously.

Both ways can be combined. Synchronisation is possible and processes or groups can communicate. This is detailed in the following parts.
.. _topologies:
Communicators and tasks
-----------------------
Please check :ref:`mpi_utils` for communicators, tasks and other mpi-related definitions.
When created, a domain is automatically associated with a communicator, and each process of the communicator
with a task. The default task is :class:`~hysop.constants.DEFAULT_TASK_ID` for all processes, and the default communicator
is main_comm::
from hysop import Box
from hysop.mpi import main_comm, main_size
from hysop.constants import DEFAULT_TASK_ID
dom = Box()
assert dom.comm_task == main_comm
assert dom.tasks_list() == [DEFAULT_TASK_ID, ] * main_size
A process can be assigned to one and only one task, and dom.comm_task is the communicator associated with the task
of the current process.
Ok, let us now assume that we need to define three different tasks, in a simulation run with 8 mpi processes.
The idea is to bind processes 0, 4, 5 and 6 to task 'red', 1, 2, 3 to task 'green' and 7 to task 'yellow', as
shown in the figure below:
.. image:: /figures/comm_split.*
To do so, one needs to set the optional argument 'proc_tasks' of the domain, with a list which maps
process ranks to task ids. For instance, proc_tasks[4] = 12 means that the process of rank 4 (in main_comm!)
is attached to task number 12.
Try the following program with 8 processes to check the result::
from hysop import Box
from hysop.mpi import main_comm, main_size, main_rank
from hysop.constants import DEFAULT_TASK_ID
RED = 4
GREEN = 1
YELLOW = 12
proc_tasks = [RED, ] * main_size
proc_tasks[1:4] = [GREEN, ] * 3
proc_tasks[7] = YELLOW
dom = Box(proc_tasks=proc_tasks)
assert dom.comm_task != main_comm
print 'process of rank ', main_rank, ' with task ', dom.current_task()
print 'rank/main_comm = ', main_rank, ' rank/comm_task', dom.comm_task.Get_rank()
Important remarks:

* dom.comm_task refers to a different object depending on which process you are on,
* the rank of a process in main_comm may differ from its rank in comm_task.

Some useful methods, illustrated in the sketch below:

* dom.current_task() : returns the task id of the current process
* dom.tasks_in_proc(i) : returns the task id of the process of rank i
* dom.is_on_task(tid) : returns True if the current process belongs to task tid
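For illustration, here is a minimal sketch reusing the RED/GREEN/YELLOW mapping above. It must be run with 8 processes, like the previous example, and the printed messages are only examples::

from hysop import Box
from hysop.mpi import main_rank, main_size
RED, GREEN, YELLOW = 4, 1, 12
proc_tasks = [RED, ] * main_size
proc_tasks[1:4] = [GREEN, ] * 3
proc_tasks[7] = YELLOW
dom = Box(proc_tasks=proc_tasks)
# task id of the current process
print 'task of process ', main_rank, ' : ', dom.current_task()
# task id of another process, here the process of rank 7
print 'task of process 7 : ', dom.tasks_in_proc(7)
# task-dependent behaviour
if dom.is_on_task(YELLOW):
    print 'process ', main_rank, ' works alone on the yellow task'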
MPI topologies
--------------
.. currentmodule:: hysop.mpi
Domains may be distributed among several mpi processes, to allow parallel processing of the simulation.
MPI cartesian topologies are used to handle the mapping between hysop objects and mpi processes.
For details about the MPI implementation in hysop, check :ref:`mpi_utils`.
The domain is split into N parts, N being the number of processes involved in the current task. MPI processes are arranged in topological patterns, on a one-, two- or three-dimensional grid.
Each process is identified by its rank in the topology and by its coordinates in the grid. The grid dimension may be any number between 1 and the dimension of the domain. The figure below shows an example with 4 mpi processes distributed on a 2-dimensional grid. The physical domain is on the left and each process on the right owns its own sub-domain. For example, the process of rank 2 (P2) handles subdomain C and has coordinates (1,0) in the grid.
.. image:: /figures/decomp_domain.*
Moreover, each process knows its neighbours in all directions.
Basic usage
^^^^^^^^^^^
:class:`topology.Cartesian` objects are used to describe this mpi grid layout (we'll see later that they also handle the local space discretisation, i.e. some meshes local to each process).
Creating a new mpi process distribution is quite simple in HySoP and must be managed through the domain on which
the topology will be defined, thanks to the method :meth:`~hysop.domain.domain.Domain.create_topology`.
By default mpi can find the 'best' process distribution, depending on how many processes are available, on
the dimension of the space and on the way data are stored in memory (C or Fortran order). So the creation routine may be called very simply::
from hysop import Box, Discretization
# 2d domain
dom = Box(length=[1., 1.])
d2d = Discretization([65, 65])
# 2d grid of MPI processes
topo = dom.create_topology(d2d)
Such a code executed with 4 mpi processes will create a topology similar to the one described in the figure above.
Notice the 'd2d' argument, used to specify the space discretisation since, as said before, :class:`topology.Cartesian` objects are also used to manage the data distribution on some local meshes. Just ignore this d2d for the moment, we will come back to it in the next part.
The standard methods of a topology are::
# number of processes in the topology
print topo.size
# rank of the current process in the topology
print topo.rank
# ranks of the neighbours of the current process
# in direction d (0 <= d < dimension of the topology)
d = 1
print topo.neighbours[:, d]
# id of the task owning this topology
print topo.task_id()
Advanced usage
^^^^^^^^^^^^^^
In addition to d2d, different arguments are available to create any kind of topology, according to your needs. All of them are detailed in the example below::
# 3d domain
dom = Box()
d3d = Discretization([65, 65, 65])
# let mpi choose the best distribution
topo_0 = dom.create_topology(d3d)
# choose the dimension of the topology. dim = 1, 2 or 3
topo_1 = dom.create_topology(d3d, dim=2)
# choose the shape, i.e how many mpi process in each direction
topo_2 = dom.create_topology(d3d, shape=[3, 1, 2])
# choose which direction must be distributed
topo_3 = dom.create_topology(d3d, cutdir=[True, False, True])
*Remarks:*

* shape, cutdir, periodicity: you must specify a value for each direction of the space, whatever the dimension of the required topology. For example, topo_2 will be a 2d mpi topology but its shape has length 3.
For shape, the number of processes in the mpi grid must correspond to the number of processes in dom.comm_task. The example above must be executed with 6 processes.
* shape, cutdir and dim are exclusive: choose one and only one among them.
By default, topologies are periodic in each direction. Use isperiodic argument to customize this::
dom = Box(dimension=2)
d2d = Discretization([33,33])
#
topo_4 = dom.create_topology(d2d, shape=[3, 2], isperiodic=[False, False])
topo_5 = dom.create_topology(d2d, shape=[3, 2], isperiodic=[False, True])
When executed with 6 mpi processes, topo_4 corresponds to the left grid and topo_5 to the right grid in the figure below
.. image:: /figures/periodic_topo.*
Periodic topologies may be useful to work with periodic domains or data.
Keep in mind that each process has its own memory space and must use 'message passing' to access other processes' data.
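As a minimal illustration of this message-passing requirement, and assuming dom.comm_task behaves like a standard mpi4py communicator (an assumption, not something guaranteed by this page), each process can exchange a value with another rank of its task explicitly::

from hysop import Box
dom = Box(dimension=2)
comm = dom.comm_task
rank = comm.Get_rank()
size = comm.Get_size()
# each process sends its rank to the 'next' process and receives
# the rank of the 'previous' one
received = comm.sendrecv(rank, dest=(rank + 1) % size, source=(rank - 1) % size)
print 'process ', rank, ' received ', received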
Use a predefined mpi communicator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases, it may be necessary to use a previously built mpi communicator (coming from another software, from fortran subroutines, ...)::
dom = Box(dimension=2)
d2d = Discretization([33,33])
topo_4 = dom.create_topology(d2d, cartesian_topology=another_comm)
# another_comm being previously defined
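For example, a cartesian communicator built directly with mpi4py could play the role of another_comm. This is only a sketch: it assumes main_comm is an mpi4py communicator, must be run with 6 processes to match the 3x2 grid, and does not check any compatibility constraints with the discretization::

from hysop import Box, Discretization
from hysop.mpi import main_comm
dom = Box(dimension=2)
d2d = Discretization([33, 33])
# build a 3x2 periodic cartesian communicator by hand
another_comm = main_comm.Create_cart(dims=[3, 2], periods=[True, True], reorder=False)
topo = dom.create_topology(d2d, cartesian_topology=another_comm)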
Space discretization
====================

.. currentmodule:: hysop.domain

Since domain and data may be distributed among mpi processes, we must distinguish two 'levels' of discretization:
* **global** : to set the resolution of a mesh on the whole physical domain, thanks to :class:`~hysop.tools.parameters.Discretization`. This does not depend on the number of mpi processes. This global resolution is just a modelling parameter; no data or array will be allocated using this resolution. Everything concerning memory is done at process level.
* **local** : all the parameters of the meshes on a given mpi process, i.e. the discretization of a sub-domain.

To summarize, you must choose a global discretisation for each topology, from which some local discretisations will be computed, depending on the number of processes and on the boundary conditions.
Global discretisation
---------------------
For the moment, only regular grids are available in HySoP, defined with the class :class:`~hysop.tools.parameters.Discretization`::
from hysop import Discretization
d2d = Discretization([65, 29])
d3d = Discretization([129, 33, 257])
d3dg = Discretization([129, 33, 257], ghosts=[1, 2, 1])
The two arguments are the resolution of the grid, i.e. the number of points (not cells!) in each direction, and the number of points
in the ghost layer in each direction. The resolution argument is mandatory, while ghosts is set to zero by default.
This is a 'global' description of the required discretization. What really happens in memory for fields discretized using these objects strongly depends on the number of mpi processes used for the simulation and on the boundary condition types. This will be discussed later.
Local grids
-----------
When creating a topology, a global discretization parameter must be provided.
From this discretization and the topology parameters (mpi grid layout), local
grids are computed and saved in the attribute 'mesh' of the topology::
from hysop import Box, Discretization
d2d = Discretization([65, 65])
dom = Box(dimension=2)
topo = dom.create_topology(d2d)
print topo.mesh
To compute the local resolution, the global resolution in each direction is divided by the number of mpi processes in this direction. The remaining points are appended to the last process of the considered direction. Then, if requested, some ghost points are added on the local boundaries. The definition and utility of ghost points are detailed thereafter.
To clarify notations, an example is detailed in the figure below. A periodic 1d domain of length Lx, starting at point x0, is considered. The chosen discretization has 11 points, as shown in the top part of the figure. The points are numbered from 0 to 10, which corresponds to their **global index**. Creating a topology based on this discretization, with 2 points in the ghost layer, and executing it on 3 mpi processes results in the local meshes at the bottom of the figure. Pn stands for the process of rank n in the 1d topology.
.. image:: /figures/data_discr.*
We distinguish **computational** points, in black on the figure, from ghost points, in red. The former are 'real' grid points where computation will occur; they correspond to the points of the original (global) grid. Ghost points are some extra points, a kind of fake boundary, local to each process.
Points are identified by their index, either local (numbers in green) or relative to the global mesh (numbers in black).
Important remark:

* when the domain is periodic, the last point in each direction is ignored on local meshes, to save memory. This point is either in the ghost layer or not present at all if the ghost layer size equals 0. Keep this in mind when choosing the values of the discretization parameter. A common usage consists in choosing a number of points which is a power of two, plus 1, to fit with fftw requirements (see the worked example below).
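Here is a small plain-python sketch of that rule, applied to the 1d example of the figure (11 points, periodic domain, 3 processes, 2 ghost points). It only mimics the arithmetic described above, not the actual hysop implementation::

global_resolution = 11
nb_ghosts = 2
nb_procs = 3
# periodic domain: the last point is ignored on local meshes
nb_computational = global_resolution - 1
# divide among processes, remaining points go to the last one
base = nb_computational // nb_procs
local_computational = [base] * nb_procs
local_computational[-1] += nb_computational % nb_procs
# add the ghost layer on both sides of each local mesh
local_resolution = [n + 2 * nb_ghosts for n in local_computational]
print local_computational   # [3, 3, 4]
print local_resolution      # [7, 7, 8]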
To conclude, here is a list of the most useful attributes of mesh class::
from hysop import Box, Discretization
d1d = Discretization([11], ghosts=[2])
dom = Box(dimension=1)
topo = dom.create_topology(d1d)
mesh = topo.mesh
# local resolution
print mesh.resolution
# local resolution excluding ghost points
print mesh.compute_resolution
# coordinates of the origin of the local mesh
print mesh.origin
# list of local indices of computational points
print mesh.compute_index
# list of local indices of all points
print mesh.local_index
# coordinates of local points
print mesh.coords
# coordinates of computational points
print mesh.compute_coords
# global index of the first point of the local mesh
print mesh.position
# space step
print mesh.space_step
compute_index returns a list of python slices. For instance, in the example of the figure above, on process 1, compute_index is equal to [slice(2, 5)], which means that the first computational point has local index 2 and the last one local index 4. This attribute can be used to index a numpy array::
import hysop.tools.numpywrapper as npw
# init some arrays.
a = npw.zeros(topo.mesh.resolution)
# set value only for computational points:
a[mesh.compute_index] = 3.
coords and compute_coords return a tuple, with coordinate values in each direction. It can be used as an argument of a function, like this::
from hysop import Box, Discretization
import numpy as np
def func(x, y):
return np.cos(x) + np.sin(y)
d2d = Discretization([11, 11], ghosts=[2, 2])
dom = Box(dimension=2)
topo = dom.create_topology(d2d)
# apply 'func' for (x,y) of each computational point of topo
# on the current process
res = func(*topo.mesh.compute_coords)
# res is an array of shape topo.mesh.compute_resolution
Ghost points
^^^^^^^^^^^^
Ghost points are a very common trick to manage the behavior of numerical methods at the boundaries of local (distributed) subdomains.
Consider the left-hand figure below, where for some numerical method with a 5-point stencil, one needs to compute some value at the point of index 4. To do so, the values of two neighbouring points in each direction are required.
If the grid is distributed without ghost points, the values of the points of global indices 6 and 7 will be held by process 2 while the computation for point 5 occurs on process 1. So, some 'false' boundaries are added, the ghost points, in red on the figure. That means that the values of points 6 and 7 are duplicated on processes 1 and 2. On the former, these points are ghost points, while on process 2 they are computational points.
.. image:: /figures/ghost_points.*
For a time step, the algorithm will then be more or less:

* Update ghost points: each process receives values from its neighbours into its ghost points and sends values to the ghost points of its neighbours.
* Compute: each process applies the numerical scheme to all of its computational points.

When required, the update step is handled by the operator and thus hidden from the end user.
But, if needed, use :class:`~hysop.numerics.update_ghosts.UpdateGhosts` to explicitly update
ghost points::

from hysop import Box, Discretization
from hysop.numerics.update_ghosts import UpdateGhosts
import hysop.tools.numpywrapper as npw
box = Box(dimension=1)
topo = box.create_topology(Discretization([65], [2]))
# number of numpy arrays to be updated
nb = 2
# create the 'updater' for two arrays
synchronize = UpdateGhosts(topo, nb)
# init some arrays.
a = npw.zeros(topo.mesh.resolution)
b = npw.zeros(topo.mesh.resolution)
a[...] = topo.rank
b[...] = topo.rank * 3.
# update ghost points
synchronize.apply([a, b])

A few remarks:

* the number of numpy arrays to be synchronized must be set at init and must correspond to the length of
the list argument of the apply method,
* arrays shapes must fit with the topology local resolution,
* ghost points imply mpi communications and increase the memory footprint of the numerical method. Therefore
they must be used only when required.
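To make the role of the ghost layer more concrete, here is a sketch using plain numpy slicing, reusing topo, a, b and synchronize from the snippet above. The stencil and its coefficients are purely illustrative, not a hysop scheme::

import numpy as np
g = 2  # ghost layer width, consistent with Discretization([65], [2]) above
# refresh ghost values before applying the stencil
synchronize.apply([a, b])
result = np.zeros_like(a)
# centered 5-point stencil applied to computational points only; the
# coefficients are those of a standard second-derivative stencil and the
# 1/space_step**2 factor is omitted on purpose
result[g:-g] = (-a[4:] + 16. * a[3:-1] - 30. * a[2:-2] + 16. * a[1:-3] - a[:-4]) / 12.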
.. _odesolvers:
ODE solvers
===========
.. currentmodule:: hysop.numerics
Consider the following problem:
.. math::
\frac{d y(x,t)}{dt} &= rhs(t, y(x,t)) \\
y(x, t_0) &= y_0
with y and rhs some vector fields.
The following time integrators are implemented in hysop:
* forward Euler, :class:`~hysop.numerics.odesolvers.Euler`,
* forward Runge-Kutta of order 2, 3 and 4 : :class:`~hysop.numerics.odesolvers.RK2`, :class:`~hysop.numerics.odesolvers.RK3`, :class:`~hysop.numerics.odesolvers.RK4`
Usage
-----
::
from hysop.numerics.odesolvers import Euler
# Build
ti = Euler(nb_components=2, topo=topo, f=func)
# Solve
result = ti(t, y, dt, result)
To build a time integrator, it is necessary to define a python function to compute the right-hand side.
This function must be of the following form::
def some_name(t, y, work):
work[0][...] = ...
work[1][...] = ...
...
return work
with y and work some lists of numpy arrays of length nb_components, nb_components being the number of
components of the right-hand side (it is also the value passed to the integrator constructor to set the number of components of the rhs).
y must not be modified by the function, work must contain the result of the computation, and t is the time.
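Putting the pieces together, here is a hedged sketch of a complete call sequence. It only relies on the constructor and call signatures shown above, and assumes that the components of y and result are arrays of shape topo.mesh.resolution; the integrator may require additional arguments (e.g. user work arrays) that are not covered here::

from hysop import Box, Discretization
from hysop.numerics.odesolvers import Euler
import hysop.tools.numpywrapper as npw
def rhs(t, y, work):
    # a simple rotation: d(y0)/dt = -y1, d(y1)/dt = y0
    work[0][...] = -y[1]
    work[1][...] = y[0]
    return work
dom = Box(dimension=2)
topo = dom.create_topology(Discretization([33, 33]))
# lists of arrays, one per component of the rhs
y = [npw.zeros(topo.mesh.resolution) for _ in range(2)]
result = [npw.zeros(topo.mesh.resolution) for _ in range(2)]
y[0][...] = 1.
# build the integrator and perform one step of size dt
ti = Euler(nb_components=2, topo=topo, f=rhs)
t, dt = 0., 0.01
result = ti(t, y, dt, result)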
Data transfers between topologies
=================================
A continuous field may be associated with different topologies (in the sense of different space discretizations and/or mpi process distributions),
depending on which operators this field is involved in. Consider for example a problem with stretching and Poisson operators, in 3 dimensions, with the following sequence:
* vorticity = stretching(vorticity, velocity), with a topology 'topo_s'
* velocity = poisson(vorticity), with a topology 'topo_p'
For the first operator (stretching), the best data distribution will be a 3d mpi process grid, with a space discretization including ghost points (to fit with finite difference scheme requirements), while if Poisson is solved with fft, a 1d or 2d mpi process grid will be required, and ghost points are not necessary.
Therefore, fields present in both operators will be associated with two different topologies. The vorticity output from stretching will be used as input in Poisson. Since data distribution and local meshes won't be the same for stretching and poisson, vorticity data need to be redistributed from topo_s to topo_p. This is the role
of the :class:`~hysop.operator.redistribute.Redistribute` operator.
The correct sequence to solve the problem will be:
* vorticity = stretching(vorticity, velocity), with a topology 'topo_s'
* redistribute(vorticity, topo_s, topo_p)
* velocity = poisson(vorticity), with a topology 'topo_p'
The second step means 'update the values of vorticity on topo_p with its values on topo_s, for all mpi processes
involved in topo_s and topo_p'.
Three kinds of redistribution are available in HySoP:

* :class:`~hysop.operator.redistribute.RedistributeIntra` : for topologies/operators defined on
the same mpi communicator,
* :class:`~hysop.operator.redistribute.RedistributeInter` : for topologies/operators defined on
two different communicators (with no intersection between their sets of mpi processes),
* :class:`~hysop.operator.redistribute.RedistributeOverlap` : for topologies/operators defined
on two different communicators that may have some common processes.
In addition to the standard operator arguments (see :ref:`operators`), a
redistribute operator needs a 'source' and a 'target'. Source and target might be
either a :class:`~hysop.mpi.topology.Cartesian` or a :class:`~hysop.operator.computational.Computational`.
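As an illustration only, the stretching/Poisson example above might be set up along the following lines. The Field constructor and the 'variables' keyword are assumptions (borrowed from the standard operator arguments), not an API verified on this page::

from hysop import Box, Discretization, Field
from hysop.operator.redistribute import RedistributeIntra
dom = Box()
# a 3d mpi grid with ghost points for stretching, a 2d grid for poisson
topo_s = dom.create_topology(Discretization([65, 65, 65], ghosts=[2, 2, 2]))
topo_p = dom.create_topology(Discretization([65, 65, 65]), dim=2)
# 'Field' and the 'variables' keyword below are assumptions, see the lead-in
vorticity = Field(domain=dom, name='vorticity', is_vector=True)
redistribute = RedistributeIntra(source=topo_s, target=topo_p, variables=[vorticity])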
Many different topologies can coexist for a single problem.
Each operator is defined on a specific topology,
which means that its fields are distributed through mpi processes
and defined on local grids.
Subsets and obstacles
=====================

.. currentmodule:: hysop.domain

The class :class:`subsets.Subset` and its heirs allow the user to define some subsets
of a predefined domain. This may be useful to:

* define specific area(s) where an operator applies (penalisation for example),
* reduce the size of i/o by writing hdf output only for a specific area,
* define obstacles inside the flow.

Porous obstacles
----------------

:class:`porous.BiPole`

.. image:: /figures/PolesExample.png
:width: 400pt

:class:`porous.QuadriPole`

.. image:: /figures/QuadriPoleExample.png
:width: 400pt