Public Member Functions

Vector ()
Vector (const MPI_Comm &communicator, const unsigned int n, const unsigned int local_size)
template<typename Number> Vector (const MPI_Comm &communicator, const ::Vector< Number > &v, const unsigned int local_size)
Vector (const MPI_Comm &communicator, const VectorBase &v, const unsigned int local_size)
Vector & operator= (const Vector &v)
Vector & operator= (const PETScWrappers::Vector &v)
Vector & operator= (const PetscScalar s)
template<typename number> Vector & operator= (const ::Vector< number > &v)
void reinit (const MPI_Comm &communicator, const unsigned int N, const unsigned int local_size, const bool fast=false)
void reinit (const Vector &v, const bool fast=false)
const MPI_Comm & get_mpi_communicator () const

Protected Member Functions

virtual void create_vector (const unsigned int n, const unsigned int local_size)

Private Attributes

MPI_Comm communicator

Related Functions

(Note that these are not member functions.)
void swap (Vector &u, Vector &v)
The parallel functionality of PETSc is built on top of the Message Passing Interface (MPI). MPI's communication model is built on collective communications: if one process wants something from another, that other process has to be willing to accept this communication. A process cannot query data from another process by calling a remote function, without that other process expecting such a transaction. The consequence is that most of the operations in the base class of this class have to be called collectively. For example, if you want to compute the l2 norm of a parallel vector, all processes across which this vector is shared have to call the l2_norm function. If you instead call it on only one process, then the following happens: this one process will call one of the collective MPI functions and wait for all the other processes to join in. Since the other processes never call this function, you will either get a time-out on the first process, or, worse, by the time the next call to a PETSc function generates an MPI message on the other processes, you will get a cryptic error that only a subset of processes attempted a communication. These bugs can be very hard to figure out, unless you are well acquainted with the communication model of MPI and know which functions may generate MPI messages.
One particular case where an MPI message may be generated unexpectedly is discussed below.
PETSc does allow read access to individual elements of a vector, but in the distributed case only to elements that are stored locally. We implement this through calls like d=vec(i);. However, if you access an element outside the locally stored range, an exception is generated.
In contrast to read access, PETSc (and the respective deal.II wrapper classes) allows writing (or adding) to individual elements of vectors, even if they are stored on a different process. You can do this by writing, for example, vec(i)=d or vec(i)+=d, or similar operations. There is one catch, however, that may lead to very confusing error messages: PETSc requires application programs to call the compress() function when they switch from adding to elements to writing to elements. The reasoning is that all processes might accumulate addition operations to elements, even if multiple processes write to the same elements. By the time we call compress() the next time, all these additions are executed. However, if one process adds to an element and another writes to it, the order of execution would yield non-deterministic behavior if we don't make sure that a synchronization with compress() happens in between.
In order to make sure these calls to compress() happen at the appropriate time, the deal.II wrappers keep a state variable that stores which operation is presently allowed: additions or writes. If the library encounters an operation of the opposite kind, it calls compress() and flips the state. This can sometimes lead to very confusing behavior in code that may, for example, look like this:
  PETScWrappers::MPI::Vector vector;
  ...
  // do some write operations on the vector
  for (unsigned int i=0; i<vector.size(); ++i)
    vector(i) = i;

  // do some additions to vector elements, but
  // only for some elements
  for (unsigned int i=0; i<vector.size(); ++i)
    if (some_condition(i) == true)
      vector(i) += 1;

  // do another collective operation
  const double norm = vector.l2_norm();
This code can run into trouble: by the time we see the first addition operation, we need to flush the write buffers for the vector, and the deal.II library will do so by calling compress(). However, it will only do so on those processes that actually perform an addition -- if the condition is never true on one of the processes, then that one will never reach the compress() call, whereas all the other ones do. This gets us into trouble, since all the other processes hang in the call that flushes the write buffers, while the remaining process advances to the call that computes the l2 norm. At this point you will get an error that some operation was attempted by only a subset of processes. This behavior may seem surprising, unless you know that write/addition operations on single elements may trigger this behavior.
The problem described here may be avoided by placing additional calls to compress(), or by making sure that all processes do the same type of operations at the same time, for example by performing zero additions if necessary.
PETScWrappers::MPI::Vector::Vector ()
Default constructor. Initialize the vector as empty.
PETScWrappers::MPI::Vector::Vector (const MPI_Comm &communicator, const unsigned int n, const unsigned int local_size) [explicit]

Constructor. Set dimension to n and initialize all elements with zero. local_size denotes how many of the n values shall be stored locally on the present process.

The constructor is made explicit to avoid accidents like this: v=0;. Presumably, the user wants to set every element of the vector to zero, but instead, what happens is this call: v=Vector<number>(0);, i.e. the vector is replaced by one of length zero.
template<typename Number> PETScWrappers::MPI::Vector::Vector (const MPI_Comm &communicator, const ::Vector< Number > &v, const unsigned int local_size) [inline, explicit]
Copy-constructor from deal.II vectors. Sets the dimension to that of the given vector, and copies all elements.
PETScWrappers::MPI::Vector::Vector (const MPI_Comm &communicator, const VectorBase &v, const unsigned int local_size) [explicit]
Copy-constructor that copies the values from a PETSc wrapper vector class.
Vector& PETScWrappers::MPI::Vector::operator= (const Vector &v)

Copy the given vector. Resize the present vector if necessary. Also take over the MPI communicator of v.
Vector& PETScWrappers::MPI::Vector::operator= (const PETScWrappers::Vector &v)
Copy the given sequential (non-distributed) vector into the present parallel vector. It is assumed that they have the same size, and this operation does not change the partitioning of the parallel vector, i.e. the way its elements are distributed across several MPI processes. What this operation therefore does is copy that chunk of the given vector v that corresponds to the elements of the target vector that are stored locally. Elements that are not stored locally are not touched.
This being a parallel vector, you must make sure that all processes call this function at the same time. It is not possible to change the local part of a parallel vector on only one process, independent of what other processes do, with this function.
Vector& PETScWrappers::MPI::Vector::operator= (const PetscScalar s)
Set all components of the vector to the given number s. Simply pass this down to the base class, but we still need to declare this function to make the example given in the discussion about making the constructor explicit work.
Reimplemented from PETScWrappers::VectorBase.
template<typename number> Vector& PETScWrappers::MPI::Vector::operator= (const ::Vector< number > &v) [inline]
Copy the values of a deal.II vector (as opposed to those of the PETSc vector wrapper class) into this object.
Contrary to the case of sequential vectors, this operator requires that the present vector already has the correct size, since we need to have a partition and a communicator present, which we otherwise can't get from the source vector.
void PETScWrappers::MPI::Vector::reinit (const MPI_Comm &communicator, const unsigned int N, const unsigned int local_size, const bool fast=false)
Change the dimension of the vector to N. It is unspecified how resizing the vector affects the memory allocation of this object; i.e., it is not guaranteed that resizing it to a smaller size actually also reduces memory consumption, or if for efficiency the same amount of memory is used for less data.

local_size denotes how many of the N values shall be stored locally on the present process.

communicator denotes the MPI communicator henceforth to be used for this vector.

If fast is false, the vector is filled with zeros. Otherwise, the elements are left in an unspecified state.
void PETScWrappers::MPI::Vector::reinit (const Vector &v, const bool fast=false)

Change the dimension to that of the vector v, and also take over the partitioning into local sizes as well as the MPI communicator. The same applies as for the other reinit function.

The elements of v are not copied, i.e. this function is the same as calling reinit(v.size(), v.local_size(), fast).
const MPI_Comm& PETScWrappers::MPI::Vector::get_mpi_communicator () const
Return a reference to the MPI communicator object in use with this vector.
virtual void PETScWrappers::MPI::Vector::create_vector (const unsigned int n, const unsigned int local_size) [protected, virtual]
Create a vector of length n. For this class, we create a parallel vector. n denotes the total size of the vector to be created. local_size denotes how many of these elements shall be stored locally.
MPI_Comm PETScWrappers::MPI::Vector::communicator [private]
Copy of the communicator object to be used for this parallel vector.