

MPIbuf

MPIbuf is derived from pack_t, so arbitrary objects can be placed into an MPIbuf in exactly the same way as into a pack_t. If the HETERO preprocessor symbol is defined, MPIbuf is instead derived from xdr_pack, so MPIbufs can safely be used on a heterogeneous cluster, obviating the need for the MPI compound type mechanism (the MPI_Type* series of functions).

Specification:

 class MPIbuf: public pack_t
{
public:
  MPI_Comm Communicator;
  int myid();   /* utility functions returning rank and number in */
  int nprocs(); /* current communicator */
  bool const_buffer;  /* in send_recv, all messages of same length */
  int proc, tag; /* store status of receives */

  MPIbuf(): pack_t() {Communicator=MPI_COMM_WORLD; const_buffer=false;}

  bool sent(); //has asynchronous message been sent?
  void wait(); //wait for asynchronous message to be sent

  void send(int proc, int tag);
  void isend(int proc, int tag); //asynchronous send

  MPIbuf& get(int p=MPI_ANY_SOURCE, int t=MPI_ANY_TAG);
  void send_recv(int dest, int sendtag, 
                 int source=MPI_ANY_SOURCE, int recvtag=MPI_ANY_TAG);
  void bcast(int root);

  MPIbuf& gather(int root);
  MPIbuf& scatter(int root); 

  MPIbuf& reset();
  bool msg_waiting(int source=MPI_ANY_SOURCE, int tag=MPI_ANY_TAG);
};

The simplest additional operations are send and get. send sends the buffer contents to the nominated processor, with the nominated message tag, and clears the buffer. get receives the next message into the buffer; if a processor or tag is specified, reception is restricted to messages that match. get returns *this, so a message can be unpacked on one line, e.g.:

buffer.get() >> x >> y;

get places the source and message tag of the received message in proc and tag.
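A minimal round trip could be sketched as follows. This assumes the API is made available through a classdescMP header (the header name here is a guess; adjust to your installation):

```cpp
#include <classdescMP.h>  // assumed header name for the MPIbuf API
#include <iostream>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  {
    MPIbuf buf;                      // uses MPI_COMM_WORLD by default
    if (buf.myid() == 0)
      for (int p = 1; p < buf.nprocs(); ++p)
        {
          buf.reset() << 3.14 << p;  // pack a double and an int
          buf.send(p, /*tag=*/0);    // send clears the buffer
        }
    else
      {
        double x; int y;
        buf.get() >> x >> y;         // receive and unpack in one line
        std::cout << "from rank " << buf.proc
                  << ", tag " << buf.tag << "\n";
      }
  }                                  // buf destroyed before MPI_Finalize
  MPI_Finalize();
  return 0;
}
```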

send_recv performs a simultaneous send and receive, sending the buffer to the nominated destination with the nominated tag. If the flag const_buffer is set, all messages must be of equal length; this avoids the need to send the message sizes first.
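For example, a ring shift, in which each rank passes a value to its right-hand neighbour, could be sketched as follows (assuming the received message replaces the buffer contents, ready for unpacking, as with get):

```cpp
MPIbuf buf;
int right = (buf.myid() + 1) % buf.nprocs();
buf.const_buffer = true;              // every message packs a single int
buf << buf.myid();
buf.send_recv(right, /*sendtag=*/0);  // receive from any source by default
int leftNeighbour;
buf >> leftNeighbour;                 // buf.proc records the actual sender
```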

bcast performs a broadcast.
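For example, distributing a parameter from the root might look like this sketch (only the non-root ranks need to unpack):

```cpp
MPIbuf buf;
double timestep = 0;
if (buf.myid() == 0)
  {
    timestep = 0.01;     // value known only on the root
    buf << timestep;
  }
buf.bcast(0);            // root's buffer contents now on every rank
if (buf.myid() != 0)
  buf >> timestep;       // all ranks now agree on the value
```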

gather concatenates the MPIbufs from all nodes onto the MPIbuf on the root node. If const_buffer is set, the more efficient MPI_Gather is used; otherwise the buffer sizes are gathered first, and MPI_Gatherv is used.
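A sketch of gathering one value per rank onto the root (local_work is a hypothetical placeholder, and the gathered data is assumed to unpack in rank order):

```cpp
MPIbuf buf;
double partial = local_work();   // hypothetical per-rank result
buf << partial;
if (buf.gather(0).myid() == 0)   // gather returns *this
  {
    double total = 0, s;
    for (int p = 0; p < buf.nprocs(); ++p)
      {
        buf >> s;                // one contribution per rank
        total += s;
      }
  }
```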

scatter scatters an MPIbuf from the root node to all the nodes. The data destined for each node must be separated by mark objects, as in:

cmd << A << mark() << B << mark(); cmd.scatter(0);

Again, if the data to be scattered is of identical size for each node, set const_buffer, and the more efficient MPI_Scatter will be employed instead of MPI_Scatterv.
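Putting both sides together, a sketch in which the root sends one integer to each rank:

```cpp
MPIbuf cmd;
if (cmd.myid() == 0)
  for (int p = 0; p < cmd.nprocs(); ++p)
    cmd << 10 * p << mark();  // one mark-delimited chunk per rank
cmd.scatter(0);
int mine;
cmd >> mine;                  // each rank unpacks its own chunk
```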

By default, all operations take place in the MPI_COMM_WORLD communicator. This behaviour can be changed by assigning a different communicator to MPIbuf::Communicator.
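For example, to restrict operations to a subcommunicator, a sketch using the standard MPI_Comm_split call:

```cpp
MPIbuf buf;             // still on MPI_COMM_WORLD at this point
MPI_Comm sub;
// split the ranks into two groups by parity; key 0 keeps relative order
MPI_Comm_split(MPI_COMM_WORLD, buf.myid() % 2, 0, &sub);
buf.Communicator = sub; // subsequent operations use sub, not MPI_COMM_WORLD
buf.bcast(0);           // root now means rank 0 of sub
```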

Messages can be sent asynchronously using isend(). sent() can be used to test whether the message has been sent, and wait() can be used to stall until it has. wait() is always called before the MPIbuf object is destroyed.
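For example, overlapping communication with computation (do_other_work and result are hypothetical placeholders):

```cpp
MPIbuf buf;
buf << result;            // pack the data to be shipped
buf.isend(1, /*tag=*/5);  // returns immediately
do_other_work();          // overlap computation with the transfer
if (!buf.sent())
  buf.wait();             // block until the send completes
// the destructor calls wait() anyway, so the buffer is never
// destroyed while a send is still in flight
```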

Russell Standish 2016-09-02