MPI_Ialltoallv man page on Cygwin

MPI_Alltoallv(3)		   Open MPI		      MPI_Alltoallv(3)

NAME
       MPI_Alltoallv, MPI_Ialltoallv - All processes send  different  amounts
       of data to, and receive different amounts of data from, all processes

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Alltoallv(const void *sendbuf, const int sendcounts[],
	    const int sdispls[], MPI_Datatype sendtype,
	    void *recvbuf, const int recvcounts[],
	    const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm)

       int MPI_Ialltoallv(const void *sendbuf, const int sendcounts[],
	    const int sdispls[], MPI_Datatype sendtype,
	    void *recvbuf, const int recvcounts[],
	    const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm,
	    MPI_Request *request)

Fortran Syntax
       INCLUDE 'mpif.h'

       MPI_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
	    RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)

	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNTS(*), SDISPLS(*), SENDTYPE
	    INTEGER   RECVCOUNTS(*), RDISPLS(*), RECVTYPE
	    INTEGER   COMM, IERROR

       MPI_IALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
	    RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, REQUEST, COMM, IERROR)

	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNTS(*), SDISPLS(*), SENDTYPE
	    INTEGER   RECVCOUNTS(*), RDISPLS(*), RECVTYPE
	    INTEGER   COMM, REQUEST, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Alltoallv(const void* sendbuf,
	    const int sendcounts[], const int displs[],
	    const MPI::Datatype& sendtype, void* recvbuf,
	    const int recvcounts[], const int rdispls[],
	    const MPI::Datatype& recvtype)

Java Syntax
       import mpi.*;
       void MPI.COMM_WORLD.Alltoallv(Object sendbuf, int sendoffset, int sendcount[],
				     int sdispls[], MPI.Datatype sendtype,
				     Object recvbuf, int recvoffset, int recvcount[],
				     int rdispls[], MPI.Datatype recvtype)

INPUT PARAMETERS
       sendbuf	   Starting address of send buffer.

       sendoffset  Initial number of elements to skip at beginning  of	buffer
		   (integer, Java-only).

       sendcounts  Integer  array,  where entry i specifies the number of ele‐
		   ments to send to rank i.

       sdispls	   Integer array, where entry  i  specifies  the  displacement
		   (offset  from  sendbuf, in units of sendtype) from which to
		   send data to rank i.

       sendtype	   Datatype of send buffer elements.

       recvoffset  Initial number of elements to skip at beginning  of	buffer
		   (integer, Java-only).

       recvcounts  Integer  array,  where entry j specifies the number of ele‐
		   ments to receive from rank j.

       rdispls	   Integer array, where entry  j  specifies  the  displacement
		   (offset  from  recvbuf, in units of recvtype) to which data
		   from rank j should be written.

       recvtype	   Datatype of receive buffer elements.

       comm	   Communicator over which data is to be exchanged.

OUTPUT PARAMETERS
       recvbuf	   Address of receive buffer.

       request	   Request (handle, non-blocking only).

       IERROR	   Fortran only: Error status.

DESCRIPTION
       MPI_Alltoallv is a generalized collective operation in which  all  pro‐
       cesses  send data to and receive data from all other processes. It adds
       flexibility to MPI_Alltoall by allowing the user	 to  specify  data  to
       send  and  receive vector-style (via a displacement and element count).
       The operation of this routine can be thought of as follows, where  each
       process	performs  2n  (n being the number of processes in communicator
       comm) independent point-to-point communications	(including  communica‐
       tion with itself).

	    MPI_Comm_size(comm, &n);
	    for (i = 0; i < n; i++)
		MPI_Send(sendbuf + sdispls[i] * extent(sendtype),
		    sendcounts[i], sendtype, i, ..., comm);
	    for (i = 0; i < n; i++)
		MPI_Recv(recvbuf + rdispls[i] * extent(recvtype),
		    recvcounts[i], recvtype, i, ..., comm);

       Process j sends the k-th block of its local sendbuf to process k, which
       places the data in the j-th block of its local recvbuf.

       When a pair of processes exchanges data, each may pass  different  ele‐
       ment  count  and datatype arguments so long as the sender specifies the
       same amount of data to send (in	bytes)	as  the	 receiver  expects  to
       receive.

       Note  that  process  i may send a different amount of data to process j
       than it receives from process j. Also, a process may send entirely dif‐
       ferent amounts of data to different processes in the communicator.
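
       The count and displacement arguments are ordinary C arrays; a common
       way to lay the blocks out back-to-back is to build each displacement
       array as an exclusive prefix sum of the corresponding counts.  A
       minimal sketch (plain C, no MPI calls; the rank count of 4 and the
       counts chosen are illustrative only):

	    #include <stdio.h>

	    /* displs[i] is the exclusive prefix sum of counts[0..i-1],
	     * so block i starts immediately after block i-1 ends. */
	    static void build_displs(int n, const int counts[], int displs[])
	    {
		displs[0] = 0;
		for (int i = 1; i < n; i++)
		    displs[i] = displs[i - 1] + counts[i - 1];
	    }

	    int main(void)
	    {
		int n = 4;                        /* illustrative rank count */
		int sendcounts[4] = {1, 3, 2, 4};
		int sdispls[4];

		build_displs(n, sendcounts, sdispls);

		for (int i = 0; i < n; i++)
		    printf("block %d starts at element %d, length %d\n",
			   i, sdispls[i], sendcounts[i]);
		/* total send buffer size: sdispls[n-1] + sendcounts[n-1] */
		return 0;
	    }

       The same helper can fill rdispls from recvcounts; the buffers then
       need sdispls[n-1] + sendcounts[n-1] (respectively rdispls[n-1] +
       recvcounts[n-1]) elements.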

       WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR

       When the communicator is an inter-communicator, the exchange operation
       occurs in two phases.  Data is sent from all the members of the  first
       group and received by all the members of the second group.  Then  data
       is sent from all the members of the second group and received  by  all
       the members of the first group.	The  operation	exhibits  a  symmet‐
       ric, full-duplex behavior.

       When the communicator is an intra-communicator, these  groups  are  the
       same, and the operation occurs in a single phase.

USE OF IN-PLACE OPTION
       When  the communicator is an intracommunicator, you can perform an all-
       to-all operation in-place (the output buffer is used as the input  buf‐
       fer).   Use the variable MPI_IN_PLACE as the value of sendbuf.  In this
       case, sendcounts, sdispls, and sendtype are ignored.  The input data of
       each  process  is  assumed  to  be in the area where that process would
       receive its own contribution to the receive buffer.
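
       A plain-C sketch of the in-place semantics (no MPI calls; 3 simulated
       ranks with hypothetical values, one element per block so rdispls[j] ==
       j): before the exchange, block j of rank r's recvbuf holds the data r
       wants to send to rank j; afterwards it holds the data received from
       rank j.

	    #include <stdio.h>

	    #define N 3  /* simulated number of ranks (illustrative) */

	    int main(void)
	    {
		/* buf[r] plays the role of rank r's recvbuf under
		 * MPI_IN_PLACE: entry j initially holds the data rank r
		 * sends to rank j (the hypothetical value 10*r + j). */
		int buf[N][N];
		for (int r = 0; r < N; r++)
		    for (int j = 0; j < N; j++)
			buf[r][j] = 10 * r + j;

		/* The exchange: ranks i and j swap the blocks they hold
		 * for each other.  With one element per block this is a
		 * transpose across the simulated ranks. */
		for (int i = 0; i < N; i++)
		    for (int j = i + 1; j < N; j++) {
			int tmp = buf[i][j];
			buf[i][j] = buf[j][i];
			buf[j][i] = tmp;
		    }

		/* Afterwards, entry j of rank r's buffer holds the data
		 * sent by rank j to rank r, i.e. the value 10*j + r. */
		for (int r = 0; r < N; r++) {
		    for (int j = 0; j < N; j++)
			printf("%2d ", buf[r][j]);
		    printf("\n");
		}
		return 0;
	    }

       Note that each rank's contribution to itself (the diagonal above) is
       left where it already is, which is why sendbuf, sendcounts, sdispls,
       and sendtype can be ignored.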

NOTES
       The specification of counts and	displacements  should  not  cause  any
       location to be written more than once.

       All  arguments  on all processes are significant. The comm argument, in
       particular, must describe the same communicator on all processes.

       The offsets of sdispls and rdispls are measured in  units  of  sendtype
       and  recvtype, respectively. Compare this to MPI_Alltoallw, where these
       offsets are measured in bytes.
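
       This difference matters when porting a call from MPI_Alltoallv to
       MPI_Alltoallw: the same logical offsets must be scaled by the extent
       of the datatype.  A sketch of the conversion, assuming a type of size
       8 bytes (e.g. a double; real MPI code would query the size with
       MPI_Type_size or MPI_Type_get_extent):

	    #include <stdio.h>

	    int main(void)
	    {
		int n = 4;
		int sdispls_elems[4] = {0, 1, 4, 6}; /* MPI_Alltoallv: units
							of sendtype */
		int type_size = 8;                   /* e.g. sizeof(double) */
		int sdispls_bytes[4];                /* MPI_Alltoallw: bytes */

		for (int i = 0; i < n; i++)
		    sdispls_bytes[i] = sdispls_elems[i] * type_size;

		for (int i = 0; i < n; i++)
		    printf("rank %d: element offset %d -> byte offset %d\n",
			   i, sdispls_elems[i], sdispls_bytes[i]);
		return 0;
	    }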

ERRORS
       Almost all MPI routines return an error value; C routines as the	 value
       of  the	function  and Fortran routines in the last argument. C++ func‐
       tions do not return errors. If the default  error  handler  is  set  to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI  error  handler  is
       called.	By  default, this error handler aborts the MPI job, except for
       I/O  function  errors.  The  error  handler   may   be	changed	  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may be used to cause error values to be returned. Note  that  MPI  does
       not guarantee that an MPI program can continue past an error.

SEE ALSO
       MPI_Alltoall
       MPI_Alltoallw

1.7.4				 Feb 04, 2014		      MPI_Alltoallv(3)