MPI_Allreduce man page on Cygwin

MPI_Allreduce(3)		   Open MPI		      MPI_Allreduce(3)

NAME
       MPI_Allreduce,  MPI_Iallreduce - Combines values from all processes and
       distributes the result back to all processes.

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
			 MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

       int MPI_Iallreduce(const void *sendbuf, void *recvbuf, int count,
			  MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
			  MPI_Request *request)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   COUNT, DATATYPE, OP, COMM, IERROR

       MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, REQUEST, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Allreduce(const void* sendbuf, void* recvbuf,
	    int count, const MPI::Datatype& datatype,
	    const MPI::Op& op) const = 0

Java Syntax
       import mpi.*;
       void MPI.COMM_WORLD.Allreduce(Object sendbuf, int sendoffset,
				     Object recvbuf, int recvoffset,
				     int count, MPI.Datatype sendtype,
				     MPI.Op op)

INPUT PARAMETERS
       sendbuf	 Starting address of send buffer (choice).

       sendoffset
		 Number of elements to skip at beginning of  buffer  (integer,
		 Java-only).

       count	 Number of elements in send buffer (integer).

       recvoffset
		 Number	 of  elements to skip at beginning of buffer (integer,
		 Java-only).

       datatype	 Datatype of elements of send buffer (handle).

       op	 Operation (handle).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Starting address of receive buffer (choice).

       request	 Request (handle, non-blocking only).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       Same as MPI_Reduce except that the result appears in the receive buffer
       of all the group members.

       Example 1: A routine that computes the product of a vector and an array
       that are distributed across a group of processes and returns the answer
       at all nodes (compare with Example 2, with MPI_Reduce, below).

       SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
       INTEGER m, n, comm, i, j, ierr
       REAL a(m), b(m,n)    ! local slice of array
       REAL c(n)            ! result
       REAL sum(n)

       ! local sum
       DO j = 1, n
         sum(j) = 0.0
         DO i = 1, m
           sum(j) = sum(j) + a(i)*b(i,j)
         END DO
       END DO

       ! global sum
       CALL MPI_ALLREDUCE(sum, c, n, MPI_REAL, MPI_SUM, comm, ierr)

       ! return result at all nodes
       RETURN
       END

       Example 2: A routine that computes the product of a vector and an array
       that are distributed across a group of processes and returns the answer
       at node zero.

       SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
       INTEGER m, n, comm, i, j, ierr
       REAL a(m), b(m,n)    ! local slice of array
       REAL c(n)            ! result
       REAL sum(n)

       ! local sum
       DO j = 1, n
         sum(j) = 0.0
         DO i = 1, m
           sum(j) = sum(j) + a(i)*b(i,j)
         END DO
       END DO

       ! global sum
       CALL MPI_REDUCE(sum, c, n, MPI_REAL, MPI_SUM, 0, comm, ierr)

       ! return result at node zero (and garbage at the other nodes)
       RETURN
       END
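
       For comparison, a C sketch of the same computation as Example 1 (not
       part of the original example set; the routine name and the
       column-major layout of b are illustrative assumptions):

       #include <mpi.h>
       #include <stdlib.h>

       /* Each process holds a local slice: a has m elements, b is m x n
          (stored column-major here to mirror the Fortran example).  Every
          rank receives the full result vector c of length n. */
       void par_blas2(int m, int n, const float *a, const float *b,
                      float *c, MPI_Comm comm)
       {
           float *sum = malloc((size_t)n * sizeof *sum);

           /* local sum */
           for (int j = 0; j < n; j++) {
               sum[j] = 0.0f;
               for (int i = 0; i < m; i++)
                   sum[j] += a[i] * b[j * m + i];
           }

           /* global sum; the result appears in c on every rank */
           MPI_Allreduce(sum, c, n, MPI_FLOAT, MPI_SUM, comm);

           /* Nonblocking variant, allowing independent work to overlap
              the reduction:

                  MPI_Request req;
                  MPI_Iallreduce(sum, c, n, MPI_FLOAT, MPI_SUM, comm, &req);
                  ...independent computation...
                  MPI_Wait(&req, MPI_STATUS_IGNORE);                    */

           free(sum);
       }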

USE OF IN-PLACE OPTION
       When the communicator is an intracommunicator, you can perform an
       all-reduce operation in-place (the output buffer is used as the input
       buffer).  Use the variable MPI_IN_PLACE as the value of sendbuf at
       all processes.

       Note that MPI_IN_PLACE is a special kind of  value;  it	has  the  same
       restrictions on its use as MPI_BOTTOM.

       Because the in-place option converts the receive buffer into a
       send-and-receive buffer, a Fortran binding that includes INTENT must
       mark these as INOUT, not OUT.
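
       A minimal C sketch of the in-place form (assuming MPI_COMM_WORLD and
       a single int per rank):

       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char **argv)
       {
           MPI_Init(&argc, &argv);

           int rank;
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);

           /* Each rank contributes its rank number.  The buffer serves as
              both send and receive buffer, so every process passes
              MPI_IN_PLACE as sendbuf. */
           int value = rank;
           MPI_Allreduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM,
                         MPI_COMM_WORLD);

           printf("rank %d: global sum of ranks = %d\n", rank, value);

           MPI_Finalize();
           return 0;
       }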

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator is an inter-communicator, the reduce operation
       occurs in two phases.  The data is reduced from all the members of
       the first group and received by all the members of the second group.
       Then the data is reduced from all the members of the second group and
       received by all the members of the first.  The operation exhibits a
       symmetric, full-duplex behavior.

       When the communicator is an intra-communicator, these  groups  are  the
       same, and the operation occurs in a single phase.
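
       A C sketch of the two-phase exchange (assumes at least two ranks; the
       group split and the tag value are illustrative):

       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char **argv)
       {
           MPI_Init(&argc, &argv);

           int rank, size;
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);
           MPI_Comm_size(MPI_COMM_WORLD, &size);

           /* Split the world into two halves, then join the halves with an
              inter-communicator.  Rank 0 of each half is its local leader. */
           int color = (rank < size / 2) ? 0 : 1;
           MPI_Comm intra, inter;
           MPI_Comm_split(MPI_COMM_WORLD, color, rank, &intra);

           int remote_leader = (color == 0) ? size / 2 : 0;
           MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD, remote_leader,
                                0, &inter);

           /* Each group's values are reduced and delivered to the members
              of the other group, so every rank receives the sum of the
              remote group's contributions. */
           int mine = rank, theirs = -1;
           MPI_Allreduce(&mine, &theirs, 1, MPI_INT, MPI_SUM, inter);
           printf("rank %d: remote-group sum = %d\n", rank, theirs);

           MPI_Comm_free(&inter);
           MPI_Comm_free(&intra);
           MPI_Finalize();
           return 0;
       }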

NOTES ON COLLECTIVE OPERATIONS
       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else, for
       example, MPI_ERRORS_RETURN, then no error may be indicated.
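
       The functions in question are user-defined reduction callbacks.  A
       minimal C sketch of one that can only abort on failure (the operation
       and its name are illustrative):

       #include <mpi.h>
       #include <stdio.h>

       /* User-defined reduction: element-wise integer sum.  The callback
          cannot return an error value, so on an unexpected datatype all it
          can do is call MPI_Abort or silently skip the problem. */
       static void int_sum(void *in, void *inout, int *len,
                           MPI_Datatype *dtype)
       {
           if (*dtype != MPI_INT)
               MPI_Abort(MPI_COMM_WORLD, 1);

           int *a = in, *b = inout;
           for (int i = 0; i < *len; i++)
               b[i] += a[i];
       }

       int main(int argc, char **argv)
       {
           MPI_Init(&argc, &argv);

           int rank, sum;
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);

           MPI_Op op;
           MPI_Op_create(int_sum, 1, &op);   /* 1 = commutative */

           MPI_Allreduce(&rank, &sum, 1, MPI_INT, op, MPI_COMM_WORLD);
           printf("rank %d: sum of ranks = %d\n", rank, sum);

           MPI_Op_free(&op);
           MPI_Finalize();
           return 0;
       }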

ERRORS
       Almost all MPI routines return an error value; C routines as the
       value of the function and Fortran routines in the last argument.  C++
       functions do not return errors.  If the default error handler is set
       to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is
       called.  By default, this error handler aborts the MPI job, except
       for I/O function errors.  The error handler may be changed with
       MPI_Comm_set_errhandler; the predefined error handler
       MPI_ERRORS_RETURN may be used to cause error values to be returned.
       Note that MPI does not guarantee that an MPI program can continue
       past an error.
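
       A C sketch of returning and decoding error values (the recovery
       choice is illustrative; MPI makes no guarantee the program can
       continue):

       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char **argv)
       {
           MPI_Init(&argc, &argv);

           /* Return error codes to the caller instead of aborting. */
           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

           int in = 1, out = 0;
           int rc = MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_SUM,
                                  MPI_COMM_WORLD);
           if (rc != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(rc, msg, &len);
               fprintf(stderr, "MPI_Allreduce: %s\n", msg);
               MPI_Abort(MPI_COMM_WORLD, rc);
           }

           MPI_Finalize();
           return 0;
       }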

1.7.4				 Feb 04, 2014		      MPI_Allreduce(3)