Martin Lísal, November 2003
1 Frequently Used MPI Commands (continued)

1.1 Collective Communication Commands (continued)

1.1.1 Distributing Data of Unequal Size to the Individual Processes
call MPI_SCATTERV(sendbuf,sendcounts,displs,send_MPI_data_type, &
                  recvbuf,recvcount,recv_MPI_data_type,         &
                  root,comm,ierr)

The function of the command is explained in Fig. 1; the use of MPI_SCATTERV is demonstrated by the following program.

program scatterv
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr
integer :: send_count(0:3), & ! send counts
           recv_count,      & ! receive count
           displ(0:3)         ! displacements
!
real :: sendmsg(10), & ! send message
        recvmsg(4)     ! receive message
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " scatterv: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
if(myrank == 0) then
   sendmsg=(/1.0,2.0,2.0,3.0,3.0,3.0,4.0,4.0,4.0,4.0/)
   send_count=(/1,2,3,4/)
   displ=(/0,1,3,6/)
endif
recv_count=myrank+1
!
call MPI_SCATTERV(sendmsg,send_count,displ,MPI_REAL, &
                  recvmsg,recv_count,MPI_REAL,       &
                  0,MPI_COMM_WORLD,ierr)
!
print*," myrank =",myrank," recvmsg:",recvmsg
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program scatterv
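In the program above the displacement array displ is hard-coded. For a general count array it can be obtained by a prefix sum over the counts; the sketch below (a hypothetical helper named build_displ, added here for illustration and not part of the original program) packs the chunks contiguously in the send buffer.

! Hypothetical helper: build contiguous displacements from counts.
! displ(i) is the offset (in elements) of chunk i in the packed buffer.
subroutine build_displ(n,cnt,displ)
implicit none
integer, intent(in)  :: n
integer, intent(in)  :: cnt(0:n-1)
integer, intent(out) :: displ(0:n-1)
integer :: i
displ(0)=0
do i=1,n-1
   displ(i)=displ(i-1)+cnt(i-1)
enddo
end subroutine build_displ

Called with send_count=(/1,2,3,4/), this reproduces the hard-coded displ=(/0,1,3,6/) used above.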
1.1.2 Gathering Data of Unequal Size from the Individual Processes

call MPI_GATHERV(sendbuf,sendcount,send_MPI_data_type,         &
                 recvbuf,recvcounts,displs,recv_MPI_data_type, &
                 root,comm,ierr)

The function of the command is explained in Fig. 2; the use of MPI_GATHERV is demonstrated by the following program.
program gatherv
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr,i
integer :: send_count,      & ! send count
           recv_count(0:3), & ! receive counts
           displ(0:3)         ! displacements
!
real :: sendmsg(4), & ! send message
        recvmsg(10)   ! receive message
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " gatherv: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
do i=1,myrank+1
   sendmsg(i)=myrank+1
enddo
send_count=myrank+1
!
recv_count=(/1,2,3,4/)
displ=(/0,1,3,6/)
!
call MPI_GATHERV(sendmsg,send_count,MPI_REAL,       &
                 recvmsg,recv_count,displ,MPI_REAL, &
                 0,MPI_COMM_WORLD,ierr)
!
if(myrank == 0) then
   print*," recvmsg:",recvmsg
endif
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program gatherv

The commands MPI_GATHER and MPI_GATHERV also have "ALL" versions, MPI_ALLGATHER and MPI_ALLGATHERV, which gather the data onto all processes and therefore do not take the root argument.
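To illustrate the "ALL" variant, the following sketch mirrors the gatherv program above but uses MPI_ALLGATHERV, so that every process, not only the root, ends up with the packed recvmsg. It is an added example in the style of the original programs; only the standard Fortran argument list of MPI_ALLGATHERV is assumed.

program allgatherv
implicit none
include 'mpif.h'
integer :: nprocs,myrank,ierr,i
integer :: send_count,recv_count(0:3),displ(0:3)
real    :: sendmsg(4),recvmsg(10)
call MPI_INIT(ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " allgatherv: nprocs > 4!"
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
do i=1,myrank+1          ! rank r contributes r+1 copies of the value r+1
   sendmsg(i)=myrank+1
enddo
send_count=myrank+1
recv_count=(/1,2,3,4/)   ! how much each rank contributes
displ=(/0,1,3,6/)        ! where each contribution is placed
! same argument list as MPI_GATHERV, but without the root argument
call MPI_ALLGATHERV(sendmsg,send_count,MPI_REAL,       &
                    recvmsg,recv_count,displ,MPI_REAL, &
                    MPI_COMM_WORLD,ierr)
! the gathered part of recvmsg is now identical on all ranks
print*," myrank =",myrank," recvmsg:",recvmsg
call MPI_FINALIZE(ierr)
end program allgatherv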
1.1.3 Sending Data of Equal Size from Each Process to All Processes

call MPI_ALLTOALL(sendbuf,sendcount,send_MPI_data_type, &
                  recvbuf,recvcount,recv_MPI_data_type, &
                  comm,ierr)

The function of the command is explained in Fig. 3; the use of MPI_ALLTOALL is demonstrated by the following program.

program alltoall
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr,i
!
real :: sendmsg(4), & ! send message
        recvmsg(4)    ! receive message
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " alltoall: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
do i=1,nprocs
   sendmsg(i)=REAL(i+nprocs*myrank)
enddo
print*," myrank =",myrank," sendmsg:",sendmsg
!
call MPI_ALLTOALL(sendmsg,1,MPI_REAL, &
                  recvmsg,1,MPI_REAL, &
                  MPI_COMM_WORLD,ierr)
!
print*," myrank =",myrank," recvmsg:",recvmsg
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program alltoall
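For reference, when the program is run on four processes, rank r fills sendmsg with 4r+1, ..., 4r+4, and after the call rank r holds the (r+1)-th element of every process's buffer. Ignoring the exact formatting of the reals and the interleaving of output from different processes, the expected result is:

 myrank = 0   sendmsg: 1 2 3 4        recvmsg: 1 5  9 13
 myrank = 1   sendmsg: 5 6 7 8        recvmsg: 2 6 10 14
 myrank = 2   sendmsg: 9 10 11 12     recvmsg: 3 7 11 15
 myrank = 3   sendmsg: 13 14 15 16    recvmsg: 4 8 12 16

In other words, MPI_ALLTOALL transposes the data distributed across the processes.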
1.1.4 Sending Data of Unequal Size from Each Process to All Processes

call MPI_ALLTOALLV(sendbuf,sendcounts,sdispls,send_MPI_data_type, &
                   recvbuf,recvcounts,rdispls,recv_MPI_data_type, &
                   comm,ierr)

The function of the command is explained in Fig. 4; the use of MPI_ALLTOALLV is demonstrated by the following program.
program alltoallv
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr,i
integer :: scnt(0:3), & ! send counts
           sdsp(0:3), & ! send displacements
           rcnt(0:3), & ! recv counts
           rdsp(0:3)    ! recv displacements
real :: sendmsg(10),recvmsg(16)
!
sendmsg=(/1.0,2.0,2.0,3.0,3.0,3.0,4.0,4.0,4.0,4.0/)
scnt=(/1,2,3,4/)
sdsp=(/0,1,3,6/)
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " alltoallv: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
! sendmsg
!
do i=1,10
   sendmsg(i)=sendmsg(i)+REAL(nprocs*myrank)
enddo
print*," myrank =",myrank," sendmsg:",sendmsg
!
! rcnt and rdsp
!
do i=0,nprocs-1
   rcnt(i)=myrank+1
   rdsp(i)=i*(myrank+1)
enddo
!
call MPI_ALLTOALLV(sendmsg,scnt,sdsp,MPI_REAL, &
                   recvmsg,rcnt,rdsp,MPI_REAL, &
                   MPI_COMM_WORLD,ierr)
!
print*," myrank =",myrank," recvmsg:",recvmsg
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program alltoallv
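For reference, on four processes rank r sends scnt(q) elements starting at sdsp(q) to process q, and receives r+1 copies of the value (r+1)+4q from each process q, placed at offset q(r+1). Only the first 4(r+1) elements of recvmsg are defined on rank r; the remaining elements keep their previous (undefined) contents. Ignoring formatting and output interleaving, the expected received data are:

 myrank = 0   recvmsg(1:4)  = 1  5  9  13
 myrank = 1   recvmsg(1:8)  = 2 2  6 6  10 10  14 14
 myrank = 2   recvmsg(1:12) = 3 3 3  7 7 7  11 11 11  15 15 15
 myrank = 3   recvmsg(1:16) = 4 4 4 4  8 8 8 8  12 12 12 12  16 16 16 16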
1.2 Commands for Operations on Variables Distributed Across the Processes

call MPI_REDUCE(operand,result,count,MPI_data_type,operation,root, &
                comm,ierr)

The MPI_REDUCE command also has an "ALL" version, MPI_ALLREDUCE, which does not take the root argument.
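The notes give only the calling sequence of MPI_REDUCE, so here is a minimal added sketch in the style of the surrounding programs (the program name reduce and its print are illustrative, not from the original notes): each process contributes one real value and the root receives their sum.

program reduce
implicit none
include 'mpif.h'
integer :: nprocs,myrank,ierr
real    :: sendmsg,recvmsg
call MPI_INIT(ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
sendmsg=REAL(myrank+1)
! sum the per-process values on the root (rank 0); using MPI_ALLREDUCE
! instead, and dropping the root argument, would give the sum to all ranks
call MPI_REDUCE(sendmsg,recvmsg,1,MPI_REAL,MPI_SUM, &
                0,MPI_COMM_WORLD,ierr)
if(myrank == 0) print*," sum =",recvmsg
call MPI_FINALIZE(ierr)
end program reduce

On four processes rank 0 prints sum = 10.0.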
call MPI_SCAN(sendbuf,recvbuf,count,MPI_data_type,operation,comm,ierr)

The function of the command is explained in Fig. 5; the use of MPI_SCAN is demonstrated by the following program.
program scan
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr
real :: sendmsg,recvmsg
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " scan: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
! sendmsg
!
sendmsg=REAL(myrank+1)
!
call MPI_SCAN(sendmsg,recvmsg,1,MPI_REAL,MPI_SUM, &
              MPI_COMM_WORLD,ierr)
!
print*," myrank =",myrank," recvmsg:",recvmsg
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program scan
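Each rank contributes rank+1, so on four processes the inclusive prefix sums printed should be, up to the exact formatting of the reals and the ordering of the output lines:

 myrank = 0   recvmsg: 1.0
 myrank = 1   recvmsg: 3.0
 myrank = 2   recvmsg: 6.0
 myrank = 3   recvmsg: 10.0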
call MPI_REDUCE_SCATTER(sendbuf,recvbuf,recvcounts,MPI_data_type, &
                        operation,comm,ierr)

The function of the command is explained in Fig. 6; the use of MPI_REDUCE_SCATTER is demonstrated by the following program.

program reduce_scatter
implicit none
include 'mpif.h' ! preprocessor directive
!
integer :: nprocs, & ! # of processes
           myrank, & ! my process rank
           ierr,i
integer :: rcnt(0:3) ! recv counts
!
real :: sendmsg(10), & ! send message
        recvmsg(4)     ! receive message
!
rcnt=(/1,2,3,4/)
!
! start up MPI
!
call MPI_INIT(ierr)
!
! find out how many processes are being used
!
call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
if(nprocs > 4) stop " reduce_scatter: nprocs > 4!"
!
! get my process rank
!
call MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
!
do i=1,10
   sendmsg(i)=real(i+myrank*10)
enddo
print*," myrank =",myrank," sendmsg:",sendmsg
!
call MPI_REDUCE_SCATTER(sendmsg,recvmsg,rcnt,MPI_REAL,MPI_SUM, &
                        MPI_COMM_WORLD,ierr)
!
print*," myrank =",myrank," recvmsg =",recvmsg
!
! shut down MPI
!
call MPI_FINALIZE(ierr)
end program reduce_scatter
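For reference, on four processes sendmsg(i) = i + 10*myrank, so the element-wise sum over the ranks is 4i + 60, i.e. 64, 68, ..., 100. With rcnt=(/1,2,3,4/) the summed vector is then scattered so that rank r receives r+1 consecutive elements; the remaining elements of recvmsg stay undefined. Ignoring formatting and output interleaving, the expected received data are:

 myrank = 0   recvmsg(1:1) = 64
 myrank = 1   recvmsg(1:2) = 68  72
 myrank = 2   recvmsg(1:3) = 76  80  84
 myrank = 3   recvmsg(1:4) = 88  92  96  100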
Figure 1: Schematic description of the function of the MPI_SCATTERV command.

Figure 2: Schematic description of the function of the MPI_GATHERV command.

Figure 3: Schematic description of the function of the MPI_ALLTOALL command.
Figure 4: Schematic description of the function of the MPI_ALLTOALLV command.

Figure 5: Schematic description of the function of the MPI_SCAN command.

Figure 6: Schematic description of the function of the MPI_REDUCE_SCATTER command.
The following excerpts are taken from the appendix "Frequently Used MPI Subroutines Illustrated" of the IBM Redbook RS/6000 SP: Practical MPI Programming; they provide the original figures and sample programs for the commands discussed above.
Figure 129. MPI_SCATTERV
Sample program

PROGRAM scatterv
INCLUDE 'mpif.h'
INTEGER isend(6), irecv(3)
INTEGER iscnt(0:2), idisp(0:2)
DATA isend/1,2,2,3,3,3/
DATA iscnt/1,2,3/ idisp/0,1,3/
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
ircnt=myrank+1
CALL MPI_SCATTERV(isend, iscnt, idisp, MPI_INTEGER, &
                  irecv, ircnt, MPI_INTEGER,        &
                  0, MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END
Sample execution

$ a.out -procs 3
0: irecv = 1 0 0
1: irecv = 2 2 0
2: irecv = 3 3 3
B.2.5 MPI_GATHER

Purpose
Collects individual messages from each process in comm at the root process.
Description
This routine collects individual messages from each process in comm at the root process and stores them in rank order. With recvcounts as an array, messages can have varying sizes, and displs allows you the flexibility of where the data is placed on the root. The type signature of sendcount, sendtype on process i must be equal to the type signature of recvcounts(i), recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are allowed. All processes in comm need to call this routine.
Figure 131. MPI_GATHERV
Sample program

PROGRAM gatherv
INCLUDE 'mpif.h'
INTEGER isend(3), irecv(6)
INTEGER ircnt(0:2), idisp(0:2)
DATA ircnt/1,2,3/ idisp/0,1,3/
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
DO i=1,myrank+1
   isend(i) = myrank + 1
ENDDO
iscnt = myrank + 1
CALL MPI_GATHERV(isend, iscnt, MPI_INTEGER,        &
                 irecv, ircnt, idisp, MPI_INTEGER, &
                 0, MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END
The amount of data sent must be equal to the amount of data received, pairwise between every pair of processes. The type maps can be different. All arguments on all processes are significant. All processes in comm need to call this routine.
Figure 134. MPI_ALLTOALL
Sample program

PROGRAM alltoall
INCLUDE 'mpif.h'
INTEGER isend(3), irecv(3)
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
DO i=1,nprocs
   isend(i) = i + nprocs * myrank
ENDDO
PRINT *,'isend =',isend
CALL MP_FLUSH(1)   ! for flushing stdout
CALL MPI_ALLTOALL(isend, 1, MPI_INTEGER, &
                  irecv, 1, MPI_INTEGER, &
                  MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END
Sample execution

$ a.out -procs 3
0: isend = 1 2 3
1: isend = 4 5 6
2: isend = 7 8 9
0: irecv = 1 4 7
1: irecv = 2 5 8
2: irecv = 3 6 9
Figure 135. MPI_ALLTOALLV
Sample program

PROGRAM alltoallv
INCLUDE 'mpif.h'
INTEGER isend(6), irecv(9)
INTEGER iscnt(0:2), isdsp(0:2), ircnt(0:2), irdsp(0:2)
DATA isend/1,2,2,3,3,3/
DATA iscnt/1,2,3/ isdsp/0,1,3/
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
DO i=1,6
   isend(i) = isend(i) + nprocs * myrank
ENDDO
DO i=0,nprocs-1
   ircnt(i) = myrank + 1
   irdsp(i) = i * (myrank + 1)
ENDDO
PRINT *,'isend =',isend
CALL MP_FLUSH(1)   ! for flushing stdout
CALL MPI_ALLTOALLV(isend, iscnt, isdsp, MPI_INTEGER, &
                   irecv, ircnt, irdsp, MPI_INTEGER, &
                   MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END
Sample execution

$ a.out -procs 3
0: isend = 1 2 2 3 3 3
1: isend = 4 5 5 6 6 6
INTEGER comm
The communicator (handle) (IN)
INTEGER ierror
The Fortran return code
Description
This routine is used to perform a prefix reduction on data distributed across the group. The operation returns, in the receive buffer of the process with rank i, the reduction of the values in the send buffers of processes with ranks 0..i (inclusive). The type of operations supported, their semantics, and the restrictions on send and receive buffers are the same as for MPI_REDUCE. All processes in comm need to call this routine.
For predefined combinations of operations and data types, see Table 11 on page 181.
Figure 139. MPI_SCAN
Sample program

PROGRAM scan
INCLUDE 'mpif.h'
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
isend = myrank + 1
CALL MPI_SCAN(isend, irecv, 1, MPI_INTEGER, &
              MPI_SUM, MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END
Sample execution

$ a.out -procs 3
0: irecv = 1
1: irecv = 3
2: irecv = 6
B.2.14 MPI_REDUCE_SCATTER

Purpose
Applies a reduction operation to the vector sendbuf over the set of processes specified by comm and scatters the result according to the values in recvcounts.
Figure 140. MPI_REDUCE_SCATTER
Sample program

PROGRAM reduce_scatter
INCLUDE 'mpif.h'
INTEGER isend(6), irecv(3)
INTEGER ircnt(0:2)
DATA ircnt/1,2,3/
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
DO i=1,6
   isend(i) = i + myrank * 10
ENDDO
CALL MPI_REDUCE_SCATTER(isend, irecv, ircnt, MPI_INTEGER, &
                        MPI_SUM, MPI_COMM_WORLD, ierr)
PRINT *,'irecv =',irecv
CALL MPI_FINALIZE(ierr)
END