

Graph Topology Functions for MPI (C/C++ version)
01/02/2017
1. Graph Constructor Function
Example
Program example
2. Graph Inquiry Functions
3. Graph Information Functions
4. Low-Level Graph Functions
5. Topology Inquiry Functions
1. Graph Constructor Function
MPI_Graph_create(
    MPI_Comm comm_old,     // in, input communicator (handle)
    int nnodes,            // in, number of nodes in graph
    int *index,            // in, array of integers describing node degrees, see below
    int *edges,            // in, array of integers describing graph edges, see below
    int reorder,           // in, ranking may be reordered (1) or not (0)
    MPI_Comm *comm_graph   // out, communicator with graph topology added (handle)
)
MPI_Graph_create returns a new communicator to which the graph
topology information is attached. If reorder = 0, then the rank of each process in the
new group is identical to its rank in the old group. Otherwise, the function may reorder
the processes. If the size, nnodes, of the graph is smaller than the size of the
group of comm_old, then some processes are returned MPI_COMM_NULL, in analogy to
MPI_Comm_split. The call is erroneous if it specifies a graph that is larger than the
group size of the input communicator. In analogy to the function MPI_Comm_create,
no cached information propagates to the new communicator. Also, this function is
collective. As with other collective calls, the program must be written so that it
works correctly whether or not the call synchronizes.
The three parameters nnodes, index, and edges define
the graph structure. nnodes is the number of nodes of the graph. The nodes are
numbered from 0 to nnodes-1. The i-th entry of array index stores the total number
of neighbors of the first i graph nodes. The lists of neighbors of nodes 0, 1, ..., nnodes-1
are stored in consecutive locations in array edges. The array edges is a flattened
representation of the edge lists. The total number of entries in index is nnodes,
and the total number of entries in edges is equal to the number of graph edges. The
definitions of the arguments nnodes, index, and edges are illustrated
in the following example.
Example. Assume there are four processes 0, 1, 2, 3 with the following adjacency
matrix:

    process | neighbors
    --------|----------
       0    | 1, 3
       1    | 0
       2    | 3
       3    | 0, 2
Then, the input arguments are:
nnodes = 4
index = (2,3,4,6)
edges = (1,3,0,3,0,2)
Thus, in C, index[0] is the degree of node zero, and index[i] - index[i-1] is the
degree of node i, i = 1, ..., nnodes-1; the list of neighbors of node zero is stored
in edges[j], for 0 <= j <= index[0]-1, and the list of neighbors of
node i, i > 0, is stored in edges[j], index[i-1] <= j <= index[i]-1.
Rationale. Since bidirectional communication is assumed,
the edges array is symmetric. To allow input checking and to make the graph construction
easier for the user, the full graph is given and not just half of the symmetric graph.
Advice to implementers. A graph topology can be
implemented by caching with the communicator the two arrays
1. index
2. edges
The number of nodes is equal to the number of processes in the group. An
additional zero entry at the start of array index simplifies access to the topology
information.
In the following we provide a C++ program as an example.
//Author: Yi Wang
//Date: March 31, 2002
//File name: graph.cpp
//Description: In this program, we create a new communicator with a given graph
//topology from the old communicator. The graph has 4 nodes and 6 edge-list
//entries (3 bidirectional edges: 0<->1, 0<->3, 3<->2). We then create a message
//in process 0. Let process 0 send this message to its neighbors, namely
//processes 1 and 3. Processes 1 and 3 then receive the message from process 0.
//At the end, processes 1 and 3 send back to process 0 the message they have
//received, and process 0 receives the messages from processes 1 and 3 and
//prints out the results.
#include <mpi.h>
#include <stdio.h>
#include <iostream>  // standard header, replaces the pre-standard <iostream.h>
#include <string.h>
using std::cout;
using std::endl;
int main (int argc,char **argv)
{
int myrank,size;
int namelen;
char name[MPI_MAX_PROCESSOR_NAME];
char greeting[sizeof(name)+100];
int nnodes=4;
int nedges=6;
int index0[4]={2,3,4,6};
int edges0[6]={1,3,0,3,0,2};
int*index;
int *edges;
int reorder=0;
int nneighbors;
int maxneighbors=3;
int *neighbors;
MPI_Comm comm_graph;
char bcast_message[100];
MPI_Status stat;
neighbors=new int [maxneighbors];
index=index0;
edges=edges0;
MPI_Init(&argc,&argv);
//create graph
MPI_Graph_create(MPI_COMM_WORLD,nnodes,index,edges,reorder,&comm_graph);
MPI_Comm_rank(comm_graph,&myrank);//get new rank
//get number of neighbours
MPI_Graph_neighbors_count(comm_graph,myrank,&nneighbors);
//get neighbors
MPI_Graph_neighbors(comm_graph,myrank,maxneighbors,neighbors);
MPI_Comm_size(comm_graph,&size);
MPI_Get_processor_name(name,&namelen);
sprintf(greeting,"rank %d running on %s!\n",myrank,name);
if (myrank==0)
{
cout<<"Total number of processes="<<size<<endl;
strcpy(bcast_message,"I received the broadcast message\n");
for(int j=0;j<nneighbors;j++)
MPI_Send(bcast_message,strlen(bcast_message)+1,MPI_BYTE,
neighbors[j],//destination
0,//tagged by sender
comm_graph);
}
else { //myrank!=0
for (int j=0;j<nneighbors;j++)
{
if(neighbors[j]==0)
MPI_Recv(bcast_message,sizeof(bcast_message),MPI_BYTE,
0,//source
0,//tagged by 0
comm_graph,&stat);
}
}
if(myrank==0){
fputs(greeting,stdout);
for(int j=0;j<nneighbors;j++)
{
MPI_Recv(greeting,sizeof(greeting),MPI_BYTE,
neighbors[j],//source
1,//tagged by 1
comm_graph,&stat);
fputs(greeting,stdout);
MPI_Recv(bcast_message,sizeof(bcast_message),MPI_BYTE,
neighbors[j],//source
0,//tagged by 0
comm_graph,&stat);
fputs(bcast_message,stdout);
}
}else{ //myrank!=0
    for(int j=0;j<nneighbors;j++){
        if(neighbors[j]==0)
        {
            MPI_Send(greeting,strlen(greeting)+1,MPI_BYTE,
                0,//destination
                1,//tagged by 1
                comm_graph);
            MPI_Send(bcast_message,strlen(bcast_message)+1,MPI_BYTE,
                0,//destination
                0,//tagged by 0
                comm_graph);
        }
    }
}
MPI_Finalize();
delete [] neighbors;
return 0;
}