MPI Hello World

Many parallel jobs use MPI at the lowest level to manage parallel compute resources.

This is a 'Hello World' program that tests that you can launch jobs on remote workers.

 /*
  * Sample MPI "hello world" application in C
  */

 #include <stdio.h>
 #include "mpi.h"

 int main(int argc, char* argv[])
 {
     int rank, size;

     /* Start the MPI runtime, then ask for this process's rank and
        the total number of processes in the job. */
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     printf("Hello, world, I am %d of %d \n", rank, size);
     MPI_Finalize();

     return 0;
 }

Create a folder and save this file as hello_mpi.c.
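
This program appears to be based on the hello_c.c example that ships with Open MPI, which also reports which MPI library you are running. If you want that too, a minimal variation using the standard MPI-3 call MPI_Get_library_version might look like this (save it separately if you try it):

 /*
  * Variant that also reports the MPI library version
  */

 #include <stdio.h>
 #include "mpi.h"

 int main(int argc, char* argv[])
 {
     int rank, size, len;
     char version[MPI_MAX_LIBRARY_VERSION_STRING];

     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     /* Fills `version` with a human-readable description of the MPI
        library and sets `len` to the length of that string. */
     MPI_Get_library_version(version, &len);
     printf("Hello, world, I am %d of %d (%s)\n", rank, size, version);
     MPI_Finalize();

     return 0;
 }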

You can compile hello_mpi.c with

mpicc -g hello_mpi.c -o hello_mpi
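
The -g flag just adds debugging symbols and can be omitted. mpicc itself is a thin wrapper around your system's C compiler that adds the MPI include and library flags; if your cluster runs Open MPI (which the --hostfile syntax used below suggests), you can see the underlying compiler command with

mpicc --showme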

You now have an executable called hello_mpi.

You can run it with

mpirun ./hello_mpi

With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core. You should see a hello world line from each process.
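
If you want to control the number of processes explicitly, pass -np. For example, to launch exactly two:

mpirun -np 2 ./hello_mpi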

Now, to run your MPI program on the cluster, you will need to create a hostfile. First, let's create a simple hostfile that just runs four processes on the local machine:

localhost slots=4 max_slots=8
 

Put that line in a file called localhost. Here slots is the number of processes mpirun will start on the host by default, and max_slots is the most it will allow if you oversubscribe. Now run your program with that hostfile, using

mpirun --hostfile localhost ./hello_mpi
Hello, world, I am 0 of 4 
Hello, world, I am 2 of 4 
Hello, world, I am 1 of 4 
Hello, world, I am 3 of 4 

Note that the lines can arrive in any order: the processes run concurrently, so their output is interleaved nondeterministically.

Now create a hostfile called cluster_hosts with the following entries:

pnode01 slots=4 max_slots=8
pnode02 slots=4 max_slots=8
pnode03 slots=4 max_slots=8
pnode04 slots=4 max_slots=8
 

and keep adding lines until you get to pnode40.
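
Rather than typing forty lines by hand, you can generate the file with a short shell loop (this assumes GNU seq, whose -w flag zero-pads the numbers to match hostnames like pnode01):

for i in $(seq -w 1 40); do echo "pnode$i slots=4 max_slots=8"; done > cluster_hosts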

The next step assumes you have set up passwordless SSH keys from this machine to each of the pnode hosts (for example, with ssh-keygen and then ssh-copy-id for each node). With 40 nodes listed in your cluster_hosts file, run your program again with

mpirun --hostfile cluster_hosts ./hello_mpi
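
Assuming mpirun defaults to one process per slot, this should start 4 x 40 = 160 processes, so expect 160 hello lines, again in no particular order. A quick way to check the count:

mpirun --hostfile cluster_hosts ./hello_mpi | wc -l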