MPI Hello World
Many parallel jobs use MPI at the lowest level to manage parallel compute resources. You will want to run this example successfully to make sure your parallel computing environment is working properly. These instructions assume you have already set up your cluster SSH keys, as described in Cluster SSH access.
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.
/*
 * Sample MPI "hello world" application in C
 */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello, world, I am %d of %d \n", rank, size);
    MPI_Finalize();

    return 0;
}
Create a folder and save this file as hello_mpi.c.
You can compile this program with
mpicc -g hello_mpi.c -o hello_mpi
You now have an executable called hello_mpi.
You can run it with
mpirun ./hello_mpi
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core. You should see a hello-world line from each process.
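If you are not sure how many cores the local machine has (and therefore how many lines of output to expect), on most Linux systems you can check with

nproc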
Now, to run your MPI program on the cluster, you will need to create a hostfile. First, let's create a simple hostfile that just runs four processes on the local machine.
Put the following line in a file called localhost.
localhost slots=4 max_slots=8
Now run your program with that hostfile, using
mpirun --hostfile localhost ./hello_mpi
Hello, world, I am 0 of 4
Hello, world, I am 2 of 4
Hello, world, I am 1 of 4
Hello, world, I am 3 of 4
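In the hostfile, slots tells mpirun how many processes to start on that host by default, and max_slots is the most it will accept if you explicitly ask for more. As a sketch (on some Open MPI versions you may also need the --oversubscribe flag), you could request all eight allowed processes with

mpirun --hostfile localhost -np 8 ./hello_mpi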
Now create a hostfile called cluster_hosts with the following entries:
pnode01 slots=4 max_slots=8
pnode02 slots=4 max_slots=8
pnode03 slots=4 max_slots=8
pnode04 slots=4 max_slots=8
and keep adding lines until you get to pnode64 slots=4 max_slots=8.
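If you would rather not type all 64 lines by hand, a short shell loop (a sketch, assuming your nodes really are named pnode01 through pnode64) can generate the file:

for i in $(seq -w 1 64); do
    echo "pnode$i slots=4 max_slots=8"
done > cluster_hosts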
The next step assumes you have set up your SSH keys as described in Cluster SSH access.
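Before launching across the cluster, it can help to confirm passwordless SSH to every host in the file. This sketch just runs hostname on each node listed in cluster_hosts and flags any it cannot reach:

for host in $(awk '{print $1}' cluster_hosts); do
    ssh -o BatchMode=yes "$host" hostname || echo "cannot reach $host"
done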
With 64 nodes listed in your cluster_hosts file, run your program again with
mpirun --hostfile cluster_hosts ./hello_mpi
You should see output like that shown below; with all 64 nodes it will be 256 lines long, four responses from each host, one for each slot listed in the hostfile.
Once you are able to run this program successfully, your MPI setup is working. This will be necessary before you can run other MPI-based programs, such as ipyparallel programs which use MPI.
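One simple way to confirm that every slot responded is to count the output lines; with the 64-node hostfile above, this should print 256:

mpirun --hostfile cluster_hosts ./hello_mpi | wc -l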
Note: the example output below was produced on 40 nodes, so it is 160 lines long.
Hello, world, I am 126 of 160
Hello, world, I am 69 of 160
Hello, world, I am 159 of 160
Hello, world, I am 137 of 160
Hello, world, I am 11 of 160
Hello, world, I am 138 of 160
Hello, world, I am 139 of 160
Hello, world, I am 3 of 160
Hello, world, I am 125 of 160
Hello, world, I am 68 of 160
Hello, world, I am 0 of 160
Hello, world, I am 127 of 160
Hello, world, I am 71 of 160
Hello, world, I am 8 of 160
Hello, world, I am 136 of 160
Hello, world, I am 85 of 160
Hello, world, I am 2 of 160
Hello, world, I am 124 of 160
Hello, world, I am 70 of 160
Hello, world, I am 51 of 160
Hello, world, I am 9 of 160
Hello, world, I am 86 of 160
Hello, world, I am 21 of 160
Hello, world, I am 1 of 160
Hello, world, I am 25 of 160
Hello, world, I am 143 of 160
Hello, world, I am 119 of 160
Hello, world, I am 6 of 160
Hello, world, I am 10 of 160
Hello, world, I am 84 of 160
Hello, world, I am 133 of 160
Hello, world, I am 156 of 160
Hello, world, I am 23 of 160
Hello, world, I am 100 of 160
Hello, world, I am 146 of 160
Hello, world, I am 27 of 160
Hello, world, I am 118 of 160
Hello, world, I am 55 of 160
Hello, world, I am 32 of 160
Hello, world, I am 123 of 160
Hello, world, I am 67 of 160
Hello, world, I am 87 of 160
Hello, world, I am 135 of 160
Hello, world, I am 157 of 160
Hello, world, I am 20 of 160
Hello, world, I am 130 of 160
Hello, world, I am 120 of 160
Hello, world, I am 12 of 160
Hello, world, I am 82 of 160
Hello, world, I am 48 of 160
Hello, world, I am 132 of 160
Hello, world, I am 158 of 160
Hello, world, I am 22 of 160
Hello, world, I am 131 of 160
Hello, world, I am 101 of 160
Hello, world, I am 145 of 160
Hello, world, I am 24 of 160
Hello, world, I am 140 of 160
Hello, world, I am 117 of 160
Hello, world, I am 113 of 160
Hello, world, I am 154 of 160
Hello, world, I am 7 of 160
Hello, world, I am 53 of 160
Hello, world, I am 33 of 160
Hello, world, I am 16 of 160
Hello, world, I am 28 of 160
Hello, world, I am 111 of 160
Hello, world, I am 121 of 160
Hello, world, I am 13 of 160
Hello, world, I am 64 of 160
Hello, world, I am 88 of 160
Hello, world, I am 96 of 160
Hello, world, I am 83 of 160
Hello, world, I am 49 of 160
Hello, world, I am 134 of 160
Hello, world, I am 128 of 160
Hello, world, I am 102 of 160
Hello, world, I am 148 of 160
Hello, world, I am 147 of 160
Hello, world, I am 26 of 160
Hello, world, I am 36 of 160
Hello, world, I am 141 of 160
Hello, world, I am 58 of 160
Hello, world, I am 73 of 160
Hello, world, I am 46 of 160
Hello, world, I am 116 of 160
Hello, world, I am 114 of 160
Hello, world, I am 155 of 160
Hello, world, I am 4 of 160
Hello, world, I am 52 of 160
Hello, world, I am 34 of 160
Hello, world, I am 62 of 160
Hello, world, I am 17 of 160
Hello, world, I am 29 of 160
Hello, world, I am 76 of 160
Hello, world, I am 92 of 160
Hello, world, I am 81 of 160
Hello, world, I am 50 of 160
Hello, world, I am 129 of 160
Hello, world, I am 103 of 160
Hello, world, I am 149 of 160
Hello, world, I am 144 of 160
Hello, world, I am 37 of 160
Hello, world, I am 142 of 160
Hello, world, I am 56 of 160
Hello, world, I am 75 of 160
Hello, world, I am 47 of 160
Hello, world, I am 40 of 160
Hello, world, I am 106 of 160
Hello, world, I am 115 of 160
Hello, world, I am 152 of 160
Hello, world, I am 5 of 160
Hello, world, I am 54 of 160
Hello, world, I am 35 of 160
Hello, world, I am 63 of 160
Hello, world, I am 18 of 160
Hello, world, I am 30 of 160
Hello, world, I am 77 of 160
Hello, world, I am 93 of 160
Hello, world, I am 108 of 160
Hello, world, I am 122 of 160
Hello, world, I am 14 of 160
Hello, world, I am 65 of 160
Hello, world, I am 89 of 160
Hello, world, I am 99 of 160
Hello, world, I am 153 of 160
Hello, world, I am 61 of 160
Hello, world, I am 19 of 160
Hello, world, I am 31 of 160
Hello, world, I am 78 of 160
Hello, world, I am 94 of 160
Hello, world, I am 109 of 160
Hello, world, I am 15 of 160
Hello, world, I am 66 of 160
Hello, world, I am 91 of 160
Hello, world, I am 97 of 160
Hello, world, I am 80 of 160
Hello, world, I am 150 of 160
Hello, world, I am 38 of 160
Hello, world, I am 57 of 160
Hello, world, I am 72 of 160
Hello, world, I am 44 of 160
Hello, world, I am 41 of 160
Hello, world, I am 107 of 160
Hello, world, I am 112 of 160
Hello, world, I am 59 of 160
Hello, world, I am 74 of 160
Hello, world, I am 45 of 160
Hello, world, I am 42 of 160
Hello, world, I am 104 of 160
Hello, world, I am 79 of 160
Hello, world, I am 110 of 160
Hello, world, I am 90 of 160
Hello, world, I am 98 of 160
Hello, world, I am 151 of 160
Hello, world, I am 39 of 160
Hello, world, I am 43 of 160
Hello, world, I am 105 of 160
Hello, world, I am 60 of 160
Hello, world, I am 95 of 160