Revision as of 12:57, 24 September 2022
MPI Hello World
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.
/*
 * Sample MPI "hello world" application in C
 */

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_library_version(version, &len);
    printf("Hello, world, I am %d of %d, (%s, %d)\n", rank, size, version, len);
    MPI_Finalize();

    return 0;
}
Create a folder and save this file as helloworld_mpi.c.
You can compile this program with:
mpicc -g helloworld_mpi.c -o helloworld_mpi
You now have an executable called helloworld_mpi. You can run it with:

mpirun ./helloworld_mpi
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a 'hello world' line from each process.
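You can also choose the number of processes yourself with Open MPI's -np flag (adjust the count to your machine):

```
mpirun -np 2 ./helloworld_mpi
```

This starts exactly two processes regardless of how many cores the machine has.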
Now, to run your MPI program on the cluster, you will need to create a hostfile. First, let's create a simple hostfile that just runs four processes on the local machine:
localhost slots=4 max_slots=8
Put that line in a file called 'localhost'. Now run your program with that hostfile, using:

mpirun --hostfile localhost ./helloworld_mpi
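For a real cluster run, the hostfile lists one line per node. A minimal sketch, assuming two worker nodes named node01 and node02 (hypothetical hostnames; replace them with your cluster's):

```
node01 slots=4
node02 slots=4
```

Here slots tells Open MPI how many processes it may place on that node, typically the number of CPU cores, and max_slots (as in the localhost example above) caps oversubscription.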
Hello, world, I am 2 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)
Hello, world, I am 0 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)
Hello, world, I am 1 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)
Hello, world, I am 3 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)
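Note that the ranks print in no particular order, since the processes run concurrently. To confirm which node each rank actually ran on, a small variant of the program can print the processor name using the standard MPI_Get_processor_name call. A sketch, assuming an MPI installation is available (the filename hostname_mpi.c is only a suggestion); compile and run it the same way as above:

```c
/*
 * Print which host each MPI rank runs on.
 */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* Fills name with this node's hostname and len with its length */
    MPI_Get_processor_name(name, &len);
    printf("Rank %d of %d running on %s\n", rank, size, name);
    MPI_Finalize();

    return 0;
}
```

When run with a multi-node hostfile, each rank should report the hostname of the node it was placed on.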