= MPI Jobs on Rocky =
== Supported versions ==
Rocky supports PMIx v3 for MPI process management. MPI applications should be launched with <code>srun --mpi=pmix_v3</code>.
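You can confirm which MPI plugin types Slurm supports on the cluster with <code>srun --mpi=list</code>; the exact list depends on the site configuration, but <code>pmix_v3</code> should appear among the entries.
<syntaxhighlight lang="bash">
srun --mpi=list
</syntaxhighlight>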
To see which versions of OpenMPI are available, use the following command:
<syntaxhighlight lang="bash">
module spider OpenMPI
</syntaxhighlight>
As of this writing, the following versions are available:
<pre>
Versions:
OpenMPI/3.1.1-GCC-7.3.0-2.30
OpenMPI/3.1.3-GCC-8.2.0-2.31.1
OpenMPI/3.1.4-GCC-8.3.0
OpenMPI/4.0.3-GCC-9.3.0
OpenMPI/4.0.5-GCC-10.2.0
OpenMPI/4.1.1-GCC-10.3.0
OpenMPI/4.1.1-GCC-11.2.0
OpenMPI/4.1.4-GCC-11.3.0
OpenMPI/4.1.4-GCC-12.2.0
OpenMPI/4.1.5-GCC-12.3.0
OpenMPI/4.1.6-GCC-13.2.0
OpenMPI/5.0.3-GCC-13.3.0
OpenMPI/5.0.7-GCC-14.2.0
OpenMPI/5.0.8-GCC-14.3.0
OpenMPI/5.0.8-llvm-compilers-20.1.8
</pre>
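To see how to load a particular version, including any compiler or toolchain modules it depends on, query that version directly. The example below uses one of the versions listed above; substitute whichever version you intend to use.
<syntaxhighlight lang="bash">
module spider OpenMPI/5.0.8-GCC-14.3.0
</syntaxhighlight>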
----
= MPI Example: Hello World in C =
This example demonstrates compiling and running an MPI program across multiple nodes.
== Step 1 — Create the MPI program (hello_mpi.c) ==
<syntaxhighlight lang="c"> | |||
#include <mpi.h> | |||
#include <stdio.h> | |||
int main(int argc, char *argv[]) { | |||
int rank, size, len; | |||
char name[MPI_MAX_PROCESSOR_NAME]; | |||
MPI_Init(&argc, &argv); | |||
MPI_Comm_rank(MPI_COMM_WORLD, &rank); | |||
MPI_Comm_size(MPI_COMM_WORLD, &size); | |||
MPI_Get_processor_name(name, &len); | |||
printf("Hello from rank %d of %d on %s", rank, size, name); | |||
MPI_Finalize(); | |||
return 0; | |||
} | |||
</syntaxhighlight> | |||
== Step 2 — Compile the program ==
Load the OpenMPI module you want to use, then compile the program with <code>mpicc</code>:
<pre>
module load OpenMPI/5.0.8-GCC-14.3.0
mpicc -O2 -o hello_mpi hello_mpi.c
</pre>
This creates the binary <code>hello_mpi</code> that we will run in our job.
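As a quick sanity check, you can run the freshly built binary as a single process on the node where you compiled it (assuming local policy allows short test runs there). With one rank it should print a single line such as <code>Hello from rank 0 of 1 on &lt;hostname&gt;</code>; the real multi-node test is the batch job in the next step.
<pre>
./hello_mpi
</pre>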
== Step 3 — Submit an MPI batch job (mpi_hello.sbatch) ==
Create a batch file named <code>mpi_hello.sbatch</code>.<br/>
Note that we pass <code>--mpi=pmix_v3</code> to <code>srun</code>.
<syntaxhighlight lang="bash"> | |||
#!/usr/bin/env bash | |||
#SBATCH --job-name=mpi-hello | |||
#SBATCH --output=mpi-hello-%j.out | |||
#SBATCH --error=mpi-hello-%j.err | |||
#SBATCH --nodes=2 | |||
#SBATCH --ntasks=8 | |||
#SBATCH --time=00:05:00 | |||
module load OpenMPI/5.0.8-GCC-14.3.0 | |||
srun --mpi=pmix_v3 ./hello_mpi | |||
</syntaxhighlight> | |||
Use <code>sbatch</code> to submit the job defined in our sbatch file:
<pre>
sbatch mpi_hello.sbatch
</pre>
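For quick testing you can also run the same command interactively instead of through a batch script: request an allocation with <code>salloc</code>, then launch with <code>srun</code> from inside it. This is a sketch only; the resource options mirror the batch directives above, and you may need to add site-specific options such as a partition.
<syntaxhighlight lang="bash">
# Request a 2-node / 8-task allocation for 5 minutes
salloc --nodes=2 --ntasks=8 --time=00:05:00

# Inside the allocation shell:
module load OpenMPI/5.0.8-GCC-14.3.0
srun --mpi=pmix_v3 ./hello_mpi
exit   # release the allocation when done
</syntaxhighlight>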
== Step 4 — Expected Output ==
The generated output file (<code>mpi-hello-<jobid>.out</code>) should contain lines like the following (ranks may print in any order, and the hostnames depend on which nodes were allocated):
<pre>
Hello from rank 0 of 8 on rocky4.rocky.nimbios.org
Hello from rank 1 of 8 on rocky4.rocky.nimbios.org
Hello from rank 2 of 8 on rocky4.rocky.nimbios.org
Hello from rank 3 of 8 on rocky4.rocky.nimbios.org
Hello from rank 4 of 8 on rocky5.rocky.nimbios.org
Hello from rank 5 of 8 on rocky5.rocky.nimbios.org
Hello from rank 6 of 8 on rocky5.rocky.nimbios.org
Hello from rank 7 of 8 on rocky5.rocky.nimbios.org
</pre>
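While the job is pending or running you can check on it with the usual Slurm commands; after it completes, <code>sacct</code> shows the accounting record. The <code><jobid></code> below is the job number printed by <code>sbatch</code> when you submit.
<pre>
squeue -u $USER     # jobs you currently have queued or running
sacct -j <jobid>    # status and accounting record for a job
</pre>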