From NIMBioS
= MPI Jobs on Rocky =
 
== Supported MPI Stack ==
 
We currently recommend and support:
 
* '''OpenMPI 4.1.6'''
* '''PMIx 4.2.6'''
 
''OpenMPI 5+ will be supported in a future software upgrade.''
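After loading the module, you can confirm the expected stack is active by querying the OpenMPI command-line tools directly (a quick sanity check, assuming the standard OpenMPI utilities are on your <code>PATH</code>):

<pre>
module load OpenMPI/4.1.6-GCC-13.2.0
mpicc --version             # reports the underlying GCC compiler
mpirun --version            # should report Open MPI 4.1.6
ompi_info | grep -i pmix    # lists the PMIx components OpenMPI was built with
</pre>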
 
----
 
= MPI Example: Hello World in C =
 
This example demonstrates how to compile an MPI program and run it across multiple nodes with Slurm.
 
== Step 1 — Create the MPI program (hello_mpi.c) ==
 
<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdio.h>
 
int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
 
    MPI_Init(&argc, &argv);
 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
 
    printf("Hello from rank %d of %d on %s\n", rank, size, name);
 
    MPI_Finalize();
    return 0;
}
</syntaxhighlight>
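Because every rank writes to stdout concurrently, the lines can interleave in any order. As an optional variant (a sketch, not required for this exercise), output can usually be serialized by having each rank take a turn between barriers:

<syntaxhighlight lang="c">
/* hello_ordered.c — print rank by rank, with a barrier between turns.
   Note: this usually produces ordered output, but stdout forwarding
   from remote nodes does not strictly guarantee ordering. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank prints only on its turn; all ranks synchronize after each turn. */
    for (int turn = 0; turn < size; turn++) {
        if (rank == turn) {
            printf("Hello from rank %d of %d\n", rank, size);
            fflush(stdout);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>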
 
== Step 2 — Compile the program ==
 
<pre>
module load OpenMPI/4.1.6-GCC-13.2.0
mpicc -O2 -o hello_mpi hello_mpi.c
</pre>
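If the compile step fails or the wrong MPI seems to be picked up, these commands can help diagnose the problem (OpenMPI's compiler wrapper supports <code>--showme</code>):

<pre>
mpicc --showme                 # print the full underlying compile/link command
ldd hello_mpi | grep -i mpi    # confirm the binary resolves libmpi from the loaded module
</pre>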
 
== Step 3 — Submit an MPI batch job (mpi_hello.sbatch) ==
 
<syntaxhighlight lang="bash">
#!/usr/bin/env bash
#SBATCH --job-name=mpi-hello
#SBATCH --output=mpi-hello-%j.out
#SBATCH --error=mpi-hello-%j.err
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --time=00:05:00
#SBATCH --partition=compute
 
module load OpenMPI/4.1.6-GCC-13.2.0
 
srun --mpi=pmix_v3 ./hello_mpi
 
# Alternative: launch with mpirun instead of srun
#mpirun -np "$SLURM_NTASKS" ./hello_mpi
</syntaxhighlight>
 
Submit the job:
 
<pre>
sbatch mpi_hello.sbatch
</pre>
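While the job is queued or running, standard Slurm commands can be used to monitor it (the job ID is printed by <code>sbatch</code>):

<pre>
squeue -u $USER              # show your pending and running jobs
sacct -j <jobid>             # accounting/status info for the job
cat mpi-hello-<jobid>.out    # inspect the output once the job finishes
</pre>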
 
== Step 4 — Expected Output ==
 
The generated output file (<code>mpi-hello-&lt;jobid&gt;.out</code>) should contain output similar to the following. Ranks write concurrently, so the lines may appear in any order:
 
<pre>
Hello from rank 0 of 8 on rocky4.rocky.nimbios.org
Hello from rank 1 of 8 on rocky4.rocky.nimbios.org
Hello from rank 2 of 8 on rocky4.rocky.nimbios.org
Hello from rank 3 of 8 on rocky4.rocky.nimbios.org
Hello from rank 4 of 8 on rocky5.rocky.nimbios.org
Hello from rank 5 of 8 on rocky5.rocky.nimbios.org
Hello from rank 6 of 8 on rocky5.rocky.nimbios.org
Hello from rank 7 of 8 on rocky5.rocky.nimbios.org
</pre>
 
----
 
= Quick Reference =
 
{| class="wikitable"
! Action !! Command
|-
| Load MPI module || <code>module load OpenMPI/4.1.6-GCC-13.2.0</code>
|-
| Recommended launch || <code>srun --mpi=pmix_v3 ./my_program</code>
|-
| Alternative launch || <code>mpirun ./my_program</code>
|-
| Avoid || <code>srun --mpi=pmi2</code>, <code>srun --mpi=none</code>, OpenMPI 5
|}
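For quick iteration, the same run can also be done interactively instead of through a batch script (a sketch assuming the same partition and resource limits apply):

<pre>
salloc --partition=compute --nodes=2 --ntasks=8 --time=00:05:00
module load OpenMPI/4.1.6-GCC-13.2.0
srun --mpi=pmix_v3 ./hello_mpi
exit
</pre>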

----

''Revision as of 00:40, 16 December 2025''
