Rocky Python MPI Hello World

From NIMBioS
Latest revision as of 20:35, 12 January 2026

= MPI4PY Jobs on Rocky =

== Supported versions ==

Rocky supports PMIx v3 for MPI process management. MPI applications should be launched via <code>srun</code> with <code>--mpi=pmix_v3</code>.

To see which versions of mpi4py are available, use the following command:

<syntaxhighlight lang="bash">
module spider mpi4py
</syntaxhighlight>
<syntaxhighlight lang="bash">
    Versions:
        mpi4py/3.0.1 (E)
        mpi4py/3.0.2 (E)
        mpi4py/3.0.3 (E)
        mpi4py/3.1.1 (E)
        mpi4py/3.1.3 (E)
        mpi4py/3.1.4-gompi-2022b
        mpi4py/3.1.4-gompi-2023a
        mpi4py/3.1.4 (E)
        mpi4py/3.1.5-gompi-2023b
        mpi4py/3.1.5 (E)
        mpi4py/3.1.6 (E)
        mpi4py/4.0.1-gompi-2024a
        mpi4py/4.0.1 (E)
</syntaxhighlight>

Given the versions above, we will use <code>mpi4py/4.0.1</code>.


== hello_mpi4py.py ==

<syntaxhighlight lang="python">
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD        # default communicator spanning all ranks
rank = comm.Get_rank()       # this process's rank within the communicator
size = comm.Get_size()       # total number of ranks in the job
hostname = socket.gethostname()

print(f"Hello from rank {rank} of {size} on {hostname}")
</syntaxhighlight>
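Beyond independent prints, a natural next step is a collective operation. The sketch below is my own extension of the hello-world script, not part of the cluster documentation: it gathers every rank's hostname to rank 0 so a single process can summarize where the job ran.

<syntaxhighlight lang="python">
# Sketch: gather (rank, hostname) pairs to rank 0 with a collective.
# Extension of hello_mpi4py.py, not part of the cluster docs.
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Every rank contributes one (rank, hostname) pair; rank 0 receives
# the full list, all other ranks receive None.
pairs = comm.gather((rank, socket.gethostname()), root=0)

if rank == 0:
    hosts = {}
    for r, h in pairs:
        hosts.setdefault(h, []).append(r)
    for h, ranks in sorted(hosts.items()):
        print(f"{h}: ranks {ranks}")
</syntaxhighlight>

Run it the same way as the hello-world script (<code>srun --mpi=pmix_v3 python</code>); only rank 0 prints the per-node summary.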

== hello_mpi4py.sbatch ==

Note the use of the <code>--mpi=pmix_v3</code> parameter.

<syntaxhighlight lang="bash">
#!/usr/bin/env bash
#SBATCH --job-name=hello-mpi4py
#SBATCH --output=hello-mpi4py-%j.out
#SBATCH --error=hello-mpi4py-%j.err
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --time=00:05:00

module load mpi4py/4.0.1

srun --mpi=pmix_v3 python hello_mpi4py.py
</syntaxhighlight>
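If you want Slurm to place the same number of ranks on each node, <code>--ntasks-per-node</code> can be used instead of a bare <code>--ntasks</code>. A sketch of the same job with that layout (the directive values are illustrative, not a site requirement):

<syntaxhighlight lang="bash">
#!/usr/bin/env bash
#SBATCH --job-name=hello-mpi4py
#SBATCH --output=hello-mpi4py-%j.out
#SBATCH --error=hello-mpi4py-%j.err
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4   # 2 nodes x 4 ranks each = 8 ranks total
#SBATCH --time=00:05:00

module load mpi4py/4.0.1

srun --mpi=pmix_v3 python hello_mpi4py.py
</syntaxhighlight>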

Submit the job:

<syntaxhighlight lang="bash">
sbatch hello_mpi4py.sbatch
</syntaxhighlight>

== Expected Output ==

All eight ranks print concurrently, so the lines may appear in any order:

<syntaxhighlight lang="bash">
Hello from rank 0 of 8 on rocky4.rocky.nimbios.org
Hello from rank 1 of 8 on rocky4.rocky.nimbios.org
Hello from rank 2 of 8 on rocky4.rocky.nimbios.org
Hello from rank 3 of 8 on rocky4.rocky.nimbios.org
Hello from rank 4 of 8 on rocky5.rocky.nimbios.org
Hello from rank 5 of 8 on rocky5.rocky.nimbios.org
Hello from rank 6 of 8 on rocky5.rocky.nimbios.org
Hello from rank 7 of 8 on rocky5.rocky.nimbios.org
</syntaxhighlight>
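One quick way to confirm from the captured stdout that every rank reported exactly once, regardless of print order, is a small plain-Python check. The <code>ranks_reported</code> helper below is my own, not part of the cluster tooling:

<syntaxhighlight lang="python">
# Hypothetical sanity-check helper: given the job's stdout, verify
# that each rank 0..size-1 printed its hello line exactly once.
def ranks_reported(output: str, size: int) -> bool:
    seen = set()
    for line in output.splitlines():
        # Lines look like: "Hello from rank 3 of 8 on rocky4..."
        parts = line.split()
        if len(parts) >= 4 and parts[:2] == ["Hello", "from"]:
            seen.add(int(parts[3]))
    return seen == set(range(size))

example = """Hello from rank 1 of 2 on rocky5.rocky.nimbios.org
Hello from rank 0 of 2 on rocky4.rocky.nimbios.org"""
print(ranks_reported(example, 2))  # True even though rank 1 printed first
</syntaxhighlight>

Run it against the <code>hello-mpi4py-%j.out</code> file with the job's rank count to catch silently missing ranks.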