Latest revision as of 14:53, 13 February 2024
About
Kivlin Lab has purchased compute resources on the Rocky cluster. These resources are only available to accounts associated with the Kivlin Lab. When requesting an account, please specify your association with the lab to gain access to these resources.
Kivlin Partition
Below are the nodes in the Kivlin Partition:
Node  | Type    | Architecture                     | vCPU | Memory
------|---------|----------------------------------|------|-------
bull1 | compute | Xeon Gold 6430 (Sapphire Rapids) | 128  | 512G
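Once your account is associated with the lab, you can check the partition and its node with standard Slurm commands. This is a sketch: the commands are standard Slurm tooling, but the exact output format depends on the site's configuration.

```shell
# List the nodes in the kivlin partition and their current state
sinfo -p kivlin

# Show detailed specs for the bull1 node (CPUs, memory, state)
scontrol show node bull1
```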
Usage
By default, accounts will use the compute_all partition, which includes all shared pool resources. When submitting a job to Rocky, you will need to specify the kivlin partition as part of your batch file.
Example batch file:
#!/bin/bash
#SBATCH --partition=kivlin,compute_all
#SBATCH --job-name=PYTHON_PRIME
#SBATCH --output=python_prime_%j.out
#SBATCH --mail-user=me@test.com
#SBATCH --mail-type=END

module load Python
python3 prime.py
In the above batch file, the kivlin partition is listed first, followed by the compute_all partition. With these parameters, the job scheduler will look for available resources in the kivlin partition first; if those are all in use, it will fall back to resources from the shared pool.
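The batch file above launches a script named prime.py. Its contents are not part of this page, but a minimal, self-contained sketch of such a script might look like the following (the function names and the limit of 50 are illustrative assumptions, not the lab's actual code):

```python
# Hypothetical sketch of the prime.py script referenced in the batch file.

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True


def primes_up_to(limit: int) -> list[int]:
    """Return all primes from 2 through limit, inclusive."""
    return [n for n in range(2, limit + 1) if is_prime(n)]


if __name__ == "__main__":
    # Written to python_prime_<jobid>.out via the #SBATCH --output directive
    print(primes_up_to(50))
```

Saved next to the batch file, this script would be executed by the python3 prime.py line once the scheduler allocates a node.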