Rocky User Guide


About Rocky

Rocky is an HPC cluster composed of compute-heavy nodes with 40 cores/80 threads and 512 GB of RAM [rocky], memory-intensive nodes with 20 cores/40 threads and 768 GB of RAM [moose], and a Ceph storage subsystem [quarrel].


Requesting Access

To gain access to Rocky, you must first fill out the Rocky_Access_Form.


Logging in to Rocky

Rocky's firewall limits access to the UTK network. You will need to either be on campus or use the Campus VPN.

Once your account is created, you will be able to SSH into a shell or use SCP to copy files to and from Rocky.

Rocky uses Public Key Authentication for access instead of passwords.

Please review the following pages for OS-specific instructions:

Rocky_Access_SSH (Linux or Mac)
Rocky_Access_Windows (Windows)
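
If you have not used key-based SSH before, the overall flow looks like the sketch below. The hostname rocky.example.edu is a placeholder for illustration; use the actual address and the key-registration steps given in the OS-specific pages above.

  # Generate a key pair on your own machine if you do not already have one
  ssh-keygen -t ed25519

  # After your public key is registered with your Rocky account,
  # open a shell on the cluster (placeholder hostname):
  ssh username@rocky.example.edu

  # Copy files to and from Rocky with SCP (same placeholder hostname):
  scp results.csv username@rocky.example.edu:~/
  scp username@rocky.example.edu:~/output.log .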


Environment Modules

Rocky uses Lmod as its environment module system. This allows you to easily set up your session's or job's environment with the languages, libraries, and specific versions you need.

To learn more about using Lmod on Rocky, check out Rocky Environments.
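
For example, a typical Lmod session might look like the following. The module name and version are illustrative; module avail lists what is actually installed on Rocky.

  # List the modules available on the cluster
  module avail

  # Search all modules, including ones hidden behind dependencies
  module spider python

  # Load a module (name and version here are illustrative)
  module load python/3.11

  # Show what is currently loaded in your session
  module list

  # Unload everything and start from a clean environment
  module purge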


Submitting a Job

Rocky uses Slurm to queue and submit jobs to the cluster's compute nodes.

Jobs on a compute cluster are submitted to a queue and processed according to a Fairshare priority system. Your job's priority is based on the associations you have (such as your lab) and how much compute time you and those associations have used recently. Sometimes your jobs may run instantly; other times they may wait in the queue before they process.

To learn how to set up your own jobs, check out the Anatomy of a Rocky Job page.
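
As a rough sketch of what a job submission looks like, the batch script below requests resources and runs a program. The job name, resource amounts, and module are placeholders rather than Rocky-specific values; the Anatomy of a Rocky Job page documents the options that apply on Rocky.

  #!/bin/bash
  #SBATCH --job-name=myjob         # name shown in the queue (placeholder)
  #SBATCH --ntasks=1               # run a single task
  #SBATCH --cpus-per-task=4        # cores for that task (illustrative)
  #SBATCH --mem=8G                 # memory request (illustrative)
  #SBATCH --time=01:00:00          # wall-clock limit (illustrative)

  # Set up the environment the job needs (module name illustrative)
  module load python/3.11

  # Do the actual work
  python myscript.py

Saved as myjob.sh, the script is handed to the queue with sbatch, and squeue shows whether it is pending or running:

  sbatch myjob.sh
  squeue -u $USER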

Examples

Beyond the examples below, we have also started a GitHub Repository.

Python

R

MATLAB


Acknowledgement and Citations

If any work performed on Rocky is used in a research report, journal article, or other publication that requires citation of authors' work, please acknowledge NIMBioS.

Our suggested acknowledgment is as follows.

[A portion of] The computation for this work was performed on the National Institute for Mathematical and Biological Synthesis (NIMBioS) computational resources at the University of Tennessee.