<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nimbios.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jstratt7</id>
	<title>NIMBioS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nimbios.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jstratt7"/>
	<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/Special:Contributions/Jstratt7"/>
	<updated>2026-04-04T16:20:28Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.37.2</generator>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=240</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=240"/>
		<updated>2024-05-07T14:52:06Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compute Resources =&lt;br /&gt;
&lt;br /&gt;
If you need exclusive access to compute resources for yourself, your lab, or any group of people you designate, we offer the ability to purchase those compute resources on Rocky.  We will work with you to determine your needs and provide a quote, then purchase new hardware to meet those needs and install it on Rocky.  You will have exclusive access to the new hardware for the duration of the agreement, typically three years.  &lt;br /&gt;
&lt;br /&gt;
At the end of the agreed term you will have the option of extending the agreement or allowing the hardware to enter the shared pool.  As long as the shared pool has compute resources you have helped provide, you will have priority access to all resources in the shared pool.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
We offer the ability to purchase storage on our Ceph storage cluster.   Your storage will be fully redundant across many drives and at least three storage nodes.  Your data will be backed up offsite at regular intervals.&lt;br /&gt;
&lt;br /&gt;
You will be able to access your storage through project directories on the Rocky compute nodes.  We can offer alternative interfaces to the data if necessary.  If you are interested in other interfaces, please bring those up in discussions prior to an agreement.&lt;br /&gt;
&lt;br /&gt;
Initial agreements on storage are typically three years.  At the end of the three years, the agreement may be extended.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=225</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=225"/>
		<updated>2023-06-05T17:47:59Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created, you will be able to use SSH to open a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
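&lt;br /&gt;
If you do not already have a key pair, one can be generated on Linux or Mac roughly as sketched below (Windows users should follow [[Rocky_Access_Windows]]; the key type shown is a common modern choice, not necessarily what Rocky requires):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generate an Ed25519 key pair; the public key to share is ~/.ssh/id_ed25519.pub&lt;br /&gt;
ssh-keygen -t ed25519&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;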
&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
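&lt;br /&gt;
A typical session might look like the following sketch (the module names and versions available on Rocky may differ; check '''module avail''' on the cluster):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List the modules available on the cluster&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
# Load a language environment&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
# Show what is currently loaded&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;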
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
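&lt;br /&gt;
As a minimal sketch (the job name and script body here are placeholders, not a Rocky-specific recipe), a batch script is submitted with '''sbatch''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=HELLO&lt;br /&gt;
#SBATCH --output=hello_%j.out&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;hello from $(hostname)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save this as hello.run and run '''sbatch hello.run'''; Slurm prints the assigned job id and writes the output to hello_&amp;lt;jobid&amp;gt;.out, since %j expands to the job id.&lt;br /&gt;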
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
Beyond the examples below, we have also started a [https://github.com/rocky-cluster GitHub repository].&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Discover Prime Numbers ]]&lt;br /&gt;
* [[ Rocky_Python_Prime_Array | Discover Prime Numbers using a job array ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Discover Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== MATLAB ====&lt;br /&gt;
* [[ Rocky_MATLAB_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_MATLAB_Prime_Array | Discover Prime Numbers using a job array ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=224</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=224"/>
		<updated>2023-05-21T04:02:13Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created, you will be able to use SSH to open a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
Beyond the examples below, we have also started a [https://github.com/rocky-cluster GitHub repository].&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Discover Prime Numbers ]]&lt;br /&gt;
* [[ Rocky_Python_Prime_Array | Discover Prime Numbers using a job array ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Discover Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== MATLAB ====&lt;br /&gt;
* [[ Rocky_MATLAB_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_MATLAB_Prime_Array | Discover Prime Numbers using a job array ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=223</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=223"/>
		<updated>2023-05-17T21:07:27Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are examples of node costs.  While we update this list routinely, hardware prices change daily, so the tables below are meant to give a sense of scale, not a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage nodes are added to our Ceph storage subsystem to provide highly redundant and fault-tolerant storage accessible to the Rocky Cluster.  By providing a storage node, you will have access to the additional storage it provides.  Long-term storage will incur further costs as hardware ages and fails.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 34TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
Compute nodes are added to Rocky's pool of compute resources.  By providing a compute node, you and/or your group will be given priority access to the amount of compute added for a designated amount of time (generally the length of the funding project).&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || May 1, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $17,000&lt;br /&gt;
|-&lt;br /&gt;
| Compute || 112 vcpu&amp;lt;br/&amp;gt;512GB RAM&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R750&amp;lt;br/&amp;gt;2 x Xeon Gold 6330&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=222</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=222"/>
		<updated>2023-05-17T21:05:02Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are examples of node costs.  While we update this list routinely, hardware prices change daily, so the tables below are meant to give a sense of scale, not a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage nodes are added to our Ceph storage subsystem to provide highly redundant and fault-tolerant storage accessible to the Rocky Cluster.  By providing a storage node, you will have access to the additional storage it provides.  Long-term storage will incur further costs as hardware ages and fails.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 30TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
Compute nodes are added to Rocky's pool of compute resources.  By providing a compute node, you and/or your group will be given priority access to the amount of compute added for a designated amount of time (generally the length of the funding project).&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || May 1, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $17,000&lt;br /&gt;
|-&lt;br /&gt;
| Compute || 112 vcpu&amp;lt;br/&amp;gt;512GB RAM&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R750&amp;lt;br/&amp;gt;2 x Xeon Gold 6330&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=221</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=221"/>
		<updated>2023-05-17T18:17:01Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are examples of node costs.  While we update this list routinely, hardware prices change daily, so the tables below are meant to give a sense of scale, not a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage nodes are added to our Ceph storage subsystem to provide highly redundant and fault-tolerant storage accessible to the Rocky Cluster.  By providing a storage node, you will have access to the additional storage it provides.  Long-term storage will incur further costs as hardware ages and fails.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
Compute nodes are added to Rocky's pool of compute resources.  By providing a compute node, you and/or your group will be given priority access to the amount of compute added for a designated amount of time (generally the length of the funding project).&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || May 1, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $17,000&lt;br /&gt;
|-&lt;br /&gt;
| Compute || 112 vcpu&amp;lt;br/&amp;gt;512GB RAM&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R750&amp;lt;br/&amp;gt;2 x Xeon Gold 6330&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=220</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=220"/>
		<updated>2023-05-17T18:11:31Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Compute Node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are examples of node costs.  While we update this list routinely, hardware prices change daily, so the tables below are meant to give a sense of scale, not a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage nodes are added to our Ceph storage subsystem to provide highly redundant and fault-tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
Compute nodes are added to Rocky's pool of compute resources.  By providing a compute node, you and/or your group will be given priority access to the amount of compute added for a designated amount of time (generally the length of the funding project).&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || May 1, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $17,000&lt;br /&gt;
|-&lt;br /&gt;
| Compute || 112 vcpu&amp;lt;br/&amp;gt;512GB RAM&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R750&amp;lt;br/&amp;gt;2 x Xeon Gold 6330&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=219</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=219"/>
		<updated>2023-05-17T18:02:55Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Compute Node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are examples of node costs.  While we update this list routinely, hardware prices change daily, so the tables below are meant to give a sense of scale, not a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage nodes are added to our Ceph storage subsystem to provide highly redundant and fault-tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || May 1, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $17,000&lt;br /&gt;
|-&lt;br /&gt;
| Compute || 112 vcpu&amp;lt;br/&amp;gt;512GB RAM&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R750&amp;lt;br/&amp;gt;2 x Xeon Gold 6330&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=218</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=218"/>
		<updated>2023-05-01T03:10:09Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Example Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created, you will be able to use SSH to open a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
Beyond the examples below, we have also started a [https://github.com/rocky-cluster GitHub repository].&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Discover Prime Numbers ]]&lt;br /&gt;
* [[ Rocky_Python_Prime_Array | Discover Prime Numbers using a job array ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Discover Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=217</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=217"/>
		<updated>2023-04-29T02:15:40Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_Prime_Array&amp;diff=216</id>
		<title>Rocky MATLAB Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_Prime_Array&amp;diff=216"/>
		<updated>2023-04-29T02:13:08Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= Job Array = Job arrays allow you to run the same code many times with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can run concurrently.  In the example of discovering prime numbers, lets say we want to discover all the primes in the first 1 million numbers.  We could just create code that goes from 1 to 1000...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that goes from 1 to 1000000.  But if we use a job array, we could instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH parameter that defines both how many jobs to run and the range of task ids to produce.  In our example, the range is 0 to 99 (we could also have used 1-100).&lt;br /&gt;
&lt;br /&gt;
Second, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the filename patterns at [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E this link].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''matlab_prime_array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=MATLAB_PRIME&lt;br /&gt;
#SBATCH --output=logs/matlab_prime_array_%A_%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load MATLAB/2022b &lt;br /&gt;
&lt;br /&gt;
matlab -nojvm -batch &amp;quot;task_id = ${SLURM_ARRAY_TASK_ID}; run('prime_array.m');&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= MATLAB Code =&lt;br /&gt;
&lt;br /&gt;
In the MATLAB code, we'll need to determine the MIN and MAX values to search.  As long as we know our chunk size, we can calculate those values from the task id passed in as a parameter.  This way, each execution of the code processes a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.m'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% task_id is set by the batch script from SLURM_ARRAY_TASK_ID&lt;br /&gt;
chunksize = 10000;&lt;br /&gt;
&lt;br /&gt;
% Each task searches its own non-overlapping block of chunksize numbers&lt;br /&gt;
start = task_id * chunksize;&lt;br /&gt;
stop = start + chunksize - 1;&lt;br /&gt;
&lt;br /&gt;
for i = start : stop&lt;br /&gt;
    if isprime(i)&lt;br /&gt;
        fprintf(&amp;quot;%d\n&amp;quot;, i)&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pwd&lt;br /&gt;
/home/test_user/projects/matlab/prime_array/&lt;br /&gt;
&lt;br /&gt;
$ ls&lt;br /&gt;
logs  prime_array.m  matlab_prime_array.run&lt;br /&gt;
&lt;br /&gt;
$ sbatch matlab_prime_array.run &lt;br /&gt;
Submitted batch job 5771&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job is queued with all 100 jobs.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue&lt;br /&gt;
            JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[0-99] compute_all MATLAB_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the jobs are beginning to run.  Three of the jobs have completed, 20 of them are currently running, and the rest are still queued.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[23-99] compute_all MATLAB_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
           5771_22 compute_all MATLAB_P test_use  R       0:00      1 rocky2&lt;br /&gt;
           5771_20 compute_all MATLAB_P test_use  R       0:01      1 rocky2&lt;br /&gt;
           5771_21 compute_all MATLAB_P test_use  R       0:01      1 rocky2&lt;br /&gt;
            5771_3 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_4 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_5 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_6 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_7 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_8 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_9 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_10 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_11 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_12 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_13 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_14 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_15 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_16 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_17 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_18 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_19 compute_all MATLAB_P test_use  R       0:03      1 rocky2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the jobs are no longer listed in the queue, we see there is a log file for every task in the job array.  Each one contains the prime numbers found in its respective chunk.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls logs/*.out | sort -V &lt;br /&gt;
logs/matlab_prime_array_5771-0.out&lt;br /&gt;
logs/matlab_prime_array_5771-1.out&lt;br /&gt;
logs/matlab_prime_array_5771-2.out&lt;br /&gt;
logs/matlab_prime_array_5771-3.out&lt;br /&gt;
logs/matlab_prime_array_5771-4.out&lt;br /&gt;
logs/matlab_prime_array_5771-5.out&lt;br /&gt;
logs/matlab_prime_array_5771-6.out&lt;br /&gt;
logs/matlab_prime_array_5771-7.out&lt;br /&gt;
logs/matlab_prime_array_5771-8.out&lt;br /&gt;
logs/matlab_prime_array_5771-9.out&lt;br /&gt;
logs/matlab_prime_array_5771-10.out&lt;br /&gt;
logs/matlab_prime_array_5771-11.out&lt;br /&gt;
[truncated]&lt;br /&gt;
logs/matlab_prime_array_5771-90.out&lt;br /&gt;
logs/matlab_prime_array_5771-91.out&lt;br /&gt;
logs/matlab_prime_array_5771-92.out&lt;br /&gt;
logs/matlab_prime_array_5771-93.out&lt;br /&gt;
logs/matlab_prime_array_5771-94.out&lt;br /&gt;
logs/matlab_prime_array_5771-95.out&lt;br /&gt;
logs/matlab_prime_array_5771-96.out&lt;br /&gt;
logs/matlab_prime_array_5771-97.out&lt;br /&gt;
logs/matlab_prime_array_5771-98.out&lt;br /&gt;
logs/matlab_prime_array_5771-99.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the results in numeric order by concatenating the log files with the cat command and sorting the combined output.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat logs/matlab_prime_array_5771-*.out | sort -V&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
13&lt;br /&gt;
17&lt;br /&gt;
19&lt;br /&gt;
23&lt;br /&gt;
29&lt;br /&gt;
31&lt;br /&gt;
37&lt;br /&gt;
41&lt;br /&gt;
43&lt;br /&gt;
47&lt;br /&gt;
53&lt;br /&gt;
59&lt;br /&gt;
61&lt;br /&gt;
[truncated]&lt;br /&gt;
999863&lt;br /&gt;
999883&lt;br /&gt;
999907&lt;br /&gt;
999917&lt;br /&gt;
999931&lt;br /&gt;
999953&lt;br /&gt;
999959&lt;br /&gt;
999961&lt;br /&gt;
999979&lt;br /&gt;
999983&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
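The -V flag asks sort for a natural (version) ordering.  A plain lexicographic sort would order the numbers as strings, as this small illustrative Python sketch shows:&lt;br /&gt;

```python
# Why `sort -V` matters: lexicographic vs numeric ordering of number strings.
values = ["2", "11", "101", "3"]

lexicographic = sorted(values)        # compares character by character
numeric = sorted(values, key=int)     # the ordering `sort -V` gives us here

assert lexicographic == ["101", "11", "2", "3"]   # "101" sorts before "2"!
assert numeric == ["2", "3", "11", "101"]
```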
&lt;br /&gt;
&lt;br /&gt;
We can also use the wc command to count how many lines, and therefore how many primes, were found.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat logs/matlab_prime_array_5771-* | wc -l&lt;br /&gt;
78498&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=215</id>
		<title>Rocky MATLAB HelloWorld</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=215"/>
		<updated>2023-04-28T20:09:29Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Running Job */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Code =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.m'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;quot;hello world&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=MATLAB_HELLOWORLD&lt;br /&gt;
#SBATCH --output=logs/matlab_helloworld_%j.out&lt;br /&gt;
&lt;br /&gt;
module load MATLAB/2022b &lt;br /&gt;
&lt;br /&gt;
matlab -nojvm -batch &amp;quot;run('helloworld.m');&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ pwd&lt;br /&gt;
/home/test_user/projects/matlab/helloworld&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.m  helloworld.run  logs&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 helloworld]$ sbatch helloworld.run &lt;br /&gt;
Submitted batch job 3871&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This job will only take a few seconds to run and then we can check the log file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls logs&lt;br /&gt;
matlab_helloworld_3871.out&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 helloworld]$ cat logs/matlab_helloworld_3871.out &lt;br /&gt;
hello world&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=214</id>
		<title>Rocky MATLAB HelloWorld</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=214"/>
		<updated>2023-04-28T19:54:34Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Code =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.m'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;quot;hello world&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=MATLAB_HELLOWORLD&lt;br /&gt;
#SBATCH --output=logs/matlab_helloworld_%j.out&lt;br /&gt;
&lt;br /&gt;
module load MATLAB/2022b &lt;br /&gt;
&lt;br /&gt;
matlab -nojvm -batch &amp;quot;run('helloworld.m');&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ pwd&lt;br /&gt;
/home/test_user/projects/matlab/helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.m  helloworld.run  logs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ sbatch helloworld.run &lt;br /&gt;
Submitted batch job 3871&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls logs&lt;br /&gt;
matlab_helloworld_3871.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ cat logs/matlab_helloworld_3871.out &lt;br /&gt;
hello world&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=213</id>
		<title>Rocky MATLAB HelloWorld</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_MATLAB_HelloWorld&amp;diff=213"/>
		<updated>2023-04-28T19:53:57Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= Code =  '''helloworld.m''' &amp;lt;pre&amp;gt; fprintf(&amp;quot;hello world&amp;quot;); &amp;lt;/pre&amp;gt;  = Batch Script =  '''helloworld.run''' &amp;lt;pre&amp;gt; #!/bin/bash  #SBATCH --job-name=MATLAB_HELLOWORLD #SBATCH --output=logs/matlab_helloworld_%j.out  module load MATLAB/2022b   matlab -nojvm -batch &amp;quot;run('helloworld.m');&amp;quot; &amp;lt;/pre&amp;gt;  = Running Job =  &amp;lt;pre&amp;gt; [test_user@rocky7 helloworld]$ pwd /home/test_user/projects/matlab/helloworld &amp;lt;/pre&amp;gt; &amp;lt;pre&amp;gt; [test_user@rocky7 helloworld]$ ls helloworld.m  helloworld.run  logs &amp;lt;/p...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Code =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.m'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;quot;hello world&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=MATLAB_HELLOWORLD&lt;br /&gt;
#SBATCH --output=logs/matlab_helloworld_%j.out&lt;br /&gt;
&lt;br /&gt;
module load MATLAB/2022b &lt;br /&gt;
&lt;br /&gt;
matlab -nojvm -batch &amp;quot;run('helloworld.m');&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ pwd&lt;br /&gt;
/home/test_user/projects/matlab/helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.m  helloworld.run  logs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ sbatch helloworld.run &lt;br /&gt;
Submitted batch job 3871&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls logs&lt;br /&gt;
matlab_helloworld_3871.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ cat logs/matlab_helloworld_3871.out &lt;br /&gt;
hello world&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_R_HelloWorld&amp;diff=212</id>
		<title>Rocky R HelloWorld</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_R_HelloWorld&amp;diff=212"/>
		<updated>2023-04-25T18:37:37Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= R Code =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.R'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
print(&amp;quot;Hello World!&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=R_HELLOWORLD&lt;br /&gt;
#SBATCH --output=R_hello_%j.out&lt;br /&gt;
&lt;br /&gt;
module load R/4.2.1-foss-2022a &lt;br /&gt;
&lt;br /&gt;
Rscript helloworld.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ pwd&lt;br /&gt;
/home/test_user/projects/R/helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.R  helloworld.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ sbatch helloworld.run &lt;br /&gt;
Submitted batch job 3875&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.R  helloworld.run  R_hello_3875.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ cat R_hello_3875.out &lt;br /&gt;
Hello World!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=211</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=211"/>
		<updated>2023-04-25T18:34:16Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
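The trial division above tests every candidate divisor up to num, which becomes slow for larger values.  A common optimization, shown here as a sketch rather than as part of the original script, is to stop at the square root of num:&lt;br /&gt;

```python
import math

def is_prime_fast(num):
    """Trial division, but only checking divisors up to sqrt(num)."""
    if num <= 1:
        return False
    for i in range(2, math.isqrt(num) + 1):
        if num % i == 0:
            return False
    return True

# Agrees with the naive version on a small range.
assert [n for n in range(2, 30) if is_prime_fast(n)] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Any composite num has a divisor no larger than its square root, so checking beyond that point is wasted work.&lt;br /&gt;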
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=210</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=210"/>
		<updated>2023-04-22T01:49:51Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Example Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
In order to gain access to Rocky you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will either need to be on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session's or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Discover Prime Numbers ]]&lt;br /&gt;
* [[ Rocky_Python_Prime_Array | Discover Prime Numbers using a job array ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Discover Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=209</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=209"/>
		<updated>2023-04-21T21:30:52Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each run with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job into multiple smaller jobs that each finish more quickly and can run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could write code that checks every number from 1 to 1000000, but with a job array we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH parameter that defines both how many jobs to run and the range of task ids to produce.  In our example the range is 0 to 99 (we could also have used 1-100).&lt;br /&gt;
&lt;br /&gt;
Second, for the log file pattern, we're using %A and %a instead of %j.  These patterns are specific to job arrays.  You can read more about the filename patterns at [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E this link].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  The code will need to read in this parameter and use it to determine which data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''python-prime-array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE - 1  # last number in this task's chunk&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime-array/&lt;br /&gt;
&lt;br /&gt;
$ ls&lt;br /&gt;
logs  prime_array.py  python-prime-array.run&lt;br /&gt;
&lt;br /&gt;
$ sbatch python-prime-array.run &lt;br /&gt;
Submitted batch job 5771&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job array is queued with all 100 tasks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue&lt;br /&gt;
            JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[0-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the jobs are beginning to run.  Three of the tasks have completed, 20 of them are currently running, and the rest are still queued.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[23-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
           5771_22 compute_all PYTHON_P test_use  R       0:00      1 rocky2&lt;br /&gt;
           5771_20 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
           5771_21 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
            5771_3 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_4 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_5 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_6 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_7 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_8 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_9 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_10 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_11 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_12 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_13 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_14 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_15 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_16 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_17 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_18 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_19 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the jobs are no longer listed in the queue, we see there is a log file for every task in the job array.  Each one contains the prime numbers found in its respective chunk.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls logs/*.out | sort -V &lt;br /&gt;
logs/python_prime_array_5771-0.out&lt;br /&gt;
logs/python_prime_array_5771-1.out&lt;br /&gt;
logs/python_prime_array_5771-2.out&lt;br /&gt;
logs/python_prime_array_5771-3.out&lt;br /&gt;
logs/python_prime_array_5771-4.out&lt;br /&gt;
logs/python_prime_array_5771-5.out&lt;br /&gt;
logs/python_prime_array_5771-6.out&lt;br /&gt;
logs/python_prime_array_5771-7.out&lt;br /&gt;
logs/python_prime_array_5771-8.out&lt;br /&gt;
logs/python_prime_array_5771-9.out&lt;br /&gt;
logs/python_prime_array_5771-10.out&lt;br /&gt;
logs/python_prime_array_5771-11.out&lt;br /&gt;
[truncated]&lt;br /&gt;
logs/python_prime_array_5771-90.out&lt;br /&gt;
logs/python_prime_array_5771-91.out&lt;br /&gt;
logs/python_prime_array_5771-92.out&lt;br /&gt;
logs/python_prime_array_5771-93.out&lt;br /&gt;
logs/python_prime_array_5771-94.out&lt;br /&gt;
logs/python_prime_array_5771-95.out&lt;br /&gt;
logs/python_prime_array_5771-96.out&lt;br /&gt;
logs/python_prime_array_5771-97.out&lt;br /&gt;
logs/python_prime_array_5771-98.out&lt;br /&gt;
logs/python_prime_array_5771-99.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
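Because each task's chunk starts at task_id * CHUNKSIZE, integer division recovers which task's log file should contain a given number.  A small illustrative sketch (the helper name is hypothetical):&lt;br /&gt;

```python
CHUNKSIZE = 10000  # must match the value in prime_array.py

def task_for(n):
    """Which array task's chunk (and therefore which log file) contains n."""
    return n // CHUNKSIZE

assert task_for(7) == 0        # logs/python_prime_array_<jobid>-0.out
assert task_for(999983) == 99  # the last chunk
```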
&lt;br /&gt;
&lt;br /&gt;
We can view the results in numeric order by concatenating the log files with the cat command and sorting the combined output.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat logs/python_prime_array_5771-*.out | sort -V&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
13&lt;br /&gt;
17&lt;br /&gt;
19&lt;br /&gt;
23&lt;br /&gt;
29&lt;br /&gt;
31&lt;br /&gt;
37&lt;br /&gt;
41&lt;br /&gt;
43&lt;br /&gt;
47&lt;br /&gt;
53&lt;br /&gt;
59&lt;br /&gt;
61&lt;br /&gt;
[truncated]&lt;br /&gt;
999863&lt;br /&gt;
999883&lt;br /&gt;
999907&lt;br /&gt;
999917&lt;br /&gt;
999931&lt;br /&gt;
999953&lt;br /&gt;
999959&lt;br /&gt;
999961&lt;br /&gt;
999979&lt;br /&gt;
999983&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
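The total of 78498 primes below one million can be cross-checked independently.  Here is a standalone Sieve of Eratosthenes sketch (not part of the job itself):&lt;br /&gt;

```python
# Sieve of Eratosthenes: count the primes below 1,000,000.
LIMIT = 1_000_000
is_prime = [True] * LIMIT
is_prime[0] = is_prime[1] = False
for p in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[p]:
        # Mark every multiple of p (starting at p*p) as composite.
        for multiple in range(p * p, LIMIT, p):
            is_prime[multiple] = False

print(sum(is_prime))  # 78498
```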
&lt;br /&gt;
&lt;br /&gt;
We can also use the wc command to count how many lines, and therefore how many primes, were found.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat logs/python_prime_array_5771-* | wc -l&lt;br /&gt;
78498&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=208</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=208"/>
		<updated>2023-04-21T20:49:36Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Example Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
In order to gain access to Rocky you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will either need to be on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session's or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Prime Numbers ]]&lt;br /&gt;
* [[ Rocky_Python_Prime_Array | Prime Numbers using Job Array ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=207</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=207"/>
		<updated>2023-04-21T20:48:06Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each run with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job into multiple smaller jobs that each finish more quickly and can run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could write code that checks every number from 1 to 1000000, but with a job array we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH parameter that defines both how many jobs to run and the range of task ids to produce.  In our example the range is 0 to 99 (we could also have used 1-100).&lt;br /&gt;
&lt;br /&gt;
Secondly, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the file patterns in [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E the sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''python-prime-array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id being passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# MAX is exclusive here, so neighboring tasks don't overlap&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
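As an aside, Slurm also exposes the task id to the job's environment, so the same value could be read with os.environ instead of a command-line argument (a variation on the script above, not what it actually does):&lt;br /&gt;

```python
import os

# SLURM_ARRAY_TASK_ID is set by Slurm inside each array task;
# fall back to "0" so the script still runs outside of a job
ARRAYID = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))

CHUNKSIZE = 10000
MIN = ARRAYID * CHUNKSIZE
MAX = MIN + CHUNKSIZE
```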
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime-array/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
logs  prime_array.py  python-prime-array.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime-array.run &lt;br /&gt;
Submitted batch job 5771&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job array is queued with all 100 tasks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
            JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[0-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the jobs are beginning to run.  Three of the tasks have completed, 20 of them are currently running, and the rest are still queued.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[23-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
           5771_22 compute_all PYTHON_P test_use  R       0:00      1 rocky2&lt;br /&gt;
           5771_20 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
           5771_21 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
            5771_3 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_4 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_5 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_6 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_7 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_8 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_9 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_10 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_11 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_12 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_13 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_14 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_15 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_16 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_17 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_18 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_19 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the jobs are no longer listed in the queue, we see there is a log file for every task in the job array.  Each one contains the prime numbers found in its respective chunk.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime_array]$ ls logs/*.out | sort -V &lt;br /&gt;
logs/python_prime_array_5771-0.out&lt;br /&gt;
logs/python_prime_array_5771-1.out&lt;br /&gt;
logs/python_prime_array_5771-2.out&lt;br /&gt;
logs/python_prime_array_5771-3.out&lt;br /&gt;
logs/python_prime_array_5771-4.out&lt;br /&gt;
logs/python_prime_array_5771-5.out&lt;br /&gt;
logs/python_prime_array_5771-6.out&lt;br /&gt;
logs/python_prime_array_5771-7.out&lt;br /&gt;
logs/python_prime_array_5771-8.out&lt;br /&gt;
logs/python_prime_array_5771-9.out&lt;br /&gt;
logs/python_prime_array_5771-10.out&lt;br /&gt;
logs/python_prime_array_5771-11.out&lt;br /&gt;
[truncated]&lt;br /&gt;
logs/python_prime_array_5771-90.out&lt;br /&gt;
logs/python_prime_array_5771-91.out&lt;br /&gt;
logs/python_prime_array_5771-92.out&lt;br /&gt;
logs/python_prime_array_5771-93.out&lt;br /&gt;
logs/python_prime_array_5771-94.out&lt;br /&gt;
logs/python_prime_array_5771-95.out&lt;br /&gt;
logs/python_prime_array_5771-96.out&lt;br /&gt;
logs/python_prime_array_5771-97.out&lt;br /&gt;
logs/python_prime_array_5771-98.out&lt;br /&gt;
logs/python_prime_array_5771-99.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the combined results in order by concatenating the log files with cat and piping the output through sort.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime_array]$ cat logs/python_prime_array_5771-*.out | sort -V&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
13&lt;br /&gt;
17&lt;br /&gt;
19&lt;br /&gt;
23&lt;br /&gt;
29&lt;br /&gt;
31&lt;br /&gt;
37&lt;br /&gt;
41&lt;br /&gt;
43&lt;br /&gt;
47&lt;br /&gt;
53&lt;br /&gt;
59&lt;br /&gt;
61&lt;br /&gt;
[truncated]&lt;br /&gt;
999863&lt;br /&gt;
999883&lt;br /&gt;
999907&lt;br /&gt;
999917&lt;br /&gt;
999931&lt;br /&gt;
999953&lt;br /&gt;
999959&lt;br /&gt;
999961&lt;br /&gt;
999979&lt;br /&gt;
999983&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=206</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=206"/>
		<updated>2023-04-21T20:46:55Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each time with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can also run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that checks every number from 1 to 1000000.  But if we use a job array, we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH directive to define not only how many tasks to create but also the range of task ids to produce.  In our example, we're making the range 0 to 99 (we could have also done 1-100).&lt;br /&gt;
&lt;br /&gt;
Secondly, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the file patterns in [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E the sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''python-prime-array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id being passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# MAX is exclusive here, so neighboring tasks don't overlap&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime-array/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
logs  prime_array.py  python-prime-array.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime-array.run &lt;br /&gt;
Submitted batch job 5771&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job array is queued with all 100 tasks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
            JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[0-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the jobs are beginning to run.  Three of the tasks have completed, 20 of them are currently running, and the rest are still queued.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[23-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
           5771_22 compute_all PYTHON_P test_use  R       0:00      1 rocky2&lt;br /&gt;
           5771_20 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
           5771_21 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
            5771_3 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_4 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_5 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_6 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_7 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_8 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_9 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_10 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_11 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_12 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_13 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_14 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_15 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_16 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_17 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_18 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_19 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the jobs are no longer listed in the queue, we see there is a log file for every task in the job array.  Each one contains the prime numbers found in its respective chunk.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime_array]$ ls logs/*.out | sort -V &lt;br /&gt;
logs/python_prime_array_5771-0.out&lt;br /&gt;
logs/python_prime_array_5771-1.out&lt;br /&gt;
logs/python_prime_array_5771-2.out&lt;br /&gt;
logs/python_prime_array_5771-3.out&lt;br /&gt;
logs/python_prime_array_5771-4.out&lt;br /&gt;
logs/python_prime_array_5771-5.out&lt;br /&gt;
logs/python_prime_array_5771-6.out&lt;br /&gt;
logs/python_prime_array_5771-7.out&lt;br /&gt;
logs/python_prime_array_5771-8.out&lt;br /&gt;
logs/python_prime_array_5771-9.out&lt;br /&gt;
logs/python_prime_array_5771-10.out&lt;br /&gt;
logs/python_prime_array_5771-11.out&lt;br /&gt;
[truncated]&lt;br /&gt;
logs/python_prime_array_5771-90.out&lt;br /&gt;
logs/python_prime_array_5771-91.out&lt;br /&gt;
logs/python_prime_array_5771-92.out&lt;br /&gt;
logs/python_prime_array_5771-93.out&lt;br /&gt;
logs/python_prime_array_5771-94.out&lt;br /&gt;
logs/python_prime_array_5771-95.out&lt;br /&gt;
logs/python_prime_array_5771-96.out&lt;br /&gt;
logs/python_prime_array_5771-97.out&lt;br /&gt;
logs/python_prime_array_5771-98.out&lt;br /&gt;
logs/python_prime_array_5771-99.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime_array]$ cat logs/python_prime_array_5771-*.out | sort -V&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
13&lt;br /&gt;
17&lt;br /&gt;
19&lt;br /&gt;
23&lt;br /&gt;
29&lt;br /&gt;
31&lt;br /&gt;
37&lt;br /&gt;
41&lt;br /&gt;
43&lt;br /&gt;
47&lt;br /&gt;
53&lt;br /&gt;
59&lt;br /&gt;
61&lt;br /&gt;
[truncated]&lt;br /&gt;
999863&lt;br /&gt;
999883&lt;br /&gt;
999907&lt;br /&gt;
999917&lt;br /&gt;
999931&lt;br /&gt;
999953&lt;br /&gt;
999959&lt;br /&gt;
999961&lt;br /&gt;
999979&lt;br /&gt;
999983&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=205</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=205"/>
		<updated>2023-04-21T20:40:51Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each time with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can also run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that checks every number from 1 to 1000000.  But if we use a job array, we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH directive to define not only how many tasks to create but also the range of task ids to produce.  In our example, we're making the range 0 to 99 (we could have also done 1-100).&lt;br /&gt;
&lt;br /&gt;
Secondly, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the file patterns in [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E the sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''python-prime-array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id being passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# MAX is exclusive here, so neighboring tasks don't overlap&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime-array/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
logs  prime_array.py  python-prime-array.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime-array.run &lt;br /&gt;
Submitted batch job 5771&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here we can see the job array is queued with all 100 tasks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
            JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[0-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here we can see the jobs are beginning to run.  Three of the tasks have already completed, 20 of them are currently running, and the rest are still queued.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5771_[23-99] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
           5771_22 compute_all PYTHON_P test_use  R       0:00      1 rocky2&lt;br /&gt;
           5771_20 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
           5771_21 compute_all PYTHON_P test_use  R       0:01      1 rocky2&lt;br /&gt;
            5771_3 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_4 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_5 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_6 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_7 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_8 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
            5771_9 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_10 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_11 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_12 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_13 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_14 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_15 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_16 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_17 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_18 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
           5771_19 compute_all PYTHON_P test_use  R       0:03      1 rocky2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls logs/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=204</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=204"/>
		<updated>2023-04-21T05:03:18Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each time with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can also run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that checks every number from 1 to 1000000.  But if we use a job array, we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH directive to define not only how many tasks to create but also the range of task ids to produce.  In our example, we're making the range 0 to 100.&lt;br /&gt;
&lt;br /&gt;
Secondly, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the file patterns in [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E the sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 100.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''python-prime-array.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-100&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id being passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''prime_array.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# MAX is exclusive here, so neighboring tasks don't overlap&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime-array/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
logs  prime_array.py  python-prime-array.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime-array.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here we can see the job array is queued with all 101 tasks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
      5667_[0-100] compute_all PYTHON_P test_use PD       0:00      1 (Resources)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls logs/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=203</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=203"/>
		<updated>2023-04-21T04:50:00Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each time with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can also run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that checks every number from 1 to 1000000.  But if we use a job array, we can instead create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences when turning this into a job array.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH directive to define not only how many tasks to create but also the range of task ids to produce.  In our example, we're making the range 0 to 100.&lt;br /&gt;
&lt;br /&gt;
Secondly, for the log file pattern, we're using %A and %a instead of %j.  These are patterns specific to job arrays.  You can read more about the file patterns in [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E the sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
Lastly, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  We will need to read in this parameter and use it to determine what data to process.  We know from our array definition that it will be a number from 0 to 100.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-100&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
In the Python code, we'll need to determine the MIN and MAX values to search.  As long as we know our CHUNKSIZE, we can calculate those values from the task id being passed in as a parameter.  This way, each execution of the code will process a different chunk of numbers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID=0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# MAX is exclusive here, so neighboring tasks don't overlap&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=202</id>
		<title>Rocky Python Prime Array</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime_Array&amp;diff=202"/>
		<updated>2023-04-21T04:42:47Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= Job Array = Job arrays allow you to run the same code many times with a different task id.  The task id can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can run concurrently.  In the example of discovering prime numbers, lets say we want to discover all the primes in the first 1 million numbers.  We could just create code that goes from 1 to 1000...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Job Array =&lt;br /&gt;
Job arrays allow you to run the same code many times, each time with a different task ID.  The task ID can then be used to determine which subset of your data to process.  This strategy breaks your large job up into multiple smaller jobs that not only execute more quickly but can also run concurrently.&lt;br /&gt;
&lt;br /&gt;
In the example of discovering prime numbers, let's say we want to discover all the primes in the first 1 million numbers.  We could just create code that goes from 1 to 1000000.  But if we use a job array, we can create 100 jobs that each search 10000 numbers.&lt;br /&gt;
&lt;br /&gt;
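The task-ID-to-chunk mapping can be sketched in a few lines of Python (chunk_bounds is a hypothetical helper name, mirroring the calculation used in the full example later on this page):

```python
CHUNKSIZE = 10000  # numbers searched by each array task

def chunk_bounds(task_id, chunksize=CHUNKSIZE):
    """Return the half-open [start, end) range of numbers a task searches."""
    start = task_id * chunksize
    return start, start + chunksize

# Task 0 searches 0-9999 and task 99 searches 990000-999999,
# so 100 tasks together cover the first 1 million numbers.
print(chunk_bounds(0))   # (0, 10000)
print(chunk_bounds(99))  # (990000, 1000000)
```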
&lt;br /&gt;
= Batch File =&lt;br /&gt;
&lt;br /&gt;
There are three differences between this batch file and one for a single job.  &lt;br /&gt;
&lt;br /&gt;
First, we've added an SBATCH parameter that defines the range of task IDs to produce, and therefore how many jobs to run.  In our example, the range is 0 to 99, giving 100 jobs.&lt;br /&gt;
&lt;br /&gt;
Second, for the log file pattern, we're using %A (the master job ID of the array) and %a (the task ID) instead of %j.  These patterns are specific to job arrays.  You can read more about the filename patterns at [https://slurm.schedmd.com/sbatch.html#SECTION_%3CB%3Efilename-pattern%3C/B%3E this link].&lt;br /&gt;
&lt;br /&gt;
Finally, we pass the environment variable $SLURM_ARRAY_TASK_ID as a parameter to our code.  The code will need to read this parameter and use it to determine which data to process.  We know from our array definition that it will be a number from 0 to 99.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME_ARRAY&lt;br /&gt;
#SBATCH --output=logs/python_prime_array_%A-%a.out&lt;br /&gt;
#SBATCH --array=0-99&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python prime_array.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Python Code =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# How many numbers to check for prime from each job&lt;br /&gt;
CHUNKSIZE = 10000&lt;br /&gt;
&lt;br /&gt;
ARRAYID = 0&lt;br /&gt;
if len(sys.argv) &amp;gt; 1:&lt;br /&gt;
    ARRAYID = int(sys.argv[1])&lt;br /&gt;
&lt;br /&gt;
MIN = ARRAYID * CHUNKSIZE&lt;br /&gt;
MAX = MIN + CHUNKSIZE&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Access_SSH&amp;diff=200</id>
		<title>Rocky Access SSH</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Access_SSH&amp;diff=200"/>
		<updated>2023-04-19T15:05:13Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= SSH Software =&lt;br /&gt;
&lt;br /&gt;
On Linux or Mac, the OpenSSH command-line utilities are generally installed by default.&lt;br /&gt;
&lt;br /&gt;
As of Windows 10, Microsoft provides the Terminal app with OpenSSH tools available to install.&amp;lt;br/&amp;gt;&lt;br /&gt;
As of Windows 11, the Terminal app and tools are installed by default.&amp;lt;br/&amp;gt;&lt;br /&gt;
With the Terminal app, all the commands in this document work, though file and path references will differ slightly.&lt;br /&gt;
&lt;br /&gt;
= Generate Key Pair =&lt;br /&gt;
&lt;br /&gt;
[[File:Rocky keygen linuxmac.gif|thumb|SEE IN ACTION]]&lt;br /&gt;
&lt;br /&gt;
Open a terminal and type the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ssh-keygen&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will be prompted for where to save your private key; pressing Enter accepts the default location.  You will also be prompted for a passphrase, which will be required whenever you use your private key in the future.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Default Private Key Location || ~/.ssh/id_rsa&lt;br /&gt;
|-&lt;br /&gt;
| Default Public Key Location || ~/.ssh/id_rsa.pub&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Assuming default location, output your public key with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat ~/.ssh/id_rsa.pub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style='color: red'&amp;gt;Keep your private key file and contents safe and do not share them.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Connecting to Rocky =&lt;br /&gt;
&lt;br /&gt;
[[File:Rocky connect linuxmac.gif|thumb|SEE IN ACTION]]&lt;br /&gt;
&lt;br /&gt;
To connect to a Rocky shell, use the ssh command to reach the edge node: '''rocky.nimbios.org'''&lt;br /&gt;
&lt;br /&gt;
The format for the ssh command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -i &amp;lt;myprivatekeyfile&amp;gt; &amp;lt;username&amp;gt;@&amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you set a password on your private key, you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
If your private key file is in the default location, the ssh utilities will find it automatically, shortening the command to:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh &amp;lt;username&amp;gt;@&amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Upload/Download Files =&lt;br /&gt;
[[File:Rocky copy linuxmac.gif|thumb|SEE IT IN ACTION]]&lt;br /&gt;
&lt;br /&gt;
To copy files to or from remote systems over SCP, we use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command.  It works much like the cp command, except that the source or destination of the file can be a remote machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -i &amp;lt;myprivatekeyfile&amp;gt; &amp;lt;from&amp;gt; &amp;lt;to&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To specify a remote location, use the format:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;username&amp;gt;@&amp;lt;hostname&amp;gt;:[filename]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The colon is required.  If you leave the filename after the colon blank, it defaults to the filename given in the other parameter.&lt;br /&gt;
&lt;br /&gt;
For example, if you are '''test_user''' and you want to copy a file on your local machine named '''localfile''' to your home directory on Rocky you could do the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp localfile test_user@rocky.nimbios.org:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Leaving the filename blank after the colon copies the file into your home directory under the same name.&lt;br /&gt;
&lt;br /&gt;
Likewise, to copy a file named '''rockyfile''' from your Rocky home directory to your local machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp test_user@rocky.nimbios.org:rockyfile .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the . means copy the file to the current directory.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=199</id>
		<title>Rocky Job Anatomy</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=199"/>
		<updated>2023-04-17T20:41:49Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Anatomy of a Rocky Job =&lt;br /&gt;
&lt;br /&gt;
Setting up a job to run on Rocky starts by creating or uploading your project's files to the project directory within your home directory on Rocky.  These files will include the code you've written, any data files needed, and a batch file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Code ====&lt;br /&gt;
&lt;br /&gt;
Your code is what is submitted and executed on Rocky's compute nodes.&amp;lt;br/&amp;gt;&lt;br /&gt;
It can be written in any of the languages supported by Rocky environment modules (Lmod).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Data ====&lt;br /&gt;
&lt;br /&gt;
If your job will be processing data, you'll need to upload that data to your project's directory.&lt;br /&gt;
&lt;br /&gt;
Your home directory is shared among all compute nodes.  No matter which node your job is assigned to, it will have access to your data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Batch Script ====&lt;br /&gt;
&lt;br /&gt;
The batch script is a shell script that brings everything together by defining job parameters, loading any environment modules needed, and finally executing your code.&lt;br /&gt;
&lt;br /&gt;
Job parameters are defined one per line and start with &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt;.&amp;lt;br/&amp;gt;&lt;br /&gt;
All parameters have default values and are optional, but most batch scripts will set several of them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''my_job.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=MY_JOB         ### Job Name&lt;br /&gt;
#SBATCH --output=my_job_%j.out    ### File in which to store job output&lt;br /&gt;
#SBATCH --time=00:10:00           ### Wall clock time limit in Days-HH:MM:SS&lt;br /&gt;
#SBATCH --nodes=1                 ### Node count required for the job&lt;br /&gt;
#SBATCH --ntasks-per-node=1       ### Number of tasks to be launched per Node&lt;br /&gt;
#SBATCH --mem-per-cpu=2G&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
&lt;br /&gt;
module load R/4.2.1-foss-2022a&lt;br /&gt;
&lt;br /&gt;
Rscript my_code.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
==== Submitting Job ====&lt;br /&gt;
&lt;br /&gt;
Jobs are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, with your batch script passed as a parameter.  This will add your job to the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch my_job.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Watching Job ====&lt;br /&gt;
&lt;br /&gt;
While your job is in the queue or being executed, you may see its status using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.  If the job is currently running, it will show which node(s) it is assigned to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST      TIME  NODES NODELIST(REASON)&lt;br /&gt;
              2947 compute_all    my_job test_use  R      0:05      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Cancelling Job ====&lt;br /&gt;
&lt;br /&gt;
To cancel a job, use the &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; command and pass the JOBID (as returned by &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scancel 2947&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_R_HelloWorld&amp;diff=198</id>
		<title>Rocky R HelloWorld</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_R_HelloWorld&amp;diff=198"/>
		<updated>2023-04-17T14:45:55Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Running Job */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= R Code =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.R'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
print(&amp;quot;Hello World!&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''helloworld.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=R_HELLOWORLD&lt;br /&gt;
#SBATCH --output=R_hello_%j.out&lt;br /&gt;
&lt;br /&gt;
module load R/4.2.1-foss-2022a &lt;br /&gt;
&lt;br /&gt;
Rscript helloworld.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ pwd&lt;br /&gt;
/home/test_user/projects/R/helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.R  helloworld.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ sbatch helloworld.run &lt;br /&gt;
Submitted batch job 3875&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ ls&lt;br /&gt;
helloworld.R  helloworld.run  R_hello_3875.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 helloworld]$ cat R_hello_3875.out &lt;br /&gt;
Hello World!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=197</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=197"/>
		<updated>2023-04-11T18:31:32Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || TBD&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $TBD&lt;br /&gt;
|-&lt;br /&gt;
| Compute || TBD&lt;br /&gt;
|-&lt;br /&gt;
| Memory || TBD&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || TBD&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=196</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=196"/>
		<updated>2023-04-11T18:29:08Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB HDD&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=195</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=195"/>
		<updated>2023-04-11T18:28:38Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB 3.5&amp;quot; drives&amp;lt;br/&amp;gt;4 x 1.9TB M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=194</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=194"/>
		<updated>2023-04-11T18:27:50Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || January 25, 2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB 3.5&amp;quot; drives&amp;lt;br/&amp;gt;4 x M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=193</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=193"/>
		<updated>2023-04-11T18:26:26Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || 1/25/2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB 3.5&amp;quot; drives&amp;lt;br/&amp;gt;4 x M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=192</id>
		<title>Rocky Support</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Support&amp;diff=192"/>
		<updated>2023-04-11T18:26:11Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= Supporting Rocky =  Rocky grows through your ﬁnancial support (e.g., direct costs built into grants or start up funds). By contributing to building and maintaining Rocky, you can assure priority access to the resources you contribute.  = Node Costs =  Below are examples of node costs.  While we update the below list routinely, hardware prices change daily.  The below table is meant to give a sense of scale and not a guaranteed cost.   ==== Storage Node ====  Storage...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Supporting Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky grows through your financial support (e.g., direct costs built into grants or start-up funds).  By contributing to building and maintaining Rocky, you can ensure priority access to the resources you contribute.&lt;br /&gt;
&lt;br /&gt;
= Node Costs =&lt;br /&gt;
&lt;br /&gt;
Below are example node costs.  While we update this list routinely, hardware prices change daily, so the table is meant to give a sense of scale rather than a guaranteed cost.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Storage Node ====&lt;br /&gt;
&lt;br /&gt;
Storage Nodes are added to our Ceph storage subsystem to provide highly redundant and fault tolerant storage accessible to the Rocky Cluster.  &lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| Quote Date || 1/25/2023&lt;br /&gt;
|-&lt;br /&gt;
| Quoted Cost || $12,155.32&lt;br /&gt;
|-&lt;br /&gt;
| Capacity || 40TB&lt;br /&gt;
|-&lt;br /&gt;
| Configuration || Dell R740xd&amp;lt;br/&amp;gt;16 x 8TB 3.5&amp;quot; drives&amp;lt;br/&amp;gt;4 x M.2 SSD&amp;lt;br/&amp;gt;Additional drives to extend backup system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Compute Node ====&lt;br /&gt;
&lt;br /&gt;
New quotes for compute nodes are currently being obtained and will be updated shortly.&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=191</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=191"/>
		<updated>2023-04-10T20:36:37Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=190</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=190"/>
		<updated>2023-04-10T20:36:25Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Batch Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=189</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=189"/>
		<updated>2023-04-10T19:54:26Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS-specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=188</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=188"/>
		<updated>2023-04-10T19:54:14Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Environmental Modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS-specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, check out [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=187</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=187"/>
		<updated>2023-04-10T19:53:56Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Environmental Modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS-specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
To learn more about using Lmod on Rocky, read about [[ Rocky_Environments | Rocky Environments]].&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Environments&amp;diff=186</id>
		<title>Rocky Environments</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Environments&amp;diff=186"/>
		<updated>2023-04-10T19:51:07Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= About = Rocky uses Lmod to easily set up environments for your projects.  By default your Rocky session has a minimal of software/utilities available to you.  By loading modules, environment variables and paths are set to give you easy access to not only language utilities and libraries but also specific versions of those.  To use lmod, you will use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.   = Loading Modules =  By default, support for R is not loaded. &amp;lt;pre&amp;gt; [test_user@rocky7...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About =&lt;br /&gt;
Rocky uses Lmod to easily set up environments for your projects.  By default, your Rocky session has a minimal set of software/utilities available.  Loading modules sets environment variables and paths that give you easy access not only to language utilities and libraries but also to specific versions of those.&lt;br /&gt;
&lt;br /&gt;
To use Lmod, you will use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Loading Modules =&lt;br /&gt;
&lt;br /&gt;
By default, support for R is not loaded.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ Rscript&lt;br /&gt;
-bash: Rscript: command not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By loading the R module, we now have access to R language tools.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ module load R&lt;br /&gt;
[test_user@rocky7 ~]$ Rscript&lt;br /&gt;
Usage: Rscript [options] file [args]&lt;br /&gt;
   or: Rscript [options] -e expr [-e expr2 ...] [args]&lt;br /&gt;
A binary front-end to R, for use in scripting applications.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Explore Modules =&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command has a few different parameters to help you find what is available.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable' &lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;module avail [name]&amp;lt;/code&amp;gt; || List available modules with optional partial name search.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;module keyword &amp;lt;keyword&amp;gt; &amp;lt;/code&amp;gt; || Search module names and descriptions for &amp;lt;keyword&amp;gt; and list them.&lt;br /&gt;
|- &lt;br /&gt;
| &amp;lt;code&amp;gt; module spider &amp;lt;name&amp;gt; &amp;lt;/code&amp;gt; || Search module names and list them with all versions available.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt; module help &amp;lt;modulename&amp;gt; &amp;lt;/code&amp;gt; || Describe a module and all included extensions.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ module avail octave&lt;br /&gt;
&lt;br /&gt;
------------------------ /apps/current/modules/all ------------------------&lt;br /&gt;
   Octave/5.1.0-foss-2019b    &lt;br /&gt;
   Octave/6.2.0-foss-2020b&lt;br /&gt;
   Octave/7.1.0-foss-2021b (D)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ module spider Python&lt;br /&gt;
------------&lt;br /&gt;
  Python:&lt;br /&gt;
------------&lt;br /&gt;
    Description:&lt;br /&gt;
      Python is a programming language that lets you work more quickly and integrate your systems more effectively.&lt;br /&gt;
&lt;br /&gt;
     Versions:&lt;br /&gt;
        Python/2.7.15-foss-2018b&lt;br /&gt;
        Python/2.7.15-GCCcore-8.2.0&lt;br /&gt;
        Python/2.7.16-GCCcore-8.3.0&lt;br /&gt;
        Python/2.7.18-GCCcore-9.3.0&lt;br /&gt;
        Python/2.7.18-GCCcore-10.2.0&lt;br /&gt;
        Python/2.7.18-GCCcore-10.3.0-bare&lt;br /&gt;
        Python/2.7.18-GCCcore-11.2.0-bare&lt;br /&gt;
        Python/2.7.18-GCCcore-11.2.0&lt;br /&gt;
        Python/2.7.18-GCCcore-11.3.0-bare&lt;br /&gt;
        Python/3.7.2-GCCcore-8.2.0&lt;br /&gt;
        Python/3.7.4-GCCcore-8.3.0&lt;br /&gt;
        Python/3.8.2-GCCcore-9.3.0&lt;br /&gt;
        Python/3.8.6-GCCcore-10.2.0&lt;br /&gt;
        Python/3.9.5-GCCcore-10.3.0-bare&lt;br /&gt;
        Python/3.9.5-GCCcore-10.3.0&lt;br /&gt;
        Python/3.9.6-GCCcore-11.2.0-bare&lt;br /&gt;
        Python/3.9.6-GCCcore-11.2.0&lt;br /&gt;
        Python/3.10.4-GCCcore-11.3.0-bare&lt;br /&gt;
        Python/3.10.4-GCCcore-11.3.0&lt;br /&gt;
        Python/3.10.8-GCCcore-12.2.0-bare&lt;br /&gt;
        Python/3.10.8-GCCcore-12.2.0&lt;br /&gt;
        Python/3.11.2-GCCcore-12.2.0-bare&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Module Versions =&lt;br /&gt;
&lt;br /&gt;
We can load a specific version of Python.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ module load Python/2.7.18-GCCcore-11.3.0-bare &lt;br /&gt;
[test_user@rocky7 ~]$ python --version&lt;br /&gt;
Python 2.7.18&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we switch to a different version of Python, Lmod automatically reloads any dependent modules that need to switch with it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ module load Python/3.11.2-GCCcore-12.2.0-bare &lt;br /&gt;
&lt;br /&gt;
The following have been reloaded with a version change:&lt;br /&gt;
  1) GCCcore/11.3.0 =&amp;gt; GCCcore/12.2.0&lt;br /&gt;
  2) Python/2.7.18-GCCcore-11.3.0-bare =&amp;gt; Python/3.11.2-GCCcore-12.2.0-bare&lt;br /&gt;
  3) SQLite/3.38.3-GCCcore-11.3.0 =&amp;gt; SQLite/3.39.4-GCCcore-12.2.0&lt;br /&gt;
  4) Tcl/8.6.12-GCCcore-11.3.0 =&amp;gt; Tcl/8.6.12-GCCcore-12.2.0&lt;br /&gt;
  5) binutils/2.38-GCCcore-11.3.0 =&amp;gt; binutils/2.39-GCCcore-12.2.0&lt;br /&gt;
  6) bzip2/1.0.8-GCCcore-11.3.0 =&amp;gt; bzip2/1.0.8-GCCcore-12.2.0&lt;br /&gt;
  7) libreadline/8.1.2-GCCcore-11.3.0 =&amp;gt; libreadline/8.2-GCCcore-12.2.0&lt;br /&gt;
  8) ncurses/6.3-GCCcore-11.3.0 =&amp;gt; ncurses/6.3-GCCcore-12.2.0&lt;br /&gt;
  9) zlib/1.2.12-GCCcore-11.3.0 =&amp;gt; zlib/1.2.12-GCCcore-12.2.0&lt;br /&gt;
&lt;br /&gt;
[test_user@rocky7 ~]$ python --version&lt;br /&gt;
Python 3.11.2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Managing Loaded Modules =&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command has parameters to help you manage which modules are loaded.&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable' &lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; || List all currently loaded modules.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;module unload &amp;lt;module&amp;gt; &amp;lt;/code&amp;gt; || Unload a specific module.&lt;br /&gt;
|- &lt;br /&gt;
| &amp;lt;code&amp;gt; module purge &amp;lt;/code&amp;gt; || Unload all loaded modules.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=183</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=183"/>
		<updated>2023-04-10T17:51:26Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
    if num &amp;lt;= 1:&lt;br /&gt;
        return False&lt;br /&gt;
    else:&lt;br /&gt;
        for i in range(2, num):&lt;br /&gt;
            if (num % i) == 0:&lt;br /&gt;
                return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
    if is_prime(i):&lt;br /&gt;
        print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=182</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=182"/>
		<updated>2023-04-10T17:48:05Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS-specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=181</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=181"/>
		<updated>2023-04-10T17:47:48Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Logging in to Rocky */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  &lt;br /&gt;
&lt;br /&gt;
Please review the following pages for OS-specific instructions:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=180</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=180"/>
		<updated>2023-04-10T17:46:12Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  Please review the following pages about accessing Rocky:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=179</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=179"/>
		<updated>2023-04-10T17:45:46Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Example Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, you must first fill out the [[Rocky_Access_Form]].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created you will be able to SSH into a shell or SCP to copy files to/from Rocky.  &lt;br /&gt;
&lt;br /&gt;
Rocky uses Public Key Authentication for access instead of passwords.  Please review the following pages about accessing Rocky:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environmental module system.  This allows you to easily configure your session or job's environment with the languages, libraries, and specific versions you need.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]].&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_Python_Prime | Calculate Prime Numbers ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=178</id>
		<title>Rocky Python Prime</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Python_Prime&amp;diff=178"/>
		<updated>2023-04-10T17:44:59Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: Created page with &amp;quot;= Python Code =  '''prime.py''' &amp;lt;pre&amp;gt; MIN = 2 MAX = 100000   def is_prime(num): 	if num &amp;lt;= 1: 		return False 	else: 		for i in range(2, num): 			if (num % i) == 0: 				return False 	return True   for i in range(MIN, MAX+1): 	if is_prime(i): 		print(i) &amp;lt;/pre&amp;gt;   = Batch Script =  '''python-prime.run''' &amp;lt;pre&amp;gt; #SBATCH --job-name=PYTHON_PRIME #SBATCH --output=python_prime_%j.out  module load Python  python3 prime.py &amp;lt;/pre&amp;gt;   = Running Job =  &amp;lt;pre&amp;gt; [test_user@rocky7 prime]$ pw...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Python Code =&lt;br /&gt;
&lt;br /&gt;
'''prime.py'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MIN = 2&lt;br /&gt;
MAX = 100000&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def is_prime(num):&lt;br /&gt;
	if num &amp;lt;= 1:&lt;br /&gt;
		return False&lt;br /&gt;
	else:&lt;br /&gt;
		for i in range(2, num):&lt;br /&gt;
			if (num % i) == 0:&lt;br /&gt;
				return False&lt;br /&gt;
	return True&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for i in range(MIN, MAX+1):&lt;br /&gt;
	if is_prime(i):&lt;br /&gt;
		print(i)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Batch Script =&lt;br /&gt;
&lt;br /&gt;
'''python-prime.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=PYTHON_PRIME&lt;br /&gt;
#SBATCH --output=python_prime_%j.out&lt;br /&gt;
&lt;br /&gt;
module load Python&lt;br /&gt;
&lt;br /&gt;
python3 prime.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ pwd&lt;br /&gt;
/home/test_user/projects/python/prime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ sbatch python-prime.run &lt;br /&gt;
Submitted batch job 3877&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here we can see the job was assigned to moose1.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              3877 compute_all PYTHON_P test_user  R       0:02      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the job is no longer listed in the queue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ ls&lt;br /&gt;
prime.py  python-prime.run  python_prime_3877.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 prime]$ cat python_prime_3877.out &lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
5&lt;br /&gt;
7&lt;br /&gt;
11&lt;br /&gt;
...&amp;lt;truncated&amp;gt;...&lt;br /&gt;
99881&lt;br /&gt;
99901&lt;br /&gt;
99907&lt;br /&gt;
99923&lt;br /&gt;
99929&lt;br /&gt;
99961&lt;br /&gt;
99971&lt;br /&gt;
99989&lt;br /&gt;
99991&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=177</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=177"/>
		<updated>2023-04-10T17:22:13Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Submitting a Job */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, first fill out the [[Rocky_Access_Form]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created, you can SSH into a shell or use SCP to copy files to/from Rocky.&lt;br /&gt;
&lt;br /&gt;
Rocky uses public key authentication instead of passwords.  Please review the following pages about accessing Rocky:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session's or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out the [[Rocky_Job_Anatomy | Anatomy of a Rocky Job]]&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=176</id>
		<title>Rocky User Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_User_Guide&amp;diff=176"/>
		<updated>2023-04-10T17:21:20Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= About Rocky =&lt;br /&gt;
&lt;br /&gt;
Rocky is a [https://en.wikipedia.org/wiki/High-performance_computing HPC] cluster composed of compute-heavy nodes with 40 cores/80 threads and 512GB of RAM ['''rocky'''], memory-intensive nodes with 20 cores/40 threads and 768GB of RAM ['''moose'''], and a [https://docs.ceph.com Ceph] storage subsystem ['''quarrel'''].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Requesting Access ==&lt;br /&gt;
To gain access to Rocky, first fill out the [[Rocky_Access_Form]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging in to Rocky ==&lt;br /&gt;
Rocky's firewall limits access to the UTK network.  You will need to be either on campus or connected to the [https://utk.teamdynamix.com/TDClient/2277/OIT-Portal/KB/ArticleDet?ID=123517 Campus VPN].&lt;br /&gt;
&lt;br /&gt;
Once your account is created, you can SSH into a shell or use SCP to copy files to/from Rocky.&lt;br /&gt;
&lt;br /&gt;
Rocky uses public key authentication instead of passwords.  Please review the following pages about accessing Rocky:&lt;br /&gt;
&lt;br /&gt;
{| class='wikitable'&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_SSH]] || Linux or Mac&lt;br /&gt;
|-&lt;br /&gt;
| [[Rocky_Access_Windows]] || Windows&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environmental Modules ==&lt;br /&gt;
Rocky uses [https://lmod.readthedocs.io/en/latest/ Lmod] as its environment module system.  This allows you to easily set your session's or job's environment to support the language, libraries, and specific versions needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a Job ==&lt;br /&gt;
&lt;br /&gt;
Rocky uses [https://slurm.schedmd.com/documentation.html Slurm] to queue and submit jobs to the cluster's compute nodes.  &lt;br /&gt;
&lt;br /&gt;
To learn how to set up your own jobs, check out:&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Rocky_Job_Anatomy | Anatomy of a Job]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=175</id>
		<title>Rocky Job Anatomy</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=175"/>
		<updated>2023-04-10T17:21:09Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Anatomy of a Rocky Job =&lt;br /&gt;
&lt;br /&gt;
Setting up a job to run on Rocky starts by creating or uploading your project's files to the project directory within your home directory on Rocky.  These files will include the code you've written, any data files needed, and a batch script.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Code ====&lt;br /&gt;
&lt;br /&gt;
Your code is what is submitted and executed on Rocky's compute nodes.&amp;lt;br/&amp;gt;&lt;br /&gt;
It can be written in any of the languages supported by Rocky environment modules (Lmod).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Data ====&lt;br /&gt;
&lt;br /&gt;
If your job will be processing data, you'll need to upload that data to your project's directory.&lt;br /&gt;
&lt;br /&gt;
Your home directory is shared amongst all compute nodes.  No matter which node your job is assigned, it will have access to your data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Batch Script ====&lt;br /&gt;
&lt;br /&gt;
The batch script is a shell script that brings everything together by defining job parameters, loading any environment modules needed, and finally executing your code.&lt;br /&gt;
&lt;br /&gt;
Job parameters are defined one per line and start with &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt;.&amp;lt;br/&amp;gt;&lt;br /&gt;
All parameters have default values and are optional, but most batch scripts will set them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''my_job.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=MY_JOB         ### Job Name&lt;br /&gt;
#SBATCH --output=my_job_%j.out    ### File in which to store job output&lt;br /&gt;
#SBATCH --time=00:10:00           ### Wall clock time limit in Days-HH:MM:SS&lt;br /&gt;
#SBATCH --nodes=1                 ### Node count required for the job&lt;br /&gt;
#SBATCH --ntasks-per-node=1       ### Number of tasks to be launched per Node&lt;br /&gt;
#SBATCH --mem-per-cpu=2G&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
&lt;br /&gt;
module load R/4.2.1-foss-2022a&lt;br /&gt;
&lt;br /&gt;
Rscript my_code.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
==== Submitting Job ====&lt;br /&gt;
&lt;br /&gt;
Jobs are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, with your batch script passed as a parameter.  This adds your job to the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch my_job.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
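On success, sbatch prints a confirmation line of the form "Submitted batch job NNNN". If you script your submissions, the job id can be pulled from that line for use with later commands (a minimal sketch, assuming sbatch's default output format):

```python
# Minimal sketch: extract the numeric job id from sbatch's default
# confirmation line so follow-up commands (squeue, scancel) can use it.
def parse_job_id(sbatch_output):
    # Expected form: "Submitted batch job 2947"
    return int(sbatch_output.strip().split()[-1])

print(parse_job_id("Submitted batch job 2947"))
```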
&lt;br /&gt;
&lt;br /&gt;
==== Watching Job ====&lt;br /&gt;
&lt;br /&gt;
While your job is in the queue or being executed, you can check its status using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.  If the job is currently running, the output shows which node(s) it is assigned to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST      TIME  NODES NODELIST(REASON)&lt;br /&gt;
              2947 compute_all    MY_JOB test_use  R      0:05      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
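The squeue output is a fixed header row followed by one row per job, so it is easy to post-process. A sketch (assuming the default column layout shown above; the sample rows here are illustrative):

```python
# Sketch: pull the JOBID and NODELIST columns out of squeue's default
# output. Column widths can vary, so we split rows on whitespace.
sample = (
    "             JOBID PARTITION       NAME      USER ST      TIME  NODES NODELIST(REASON)\n"
    "              2947 compute_all    MY_JOB test_use  R      0:05      1 moose1"
)

def jobs(squeue_output):
    rows = squeue_output.strip().splitlines()[1:]  # skip the header row
    return [(row.split()[0], row.split()[-1]) for row in rows]

print(jobs(sample))
```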
&lt;br /&gt;
&lt;br /&gt;
==== Cancelling Job ====&lt;br /&gt;
&lt;br /&gt;
To cancel a job, use the &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; command and pass the JOBID (as returned by &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scancel 2947&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
	<entry>
		<id>https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=173</id>
		<title>Rocky Job Anatomy</title>
		<link rel="alternate" type="text/html" href="https://wiki.nimbios.org/index.php?title=Rocky_Job_Anatomy&amp;diff=173"/>
		<updated>2023-04-10T17:17:48Z</updated>

		<summary type="html">&lt;p&gt;Jstratt7: /* Example Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Anatomy of a Rocky Job =&lt;br /&gt;
&lt;br /&gt;
Setting up a job to run on Rocky starts by creating or uploading your project's files to the project directory within your home directory on Rocky.  These files will include the code you've written, any data files needed, and a batch script.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Code ====&lt;br /&gt;
&lt;br /&gt;
Your code is what is submitted and executed on Rocky's compute nodes.&amp;lt;br/&amp;gt;&lt;br /&gt;
It can be written in any of the languages supported by Rocky environment modules (Lmod).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Your Data ====&lt;br /&gt;
&lt;br /&gt;
If your job will be processing data, you'll need to upload that data to your project's directory.&lt;br /&gt;
&lt;br /&gt;
Your home directory is shared amongst all compute nodes.  No matter which node your job is assigned, it will have access to your data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Batch Script ====&lt;br /&gt;
&lt;br /&gt;
The batch script is a shell script that brings everything together by defining job parameters, loading any environment modules needed, and finally executing your code.&lt;br /&gt;
&lt;br /&gt;
Job parameters are defined one per line and start with &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt;.&amp;lt;br/&amp;gt;&lt;br /&gt;
All parameters have default values and are optional, but most batch scripts will set them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''my_job.run'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=MY_JOB         ### Job Name&lt;br /&gt;
#SBATCH --output=my_job_%j.out    ### File in which to store job output&lt;br /&gt;
#SBATCH --time=00:10:00           ### Wall clock time limit in Days-HH:MM:SS&lt;br /&gt;
#SBATCH --nodes=1                 ### Node count required for the job&lt;br /&gt;
#SBATCH --ntasks-per-node=1       ### Number of tasks to be launched per Node&lt;br /&gt;
#SBATCH --mem-per-cpu=2G&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
&lt;br /&gt;
module load R/4.2.1-foss-2022a&lt;br /&gt;
&lt;br /&gt;
Rscript my_code.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Running Job =&lt;br /&gt;
&lt;br /&gt;
==== Submitting Job ====&lt;br /&gt;
&lt;br /&gt;
Jobs are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, with your batch script passed as a parameter.  This adds your job to the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch my_job.run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Watching Job ====&lt;br /&gt;
&lt;br /&gt;
While your job is in the queue or being executed, you can check its status using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.  If the job is currently running, the output shows which node(s) it is assigned to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[test_user@rocky7 ~]$ squeue&lt;br /&gt;
             JOBID PARTITION       NAME      USER ST      TIME  NODES NODELIST(REASON)&lt;br /&gt;
              2947 compute_all    MY_JOB test_use  R      0:05      1 moose1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Cancelling Job ====&lt;br /&gt;
&lt;br /&gt;
To cancel a job, use the &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; command and pass the JOBID (as returned by &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scancel 2947&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Example Jobs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
* [[ Rocky_Python_HelloWorld | Hello World ]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
* [[ Rocky_R_HelloWorld | Hello World ]]&lt;br /&gt;
* [[ Rocky_R_Prime | Calculate Prime Numbers ]]&lt;/div&gt;</summary>
		<author><name>Jstratt7</name></author>
	</entry>
</feed>