<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://e.math.cornell.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin</id>
	<title>mathpub - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://e.math.cornell.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin"/>
	<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php/Special:Contributions/Admin"/>
	<updated>2026-04-28T09:00:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.7</generator>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=175</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=175"/>
		<updated>2022-09-28T04:11:37Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Cluster.jpg|thumb|Picture of the 64-node cluster on a metal shelf.]]&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster consists of 40 machines with Intel i7 processors: the first 32 have 6th-generation processors and the last 8 have 7th-generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with SSH and Mathematica. We're in the process of getting it working with other applications, including MPI, Matlab, Maple, and Magma. Check back on this page, as support is evolving rapidly.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]], including testing your access with pdsh commands.&lt;br /&gt;
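For example, once your keys are in place, one quick check (a sketch, assuming pdsh is installed and using the pnode01 through pnode40 host names listed in the MPI instructions) is to ask every node for its uptime; you should get one line back per node:&lt;br /&gt;
&lt;br /&gt;
 pdsh -w pnode[01-40] uptime&lt;br /&gt;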
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;br /&gt;
&lt;br /&gt;
MPI: Trying out the MPI 'Hello World' program, a first step toward running many types of parallel jobs. [[MPI Hello World]]&lt;br /&gt;
&lt;br /&gt;
Here is some info on running remote workers in Magma: [[Magma Cluster]]. This is a work in progress, since we don't yet have a nice working example.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Cluster.jpg&amp;diff=174</id>
		<title>File:Cluster.jpg</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Cluster.jpg&amp;diff=174"/>
		<updated>2022-09-28T04:10:53Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Picture of a 64-node cluster on a shelf.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=173</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=173"/>
		<updated>2022-09-28T04:05:59Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Math Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[[File:Cluster-light2.jpg]]&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster consists of 40 machines with Intel i7 processors: the first 32 have 6th-generation processors and the last 8 have 7th-generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with SSH and Mathematica. We're in the process of getting it working with other applications, including MPI, Matlab, Maple, and Magma. Check back on this page, as support is evolving rapidly.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]], including testing your access with pdsh commands.&lt;br /&gt;
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;br /&gt;
&lt;br /&gt;
MPI: Trying out the MPI 'Hello World' program, a first step toward running many types of parallel jobs. [[MPI Hello World]]&lt;br /&gt;
&lt;br /&gt;
Here is some info on running remote workers in Magma: [[Magma Cluster]]. This is a work in progress, since we don't yet have a nice working example.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Cluster-light2.jpg&amp;diff=172</id>
		<title>File:Cluster-light2.jpg</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Cluster-light2.jpg&amp;diff=172"/>
		<updated>2022-09-28T04:05:37Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Picture of the 64-node cluster on a shelf.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=171</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=171"/>
		<updated>2022-09-24T17:53:38Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use [https://en.wikipedia.org/wiki/Message_Passing_Interface MPI] at the lowest level to manage parallel compute resources. You will want to run this example successfully to make sure your parallel computing environment is working properly. These instructions assume you've already set up your cluster SSH keys, as described in [[Cluster SSH access]].&lt;br /&gt;
&lt;br /&gt;
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size;&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file as &amp;lt;tt&amp;gt;hello_mpi.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
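If you are curious which underlying compiler and flags the wrapper invokes, most MPI distributions can print them (the first form below is Open MPI's, the second is MPICH's; use whichever matches your installation):&lt;br /&gt;
&lt;br /&gt;
 mpicc --showme&lt;br /&gt;
 mpicc -show&lt;br /&gt;
&lt;br /&gt;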
You now have an executable called &amp;lt;tt&amp;gt;hello_mpi&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run it with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a hello-world line from each process.&lt;br /&gt;
&lt;br /&gt;
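If you would rather start with a specific number of processes, mpirun also accepts an explicit count (standard mpirun behavior, not anything specific to this cluster):&lt;br /&gt;
&lt;br /&gt;
 mpirun -np 2 ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
which should print a hello line from ranks 0 and 1 only.&lt;br /&gt;
&lt;br /&gt;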
Now, to run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Put the following line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 slots=4 max_slots=8 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
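&lt;br /&gt;
If you would rather not type 40 lines by hand, one way to generate the file (a sketch assuming a standard Linux shell with &amp;lt;tt&amp;gt;seq&amp;lt;/tt&amp;gt; available) is:&lt;br /&gt;
&lt;br /&gt;
 for i in $(seq -w 1 40); do echo &amp;quot;pnode$i slots=4 max_slots=8&amp;quot;; done &amp;gt; cluster_hosts&lt;br /&gt;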
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH access]].&lt;br /&gt;
&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
You should see output like that shown below: 160 lines in total, four from each host, one for each slot listed (the ranks print in whatever order the processes happen to finish, so your ordering will differ).&lt;br /&gt;
&lt;br /&gt;
Once you can run this program successfully, your MPI setup is working. This is a prerequisite for running other MPI-based programs, such as ipyparallel programs that use MPI.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 126 of 160 &lt;br /&gt;
Hello, world, I am 69 of 160 &lt;br /&gt;
Hello, world, I am 159 of 160 &lt;br /&gt;
Hello, world, I am 137 of 160 &lt;br /&gt;
Hello, world, I am 11 of 160 &lt;br /&gt;
Hello, world, I am 138 of 160 &lt;br /&gt;
Hello, world, I am 139 of 160 &lt;br /&gt;
Hello, world, I am 3 of 160 &lt;br /&gt;
Hello, world, I am 125 of 160 &lt;br /&gt;
Hello, world, I am 68 of 160 &lt;br /&gt;
Hello, world, I am 0 of 160 &lt;br /&gt;
Hello, world, I am 127 of 160 &lt;br /&gt;
Hello, world, I am 71 of 160 &lt;br /&gt;
Hello, world, I am 8 of 160 &lt;br /&gt;
Hello, world, I am 136 of 160 &lt;br /&gt;
Hello, world, I am 85 of 160 &lt;br /&gt;
Hello, world, I am 2 of 160 &lt;br /&gt;
Hello, world, I am 124 of 160 &lt;br /&gt;
Hello, world, I am 70 of 160 &lt;br /&gt;
Hello, world, I am 51 of 160 &lt;br /&gt;
Hello, world, I am 9 of 160 &lt;br /&gt;
Hello, world, I am 86 of 160 &lt;br /&gt;
Hello, world, I am 21 of 160 &lt;br /&gt;
Hello, world, I am 1 of 160 &lt;br /&gt;
Hello, world, I am 25 of 160 &lt;br /&gt;
Hello, world, I am 143 of 160 &lt;br /&gt;
Hello, world, I am 119 of 160 &lt;br /&gt;
Hello, world, I am 6 of 160 &lt;br /&gt;
Hello, world, I am 10 of 160 &lt;br /&gt;
Hello, world, I am 84 of 160 &lt;br /&gt;
Hello, world, I am 133 of 160 &lt;br /&gt;
Hello, world, I am 156 of 160 &lt;br /&gt;
Hello, world, I am 23 of 160 &lt;br /&gt;
Hello, world, I am 100 of 160 &lt;br /&gt;
Hello, world, I am 146 of 160 &lt;br /&gt;
Hello, world, I am 27 of 160 &lt;br /&gt;
Hello, world, I am 118 of 160 &lt;br /&gt;
Hello, world, I am 55 of 160 &lt;br /&gt;
Hello, world, I am 32 of 160 &lt;br /&gt;
Hello, world, I am 123 of 160 &lt;br /&gt;
Hello, world, I am 67 of 160 &lt;br /&gt;
Hello, world, I am 87 of 160 &lt;br /&gt;
Hello, world, I am 135 of 160 &lt;br /&gt;
Hello, world, I am 157 of 160 &lt;br /&gt;
Hello, world, I am 20 of 160 &lt;br /&gt;
Hello, world, I am 130 of 160 &lt;br /&gt;
Hello, world, I am 120 of 160 &lt;br /&gt;
Hello, world, I am 12 of 160 &lt;br /&gt;
Hello, world, I am 82 of 160 &lt;br /&gt;
Hello, world, I am 48 of 160 &lt;br /&gt;
Hello, world, I am 132 of 160 &lt;br /&gt;
Hello, world, I am 158 of 160 &lt;br /&gt;
Hello, world, I am 22 of 160 &lt;br /&gt;
Hello, world, I am 131 of 160 &lt;br /&gt;
Hello, world, I am 101 of 160 &lt;br /&gt;
Hello, world, I am 145 of 160 &lt;br /&gt;
Hello, world, I am 24 of 160 &lt;br /&gt;
Hello, world, I am 140 of 160 &lt;br /&gt;
Hello, world, I am 117 of 160 &lt;br /&gt;
Hello, world, I am 113 of 160 &lt;br /&gt;
Hello, world, I am 154 of 160 &lt;br /&gt;
Hello, world, I am 7 of 160 &lt;br /&gt;
Hello, world, I am 53 of 160 &lt;br /&gt;
Hello, world, I am 33 of 160 &lt;br /&gt;
Hello, world, I am 16 of 160 &lt;br /&gt;
Hello, world, I am 28 of 160 &lt;br /&gt;
Hello, world, I am 111 of 160 &lt;br /&gt;
Hello, world, I am 121 of 160 &lt;br /&gt;
Hello, world, I am 13 of 160 &lt;br /&gt;
Hello, world, I am 64 of 160 &lt;br /&gt;
Hello, world, I am 88 of 160 &lt;br /&gt;
Hello, world, I am 96 of 160 &lt;br /&gt;
Hello, world, I am 83 of 160 &lt;br /&gt;
Hello, world, I am 49 of 160 &lt;br /&gt;
Hello, world, I am 134 of 160 &lt;br /&gt;
Hello, world, I am 128 of 160 &lt;br /&gt;
Hello, world, I am 102 of 160 &lt;br /&gt;
Hello, world, I am 148 of 160 &lt;br /&gt;
Hello, world, I am 147 of 160 &lt;br /&gt;
Hello, world, I am 26 of 160 &lt;br /&gt;
Hello, world, I am 36 of 160 &lt;br /&gt;
Hello, world, I am 141 of 160 &lt;br /&gt;
Hello, world, I am 58 of 160 &lt;br /&gt;
Hello, world, I am 73 of 160 &lt;br /&gt;
Hello, world, I am 46 of 160 &lt;br /&gt;
Hello, world, I am 116 of 160 &lt;br /&gt;
Hello, world, I am 114 of 160 &lt;br /&gt;
Hello, world, I am 155 of 160 &lt;br /&gt;
Hello, world, I am 4 of 160 &lt;br /&gt;
Hello, world, I am 52 of 160 &lt;br /&gt;
Hello, world, I am 34 of 160 &lt;br /&gt;
Hello, world, I am 62 of 160 &lt;br /&gt;
Hello, world, I am 17 of 160 &lt;br /&gt;
Hello, world, I am 29 of 160 &lt;br /&gt;
Hello, world, I am 76 of 160 &lt;br /&gt;
Hello, world, I am 92 of 160 &lt;br /&gt;
Hello, world, I am 81 of 160 &lt;br /&gt;
Hello, world, I am 50 of 160 &lt;br /&gt;
Hello, world, I am 129 of 160 &lt;br /&gt;
Hello, world, I am 103 of 160 &lt;br /&gt;
Hello, world, I am 149 of 160 &lt;br /&gt;
Hello, world, I am 144 of 160 &lt;br /&gt;
Hello, world, I am 37 of 160 &lt;br /&gt;
Hello, world, I am 142 of 160 &lt;br /&gt;
Hello, world, I am 56 of 160 &lt;br /&gt;
Hello, world, I am 75 of 160 &lt;br /&gt;
Hello, world, I am 47 of 160 &lt;br /&gt;
Hello, world, I am 40 of 160 &lt;br /&gt;
Hello, world, I am 106 of 160 &lt;br /&gt;
Hello, world, I am 115 of 160 &lt;br /&gt;
Hello, world, I am 152 of 160 &lt;br /&gt;
Hello, world, I am 5 of 160 &lt;br /&gt;
Hello, world, I am 54 of 160 &lt;br /&gt;
Hello, world, I am 35 of 160 &lt;br /&gt;
Hello, world, I am 63 of 160 &lt;br /&gt;
Hello, world, I am 18 of 160 &lt;br /&gt;
Hello, world, I am 30 of 160 &lt;br /&gt;
Hello, world, I am 77 of 160 &lt;br /&gt;
Hello, world, I am 93 of 160 &lt;br /&gt;
Hello, world, I am 108 of 160 &lt;br /&gt;
Hello, world, I am 122 of 160 &lt;br /&gt;
Hello, world, I am 14 of 160 &lt;br /&gt;
Hello, world, I am 65 of 160 &lt;br /&gt;
Hello, world, I am 89 of 160 &lt;br /&gt;
Hello, world, I am 99 of 160 &lt;br /&gt;
Hello, world, I am 153 of 160 &lt;br /&gt;
Hello, world, I am 61 of 160 &lt;br /&gt;
Hello, world, I am 19 of 160 &lt;br /&gt;
Hello, world, I am 31 of 160 &lt;br /&gt;
Hello, world, I am 78 of 160 &lt;br /&gt;
Hello, world, I am 94 of 160 &lt;br /&gt;
Hello, world, I am 109 of 160 &lt;br /&gt;
Hello, world, I am 15 of 160 &lt;br /&gt;
Hello, world, I am 66 of 160 &lt;br /&gt;
Hello, world, I am 91 of 160 &lt;br /&gt;
Hello, world, I am 97 of 160 &lt;br /&gt;
Hello, world, I am 80 of 160 &lt;br /&gt;
Hello, world, I am 150 of 160 &lt;br /&gt;
Hello, world, I am 38 of 160 &lt;br /&gt;
Hello, world, I am 57 of 160 &lt;br /&gt;
Hello, world, I am 72 of 160 &lt;br /&gt;
Hello, world, I am 44 of 160 &lt;br /&gt;
Hello, world, I am 41 of 160 &lt;br /&gt;
Hello, world, I am 107 of 160 &lt;br /&gt;
Hello, world, I am 112 of 160 &lt;br /&gt;
Hello, world, I am 59 of 160 &lt;br /&gt;
Hello, world, I am 74 of 160 &lt;br /&gt;
Hello, world, I am 45 of 160 &lt;br /&gt;
Hello, world, I am 42 of 160 &lt;br /&gt;
Hello, world, I am 104 of 160 &lt;br /&gt;
Hello, world, I am 79 of 160 &lt;br /&gt;
Hello, world, I am 110 of 160 &lt;br /&gt;
Hello, world, I am 90 of 160 &lt;br /&gt;
Hello, world, I am 98 of 160 &lt;br /&gt;
Hello, world, I am 151 of 160 &lt;br /&gt;
Hello, world, I am 39 of 160 &lt;br /&gt;
Hello, world, I am 43 of 160 &lt;br /&gt;
Hello, world, I am 105 of 160 &lt;br /&gt;
Hello, world, I am 60 of 160 &lt;br /&gt;
Hello, world, I am 95 of 160 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=170</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=170"/>
		<updated>2022-09-24T17:44:19Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use [https://en.wikipedia.org/wiki/Message_Passing_Interface MPI] at the lowest level to manage parallel compute resources. You will want to run this example successfully to make sure your parallel computing environment is working properly. These instructions assume you've already set up your cluster SSH keys, as described in [[Cluster SSH access]].&lt;br /&gt;
&lt;br /&gt;
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file as &amp;lt;tt&amp;gt;hello_mpi.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt;hello_mpi&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run it with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a hello-world line from each process.&lt;br /&gt;
&lt;br /&gt;
Now, to run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Put the following line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 slots=4 max_slots=8 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH access]].&lt;br /&gt;
&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
You should see output like that shown below: 160 lines in total, four from each host, one for each slot listed (the ranks print in whatever order the processes happen to finish, so your ordering will differ).&lt;br /&gt;
&lt;br /&gt;
Once you can run this program successfully, your MPI setup is working. This is a prerequisite for running other MPI-based programs, such as ipyparallel programs that use MPI.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 126 of 160 &lt;br /&gt;
Hello, world, I am 69 of 160 &lt;br /&gt;
Hello, world, I am 159 of 160 &lt;br /&gt;
Hello, world, I am 137 of 160 &lt;br /&gt;
Hello, world, I am 11 of 160 &lt;br /&gt;
Hello, world, I am 138 of 160 &lt;br /&gt;
Hello, world, I am 139 of 160 &lt;br /&gt;
Hello, world, I am 3 of 160 &lt;br /&gt;
Hello, world, I am 125 of 160 &lt;br /&gt;
Hello, world, I am 68 of 160 &lt;br /&gt;
Hello, world, I am 0 of 160 &lt;br /&gt;
Hello, world, I am 127 of 160 &lt;br /&gt;
Hello, world, I am 71 of 160 &lt;br /&gt;
Hello, world, I am 8 of 160 &lt;br /&gt;
Hello, world, I am 136 of 160 &lt;br /&gt;
Hello, world, I am 85 of 160 &lt;br /&gt;
Hello, world, I am 2 of 160 &lt;br /&gt;
Hello, world, I am 124 of 160 &lt;br /&gt;
Hello, world, I am 70 of 160 &lt;br /&gt;
Hello, world, I am 51 of 160 &lt;br /&gt;
Hello, world, I am 9 of 160 &lt;br /&gt;
Hello, world, I am 86 of 160 &lt;br /&gt;
Hello, world, I am 21 of 160 &lt;br /&gt;
Hello, world, I am 1 of 160 &lt;br /&gt;
Hello, world, I am 25 of 160 &lt;br /&gt;
Hello, world, I am 143 of 160 &lt;br /&gt;
Hello, world, I am 119 of 160 &lt;br /&gt;
Hello, world, I am 6 of 160 &lt;br /&gt;
Hello, world, I am 10 of 160 &lt;br /&gt;
Hello, world, I am 84 of 160 &lt;br /&gt;
Hello, world, I am 133 of 160 &lt;br /&gt;
Hello, world, I am 156 of 160 &lt;br /&gt;
Hello, world, I am 23 of 160 &lt;br /&gt;
Hello, world, I am 100 of 160 &lt;br /&gt;
Hello, world, I am 146 of 160 &lt;br /&gt;
Hello, world, I am 27 of 160 &lt;br /&gt;
Hello, world, I am 118 of 160 &lt;br /&gt;
Hello, world, I am 55 of 160 &lt;br /&gt;
Hello, world, I am 32 of 160 &lt;br /&gt;
Hello, world, I am 123 of 160 &lt;br /&gt;
Hello, world, I am 67 of 160 &lt;br /&gt;
Hello, world, I am 87 of 160 &lt;br /&gt;
Hello, world, I am 135 of 160 &lt;br /&gt;
Hello, world, I am 157 of 160 &lt;br /&gt;
Hello, world, I am 20 of 160 &lt;br /&gt;
Hello, world, I am 130 of 160 &lt;br /&gt;
Hello, world, I am 120 of 160 &lt;br /&gt;
Hello, world, I am 12 of 160 &lt;br /&gt;
Hello, world, I am 82 of 160 &lt;br /&gt;
Hello, world, I am 48 of 160 &lt;br /&gt;
Hello, world, I am 132 of 160 &lt;br /&gt;
Hello, world, I am 158 of 160 &lt;br /&gt;
Hello, world, I am 22 of 160 &lt;br /&gt;
Hello, world, I am 131 of 160 &lt;br /&gt;
Hello, world, I am 101 of 160 &lt;br /&gt;
Hello, world, I am 145 of 160 &lt;br /&gt;
Hello, world, I am 24 of 160 &lt;br /&gt;
Hello, world, I am 140 of 160 &lt;br /&gt;
Hello, world, I am 117 of 160 &lt;br /&gt;
Hello, world, I am 113 of 160 &lt;br /&gt;
Hello, world, I am 154 of 160 &lt;br /&gt;
Hello, world, I am 7 of 160 &lt;br /&gt;
Hello, world, I am 53 of 160 &lt;br /&gt;
Hello, world, I am 33 of 160 &lt;br /&gt;
Hello, world, I am 16 of 160 &lt;br /&gt;
Hello, world, I am 28 of 160 &lt;br /&gt;
Hello, world, I am 111 of 160 &lt;br /&gt;
Hello, world, I am 121 of 160 &lt;br /&gt;
Hello, world, I am 13 of 160 &lt;br /&gt;
Hello, world, I am 64 of 160 &lt;br /&gt;
Hello, world, I am 88 of 160 &lt;br /&gt;
Hello, world, I am 96 of 160 &lt;br /&gt;
Hello, world, I am 83 of 160 &lt;br /&gt;
Hello, world, I am 49 of 160 &lt;br /&gt;
Hello, world, I am 134 of 160 &lt;br /&gt;
Hello, world, I am 128 of 160 &lt;br /&gt;
Hello, world, I am 102 of 160 &lt;br /&gt;
Hello, world, I am 148 of 160 &lt;br /&gt;
Hello, world, I am 147 of 160 &lt;br /&gt;
Hello, world, I am 26 of 160 &lt;br /&gt;
Hello, world, I am 36 of 160 &lt;br /&gt;
Hello, world, I am 141 of 160 &lt;br /&gt;
Hello, world, I am 58 of 160 &lt;br /&gt;
Hello, world, I am 73 of 160 &lt;br /&gt;
Hello, world, I am 46 of 160 &lt;br /&gt;
Hello, world, I am 116 of 160 &lt;br /&gt;
Hello, world, I am 114 of 160 &lt;br /&gt;
Hello, world, I am 155 of 160 &lt;br /&gt;
Hello, world, I am 4 of 160 &lt;br /&gt;
Hello, world, I am 52 of 160 &lt;br /&gt;
Hello, world, I am 34 of 160 &lt;br /&gt;
Hello, world, I am 62 of 160 &lt;br /&gt;
Hello, world, I am 17 of 160 &lt;br /&gt;
Hello, world, I am 29 of 160 &lt;br /&gt;
Hello, world, I am 76 of 160 &lt;br /&gt;
Hello, world, I am 92 of 160 &lt;br /&gt;
Hello, world, I am 81 of 160 &lt;br /&gt;
Hello, world, I am 50 of 160 &lt;br /&gt;
Hello, world, I am 129 of 160 &lt;br /&gt;
Hello, world, I am 103 of 160 &lt;br /&gt;
Hello, world, I am 149 of 160 &lt;br /&gt;
Hello, world, I am 144 of 160 &lt;br /&gt;
Hello, world, I am 37 of 160 &lt;br /&gt;
Hello, world, I am 142 of 160 &lt;br /&gt;
Hello, world, I am 56 of 160 &lt;br /&gt;
Hello, world, I am 75 of 160 &lt;br /&gt;
Hello, world, I am 47 of 160 &lt;br /&gt;
Hello, world, I am 40 of 160 &lt;br /&gt;
Hello, world, I am 106 of 160 &lt;br /&gt;
Hello, world, I am 115 of 160 &lt;br /&gt;
Hello, world, I am 152 of 160 &lt;br /&gt;
Hello, world, I am 5 of 160 &lt;br /&gt;
Hello, world, I am 54 of 160 &lt;br /&gt;
Hello, world, I am 35 of 160 &lt;br /&gt;
Hello, world, I am 63 of 160 &lt;br /&gt;
Hello, world, I am 18 of 160 &lt;br /&gt;
Hello, world, I am 30 of 160 &lt;br /&gt;
Hello, world, I am 77 of 160 &lt;br /&gt;
Hello, world, I am 93 of 160 &lt;br /&gt;
Hello, world, I am 108 of 160 &lt;br /&gt;
Hello, world, I am 122 of 160 &lt;br /&gt;
Hello, world, I am 14 of 160 &lt;br /&gt;
Hello, world, I am 65 of 160 &lt;br /&gt;
Hello, world, I am 89 of 160 &lt;br /&gt;
Hello, world, I am 99 of 160 &lt;br /&gt;
Hello, world, I am 153 of 160 &lt;br /&gt;
Hello, world, I am 61 of 160 &lt;br /&gt;
Hello, world, I am 19 of 160 &lt;br /&gt;
Hello, world, I am 31 of 160 &lt;br /&gt;
Hello, world, I am 78 of 160 &lt;br /&gt;
Hello, world, I am 94 of 160 &lt;br /&gt;
Hello, world, I am 109 of 160 &lt;br /&gt;
Hello, world, I am 15 of 160 &lt;br /&gt;
Hello, world, I am 66 of 160 &lt;br /&gt;
Hello, world, I am 91 of 160 &lt;br /&gt;
Hello, world, I am 97 of 160 &lt;br /&gt;
Hello, world, I am 80 of 160 &lt;br /&gt;
Hello, world, I am 150 of 160 &lt;br /&gt;
Hello, world, I am 38 of 160 &lt;br /&gt;
Hello, world, I am 57 of 160 &lt;br /&gt;
Hello, world, I am 72 of 160 &lt;br /&gt;
Hello, world, I am 44 of 160 &lt;br /&gt;
Hello, world, I am 41 of 160 &lt;br /&gt;
Hello, world, I am 107 of 160 &lt;br /&gt;
Hello, world, I am 112 of 160 &lt;br /&gt;
Hello, world, I am 59 of 160 &lt;br /&gt;
Hello, world, I am 74 of 160 &lt;br /&gt;
Hello, world, I am 45 of 160 &lt;br /&gt;
Hello, world, I am 42 of 160 &lt;br /&gt;
Hello, world, I am 104 of 160 &lt;br /&gt;
Hello, world, I am 79 of 160 &lt;br /&gt;
Hello, world, I am 110 of 160 &lt;br /&gt;
Hello, world, I am 90 of 160 &lt;br /&gt;
Hello, world, I am 98 of 160 &lt;br /&gt;
Hello, world, I am 151 of 160 &lt;br /&gt;
Hello, world, I am 39 of 160 &lt;br /&gt;
Hello, world, I am 43 of 160 &lt;br /&gt;
Hello, world, I am 105 of 160 &lt;br /&gt;
Hello, world, I am 60 of 160 &lt;br /&gt;
Hello, world, I am 95 of 160 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=169</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=169"/>
		<updated>2022-09-24T17:41:56Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use [https://en.wikipedia.org/wiki/Message_Passing_Interface MPI] at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file as &amp;lt;tt&amp;gt;hello_mpi.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt;hello_mpi&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run it with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a hello-world line from each process.&lt;br /&gt;
&lt;br /&gt;
Now, to run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Put the following line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 slots=4 max_slots=8 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH Access]].&lt;br /&gt;
&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
You should see output like that shown below: 160 lines in total, four from each host, one for each slot listed (the ranks print in whatever order the processes happen to finish, so your ordering will differ).&lt;br /&gt;
&lt;br /&gt;
Once you can run this program successfully, your MPI setup is working. This is a prerequisite for running other MPI-based programs, such as ipyparallel programs that use MPI.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 126 of 160 &lt;br /&gt;
Hello, world, I am 69 of 160 &lt;br /&gt;
Hello, world, I am 159 of 160 &lt;br /&gt;
Hello, world, I am 137 of 160 &lt;br /&gt;
Hello, world, I am 11 of 160 &lt;br /&gt;
Hello, world, I am 138 of 160 &lt;br /&gt;
Hello, world, I am 139 of 160 &lt;br /&gt;
Hello, world, I am 3 of 160 &lt;br /&gt;
Hello, world, I am 125 of 160 &lt;br /&gt;
Hello, world, I am 68 of 160 &lt;br /&gt;
Hello, world, I am 0 of 160 &lt;br /&gt;
Hello, world, I am 127 of 160 &lt;br /&gt;
Hello, world, I am 71 of 160 &lt;br /&gt;
Hello, world, I am 8 of 160 &lt;br /&gt;
Hello, world, I am 136 of 160 &lt;br /&gt;
Hello, world, I am 85 of 160 &lt;br /&gt;
Hello, world, I am 2 of 160 &lt;br /&gt;
Hello, world, I am 124 of 160 &lt;br /&gt;
Hello, world, I am 70 of 160 &lt;br /&gt;
Hello, world, I am 51 of 160 &lt;br /&gt;
Hello, world, I am 9 of 160 &lt;br /&gt;
Hello, world, I am 86 of 160 &lt;br /&gt;
Hello, world, I am 21 of 160 &lt;br /&gt;
Hello, world, I am 1 of 160 &lt;br /&gt;
Hello, world, I am 25 of 160 &lt;br /&gt;
Hello, world, I am 143 of 160 &lt;br /&gt;
Hello, world, I am 119 of 160 &lt;br /&gt;
Hello, world, I am 6 of 160 &lt;br /&gt;
Hello, world, I am 10 of 160 &lt;br /&gt;
Hello, world, I am 84 of 160 &lt;br /&gt;
Hello, world, I am 133 of 160 &lt;br /&gt;
Hello, world, I am 156 of 160 &lt;br /&gt;
Hello, world, I am 23 of 160 &lt;br /&gt;
Hello, world, I am 100 of 160 &lt;br /&gt;
Hello, world, I am 146 of 160 &lt;br /&gt;
Hello, world, I am 27 of 160 &lt;br /&gt;
Hello, world, I am 118 of 160 &lt;br /&gt;
Hello, world, I am 55 of 160 &lt;br /&gt;
Hello, world, I am 32 of 160 &lt;br /&gt;
Hello, world, I am 123 of 160 &lt;br /&gt;
Hello, world, I am 67 of 160 &lt;br /&gt;
Hello, world, I am 87 of 160 &lt;br /&gt;
Hello, world, I am 135 of 160 &lt;br /&gt;
Hello, world, I am 157 of 160 &lt;br /&gt;
Hello, world, I am 20 of 160 &lt;br /&gt;
Hello, world, I am 130 of 160 &lt;br /&gt;
Hello, world, I am 120 of 160 &lt;br /&gt;
Hello, world, I am 12 of 160 &lt;br /&gt;
Hello, world, I am 82 of 160 &lt;br /&gt;
Hello, world, I am 48 of 160 &lt;br /&gt;
Hello, world, I am 132 of 160 &lt;br /&gt;
Hello, world, I am 158 of 160 &lt;br /&gt;
Hello, world, I am 22 of 160 &lt;br /&gt;
Hello, world, I am 131 of 160 &lt;br /&gt;
Hello, world, I am 101 of 160 &lt;br /&gt;
Hello, world, I am 145 of 160 &lt;br /&gt;
Hello, world, I am 24 of 160 &lt;br /&gt;
Hello, world, I am 140 of 160 &lt;br /&gt;
Hello, world, I am 117 of 160 &lt;br /&gt;
Hello, world, I am 113 of 160 &lt;br /&gt;
Hello, world, I am 154 of 160 &lt;br /&gt;
Hello, world, I am 7 of 160 &lt;br /&gt;
Hello, world, I am 53 of 160 &lt;br /&gt;
Hello, world, I am 33 of 160 &lt;br /&gt;
Hello, world, I am 16 of 160 &lt;br /&gt;
Hello, world, I am 28 of 160 &lt;br /&gt;
Hello, world, I am 111 of 160 &lt;br /&gt;
Hello, world, I am 121 of 160 &lt;br /&gt;
Hello, world, I am 13 of 160 &lt;br /&gt;
Hello, world, I am 64 of 160 &lt;br /&gt;
Hello, world, I am 88 of 160 &lt;br /&gt;
Hello, world, I am 96 of 160 &lt;br /&gt;
Hello, world, I am 83 of 160 &lt;br /&gt;
Hello, world, I am 49 of 160 &lt;br /&gt;
Hello, world, I am 134 of 160 &lt;br /&gt;
Hello, world, I am 128 of 160 &lt;br /&gt;
Hello, world, I am 102 of 160 &lt;br /&gt;
Hello, world, I am 148 of 160 &lt;br /&gt;
Hello, world, I am 147 of 160 &lt;br /&gt;
Hello, world, I am 26 of 160 &lt;br /&gt;
Hello, world, I am 36 of 160 &lt;br /&gt;
Hello, world, I am 141 of 160 &lt;br /&gt;
Hello, world, I am 58 of 160 &lt;br /&gt;
Hello, world, I am 73 of 160 &lt;br /&gt;
Hello, world, I am 46 of 160 &lt;br /&gt;
Hello, world, I am 116 of 160 &lt;br /&gt;
Hello, world, I am 114 of 160 &lt;br /&gt;
Hello, world, I am 155 of 160 &lt;br /&gt;
Hello, world, I am 4 of 160 &lt;br /&gt;
Hello, world, I am 52 of 160 &lt;br /&gt;
Hello, world, I am 34 of 160 &lt;br /&gt;
Hello, world, I am 62 of 160 &lt;br /&gt;
Hello, world, I am 17 of 160 &lt;br /&gt;
Hello, world, I am 29 of 160 &lt;br /&gt;
Hello, world, I am 76 of 160 &lt;br /&gt;
Hello, world, I am 92 of 160 &lt;br /&gt;
Hello, world, I am 81 of 160 &lt;br /&gt;
Hello, world, I am 50 of 160 &lt;br /&gt;
Hello, world, I am 129 of 160 &lt;br /&gt;
Hello, world, I am 103 of 160 &lt;br /&gt;
Hello, world, I am 149 of 160 &lt;br /&gt;
Hello, world, I am 144 of 160 &lt;br /&gt;
Hello, world, I am 37 of 160 &lt;br /&gt;
Hello, world, I am 142 of 160 &lt;br /&gt;
Hello, world, I am 56 of 160 &lt;br /&gt;
Hello, world, I am 75 of 160 &lt;br /&gt;
Hello, world, I am 47 of 160 &lt;br /&gt;
Hello, world, I am 40 of 160 &lt;br /&gt;
Hello, world, I am 106 of 160 &lt;br /&gt;
Hello, world, I am 115 of 160 &lt;br /&gt;
Hello, world, I am 152 of 160 &lt;br /&gt;
Hello, world, I am 5 of 160 &lt;br /&gt;
Hello, world, I am 54 of 160 &lt;br /&gt;
Hello, world, I am 35 of 160 &lt;br /&gt;
Hello, world, I am 63 of 160 &lt;br /&gt;
Hello, world, I am 18 of 160 &lt;br /&gt;
Hello, world, I am 30 of 160 &lt;br /&gt;
Hello, world, I am 77 of 160 &lt;br /&gt;
Hello, world, I am 93 of 160 &lt;br /&gt;
Hello, world, I am 108 of 160 &lt;br /&gt;
Hello, world, I am 122 of 160 &lt;br /&gt;
Hello, world, I am 14 of 160 &lt;br /&gt;
Hello, world, I am 65 of 160 &lt;br /&gt;
Hello, world, I am 89 of 160 &lt;br /&gt;
Hello, world, I am 99 of 160 &lt;br /&gt;
Hello, world, I am 153 of 160 &lt;br /&gt;
Hello, world, I am 61 of 160 &lt;br /&gt;
Hello, world, I am 19 of 160 &lt;br /&gt;
Hello, world, I am 31 of 160 &lt;br /&gt;
Hello, world, I am 78 of 160 &lt;br /&gt;
Hello, world, I am 94 of 160 &lt;br /&gt;
Hello, world, I am 109 of 160 &lt;br /&gt;
Hello, world, I am 15 of 160 &lt;br /&gt;
Hello, world, I am 66 of 160 &lt;br /&gt;
Hello, world, I am 91 of 160 &lt;br /&gt;
Hello, world, I am 97 of 160 &lt;br /&gt;
Hello, world, I am 80 of 160 &lt;br /&gt;
Hello, world, I am 150 of 160 &lt;br /&gt;
Hello, world, I am 38 of 160 &lt;br /&gt;
Hello, world, I am 57 of 160 &lt;br /&gt;
Hello, world, I am 72 of 160 &lt;br /&gt;
Hello, world, I am 44 of 160 &lt;br /&gt;
Hello, world, I am 41 of 160 &lt;br /&gt;
Hello, world, I am 107 of 160 &lt;br /&gt;
Hello, world, I am 112 of 160 &lt;br /&gt;
Hello, world, I am 59 of 160 &lt;br /&gt;
Hello, world, I am 74 of 160 &lt;br /&gt;
Hello, world, I am 45 of 160 &lt;br /&gt;
Hello, world, I am 42 of 160 &lt;br /&gt;
Hello, world, I am 104 of 160 &lt;br /&gt;
Hello, world, I am 79 of 160 &lt;br /&gt;
Hello, world, I am 110 of 160 &lt;br /&gt;
Hello, world, I am 90 of 160 &lt;br /&gt;
Hello, world, I am 98 of 160 &lt;br /&gt;
Hello, world, I am 151 of 160 &lt;br /&gt;
Hello, world, I am 39 of 160 &lt;br /&gt;
Hello, world, I am 43 of 160 &lt;br /&gt;
Hello, world, I am 105 of 160 &lt;br /&gt;
Hello, world, I am 60 of 160 &lt;br /&gt;
Hello, world, I am 95 of 160 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=168</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=168"/>
		<updated>2022-09-24T17:39:35Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file as &amp;lt;tt&amp;gt;hello_mpi.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt;hello_mpi&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run it with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a hello-world line from each process.&lt;br /&gt;
&lt;br /&gt;
Now, to run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Put the following line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 slots=4 max_slots=8 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH Access]].&lt;br /&gt;
&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
You should see output like that shown below: 160 lines in total, four from each host, one for each slot listed (the ranks print in whatever order the processes happen to finish, so your ordering will differ).&lt;br /&gt;
&lt;br /&gt;
Once you can run this program successfully, your MPI setup is working. This is a prerequisite for running other MPI-based programs, such as ipyparallel programs that use MPI.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 126 of 160 &lt;br /&gt;
Hello, world, I am 69 of 160 &lt;br /&gt;
Hello, world, I am 159 of 160 &lt;br /&gt;
Hello, world, I am 137 of 160 &lt;br /&gt;
Hello, world, I am 11 of 160 &lt;br /&gt;
Hello, world, I am 138 of 160 &lt;br /&gt;
Hello, world, I am 139 of 160 &lt;br /&gt;
Hello, world, I am 3 of 160 &lt;br /&gt;
Hello, world, I am 125 of 160 &lt;br /&gt;
Hello, world, I am 68 of 160 &lt;br /&gt;
Hello, world, I am 0 of 160 &lt;br /&gt;
Hello, world, I am 127 of 160 &lt;br /&gt;
Hello, world, I am 71 of 160 &lt;br /&gt;
Hello, world, I am 8 of 160 &lt;br /&gt;
Hello, world, I am 136 of 160 &lt;br /&gt;
Hello, world, I am 85 of 160 &lt;br /&gt;
Hello, world, I am 2 of 160 &lt;br /&gt;
Hello, world, I am 124 of 160 &lt;br /&gt;
Hello, world, I am 70 of 160 &lt;br /&gt;
Hello, world, I am 51 of 160 &lt;br /&gt;
Hello, world, I am 9 of 160 &lt;br /&gt;
Hello, world, I am 86 of 160 &lt;br /&gt;
Hello, world, I am 21 of 160 &lt;br /&gt;
Hello, world, I am 1 of 160 &lt;br /&gt;
Hello, world, I am 25 of 160 &lt;br /&gt;
Hello, world, I am 143 of 160 &lt;br /&gt;
Hello, world, I am 119 of 160 &lt;br /&gt;
Hello, world, I am 6 of 160 &lt;br /&gt;
Hello, world, I am 10 of 160 &lt;br /&gt;
Hello, world, I am 84 of 160 &lt;br /&gt;
Hello, world, I am 133 of 160 &lt;br /&gt;
Hello, world, I am 156 of 160 &lt;br /&gt;
Hello, world, I am 23 of 160 &lt;br /&gt;
Hello, world, I am 100 of 160 &lt;br /&gt;
Hello, world, I am 146 of 160 &lt;br /&gt;
Hello, world, I am 27 of 160 &lt;br /&gt;
Hello, world, I am 118 of 160 &lt;br /&gt;
Hello, world, I am 55 of 160 &lt;br /&gt;
Hello, world, I am 32 of 160 &lt;br /&gt;
Hello, world, I am 123 of 160 &lt;br /&gt;
Hello, world, I am 67 of 160 &lt;br /&gt;
Hello, world, I am 87 of 160 &lt;br /&gt;
Hello, world, I am 135 of 160 &lt;br /&gt;
Hello, world, I am 157 of 160 &lt;br /&gt;
Hello, world, I am 20 of 160 &lt;br /&gt;
Hello, world, I am 130 of 160 &lt;br /&gt;
Hello, world, I am 120 of 160 &lt;br /&gt;
Hello, world, I am 12 of 160 &lt;br /&gt;
Hello, world, I am 82 of 160 &lt;br /&gt;
Hello, world, I am 48 of 160 &lt;br /&gt;
Hello, world, I am 132 of 160 &lt;br /&gt;
Hello, world, I am 158 of 160 &lt;br /&gt;
Hello, world, I am 22 of 160 &lt;br /&gt;
Hello, world, I am 131 of 160 &lt;br /&gt;
Hello, world, I am 101 of 160 &lt;br /&gt;
Hello, world, I am 145 of 160 &lt;br /&gt;
Hello, world, I am 24 of 160 &lt;br /&gt;
Hello, world, I am 140 of 160 &lt;br /&gt;
Hello, world, I am 117 of 160 &lt;br /&gt;
Hello, world, I am 113 of 160 &lt;br /&gt;
Hello, world, I am 154 of 160 &lt;br /&gt;
Hello, world, I am 7 of 160 &lt;br /&gt;
Hello, world, I am 53 of 160 &lt;br /&gt;
Hello, world, I am 33 of 160 &lt;br /&gt;
Hello, world, I am 16 of 160 &lt;br /&gt;
Hello, world, I am 28 of 160 &lt;br /&gt;
Hello, world, I am 111 of 160 &lt;br /&gt;
Hello, world, I am 121 of 160 &lt;br /&gt;
Hello, world, I am 13 of 160 &lt;br /&gt;
Hello, world, I am 64 of 160 &lt;br /&gt;
Hello, world, I am 88 of 160 &lt;br /&gt;
Hello, world, I am 96 of 160 &lt;br /&gt;
Hello, world, I am 83 of 160 &lt;br /&gt;
Hello, world, I am 49 of 160 &lt;br /&gt;
Hello, world, I am 134 of 160 &lt;br /&gt;
Hello, world, I am 128 of 160 &lt;br /&gt;
Hello, world, I am 102 of 160 &lt;br /&gt;
Hello, world, I am 148 of 160 &lt;br /&gt;
Hello, world, I am 147 of 160 &lt;br /&gt;
Hello, world, I am 26 of 160 &lt;br /&gt;
Hello, world, I am 36 of 160 &lt;br /&gt;
Hello, world, I am 141 of 160 &lt;br /&gt;
Hello, world, I am 58 of 160 &lt;br /&gt;
Hello, world, I am 73 of 160 &lt;br /&gt;
Hello, world, I am 46 of 160 &lt;br /&gt;
Hello, world, I am 116 of 160 &lt;br /&gt;
Hello, world, I am 114 of 160 &lt;br /&gt;
Hello, world, I am 155 of 160 &lt;br /&gt;
Hello, world, I am 4 of 160 &lt;br /&gt;
Hello, world, I am 52 of 160 &lt;br /&gt;
Hello, world, I am 34 of 160 &lt;br /&gt;
Hello, world, I am 62 of 160 &lt;br /&gt;
Hello, world, I am 17 of 160 &lt;br /&gt;
Hello, world, I am 29 of 160 &lt;br /&gt;
Hello, world, I am 76 of 160 &lt;br /&gt;
Hello, world, I am 92 of 160 &lt;br /&gt;
Hello, world, I am 81 of 160 &lt;br /&gt;
Hello, world, I am 50 of 160 &lt;br /&gt;
Hello, world, I am 129 of 160 &lt;br /&gt;
Hello, world, I am 103 of 160 &lt;br /&gt;
Hello, world, I am 149 of 160 &lt;br /&gt;
Hello, world, I am 144 of 160 &lt;br /&gt;
Hello, world, I am 37 of 160 &lt;br /&gt;
Hello, world, I am 142 of 160 &lt;br /&gt;
Hello, world, I am 56 of 160 &lt;br /&gt;
Hello, world, I am 75 of 160 &lt;br /&gt;
Hello, world, I am 47 of 160 &lt;br /&gt;
Hello, world, I am 40 of 160 &lt;br /&gt;
Hello, world, I am 106 of 160 &lt;br /&gt;
Hello, world, I am 115 of 160 &lt;br /&gt;
Hello, world, I am 152 of 160 &lt;br /&gt;
Hello, world, I am 5 of 160 &lt;br /&gt;
Hello, world, I am 54 of 160 &lt;br /&gt;
Hello, world, I am 35 of 160 &lt;br /&gt;
Hello, world, I am 63 of 160 &lt;br /&gt;
Hello, world, I am 18 of 160 &lt;br /&gt;
Hello, world, I am 30 of 160 &lt;br /&gt;
Hello, world, I am 77 of 160 &lt;br /&gt;
Hello, world, I am 93 of 160 &lt;br /&gt;
Hello, world, I am 108 of 160 &lt;br /&gt;
Hello, world, I am 122 of 160 &lt;br /&gt;
Hello, world, I am 14 of 160 &lt;br /&gt;
Hello, world, I am 65 of 160 &lt;br /&gt;
Hello, world, I am 89 of 160 &lt;br /&gt;
Hello, world, I am 99 of 160 &lt;br /&gt;
Hello, world, I am 153 of 160 &lt;br /&gt;
Hello, world, I am 61 of 160 &lt;br /&gt;
Hello, world, I am 19 of 160 &lt;br /&gt;
Hello, world, I am 31 of 160 &lt;br /&gt;
Hello, world, I am 78 of 160 &lt;br /&gt;
Hello, world, I am 94 of 160 &lt;br /&gt;
Hello, world, I am 109 of 160 &lt;br /&gt;
Hello, world, I am 15 of 160 &lt;br /&gt;
Hello, world, I am 66 of 160 &lt;br /&gt;
Hello, world, I am 91 of 160 &lt;br /&gt;
Hello, world, I am 97 of 160 &lt;br /&gt;
Hello, world, I am 80 of 160 &lt;br /&gt;
Hello, world, I am 150 of 160 &lt;br /&gt;
Hello, world, I am 38 of 160 &lt;br /&gt;
Hello, world, I am 57 of 160 &lt;br /&gt;
Hello, world, I am 72 of 160 &lt;br /&gt;
Hello, world, I am 44 of 160 &lt;br /&gt;
Hello, world, I am 41 of 160 &lt;br /&gt;
Hello, world, I am 107 of 160 &lt;br /&gt;
Hello, world, I am 112 of 160 &lt;br /&gt;
Hello, world, I am 59 of 160 &lt;br /&gt;
Hello, world, I am 74 of 160 &lt;br /&gt;
Hello, world, I am 45 of 160 &lt;br /&gt;
Hello, world, I am 42 of 160 &lt;br /&gt;
Hello, world, I am 104 of 160 &lt;br /&gt;
Hello, world, I am 79 of 160 &lt;br /&gt;
Hello, world, I am 110 of 160 &lt;br /&gt;
Hello, world, I am 90 of 160 &lt;br /&gt;
Hello, world, I am 98 of 160 &lt;br /&gt;
Hello, world, I am 151 of 160 &lt;br /&gt;
Hello, world, I am 39 of 160 &lt;br /&gt;
Hello, world, I am 43 of 160 &lt;br /&gt;
Hello, world, I am 105 of 160 &lt;br /&gt;
Hello, world, I am 60 of 160 &lt;br /&gt;
Hello, world, I am 95 of 160 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=167</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=167"/>
		<updated>2022-09-24T17:37:05Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This is a 'Hello World' program that will test the operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file as &amp;lt;tt&amp;gt;hello_mpi.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt;hello_mpi&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run it with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the tasks on the local machine, usually one per CPU core. You should see a hello-world line from each process.&lt;br /&gt;
&lt;br /&gt;
Now, to run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH Access]].&lt;br /&gt;
&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
You should see output like that shown below: 160 lines in total, four from each host, one for each slot listed (the ranks print in whatever order the processes happen to finish, so your ordering will differ).&lt;br /&gt;
&lt;br /&gt;
Once you can run this program successfully, your MPI setup is working. This is a prerequisite for running other MPI-based programs, such as ipyparallel programs that use MPI.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 126 of 160 &lt;br /&gt;
Hello, world, I am 69 of 160 &lt;br /&gt;
Hello, world, I am 159 of 160 &lt;br /&gt;
Hello, world, I am 137 of 160 &lt;br /&gt;
Hello, world, I am 11 of 160 &lt;br /&gt;
Hello, world, I am 138 of 160 &lt;br /&gt;
Hello, world, I am 139 of 160 &lt;br /&gt;
Hello, world, I am 3 of 160 &lt;br /&gt;
Hello, world, I am 125 of 160 &lt;br /&gt;
Hello, world, I am 68 of 160 &lt;br /&gt;
Hello, world, I am 0 of 160 &lt;br /&gt;
Hello, world, I am 127 of 160 &lt;br /&gt;
Hello, world, I am 71 of 160 &lt;br /&gt;
Hello, world, I am 8 of 160 &lt;br /&gt;
Hello, world, I am 136 of 160 &lt;br /&gt;
Hello, world, I am 85 of 160 &lt;br /&gt;
Hello, world, I am 2 of 160 &lt;br /&gt;
Hello, world, I am 124 of 160 &lt;br /&gt;
Hello, world, I am 70 of 160 &lt;br /&gt;
Hello, world, I am 51 of 160 &lt;br /&gt;
Hello, world, I am 9 of 160 &lt;br /&gt;
Hello, world, I am 86 of 160 &lt;br /&gt;
Hello, world, I am 21 of 160 &lt;br /&gt;
Hello, world, I am 1 of 160 &lt;br /&gt;
Hello, world, I am 25 of 160 &lt;br /&gt;
Hello, world, I am 143 of 160 &lt;br /&gt;
Hello, world, I am 119 of 160 &lt;br /&gt;
Hello, world, I am 6 of 160 &lt;br /&gt;
Hello, world, I am 10 of 160 &lt;br /&gt;
Hello, world, I am 84 of 160 &lt;br /&gt;
Hello, world, I am 133 of 160 &lt;br /&gt;
Hello, world, I am 156 of 160 &lt;br /&gt;
Hello, world, I am 23 of 160 &lt;br /&gt;
Hello, world, I am 100 of 160 &lt;br /&gt;
Hello, world, I am 146 of 160 &lt;br /&gt;
Hello, world, I am 27 of 160 &lt;br /&gt;
Hello, world, I am 118 of 160 &lt;br /&gt;
Hello, world, I am 55 of 160 &lt;br /&gt;
Hello, world, I am 32 of 160 &lt;br /&gt;
Hello, world, I am 123 of 160 &lt;br /&gt;
Hello, world, I am 67 of 160 &lt;br /&gt;
Hello, world, I am 87 of 160 &lt;br /&gt;
Hello, world, I am 135 of 160 &lt;br /&gt;
Hello, world, I am 157 of 160 &lt;br /&gt;
Hello, world, I am 20 of 160 &lt;br /&gt;
Hello, world, I am 130 of 160 &lt;br /&gt;
Hello, world, I am 120 of 160 &lt;br /&gt;
Hello, world, I am 12 of 160 &lt;br /&gt;
Hello, world, I am 82 of 160 &lt;br /&gt;
Hello, world, I am 48 of 160 &lt;br /&gt;
Hello, world, I am 132 of 160 &lt;br /&gt;
Hello, world, I am 158 of 160 &lt;br /&gt;
Hello, world, I am 22 of 160 &lt;br /&gt;
Hello, world, I am 131 of 160 &lt;br /&gt;
Hello, world, I am 101 of 160 &lt;br /&gt;
Hello, world, I am 145 of 160 &lt;br /&gt;
Hello, world, I am 24 of 160 &lt;br /&gt;
Hello, world, I am 140 of 160 &lt;br /&gt;
Hello, world, I am 117 of 160 &lt;br /&gt;
Hello, world, I am 113 of 160 &lt;br /&gt;
Hello, world, I am 154 of 160 &lt;br /&gt;
Hello, world, I am 7 of 160 &lt;br /&gt;
Hello, world, I am 53 of 160 &lt;br /&gt;
Hello, world, I am 33 of 160 &lt;br /&gt;
Hello, world, I am 16 of 160 &lt;br /&gt;
Hello, world, I am 28 of 160 &lt;br /&gt;
Hello, world, I am 111 of 160 &lt;br /&gt;
Hello, world, I am 121 of 160 &lt;br /&gt;
Hello, world, I am 13 of 160 &lt;br /&gt;
Hello, world, I am 64 of 160 &lt;br /&gt;
Hello, world, I am 88 of 160 &lt;br /&gt;
Hello, world, I am 96 of 160 &lt;br /&gt;
Hello, world, I am 83 of 160 &lt;br /&gt;
Hello, world, I am 49 of 160 &lt;br /&gt;
Hello, world, I am 134 of 160 &lt;br /&gt;
Hello, world, I am 128 of 160 &lt;br /&gt;
Hello, world, I am 102 of 160 &lt;br /&gt;
Hello, world, I am 148 of 160 &lt;br /&gt;
Hello, world, I am 147 of 160 &lt;br /&gt;
Hello, world, I am 26 of 160 &lt;br /&gt;
Hello, world, I am 36 of 160 &lt;br /&gt;
Hello, world, I am 141 of 160 &lt;br /&gt;
Hello, world, I am 58 of 160 &lt;br /&gt;
Hello, world, I am 73 of 160 &lt;br /&gt;
Hello, world, I am 46 of 160 &lt;br /&gt;
Hello, world, I am 116 of 160 &lt;br /&gt;
Hello, world, I am 114 of 160 &lt;br /&gt;
Hello, world, I am 155 of 160 &lt;br /&gt;
Hello, world, I am 4 of 160 &lt;br /&gt;
Hello, world, I am 52 of 160 &lt;br /&gt;
Hello, world, I am 34 of 160 &lt;br /&gt;
Hello, world, I am 62 of 160 &lt;br /&gt;
Hello, world, I am 17 of 160 &lt;br /&gt;
Hello, world, I am 29 of 160 &lt;br /&gt;
Hello, world, I am 76 of 160 &lt;br /&gt;
Hello, world, I am 92 of 160 &lt;br /&gt;
Hello, world, I am 81 of 160 &lt;br /&gt;
Hello, world, I am 50 of 160 &lt;br /&gt;
Hello, world, I am 129 of 160 &lt;br /&gt;
Hello, world, I am 103 of 160 &lt;br /&gt;
Hello, world, I am 149 of 160 &lt;br /&gt;
Hello, world, I am 144 of 160 &lt;br /&gt;
Hello, world, I am 37 of 160 &lt;br /&gt;
Hello, world, I am 142 of 160 &lt;br /&gt;
Hello, world, I am 56 of 160 &lt;br /&gt;
Hello, world, I am 75 of 160 &lt;br /&gt;
Hello, world, I am 47 of 160 &lt;br /&gt;
Hello, world, I am 40 of 160 &lt;br /&gt;
Hello, world, I am 106 of 160 &lt;br /&gt;
Hello, world, I am 115 of 160 &lt;br /&gt;
Hello, world, I am 152 of 160 &lt;br /&gt;
Hello, world, I am 5 of 160 &lt;br /&gt;
Hello, world, I am 54 of 160 &lt;br /&gt;
Hello, world, I am 35 of 160 &lt;br /&gt;
Hello, world, I am 63 of 160 &lt;br /&gt;
Hello, world, I am 18 of 160 &lt;br /&gt;
Hello, world, I am 30 of 160 &lt;br /&gt;
Hello, world, I am 77 of 160 &lt;br /&gt;
Hello, world, I am 93 of 160 &lt;br /&gt;
Hello, world, I am 108 of 160 &lt;br /&gt;
Hello, world, I am 122 of 160 &lt;br /&gt;
Hello, world, I am 14 of 160 &lt;br /&gt;
Hello, world, I am 65 of 160 &lt;br /&gt;
Hello, world, I am 89 of 160 &lt;br /&gt;
Hello, world, I am 99 of 160 &lt;br /&gt;
Hello, world, I am 153 of 160 &lt;br /&gt;
Hello, world, I am 61 of 160 &lt;br /&gt;
Hello, world, I am 19 of 160 &lt;br /&gt;
Hello, world, I am 31 of 160 &lt;br /&gt;
Hello, world, I am 78 of 160 &lt;br /&gt;
Hello, world, I am 94 of 160 &lt;br /&gt;
Hello, world, I am 109 of 160 &lt;br /&gt;
Hello, world, I am 15 of 160 &lt;br /&gt;
Hello, world, I am 66 of 160 &lt;br /&gt;
Hello, world, I am 91 of 160 &lt;br /&gt;
Hello, world, I am 97 of 160 &lt;br /&gt;
Hello, world, I am 80 of 160 &lt;br /&gt;
Hello, world, I am 150 of 160 &lt;br /&gt;
Hello, world, I am 38 of 160 &lt;br /&gt;
Hello, world, I am 57 of 160 &lt;br /&gt;
Hello, world, I am 72 of 160 &lt;br /&gt;
Hello, world, I am 44 of 160 &lt;br /&gt;
Hello, world, I am 41 of 160 &lt;br /&gt;
Hello, world, I am 107 of 160 &lt;br /&gt;
Hello, world, I am 112 of 160 &lt;br /&gt;
Hello, world, I am 59 of 160 &lt;br /&gt;
Hello, world, I am 74 of 160 &lt;br /&gt;
Hello, world, I am 45 of 160 &lt;br /&gt;
Hello, world, I am 42 of 160 &lt;br /&gt;
Hello, world, I am 104 of 160 &lt;br /&gt;
Hello, world, I am 79 of 160 &lt;br /&gt;
Hello, world, I am 110 of 160 &lt;br /&gt;
Hello, world, I am 90 of 160 &lt;br /&gt;
Hello, world, I am 98 of 160 &lt;br /&gt;
Hello, world, I am 151 of 160 &lt;br /&gt;
Hello, world, I am 39 of 160 &lt;br /&gt;
Hello, world, I am 43 of 160 &lt;br /&gt;
Hello, world, I am 105 of 160 &lt;br /&gt;
Hello, world, I am 60 of 160 &lt;br /&gt;
Hello, world, I am 95 of 160 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=166</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=166"/>
		<updated>2022-09-24T17:29:45Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size;&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as &amp;lt;tt&amp;gt; hello_mpi.c &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt; hello_mpi &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can run the program with&lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
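&lt;br /&gt;
If you want to control the number of processes yourself, mpirun also accepts the standard -np flag; for example, to ask for four processes:&lt;br /&gt;
&lt;br /&gt;
 mpirun -np 4 ./hello_mpi&lt;br /&gt;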
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create a hostfile called &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; with the following entries:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
pnode01 slots=4 max_slots=8&lt;br /&gt;
pnode02 slots=4 max_slots=8&lt;br /&gt;
pnode03 slots=4 max_slots=8&lt;br /&gt;
pnode04 slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and keep adding lines until you get to &amp;lt;tt&amp;gt; pnode40 &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next step assumes you have set up your SSH keys as described in [[Cluster SSH access]].&lt;br /&gt;
With 40 nodes listed in your &amp;lt;tt&amp;gt; cluster_hosts &amp;lt;/tt&amp;gt; file, run your program again with&lt;br /&gt;
&lt;br /&gt;
 mpirun --hostfile cluster_hosts ./hello_mpi&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=165</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=165"/>
		<updated>2022-09-24T17:25:59Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as &amp;lt;tt&amp;gt; hello_mpi.c &amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
 mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called &amp;lt;tt&amp;gt; hello_mpi &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
&lt;br /&gt;
 mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called &amp;lt;tt&amp;gt;localhost &amp;lt;/tt&amp;gt;. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
 mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=164</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=164"/>
		<updated>2022-09-24T17:10:40Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as &amp;lt;code&amp;gt;hello_mpi.c&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called hello_mpi&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called 'localhost'. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=163</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=163"/>
		<updated>2022-09-24T17:06:52Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as hello_mpi.c&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
mpicc -g hello_mpi.c -o hello_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called hello_mpi&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
mpirun ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called 'localhost'. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
mpirun --hostfile localhost ./hello_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 0 of 4 &lt;br /&gt;
Hello, world, I am 2 of 4 &lt;br /&gt;
Hello, world, I am 1 of 4 &lt;br /&gt;
Hello, world, I am 3 of 4 &lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=162</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=162"/>
		<updated>2022-09-24T17:04:48Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d \n&amp;quot;, rank, size);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as helloworld_mpi.c&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
mpicc -g helloworld_mpi.c -o helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
mpirun ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called 'localhost'. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
mpirun --hostfile localhost ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 2 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 0 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 1 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 3 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=161</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=161"/>
		<updated>2022-09-24T16:58:14Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    MPI_Get_library_version(version, &amp;amp;len);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d, (%s, %d)\n&amp;quot;,&lt;br /&gt;
           rank, size, version, len);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as helloworld_mpi.c&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
mpicc -g helloworld_mpi.c -o helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
mpirun ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called 'localhost'. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
mpirun --hostfile localhost ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 2 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 0 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 1 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 3 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=160</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=160"/>
		<updated>2022-09-24T16:57:43Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* MPI Hello World */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    MPI_Get_library_version(version, &amp;amp;len);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d, (%s, %d)\n&amp;quot;,&lt;br /&gt;
           rank, size, version, len);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a folder and save this file in it as helloworld_mpi.c&lt;br /&gt;
&lt;br /&gt;
You can compile this program with&lt;br /&gt;
&lt;br /&gt;
mpicc -g helloworld_mpi.c -o helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You now have an executable called helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
You can run the command with &lt;br /&gt;
mpirun ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
With no other arguments, mpirun runs all of the processes on the local machine, usually one per CPU core, so you should see a hello from each process.&lt;br /&gt;
&lt;br /&gt;
To run your MPI program on the cluster, you will need to create a hostfile.&lt;br /&gt;
First, let's create a simple hostfile that just runs four processes on the local machine:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
localhost slots=4 max_slots=8&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put that line in a file called 'localhost'. &lt;br /&gt;
Now run your program with that hostfile, using&lt;br /&gt;
mpirun --hostfile localhost ./helloworld_mpi&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
Hello, world, I am 2 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 0 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 1 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
Hello, world, I am 3 of 4, (Open MPI v4.0.3, package: Debian OpenMPI, ident: 4.0.3, repo rev: v4.0.3, Mar 03, 2020, 87)&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=159</id>
		<title>MPI Hello World</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MPI_Hello_World&amp;diff=159"/>
		<updated>2022-09-24T16:44:53Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;== MPI Hello World ==  Many parallel jobs are using MPI at the lowest level to manage parallel compute resources.  This is a 'Hello World' program that will test the operation...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== MPI Hello World ==&lt;br /&gt;
&lt;br /&gt;
Many parallel jobs use MPI at the lowest level to manage parallel compute resources.&lt;br /&gt;
&lt;br /&gt;
This 'Hello World' program tests the basic operation of sending jobs to remote workers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 /*&lt;br /&gt;
 * Sample MPI &amp;quot;hello world&amp;quot; application in C&lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    int rank, size, len;&lt;br /&gt;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    MPI_Get_library_version(version, &amp;amp;len);&lt;br /&gt;
    printf(&amp;quot;Hello, world, I am %d of %d, (%s, %d)\n&amp;quot;,&lt;br /&gt;
           rank, size, version, len);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=158</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=158"/>
		<updated>2022-09-24T16:37:34Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster is 40 machines with i7 processors. The first 32 are 6th-generation processors, and the last 8 are 7th generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with ssh and Mathematica. We're in the process of getting it working with other applications including MPI, Matlab, Maple, and Magma. Check back on this page because support is rapidly evolving.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]], including testing your access with pdsh commands.&lt;br /&gt;
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;br /&gt;
&lt;br /&gt;
MPI: Trying out the MPI 'Hello World' program, which is a step to running many types of parallel jobs. [[MPI Hello World]]&lt;br /&gt;
&lt;br /&gt;
Here is some info on running remote workers in Magma: [[Magma Cluster]] This is a work in progress since we don't have a nice, working example yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=157</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=157"/>
		<updated>2022-08-05T21:29:04Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Power Consumption */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including a number of menus in Mathematica for the remote kernel settings, but the instructions for those are vague and confusing. For now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could certainly be written more compactly, for instance by using a loop to generate the machine list instead of cutting and pasting that giant command; a sketch of such a loop is included at the end of these instructions.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start with a simple example: launching kernels on three of the cruncher machines.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey, fibonacci, and boole. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;,&amp;quot;ssh://boole/?4&amp;quot;  }]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, right-click that cell and choose 'Evaluate Cell'. The system will launch the remote kernels, and show its progress while it does that. Once all the kernels are launched, you can run calculations on them. To close those kernels, do&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and evaluate the cell. The system will close the remote kernels. Please remember to do a CloseKernels[] at the end of your session. Closing Mathematica SHOULD close the remote kernels, but it's better to be sure.&lt;br /&gt;
&lt;br /&gt;
Now we'll launch four kernels on each of the 40 cluster nodes. There's a better way to do this than copying and pasting the big command below, like using a loop to generate the node names, but for now this works. Note that it will take a while because the kernels are launched one at a time, and this command should give you 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?4&amp;quot;,&amp;quot;ssh://pnode02/?4&amp;quot;,&amp;quot;ssh://pnode03/?4&amp;quot;,&amp;quot;ssh://pnode04/?4&amp;quot;,&amp;quot;ssh://pnode05/?4&amp;quot;,&amp;quot;ssh://pnode06/?4&amp;quot;,&amp;quot;ssh://pnode07/?4&amp;quot;,&amp;quot;ssh://pnode08/?4&amp;quot;,&amp;quot;ssh://pnode09/?4&amp;quot;,&amp;quot;ssh://pnode10/?4&amp;quot;,&amp;quot;ssh://pnode11/?4&amp;quot;,&amp;quot;ssh://pnode12/?4&amp;quot;,&amp;quot;ssh://pnode13/?4&amp;quot;,&amp;quot;ssh://pnode14/?4&amp;quot;,&amp;quot;ssh://pnode15/?4&amp;quot;,&amp;quot;ssh://pnode16/?4&amp;quot;,&amp;quot;ssh://pnode17/?4&amp;quot;,&amp;quot;ssh://pnode18/?4&amp;quot;,&amp;quot;ssh://pnode19/?4&amp;quot;,&amp;quot;ssh://pnode20/?4&amp;quot;,&amp;quot;ssh://pnode21/?4&amp;quot;,&amp;quot;ssh://pnode22/?4&amp;quot;,&amp;quot;ssh://pnode23/?4&amp;quot;,&amp;quot;ssh://pnode24/?4&amp;quot;,&amp;quot;ssh://pnode25/?4&amp;quot;,&amp;quot;ssh://pnode26/?4&amp;quot;,&amp;quot;ssh://pnode27/?4&amp;quot;,&amp;quot;ssh://pnode28/?4&amp;quot;,&amp;quot;ssh://pnode29/?4&amp;quot;,&amp;quot;ssh://pnode30/?4&amp;quot;,&amp;quot;ssh://pnode31/?4&amp;quot;,&amp;quot;ssh://pnode32/?4&amp;quot;,&amp;quot;ssh://pnode33/?4&amp;quot;,&amp;quot;ssh://pnode34/?4&amp;quot;,&amp;quot;ssh://pnode35/?4&amp;quot;,&amp;quot;ssh://pnode36/?4&amp;quot;,&amp;quot;ssh://pnode37/?4&amp;quot;,&amp;quot;ssh://pnode38/?4&amp;quot;,&amp;quot;ssh://pnode39/?4&amp;quot;,&amp;quot;ssh://pnode40/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy this command from this page and paste it into Mathematica. Note that if you're using X2go and you copy on the local machine, you may need to paste using the 'Paste' command under the 'Edit' menu, because different systems handle cutting and pasting differently.&lt;br /&gt;
&lt;br /&gt;
Once you have pasted that big command in there, right-click the cell and evaluate it. You'll see the progress as all of the remote kernels are started up. Once it's done, you'll have 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
When you are finished using them, do a &lt;br /&gt;
&lt;br /&gt;
 CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
so the system can go and shut them all down properly.&lt;br /&gt;
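&lt;br /&gt;
As mentioned above, the giant LaunchKernels command can also be generated with a loop instead of being pasted by hand. Here is a rough sketch from a non-expert (please test it before relying on it); it uses Table to build the same list of 40 &amp;quot;ssh://pnodeNN/?4&amp;quot; strings:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[Table[&amp;quot;ssh://pnode&amp;quot; &amp;lt;&amp;gt; IntegerString[i, 10, 2] &amp;lt;&amp;gt; &amp;quot;/?4&amp;quot;, {i, 40}]]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;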
&lt;br /&gt;
==Tips==&lt;br /&gt;
&lt;br /&gt;
Note that there is nothing stopping you from running remote kernels on both the cluster nodes and the crunchers, but this is not recommended. These machines have cores of very different speeds, so the faster machines may end up waiting for the slower ones to finish, which can be a waste of resources. It's best to run things on the cluster OR the crunchers, but not both.&lt;br /&gt;
&lt;br /&gt;
The number of kernels you run on each machine may give you different outcomes depending on your job. So, if you run 8 kernels per node, maybe your job will run faster, maybe not. It's best to experiment with a subset of your job to find the optimum number of kernels per node for your job.&lt;br /&gt;
&lt;br /&gt;
Each cluster node has 8 CPU threads, so ideally, if no one else is using the cluster, your job should bring each node up to a load of 8 so that you're making full use of each node's CPU. If the load is less than 8, you're not using the whole CPU; if it's more than 8, some of your kernels are waiting for resources.&lt;br /&gt;
&lt;br /&gt;
You can see the status of the cluster nodes here:&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/BPzrVrznz/cluster-pnodes Cluster Pnodes] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
Note that these graphs usually show the last 24 hours. Click the little clock at the upper right and choose a shorter time interval, like the last hour, to get a better picture of what is going on. The graphs update only once a minute, so be patient.&lt;br /&gt;
&lt;br /&gt;
If you're running remote kernels on the Ryzen crunchers, they have 32 CPU threads each, so you would want to bring them up to a load of 32 to make full use of the CPU, assuming the load was zero before you started. Note that other people use these machines, so be a good neighbor and don't hog an entire machine.&lt;br /&gt;
&lt;br /&gt;
The status of the crunchers is here: &lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/f8zUy_L7k/crunchers Crunchers] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
==Power Consumption==&lt;br /&gt;
If you really want to geek out, you can look at the server room power consumption when you're running your job.&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/_wfSvpjmz/server-room-power Server Room Power] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
The orange and purple circuits are the cluster. The power usage is around 620W when nothing is going on, with an additional 100W being used by the two UPS units. It will go up when the machines get busy. If the orange circuit goes above 2200 watts or the purple circuit goes above 2800 watts, the circuit will shut down. At this time, the combined capacity of the two circuits should be more than the entire cluster can use. See if you can do enough math on the cluster to cause a shutdown.&lt;br /&gt;
&lt;br /&gt;
Note the room temperature! The room has a large cooler that uses Cornell's Lake Source Cooling, but your job will still probably warm the room up by a few degrees.&lt;br /&gt;
&lt;br /&gt;
The blue circuit is the Ryzen GPU cruncher machines. That circuit also has a limit of 2700 watts total. There should be enough headroom there to be able to load up all of those machines including their GPUs without overloading the circuit. Again, you might be able to make it shut down. That's ok, as long as you tell the system administrator. It's better to cause this problem now so we can rearrange things to avoid it in the future. But, so far, there seems to be enough capacity on this circuit to handle all of the cruncher machines.&lt;br /&gt;
&lt;br /&gt;
==Network Bottlenecks==&lt;br /&gt;
&lt;br /&gt;
The crunchers are on a 10Gb/s network. The cluster nodes are each 1Gb/s, but their link as a group to the main network is 10Gb/s. There is one limitation at this time: the crunchers and the cluster are on different subnets, and the connection between the subnets is currently limited to 1Gb/s. This should not impact your job, but if you open the Crunchers dashboard, look at the machine where your main Mathematica is running, and check its 'Network' graph, a machine maxing out at 1Gb/s during your job means the job is being constrained by the subnet bottleneck. If this is happening, please email humphrey@cornell.edu and let me know. The bottleneck can be removed, but we haven't gotten to that yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=155</id>
		<title>Magma Cluster</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=155"/>
		<updated>2022-07-25T20:57:57Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Magma on the Cluster ==&lt;br /&gt;
The Math Cluster should support distributed Magma jobs.&lt;br /&gt;
&lt;br /&gt;
We're still working on a good example, so please give feedback on this.&lt;br /&gt;
&lt;br /&gt;
Here is some documentation on starting remote workers: [https://magma.maths.usyd.edu.au/magma/handbook/text/64 StartWorkers Documentation]&lt;br /&gt;
&lt;br /&gt;
To launch remote workers in Magma, you'll need to have set up your SSH keys: [[Cluster SSH access]]&lt;br /&gt;
&lt;br /&gt;
So far it appears that you can do this from any machine that has SSH access to the cluster nodes.&lt;br /&gt;
&lt;br /&gt;
Here is an example of parallel factorization: [https://magma.maths.usyd.edu.au/magma/handbook/text/65 Integer Factorization]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=154</id>
		<title>Magma Cluster</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=154"/>
		<updated>2022-07-25T20:47:40Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Magma on the Cluster ==&lt;br /&gt;
The Math Cluster should support distributed Magma jobs.&lt;br /&gt;
&lt;br /&gt;
We're still working on a good example, so please give feedback on this.&lt;br /&gt;
&lt;br /&gt;
Here is some documentation on starting remote workers [https://magma.maths.usyd.edu.au/magma/handbook/text/64 StartWorkers Documentation]&lt;br /&gt;
&lt;br /&gt;
To launch remote workers in Magma, you'll need to have set up your SSH keys: [[Cluster SSH access]]&lt;br /&gt;
&lt;br /&gt;
So far it appears that you can do this from any machine that has SSH access to the cluster nodes.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=153</id>
		<title>Magma Cluster</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Magma_Cluster&amp;diff=153"/>
		<updated>2022-07-25T20:30:31Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot; == Magma on the Cluster == The Math Cluster should support distributed Magma jobs.  We're still working on a good example, so please give feedback on this.  Here is some docu...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Magma on the Cluster ==&lt;br /&gt;
The Math Cluster should support distributed Magma jobs.&lt;br /&gt;
&lt;br /&gt;
We're still working on a good example, so please give feedback on this.&lt;br /&gt;
&lt;br /&gt;
Here is some documentation on starting remote workers [https://magma.maths.usyd.edu.au/magma/handbook/text/64 StartWorkers Documentation]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=152</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=152"/>
		<updated>2022-07-25T20:28:17Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster is 40 machines with i7 processors. The first 32 are 6th-generation processors, and the last 8 are 7th generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with ssh and Mathematica. We're in the process of getting it working with other applications including MPI, Matlab, Maple, and Magma. Check back on this page because support is rapidly evolving.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]], including testing your access with pdsh commands.&lt;br /&gt;
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;br /&gt;
&lt;br /&gt;
Here is some info on running remote workers in Magma: [[Magma Cluster]] This is a work in progress since we don't have a nice, working example yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=151</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=151"/>
		<updated>2022-07-25T19:33:58Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Launching Remote Kernels in Mathematica */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including a number of menus in Mathematica for the remote kernel settings, but the instructions for those are vague and confusing. For now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could certainly be written more compactly, for instance by using a loop to generate the machine list instead of cutting and pasting that giant command.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on three of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey, fibonacci, and boole. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;,&amp;quot;ssh://boole/?4&amp;quot;  }]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, right-click that cell and choose 'Evaluate Cell'. The system will launch the remote kernels, and show its progress while it does that. Once all the kernels are launched, you can run calculations on them. To close those kernels, do&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and evaluate the cell. The system will close the remote kernels. Please remember to do a CloseKernels[] at the end of your session. Closing Mathematica SHOULD close the remote kernels, but it's better to be sure.&lt;br /&gt;
&lt;br /&gt;
Now, we'll launch four kernels each on the 40 node cluster. There's a better way to do this than copying and pasting this big command, like using a loop to generate the node names, but for now this works. Note that it will take a while to launch all of the kernels because it launches them one at a time, and this command should give you 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?4&amp;quot;,&amp;quot;ssh://pnode02/?4&amp;quot;,&amp;quot;ssh://pnode03/?4&amp;quot;,&amp;quot;ssh://pnode04/?4&amp;quot;,&amp;quot;ssh://pnode05/?4&amp;quot;,&amp;quot;ssh://pnode06/?4&amp;quot;,&amp;quot;ssh://pnode07/?4&amp;quot;,&amp;quot;ssh://pnode08/?4&amp;quot;,&amp;quot;ssh://pnode09/?4&amp;quot;,&amp;quot;ssh://pnode10/?4&amp;quot;,&amp;quot;ssh://pnode11/?4&amp;quot;,&amp;quot;ssh://pnode12/?4&amp;quot;,&amp;quot;ssh://pnode13/?4&amp;quot;,&amp;quot;ssh://pnode14/?4&amp;quot;,&amp;quot;ssh://pnode15/?4&amp;quot;,&amp;quot;ssh://pnode16/?4&amp;quot;,&amp;quot;ssh://pnode17/?4&amp;quot;,&amp;quot;ssh://pnode18/?4&amp;quot;,&amp;quot;ssh://pnode19/?4&amp;quot;,&amp;quot;ssh://pnode20/?4&amp;quot;,&amp;quot;ssh://pnode21/?4&amp;quot;,&amp;quot;ssh://pnode22/?4&amp;quot;,&amp;quot;ssh://pnode23/?4&amp;quot;,&amp;quot;ssh://pnode24/?4&amp;quot;,&amp;quot;ssh://pnode25/?4&amp;quot;,&amp;quot;ssh://pnode26/?4&amp;quot;,&amp;quot;ssh://pnode27/?4&amp;quot;,&amp;quot;ssh://pnode28/?4&amp;quot;,&amp;quot;ssh://pnode29/?4&amp;quot;,&amp;quot;ssh://pnode30/?4&amp;quot;,&amp;quot;ssh://pnode31/?4&amp;quot;,&amp;quot;ssh://pnode32/?4&amp;quot;,&amp;quot;ssh://pnode33/?4&amp;quot;,&amp;quot;ssh://pnode34/?4&amp;quot;,&amp;quot;ssh://pnode35/?4&amp;quot;,&amp;quot;ssh://pnode36/?4&amp;quot;,&amp;quot;ssh://pnode37/?4&amp;quot;,&amp;quot;ssh://pnode38/?4&amp;quot;,&amp;quot;ssh://pnode39/?4&amp;quot;,&amp;quot;ssh://pnode40/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy this command from this page and paste it into Mathematica. Note that if you're using X2go and you copy on the local machine, you may need to paste using the 'Paste' command under the 'Edit' menu, because different systems handle cutting and pasting differently.&lt;br /&gt;
&lt;br /&gt;
Once you have pasted that big command in there, right-click the cell and evaluate it. You'll see the progress as all of the remote kernels are started up. Once it's done, you'll have 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
When you are finished using them, do a &lt;br /&gt;
&lt;br /&gt;
 CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
so the system can go and shut them all down properly.&lt;br /&gt;
&lt;br /&gt;
==Tips==&lt;br /&gt;
&lt;br /&gt;
Note that there is nothing stopping you from running remote kernels on both the cluster nodes and the crunchers, but this is not recommended. These machines have cores of very different speeds, so the faster machines may end up waiting for the slower ones to finish, which can be a waste of resources. It's best to run things on the cluster OR the crunchers, but not both.&lt;br /&gt;
&lt;br /&gt;
The number of kernels you run on each machine may give you different outcomes depending on your job. So, if you run 8 kernels per node, maybe your job will run faster, maybe not. It's best to experiment with a subset of your job to find the optimum number of kernels per node for your job.&lt;br /&gt;
&lt;br /&gt;
For the cluster nodes, each one has 8 cpu threads, so ideally if no one else is using the cluster, your job should load up each node to a load of 8 so that you're making full use of each node's CPU. If it's less than 8, you're not using the whole CPU, and if it's more than 8, some of your kernels are waiting for resources.&lt;br /&gt;
&lt;br /&gt;
You can see the status of the cluster nodes here:&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/BPzrVrznz/cluster-pnodes Cluster Pnodes] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
Note that these graphs will usually show you the last 24 hours, so you can click on the little clock on the upper right and choose a different time interval, like the last hour, so you have a better picture of what is going on. The graphs update only once a minute, so be patient.&lt;br /&gt;
&lt;br /&gt;
If you're running remote kernels on the Ryzen crunchers, they have 32 CPU threads each, so you would want to load them up to 32 to make full use of the CPU, if it was zero before you started. Note that other people are using these machines, so be a good neighbor and don't hog the entire machine.&lt;br /&gt;
&lt;br /&gt;
The status of the crunchers is here: &lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/f8zUy_L7k/crunchers Crunchers] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
==Power Consumption==&lt;br /&gt;
If you really want to geek out, you can look at the server room power consumption when you're running your job.&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/_wfSvpjmz/server-room-power Server Room Power] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
The purple circuit is the cluster. The power usage is around 650W when nothing is going on. It will go up when the machines get busy. Note that if the power on the purple circuit goes above 2700 watts, the cluster may shut down. I'm worried about going over this limit, but so far it hasn't happened. Can you send the cluster enough math to kill it? Try it and let me know! The solution is easy, we can put some of the nodes on another circuit. For now they're all on the same one so we can more easily measure the power usage of the cluster.&lt;br /&gt;
&lt;br /&gt;
Note the room temperature! The room has a large cooler that uses Cornell Lake-source cooling to cool the room, but your job will still probably warm up the room by a few degrees.&lt;br /&gt;
&lt;br /&gt;
The blue circuit is the Ryzen GPU cruncher machines. That circuit also has a limit of 2700 watts total. There should be enough headroom there to be able to load up all of those machines including their GPUs without overloading the circuit. Again, you might be able to make it shut down. That's ok, as long as you tell the system administrator. It's better to cause this problem now so we can rearrange things to avoid it in the future. But, so far, there seems to be enough capacity on this circuit to handle all of the cruncher machines.&lt;br /&gt;
&lt;br /&gt;
==Network Bottlenecks==&lt;br /&gt;
&lt;br /&gt;
The crunchers are on a 10Gb/s network. The cluster nodes are each 1Gb/s, but their link as a group to the main network is 10Gb/s. There is one limitation at this time: the crunchers and the cluster are on different subnets, and the connection between the subnets is currently limited to 1Gb/s. This should not impact your job, but if you open the Crunchers dashboard, look at the machine where your main Mathematica is running, and check its 'Network' graph, a machine maxing out at 1Gb/s during your job means the job is being constrained by the subnet bottleneck. If this is happening, please email humphrey@cornell.edu and let me know. The bottleneck can be removed, but we haven't gotten to that yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=150</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=150"/>
		<updated>2022-07-25T19:32:17Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Math Department Linux Machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their fully qualified domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|boole||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 1Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Virtual Machines'''&lt;br /&gt;
&lt;br /&gt;
These are virtual machines made available with extra resources from the department servers.&lt;br /&gt;
NOTE: At this time these machines are not available due to upgrades on their physical systems.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||vCores ||RAM  ||GPU ||Net&lt;br /&gt;
|- &lt;br /&gt;
|conway || VM on AMD Epyc Milan || 14 || 64GB || N/A ||10Gb&lt;br /&gt;
|- &lt;br /&gt;
|dynkin || VM on AMD Epyc Milan || 14 || 64GB || N/A ||10Gb&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private Machines'''&lt;br /&gt;
&lt;br /&gt;
These machines are the property of faculty members and may only be used with their permission.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net ||Owner&lt;br /&gt;
|- &lt;br /&gt;
|leo || Xeon E5-2698 || 20/40 || 256GB || N/A || 10Gb || A. Townsend&lt;br /&gt;
|- &lt;br /&gt;
|wooster ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 1Gb || D. Barbasch&lt;br /&gt;
|- &lt;br /&gt;
|zeno ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb || A. Vladimirsky&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=149</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=149"/>
		<updated>2022-07-21T16:11:47Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Tips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including lots of menus in Mathematica for the remote kernel settings. The instructions for those are vague and confusing, so for now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could certainly be generated more easily, for instance with a loop that builds the list of machine strings passed to LaunchKernels instead of cutting and pasting that giant command (a short sketch of this appears at the end of the walkthrough below).&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on two of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey and fibonacci. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, right-click that cell and choose 'Evaluate Cell'. The system will launch the remote kernels, and show its progress while it does that. Once all the kernels are launched, you can run calculations on them. To close those kernels, do&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and evaluate the cell. The system will close the remote kernels. Please remember to do a CloseKernels[] at the end of your session. Closing Mathematica SHOULD close the remote kernels, but it's better to be sure.&lt;br /&gt;
&lt;br /&gt;
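In between LaunchKernels and CloseKernels, anything you run with Mathematica's parallel functions is spread over the remote kernels. A trivial sketch, with PrimeQ on Mersenne numbers standing in for a real workload:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;(* a small parallel job distributed across the launched kernels *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;ParallelTable[PrimeQ[2^i - 1], {i, 1, 64}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;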
Now, we'll launch four kernels each on the 40 node cluster. There's a better way to do this than copying and pasting this big command, like using a loop to generate the node names, but for now this works. Note that it will take a while to launch all of the kernels because it launches them one at a time, and this command should give you 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?4&amp;quot;,&amp;quot;ssh://pnode02/?4&amp;quot;,&amp;quot;ssh://pnode03/?4&amp;quot;,&amp;quot;ssh://pnode04/?4&amp;quot;,&amp;quot;ssh://pnode05/?4&amp;quot;,&amp;quot;ssh://pnode06/?4&amp;quot;,&amp;quot;ssh://pnode07/?4&amp;quot;,&amp;quot;ssh://pnode08/?4&amp;quot;,&amp;quot;ssh://pnode09/?4&amp;quot;,&amp;quot;ssh://pnode10/?4&amp;quot;,&amp;quot;ssh://pnode11/?4&amp;quot;,&amp;quot;ssh://pnode12/?4&amp;quot;,&amp;quot;ssh://pnode13/?4&amp;quot;,&amp;quot;ssh://pnode14/?4&amp;quot;,&amp;quot;ssh://pnode15/?4&amp;quot;,&amp;quot;ssh://pnode16/?4&amp;quot;,&amp;quot;ssh://pnode17/?4&amp;quot;,&amp;quot;ssh://pnode18/?4&amp;quot;,&amp;quot;ssh://pnode19/?4&amp;quot;,&amp;quot;ssh://pnode20/?4&amp;quot;,&amp;quot;ssh://pnode21/?4&amp;quot;,&amp;quot;ssh://pnode22/?4&amp;quot;,&amp;quot;ssh://pnode23/?4&amp;quot;,&amp;quot;ssh://pnode24/?4&amp;quot;,&amp;quot;ssh://pnode25/?4&amp;quot;,&amp;quot;ssh://pnode26/?4&amp;quot;,&amp;quot;ssh://pnode27/?4&amp;quot;,&amp;quot;ssh://pnode28/?4&amp;quot;,&amp;quot;ssh://pnode29/?4&amp;quot;,&amp;quot;ssh://pnode30/?4&amp;quot;,&amp;quot;ssh://pnode31/?4&amp;quot;,&amp;quot;ssh://pnode32/?4&amp;quot;,&amp;quot;ssh://pnode33/?4&amp;quot;,&amp;quot;ssh://pnode34/?4&amp;quot;,&amp;quot;ssh://pnode35/?4&amp;quot;,&amp;quot;ssh://pnode36/?4&amp;quot;,&amp;quot;ssh://pnode37/?4&amp;quot;,&amp;quot;ssh://pnode38/?4&amp;quot;,&amp;quot;ssh://pnode39/?4&amp;quot;,&amp;quot;ssh://pnode40/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy this command from this page and paste it into Mathematica. Note that if you're using X2go and you copy on the local machine, you may need to paste using the 'Paste' command under the 'Edit' menu, because different systems handle copying and pasting differently.&lt;br /&gt;
&lt;br /&gt;
Once you have pasted that big command in there, right-click the cell and evaluate it. You'll see the progress as all of the remote kernels are started up. Once it's done, you'll have 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
When you are finished using them, do a &lt;br /&gt;
&lt;br /&gt;
 CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
so the system can go and shut them all down properly.&lt;br /&gt;
&lt;br /&gt;
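As mentioned earlier, the long machine list does not have to be pasted by hand. Here is a minimal sketch of the loop approach, under the same assumptions as the big command above (nodes pnode01 through pnode40, 4 kernels each):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;(* build the list of ssh kernel specifications with a loop instead of pasting it *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;nodes = Table[StringJoin[&amp;quot;ssh://pnode&amp;quot;, IntegerString[i, 10, 2], &amp;quot;/?4&amp;quot;], {i, 1, 40}];&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[nodes]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Evaluating that cell should give the same 160 kernels as the pasted command, and CloseKernels[] shuts them down as before.&lt;br /&gt;
&lt;br /&gt;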
==Tips==&lt;br /&gt;
&lt;br /&gt;
Note that there is nothing stopping you from running remote kernels on both the cluster nodes and the crunchers, but this is not recommended. These machines have cores of very different speeds, so the faster machines may end up waiting for the slower ones to finish, which can waste resources. It's best to run things on the cluster, OR the crunchers, but not both.&lt;br /&gt;
&lt;br /&gt;
The number of kernels you run on each machine may give you different outcomes depending on your job. So, if you run 8 kernels per node, maybe your job will run faster, maybe not. It's best to experiment with a subset of your job to find the optimum number of kernels per node for your job.&lt;br /&gt;
&lt;br /&gt;
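One way to run that experiment is to time a small, representative slice of the job after each launch configuration and compare. This is only a sketch; f[i] below is a placeholder for whatever your job actually evaluates:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;(* relaunch with a different kernels-per-node setting, then re-run this timing *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;AbsoluteTiming[ParallelTable[f[i], {i, 1, 1000}];]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;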
For the cluster nodes, each one has 8 CPU threads, so ideally, if no one else is using the cluster, your job should load up each node to a load of 8 so that you're making full use of each node's CPU. If it's less than 8, you're not using the whole CPU, and if it's more than 8, some of your kernels are waiting for resources.&lt;br /&gt;
&lt;br /&gt;
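You can also ask Mathematica directly how many kernels you have and how many landed on each machine, which is a quick sanity check to go with the load graphs linked below:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;(* total kernel count, and a tally of kernels per machine *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;Length[Kernels[]]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;Tally[ParallelEvaluate[$MachineName]]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;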
You can see the status of the cluster nodes here: &lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/BPzrVrznz/cluster-pnodes Cluster Pnodes] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
Note that these graphs usually show the last 24 hours; you can click on the little clock in the upper right and choose a different time interval, like the last hour, to get a better picture of what is going on. The graphs update only once a minute, so be patient.&lt;br /&gt;
&lt;br /&gt;
If you're running remote kernels on the Ryzen crunchers, they have 32 CPU threads each, so you would want to load them up to 32 to make full use of the CPU, assuming the load was zero before you started. Other people are using these machines too, so be a good neighbor and don't hog the entire machine.&lt;br /&gt;
&lt;br /&gt;
The status of the crunchers is here: &lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/f8zUy_L7k/crunchers Crunchers] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
==Power Consumption==&lt;br /&gt;
If you really want to geek out, you can look at the server room power consumption when you're running your job.&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/_wfSvpjmz/server-room-power Server Room Power] (You must be on the Cornell network or VPN to use this link)&lt;br /&gt;
&lt;br /&gt;
The purple circuit is the cluster. The power usage is around 650W when nothing is going on, and it goes up when the machines get busy. Note that if the power on the purple circuit goes above 2700 watts, the cluster may shut down. I'm worried about going over this limit, but so far it hasn't happened. Can you send the cluster enough math to kill it? Try it and let me know! The solution is easy: we can put some of the nodes on another circuit. For now they're all on the same circuit so we can more easily measure the cluster's power usage.&lt;br /&gt;
&lt;br /&gt;
Note the room temperature! The room has a large cooler fed by Cornell's Lake Source Cooling, but your job will still probably warm the room up by a few degrees.&lt;br /&gt;
&lt;br /&gt;
The blue circuit is the Ryzen GPU cruncher machines. That circuit also has a limit of 2700 watts total. So far there seems to be enough headroom to load up all of those machines, including their GPUs, without overloading the circuit, but you might still manage to make it shut down. That's OK, as long as you tell the system administrator; it's better to cause this problem now so we can rearrange things to avoid it in the future.&lt;br /&gt;
&lt;br /&gt;
==Network Bottlenecks==&lt;br /&gt;
&lt;br /&gt;
The crunchers are on a 10Gb/s network. The cluster nodes are each 1Gb/s, but their link as a group to the main network is 10Gb/s. There is one limitation at this time: the crunchers and the cluster are on different subnets, and the connection between the subnets is currently limited to 1Gb/s. This should not impact your job, but to check, open the Crunchers dashboard, find the machine where your main Mathematica is running, and look at its 'Network' graph. If that machine is maxing out at 1Gb/s during your job, then your job is being constrained by the subnet bottleneck. If this is happening, please email humphrey@cornell.edu and let me know. The bottleneck can be removed, but we haven't gotten to that yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=144</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=144"/>
		<updated>2022-07-21T15:59:54Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Power Consumption */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including lots of menus in Mathematica for the remote kernel settings. The instructions for those are vague and confusing, so for now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could certainly be generated more easily, for instance with a loop that builds the list of machine strings passed to LaunchKernels instead of cutting and pasting that giant command.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on two of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey and fibonacci. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, right-click that cell and choose 'Evaluate Cell'. The system will launch the remote kernels, and show its progress while it does that. Once all the kernels are launched, you can run calculations on them. To close those kernels, do&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and evaluate the cell. The system will close the remote kernels. Please remember to do a CloseKernels[] at the end of your session. Closing Mathematica SHOULD close the remote kernels, but it's better to be sure.&lt;br /&gt;
&lt;br /&gt;
Now, we'll launch four kernels each on the 40 node cluster. There's a better way to do this than copying and pasting this big command, like using a loop to generate the node names, but for now this works. Note that it will take a while to launch all of the kernels because it launches them one at a time, and this command should give you 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?4&amp;quot;,&amp;quot;ssh://pnode02/?4&amp;quot;,&amp;quot;ssh://pnode03/?4&amp;quot;,&amp;quot;ssh://pnode04/?4&amp;quot;,&amp;quot;ssh://pnode05/?4&amp;quot;,&amp;quot;ssh://pnode06/?4&amp;quot;,&amp;quot;ssh://pnode07/?4&amp;quot;,&amp;quot;ssh://pnode08/?4&amp;quot;,&amp;quot;ssh://pnode09/?4&amp;quot;,&amp;quot;ssh://pnode10/?4&amp;quot;,&amp;quot;ssh://pnode11/?4&amp;quot;,&amp;quot;ssh://pnode12/?4&amp;quot;,&amp;quot;ssh://pnode13/?4&amp;quot;,&amp;quot;ssh://pnode14/?4&amp;quot;,&amp;quot;ssh://pnode15/?4&amp;quot;,&amp;quot;ssh://pnode16/?4&amp;quot;,&amp;quot;ssh://pnode17/?4&amp;quot;,&amp;quot;ssh://pnode18/?4&amp;quot;,&amp;quot;ssh://pnode19/?4&amp;quot;,&amp;quot;ssh://pnode20/?4&amp;quot;,&amp;quot;ssh://pnode21/?4&amp;quot;,&amp;quot;ssh://pnode22/?4&amp;quot;,&amp;quot;ssh://pnode23/?4&amp;quot;,&amp;quot;ssh://pnode24/?4&amp;quot;,&amp;quot;ssh://pnode25/?4&amp;quot;,&amp;quot;ssh://pnode26/?4&amp;quot;,&amp;quot;ssh://pnode27/?4&amp;quot;,&amp;quot;ssh://pnode28/?4&amp;quot;,&amp;quot;ssh://pnode29/?4&amp;quot;,&amp;quot;ssh://pnode30/?4&amp;quot;,&amp;quot;ssh://pnode31/?4&amp;quot;,&amp;quot;ssh://pnode32/?4&amp;quot;,&amp;quot;ssh://pnode33/?4&amp;quot;,&amp;quot;ssh://pnode34/?4&amp;quot;,&amp;quot;ssh://pnode35/?4&amp;quot;,&amp;quot;ssh://pnode36/?4&amp;quot;,&amp;quot;ssh://pnode37/?4&amp;quot;,&amp;quot;ssh://pnode38/?4&amp;quot;,&amp;quot;ssh://pnode39/?4&amp;quot;,&amp;quot;ssh://pnode40/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy this command from this page and paste it into Mathematica. Note that if you're using X2go and you copy on the local machine, you may need to paste using the 'Paste' command under the 'Edit' menu, because different systems handle copying and pasting differently.&lt;br /&gt;
&lt;br /&gt;
Once you have pasted that big command in there, right-click the cell and evaluate it. You'll see the progress as all of the remote kernels are started up. Once it's done, you'll have 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
When you are finished using them, do a &lt;br /&gt;
&lt;br /&gt;
 CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
so the system can go and shut them all down properly.&lt;br /&gt;
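&lt;br /&gt;
As mentioned above, the node list can also be generated with a short loop instead of typing out the big command by hand. Here is one way to do it; this is only a sketch from someone still learning Mathematica syntax, so treat it as a starting point rather than the definitive method:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;(* build the list &amp;quot;ssh://pnode01/?4&amp;quot; through &amp;quot;ssh://pnode40/?4&amp;quot; *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;nodes = Table[StringJoin[&amp;quot;ssh://pnode&amp;quot;, IntegerString[i, 10, 2], &amp;quot;/?4&amp;quot;], {i, 40}];&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[nodes]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;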
&lt;br /&gt;
==Tips==&lt;br /&gt;
&lt;br /&gt;
Note that there is nothing stopping you from running remote kernels on both the cluster nodes and the crunchers, but this is not recommended. These machines have cores that run at very different speeds, so the faster machines may end up waiting for the slower ones to finish, which can waste resources. It's best to run things on the cluster OR the crunchers, but not both.&lt;br /&gt;
&lt;br /&gt;
The number of kernels you run on each machine may give you different outcomes depending on your job: running 8 kernels per node might make your job faster, or it might not. It's best to experiment with a subset of your job to find the optimal number of kernels per node.&lt;br /&gt;
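&lt;br /&gt;
One rough way to compare settings is to time a small, representative slice of your job at each kernel count. Something along these lines (only a sketch, with a dummy computation standing in for your real work):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[];&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?8&amp;quot;}];  (* repeat with ?4, ?6, ?8 and compare *)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;First[AbsoluteTiming[ParallelTable[Total[Sin[RandomReal[1, 10^6]]], {32}]]]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;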
&lt;br /&gt;
For the cluster nodes, each one has 8 CPU threads, so ideally, if no one else is using the cluster, your job should bring each node up to a load of 8 so that you're making full use of each node's CPU. If the load is less than 8, you're not using the whole CPU; if it's more than 8, some of your kernels are waiting for resources.&lt;br /&gt;
&lt;br /&gt;
You can see the status of the cluster nodes here:&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/BPzrVrznz/cluster-pnodes Cluster Pnodes]&lt;br /&gt;
&lt;br /&gt;
If you're running remote kernels on the Ryzen crunchers, they have 32 CPU threads each, so you would want to load them up to 32 to make full use of the CPU, assuming the load was zero before you started. Note that other people are using these machines, so be a good neighbor and don't hog the entire machine.&lt;br /&gt;
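&lt;br /&gt;
For example, to take roughly half of one cruncher instead of the whole machine, you could launch something like this (just an illustration of the syntax; adjust the host and the kernel count to suit):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?16&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;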
&lt;br /&gt;
The status of the crunchers is here: [http://graph.math.cornell.edu:3000/d/f8zUy_L7k/crunchers Crunchers]&lt;br /&gt;
&lt;br /&gt;
==Power Consumption==&lt;br /&gt;
If you really want to geek out, you can look at the server room power consumption when you're running your job.&lt;br /&gt;
&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/_wfSvpjmz/server-room-power Server Room Power]&lt;br /&gt;
&lt;br /&gt;
The purple circuit is the cluster. The power usage is around 650W when nothing is going on, and it goes up when the machines get busy. Note that if the power on the purple circuit goes above 2700 watts, the cluster may shut down. I'm worried about going over this limit, but so far it hasn't happened. Can you send the cluster enough math to kill it? Try it and let me know! The solution is easy: we can put some of the nodes on another circuit. For now they're all on the same one so we can more easily measure the power usage of the cluster.&lt;br /&gt;
&lt;br /&gt;
Note the room temperature! The room has a large cooler that uses Cornell's Lake Source Cooling, but your job will still probably warm up the room by a few degrees.&lt;br /&gt;
&lt;br /&gt;
The blue circuit is the Ryzen GPU cruncher machines. That circuit also has a limit of 2700 watts total. There should be enough headroom there to be able to load up all of those machines including their GPUs without overloading the circuit. Again, you might be able to make it shut down. That's ok, as long as you tell the system administrator. It's better to cause this problem now so we can rearrange things to avoid it in the future. But, so far, there seems to be enough capacity on this circuit to handle all of the cruncher machines.&lt;br /&gt;
&lt;br /&gt;
==Network Bottlenecks==&lt;br /&gt;
&lt;br /&gt;
The crunchers are on a 10Gb/s network. The cluster nodes are each 1Gb/s, but their link as a group to the main network is 10Gb/s. There is a limitation at this time because the crunchers and the cluster are on different subnets, and the connection between the subnets is currently limited to 1Gb/s. This should not impact your job, but you can check: on the Crunchers dashboard, find the machine where you are running your main Mathematica session and look at its 'Network' graph. If that machine is maxing out at 1Gb/s during your job, then your job is being constrained by the subnet bottleneck. If this is happening, please email humphrey@cornell.edu and let me know. The bottleneck can be removed, but we haven't gotten to that yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=143</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=143"/>
		<updated>2022-07-21T15:59:25Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Launching Remote Kernels in Mathematica */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including lots of menus in Mathematica for the remote kernel settings. The instructions on this are vague and confusing, so for now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way to do it, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could be done much more easily, for instance, using a loop to generate the string for the machine list passed to the LaunchKernels command, or some other loop that will do the same thing without having to cut and paste that giant command.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on two of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey and fibonacci. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, right-click that cell and choose 'Evaluate Cell'. The system will launch the remote kernels, and show its progress while it does that. Once all the kernels are launched, you can run calculations on them. To close those kernels, do&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;CloseKernels[]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and evaluate the cell. The system will close the remote kernels. Please remember to do a CloseKernels[] at the end of your session. Closing Mathematica SHOULD close the remote kernels, but it's better to be sure.&lt;br /&gt;
&lt;br /&gt;
Now, we'll launch four kernels each on the 40 node cluster. There's a better way to do this than copying and pasting this big command, like using a loop to generate the node names, but for now this works. Note that it will take a while to launch all of the kernels because it launches them one at a time, and this command should give you 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://pnode01/?4&amp;quot;,&amp;quot;ssh://pnode02/?4&amp;quot;,&amp;quot;ssh://pnode03/?4&amp;quot;,&amp;quot;ssh://pnode04/?4&amp;quot;,&amp;quot;ssh://pnode05/?4&amp;quot;,&amp;quot;ssh://pnode06/?4&amp;quot;,&amp;quot;ssh://pnode07/?4&amp;quot;,&amp;quot;ssh://pnode08/?4&amp;quot;,&amp;quot;ssh://pnode09/?4&amp;quot;,&amp;quot;ssh://pnode10/?4&amp;quot;,&amp;quot;ssh://pnode11/?4&amp;quot;,&amp;quot;ssh://pnode12/?4&amp;quot;,&amp;quot;ssh://pnode13/?4&amp;quot;,&amp;quot;ssh://pnode14/?4&amp;quot;,&amp;quot;ssh://pnode15/?4&amp;quot;,&amp;quot;ssh://pnode16/?4&amp;quot;,&amp;quot;ssh://pnode17/?4&amp;quot;,&amp;quot;ssh://pnode18/?4&amp;quot;,&amp;quot;ssh://pnode19/?4&amp;quot;,&amp;quot;ssh://pnode20/?4&amp;quot;,&amp;quot;ssh://pnode21/?4&amp;quot;,&amp;quot;ssh://pnode22/?4&amp;quot;,&amp;quot;ssh://pnode23/?4&amp;quot;,&amp;quot;ssh://pnode24/?4&amp;quot;,&amp;quot;ssh://pnode25/?4&amp;quot;,&amp;quot;ssh://pnode26/?4&amp;quot;,&amp;quot;ssh://pnode27/?4&amp;quot;,&amp;quot;ssh://pnode28/?4&amp;quot;,&amp;quot;ssh://pnode29/?4&amp;quot;,&amp;quot;ssh://pnode30/?4&amp;quot;,&amp;quot;ssh://pnode31/?4&amp;quot;,&amp;quot;ssh://pnode32/?4&amp;quot;,&amp;quot;ssh://pnode33/?4&amp;quot;,&amp;quot;ssh://pnode34/?4&amp;quot;,&amp;quot;ssh://pnode35/?4&amp;quot;,&amp;quot;ssh://pnode36/?4&amp;quot;,&amp;quot;ssh://pnode37/?4&amp;quot;,&amp;quot;ssh://pnode38/?4&amp;quot;,&amp;quot;ssh://pnode39/?4&amp;quot;,&amp;quot;ssh://pnode40/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy this command from this page and paste it into Mathematica. Note that if you're using X2go and you copy on the local machine, you may need to paste using the 'Paste' command under the 'Edit' menu, because different systems handle cutting and pasting differently.&lt;br /&gt;
&lt;br /&gt;
Once you have pasted that big command in there, right-click the cell and evaluate it. You'll see the progress as all of the remote kernels are started up. Once it's done, you'll have 160 remote kernels.&lt;br /&gt;
&lt;br /&gt;
When you are finished using them, do a &lt;br /&gt;
&lt;br /&gt;
 CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
so the system can go and shut them all down properly.&lt;br /&gt;
&lt;br /&gt;
==Tips==&lt;br /&gt;
&lt;br /&gt;
Note that there is nothing stopping you from running remote kernels on both the cluster nodes and the crunchers, but this is not recommended. These machines have cores that are very different speeds, so the faster machine may end up waiting for the slower machines to finish, so this may just be a waste of resources. It's best to run things on the cluster, OR the crunchers, but not both.&lt;br /&gt;
&lt;br /&gt;
The number of kernels you run on each machine may give you different outcomes depending on your job. So, if you run 8 kernels per node, maybe your job will run faster, maybe not. It's best to experiment with a subset of your job to find the optimum number of kernels per node for your job.&lt;br /&gt;
&lt;br /&gt;
For the cluster nodes, each one has 8 cpu threads, so ideally if no one else is using the cluster, your job should load up each node to a load of 8 so that you're making full use of each node's CPU. If it's less than 8, you're not using the whole CPU, and if it's more than 8, some of your kernels are waiting for resources.&lt;br /&gt;
&lt;br /&gt;
You can see the status of the cluster nodes here:&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/BPzrVrznz/cluster-pnodes Cluster Pnodes]&lt;br /&gt;
&lt;br /&gt;
If you're running remote kernels on the Ryzen crunchers, they have 32 CPU threads each, so you would want to load them up to 32 to make full use of the CPU, if it was zero before you started. Note that other people are using these machines, so be a good neighbor and don't hog the entire machine.&lt;br /&gt;
&lt;br /&gt;
The status of the crunchers is here: [http://graph.math.cornell.edu:3000/d/f8zUy_L7k/crunchers Crunchers]&lt;br /&gt;
&lt;br /&gt;
==Power Consumption==&lt;br /&gt;
If you really want to geek out, you can look at the server room power consumption when you're running your job.&lt;br /&gt;
[http://graph.math.cornell.edu:3000/d/_wfSvpjmz/server-room-power Server Room Power]&lt;br /&gt;
&lt;br /&gt;
The purple circuit is the cluster. The power usage is around 650W when nothing is going on. It will go up when the machines get busy. Note that if the power on the purple circuit goes above 2700 watts, the cluster may shut down. I'm worried about going over this limit, but so far it hasn't happened. Can you send the cluster enough math to kill it? Try it and let me know! The solution is easy, we can put some of the nodes on another circuit. For now they're all on the same one so we can more easily measure the power usage of the cluster.&lt;br /&gt;
&lt;br /&gt;
Note the room temperature! The room has a large cooler that uses Cornell Lake-source cooling to cool the room, but your job will still probably warm up the room by a few degrees.&lt;br /&gt;
&lt;br /&gt;
The blue circuit is the Ryzen GPU cruncher machines. That circuit also has a limit of 2700 watts total. There should be enough headroom there to be able to load up all of those machines including their GPUs without overloading the circuit. Again, you might be able to make it shut down. That's ok, as long as you tell the system administrator. It's better to cause this problem now so we can rearrange things to avoid it in the future. But, so far, there seems to be enough capacity on this circuit to handle all of the cruncher machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Network Bottlenecks==&lt;br /&gt;
&lt;br /&gt;
The crunchers are on a 10Gb/s network. The cluster nodes are each 1Gb/s, but their link as a group to the main network is 10Gb/s. There is a limitation at this time because the crunchers and the cluster are on different subnets, and the connection between the subnets is limited to 1Gb/s at this time. This should not impact your job, but if you go to crunchers and look at the machine where you are running your main Mathematica, and check the 'Network' graph, if that machine is maxing out at 1Gb/s during your job, then your job is being constrained by the subnet bottleneck. If this is happening, please email humphrey@cornell.edu and let me know about this. The bottleneck can be removed but we haven't gotten to that yet.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=142</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=142"/>
		<updated>2022-07-21T15:28:02Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Launching Remote Kernels in Mathematica */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including lots of menus in Mathematica for the remote kernel settings. The instructions on this are vague and confusing, so for now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way to do it, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could be done much more easily, for instance, using a loop to generate the string for the machine list passed to the LaunchKernels command, or some other loop that will do the same thing without having to cut and paste that giant command.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on two of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey and fibonacci. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;}]&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=141</id>
		<title>Mathematica Remote Kernels</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Remote_Kernels&amp;diff=141"/>
		<updated>2022-07-21T15:26:48Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;===Launching Remote Kernels in Mathematica=== This page assumes that you've already followed the instructions in Cluster SSH access and you have all of that working.  Ther...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Launching Remote Kernels in Mathematica===&lt;br /&gt;
This page assumes that you've already followed the instructions in [[Cluster SSH access]] and you have all of that working.&lt;br /&gt;
&lt;br /&gt;
There are other ways to set this up, including lots of menus in Mathematica for the remote kernel settings. The instructions on this are vague and confusing, so for now we're just entering the information directly in the Mathematica code window, because it's simple and it works. Any suggestions for better ways to do this are welcome, and when we find a better way to do it, we'll update the documentation.&lt;br /&gt;
&lt;br /&gt;
Also, since this is written by someone who is not very familiar with Mathematica syntax, the large 'LaunchKernels' command for 40 nodes shown below could be done much more easily, for instance, using a loop to generate the string for the machine list passed to the LaunchKernels command, or some other loop that will do the same thing without having to cut and paste that giant command.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
On your Linux desktop, start up Mathematica.&lt;br /&gt;
&lt;br /&gt;
We'll start by launching kernels on two of the other cruncher machines, since this is a simple example to start with.&lt;br /&gt;
&lt;br /&gt;
For this example, we're going to start 4 kernels each on ramsey and fibonacci. &lt;br /&gt;
&lt;br /&gt;
In the Mathematica window, type the following:&lt;br /&gt;
&lt;br /&gt;
 LaunchKernels[{&amp;quot;ssh://ramsey/?4&amp;quot;,&amp;quot;ssh://fibonacci/?4&amp;quot;}]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=140</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=140"/>
		<updated>2022-07-21T14:57:26Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster is 40 machines with i7 processors. The first 32 are 6th-generation processors, and the last 8 are 7th generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with ssh and Mathematica. We're in the process of getting it working with other applications including MPI, Matlab, Maple, and Magma. Check back on this page because support is rapidly evolving.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]], including testing your access with pdsh commands.&lt;br /&gt;
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_SSH_access&amp;diff=139</id>
		<title>Cluster SSH access</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_SSH_access&amp;diff=139"/>
		<updated>2022-07-21T14:56:19Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cluster SSH Access ==&lt;br /&gt;
&lt;br /&gt;
Most cluster applications will be controlled using the SSH protocol. &lt;br /&gt;
&lt;br /&gt;
In order for this to work, you will have to set up SSH keys in your math account so that you can securely access the cluster without entering your password for each operation.&lt;br /&gt;
&lt;br /&gt;
The first step is creating a public and private SSH key for your account. This part is pretty simple. At the command prompt, type&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen&lt;br /&gt;
&lt;br /&gt;
Hit enter for all the prompts. When it prompts you to enter a 'passphrase', just hit enter so that there is no passphrase.&lt;br /&gt;
&lt;br /&gt;
Once this has run, in your account there will be a hidden directory (filenames starting with a dot are hidden) called .ssh&lt;br /&gt;
&lt;br /&gt;
Inside this folder, there are two files, id_rsa and id_rsa.pub. id_rsa is your private key; it should not be shared with anyone, because it is the key that gives you access.&lt;br /&gt;
id_rsa.pub is the public key associated with your private key. It can be shared and emailed because it can't be used to give access by itself; it only gives access to someone who has the related private key.&lt;br /&gt;
&lt;br /&gt;
Now that you have created the keys, you must grant the public key access to your account. There is a file in your .ssh directory called authorized_keys (it may or may not already exist). To append your public key to your authorized_keys file, type the following command:&lt;br /&gt;
&lt;br /&gt;
 cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
Now you have ssh access to your own account on other machines without a password, but we're not ready for cluster access just yet; there are still a few more steps. However, you can test your access now by doing&lt;br /&gt;
&lt;br /&gt;
 ssh ramsey&lt;br /&gt;
&lt;br /&gt;
The system may prompt you to accept the host key for ramsey if it's not already in your known_hosts file. Say yes to the prompt.&lt;br /&gt;
&lt;br /&gt;
You should now be logged in to ramsey, without having to have typed your password.&lt;br /&gt;
&lt;br /&gt;
Exit out of ramsey by typing 'exit'. You should now be back at the machine where you started to go on to the next step.&lt;br /&gt;
&lt;br /&gt;
==Setting up your known_hosts file.==&lt;br /&gt;
&lt;br /&gt;
When you connect to a new host using ssh, the system will check whether the host's public hostkey is in your .ssh/known_hosts file. If it is not, the system prompts you to accept the key. If the key is already there, you will be logged in. However, if a key is there for that host and it does not match the one that the remote host has sent, you will receive a warning and the command will not continue. &lt;br /&gt;
&lt;br /&gt;
For accessing a cluster, it would be really tedious to have to connect to each host and then say 'yes' to the prompt in order to get the keys into your known_hosts file. So, to avoid this, we have a script that you can run which will remove any outdated keys from your known_hosts file and set up the file so you have automatic access to the cluster and other calculation machines.&lt;br /&gt;
&lt;br /&gt;
At the command prompt, type&lt;br /&gt;
&lt;br /&gt;
 setup_known_hosts&lt;br /&gt;
&lt;br /&gt;
You should see the progress of your known_hosts file being set up. Now your keys are ready for cluster access.&lt;br /&gt;
&lt;br /&gt;
==Testing Cluster Access==&lt;br /&gt;
&lt;br /&gt;
To test your access to the cluster, you'll use the pdsh command. pdsh uses ssh to send commands in parallel to a group of machines. Remember to send only commands that will not run for a long time or prompt for input; otherwise your pdsh command will hang while the commands on the other end wait for input that never comes. So, you can send commands like 'uptime' or 'date' that will return output and then exit.&lt;br /&gt;
&lt;br /&gt;
'date' is a good example: it displays the system time on the remote machine and then exits. It's a good idea to run it to make sure all of the remote machines have synchronized clocks. Type the following command (you may want to stretch the terminal window vertically to see all 40 lines of output):&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w pnode[01-40] 'date'&lt;br /&gt;
&lt;br /&gt;
This will display the date from each of the remote machines. This should run without errors and come back to a command prompt. If it has errors or does not finish, there may be a problem with either your keys or connectivity to the cluster.&lt;br /&gt;
&lt;br /&gt;
Don't worry if they differ by one second, because the time may have changed during the command. If they differ by more than a second, there may be a problem, but it should not affect most commands.&lt;br /&gt;
&lt;br /&gt;
To see how busy the remote machines are, you can do:&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w pnode[01-40] 'uptime'&lt;br /&gt;
&lt;br /&gt;
which will tell you how long each machine has been up and its system load.&lt;br /&gt;
&lt;br /&gt;
You can use this same command to run things on a subset of nodes, like this command to check the load on nodes 32 through 40:&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w pnode[32-40] 'uptime'&lt;br /&gt;
&lt;br /&gt;
Or other computation hosts where you have keys set up, such as fibonacci and ramsey:&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w fibonacci,ramsey 'uptime'&lt;br /&gt;
&lt;br /&gt;
Once you have these commands working, you're ready to configure your distributed application.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_SSH_access&amp;diff=138</id>
		<title>Cluster SSH access</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_SSH_access&amp;diff=138"/>
		<updated>2022-07-21T14:51:46Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot; == Cluster SSH Access ==  Most cluster applications will be controlled using the SSH protocol.   In order for this to work, you will have to set up SSH keys in your math acco...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cluster SSH Access ==&lt;br /&gt;
&lt;br /&gt;
Most cluster applications will be controlled using the SSH protocol. &lt;br /&gt;
&lt;br /&gt;
In order for this to work, you will have to set up SSH keys in your math account so that you can securely access the cluster without entering your password for each operation.&lt;br /&gt;
&lt;br /&gt;
The first step is creating a public and private SSH key for your account. This part is pretty simple. At the command prompt, type&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen&lt;br /&gt;
&lt;br /&gt;
Hit enter for all the prompts. It will prompt you to enter a 'passphrase', just hit enter for that so that there is no passphrase.&lt;br /&gt;
&lt;br /&gt;
Once this has run, in your account there will be a hidden directory (filenames starting with a dot are hidden) called .ssh&lt;br /&gt;
&lt;br /&gt;
Inside this folder, there are two files, id_rsa and id_rsa.pub  id_rsa is your secret key. This key should not be shared with anyone, it is the key that gives you access.&lt;br /&gt;
id_rsa.pub is the public key associated with your private key. It can be shared and emailed because it can't be used to give access by itself, only to give access to someone who has the related private key.&lt;br /&gt;
&lt;br /&gt;
Now that you have created the keys, you must grant the public key access to your account. There is a file in your .ssh directory called authorized_keys (it may or may not already exist). To append your public key to your authorized_keys file, type the following command:&lt;br /&gt;
&lt;br /&gt;
 cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
Now you have ssh access to your own account on other machines without a password, but we're not ready for cluster access just yet, we still have some more steps. However, you can test your access now by doing&lt;br /&gt;
&lt;br /&gt;
 ssh ramsey&lt;br /&gt;
&lt;br /&gt;
The system may prompt you to accept the host key for ramsey if it's not already in your known_hosts file. Say yes to the prompt.&lt;br /&gt;
&lt;br /&gt;
You should now be logged in to ramsey, without having to have typed your password.&lt;br /&gt;
&lt;br /&gt;
Exit out of ramsey by typing 'exit'. You should now be back at the machine where you started to go on to the next step.&lt;br /&gt;
&lt;br /&gt;
==Setting up your known_hosts file.==&lt;br /&gt;
&lt;br /&gt;
When you connect to a new host using ssh, the system will check whether the host's public hostkey is in your .ssh/known_hosts file. If it is not, the system prompts you to accept the key. If the key is already there, you will be logged in. However, if a key is there for that host and it does not match the one that the remote host has sent, you will receive a warning and the command will not continue. &lt;br /&gt;
&lt;br /&gt;
For accessing a cluster, it would be really tedious to have to connect to each host and then say 'yes' to the prompt in order to get the keys into your known_hosts file. So, to avoid this, we have a script that you can run which will remove any outdated keys from your known_hosts file and set up the file so you have automatic access to the cluster and other calculation machines.&lt;br /&gt;
&lt;br /&gt;
At the command prompt, type&lt;br /&gt;
&lt;br /&gt;
 setup_known_hosts&lt;br /&gt;
&lt;br /&gt;
You should see the progress of your known_hosts file being set up. Now your keys are ready for cluster access.&lt;br /&gt;
&lt;br /&gt;
==Testing Cluster Access==&lt;br /&gt;
&lt;br /&gt;
To test your access to the cluster, you'll use the pdsh command. pdsh uses ssh to send commands in parallel to a group of machines. You have to remember to only send commands that are not going to run for a long time or prompt you for any input, because that will cause your pdsh command to hang as the commands on the other end wait for input that never comes. So, you can send commands like 'uptime' or 'date' that will return output and then exit.&lt;br /&gt;
&lt;br /&gt;
'date' is a good example, it will display the system time on the remote machine and then exit. It's a good idea to run it to make sure all of the remote machines have synchronized clocks. Type the following command: (you may want to stretch the terminal window vertically to see all 40 lines of output.)&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w pnode[01-40] 'date'&lt;br /&gt;
&lt;br /&gt;
This will display the date from each of the remote machines. Don't worry if they differ by one second, because the time may have changed during the command. If they differ by more than a second, there may be a problem, but it should not affect most commands.&lt;br /&gt;
&lt;br /&gt;
To see how busy the remote machines are, you can do:&lt;br /&gt;
&lt;br /&gt;
 pdsh -R ssh -w pnode[01-40] 'uptime'&lt;br /&gt;
&lt;br /&gt;
which will tell you how long each machine has been up and its system load.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=137</id>
		<title>Cluster Info</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Cluster_Info&amp;diff=137"/>
		<updated>2022-07-21T14:27:51Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot; == Math Cluster == The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distribut...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Cluster ==&lt;br /&gt;
The Math Department has an experimental computational cluster. It works fine for computation, but the main purpose is for configuring and testing distributed computing applications so that they can be run on larger clusters.&lt;br /&gt;
&lt;br /&gt;
The cluster is 40 machines with i7 processors. The first 32 are 6th-generation processors, and the last 8 are 7th generation processors.&lt;br /&gt;
&lt;br /&gt;
Jobs can be sent to all nodes in the cluster, or a subset of the nodes. &lt;br /&gt;
&lt;br /&gt;
At this time, the cluster is working with ssh and Mathematica. We're in the process of getting it working with other applications including MPI, Matlab, Maple, and Magma. Check back on this page because support is rapidly evolving.&lt;br /&gt;
&lt;br /&gt;
Also, if you can get your own application running on the cluster, please write up a short description of how that was done and send it to me, humphrey@cornell.edu, so that we can include it in our documentation.&lt;br /&gt;
&lt;br /&gt;
First Step: Setting up your [[Cluster SSH access]].&lt;br /&gt;
&lt;br /&gt;
Testing your access with [[pdsh commands]].&lt;br /&gt;
&lt;br /&gt;
Next: Launching [[Mathematica Remote Kernels]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Main_Page&amp;diff=136</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Main_Page&amp;diff=136"/>
		<updated>2022-07-21T14:21:33Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Cornell Math Department Computer Systems&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Computer Info]] page. All about our systems and software.&lt;br /&gt;
&lt;br /&gt;
[[Cluster Info]] page. About the math cluster and how to use it with various applications.&lt;br /&gt;
&lt;br /&gt;
All About [[Math Department Email]]&lt;br /&gt;
&lt;br /&gt;
How to connect to the Math Department with the [[Cornell VPN]].&lt;br /&gt;
&lt;br /&gt;
How to connect to the Math Department systems: [[Connecting]]&lt;br /&gt;
&lt;br /&gt;
How Staff can connect to their work machines from home. [[Staff Connect]]&lt;br /&gt;
&lt;br /&gt;
Here is the [[MachineList]]&lt;br /&gt;
&lt;br /&gt;
[[Instructions]] for department equipment.&lt;br /&gt;
&lt;br /&gt;
For troubleshooting information, see the admin pages [https://admin.math.cornell.edu/mw Here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User's Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Main_Page&amp;diff=135</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Main_Page&amp;diff=135"/>
		<updated>2022-07-21T14:20:38Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Cornell Math Department Computer Systems&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Computer Info]] page. All about our systems and software.&lt;br /&gt;
&lt;br /&gt;
All About [[Math Department Email]]&lt;br /&gt;
&lt;br /&gt;
How to connect to the Math Department with the [[Cornell VPN]].&lt;br /&gt;
&lt;br /&gt;
How to connect to the Math Department systems: [[Connecting]]&lt;br /&gt;
&lt;br /&gt;
How Staff can connect to their work machines from home. [[Staff Connect]]&lt;br /&gt;
&lt;br /&gt;
Here is the [[MachineList]]&lt;br /&gt;
&lt;br /&gt;
[[Instructions]] for department equipment.&lt;br /&gt;
&lt;br /&gt;
For troubleshooting information, see the admin pages [https://admin.math.cornell.edu/mw Here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User's Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=134</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=134"/>
		<updated>2022-06-26T16:22:37Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 1Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Virtual Machines'''&lt;br /&gt;
&lt;br /&gt;
These are virtual machines made available with extra resources from the department servers.&amp;lt;br&amp;gt;&lt;br /&gt;
NOTE: At this time these machines are not available due to upgrades on their physical systems.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||vCores ||RAM  ||GPU ||Net&lt;br /&gt;
|- &lt;br /&gt;
|conway || VM on AMD Epyc Milan || 14 || 64GB || N/A ||10Gb&lt;br /&gt;
|- &lt;br /&gt;
|dynkin || VM on AMD Epyc Milan || 14 || 64GB || N/A ||10Gb&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private Machines'''&lt;br /&gt;
&lt;br /&gt;
These machines are the property of faculty members and may only be used with their permission.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net ||Owner&lt;br /&gt;
|- &lt;br /&gt;
|leo || Xeon E5-2698 || 20/40 || 256GB || N/A || 10Gb || A. Townsend&lt;br /&gt;
|- &lt;br /&gt;
|wooster ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 1Gb || D. Barbasch&lt;br /&gt;
|- &lt;br /&gt;
|zeno ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb || A. Vladimirsky&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=133</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=133"/>
		<updated>2022-06-26T16:15:43Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 1Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private Machines'''&lt;br /&gt;
&lt;br /&gt;
These machines are the property of faculty members and may only be used with their permission.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net ||Owner&lt;br /&gt;
|- &lt;br /&gt;
|leo || Xeon E5-2698 || 20/40 || 256GB || N/A || 10Gb || A. Townsend&lt;br /&gt;
|- &lt;br /&gt;
|wooster ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 1Gb || D. Barbasch&lt;br /&gt;
|- &lt;br /&gt;
|zeno ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb || A. Vladimirsky&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=132</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=132"/>
		<updated>2022-06-26T16:14:44Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 1Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private Machines'''&lt;br /&gt;
These machines are the property of faculty members and may only be used with their permission.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net ||Owner&lt;br /&gt;
|- &lt;br /&gt;
|leo || Xeon E5-2698 || 20/40 || 256GB || N/A || 10Gb || A. Townsend&lt;br /&gt;
|- &lt;br /&gt;
|wooster ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 1Gb || D. Barbasch&lt;br /&gt;
|- &lt;br /&gt;
|zeno ||  AMD Ryzen 9 5950x || 16/32 || 128GB || RTX 3080Ti || 10Gb || A. Vladimirsky&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=131</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=131"/>
		<updated>2022-06-21T16:57:25Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 64GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 64GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 1Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=130</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=130"/>
		<updated>2022-06-21T16:56:46Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU ||Net &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 64GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci||  AMD Ryzen 9 5950x || 16/32 || 64GB || RTX 3080Ti || 10Gb &lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|kraken|| Quad Opteron || 64/64 || 512GB || RTX 2080 || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A || 10Gb&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A || 1Gb&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|aio01 || i5-6600 CPU @ 3.30GHz || 4/4 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|hank || i7-7700 CPU @ 3.60GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|feynman || i7-7500 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Computer_Facilites&amp;diff=129</id>
		<title>Computer Facilites</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Computer_Facilites&amp;diff=129"/>
		<updated>2022-06-06T16:04:42Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Computer Facilities Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computer Facilities Overview ==&lt;br /&gt;
&lt;br /&gt;
'''Facilities'''&lt;br /&gt;
&lt;br /&gt;
The Math Department has Windows, Macintosh, and Linux systems. The Windows systems are administered by the College of Arts &amp;amp; Sciences. They have the standard Cornell software packages, and you can log in using your Cornell netid and password.&lt;br /&gt;
&lt;br /&gt;
The Macintosh and Linux systems are administered by the Math Department. You can log in to them using your Math Department user account.&lt;br /&gt;
&lt;br /&gt;
'''Systems and Software'''&lt;br /&gt;
&lt;br /&gt;
The Linux systems are running Ubuntu 20.04 LTS. They have the standard software packages for software development and desktop applications, as well as software for LaTeX and the composition of web content. If there is something that you need which is not on the systems, please let us know; we'd be happy to include it. &lt;br /&gt;
&lt;br /&gt;
We also have Linux machines that are dedicated computation machines. You can see the list of machines at [[MachineList]]. You can connect to the machines remotely by following the instructions here: [[Connecting]].&lt;br /&gt;
&lt;br /&gt;
'''Mathematics Software'''&lt;br /&gt;
&lt;br /&gt;
The Math Department has many mathematics applications available on the department Linux machines. These include:&lt;br /&gt;
&lt;br /&gt;
* Matlab (Type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;matlab &amp;amp;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; in a terminal window)&lt;br /&gt;
* Wolfram Mathematica (Type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;mathematica &amp;amp;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; in a terminal window)&lt;br /&gt;
* Maple (Type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;maple&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; in a terminal window for the text-based system, type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt; xmaple &amp;amp; &amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; for the graphical system.)&lt;br /&gt;
* Magma (type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt; magma &amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; in a terminal window. Magma is a text-based system.)&lt;br /&gt;
* GAP (type &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;gap&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt; in a terminal window. Gap is text-based.)&lt;br /&gt;
&lt;br /&gt;
and many others.&lt;br /&gt;
&lt;br /&gt;
'''Home Directories'''&lt;br /&gt;
Home directories for Math Department accounts on the Linux systems use storage on a central file server. All files in home directories are backed up nightly.&lt;br /&gt;
&lt;br /&gt;
'''Email'''&lt;br /&gt;
The Math Department used to host its own email server. This has been phased out and all email is now handled by the central Cornell email system, on Office365.&lt;br /&gt;
&lt;br /&gt;
'''Web Publishing'''&lt;br /&gt;
The Math Department has web servers for hosting web sites for individuals, programs, and classes. Those who need websites for classes are strongly encouraged to use Cornell's [https://login.canvas.cornell.edu Canvas] system. If you have evaluated Canvas and have determined that it will not meet your needs, then contact us at mathit@cornell.edu so we can get you set up with a self-serve website for your class.&lt;br /&gt;
&lt;br /&gt;
[[Handy Links]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=128</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=128"/>
		<updated>2022-06-06T16:03:05Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Math Department Linux Machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || AMD Ryzen 9 5950x || 16/32 || 64GB || RTX 3080Ti &lt;br /&gt;
|- &lt;br /&gt;
|fibonacci|| i7-6700 CPU @ 3.40GHz || 4/8 || 64GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|squid2|| i7-6700k CPU @ 4.0GHz || 4/8 || 64GB || RTX 2080 Super&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|blanch || i7-4790 CPU @ 3.60GHz || 4/8 || 8GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|pacioli || i7-3770 CPU @ 3.40GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|kleene || i7-3770 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=124</id>
		<title>MachineList</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=MachineList&amp;diff=124"/>
		<updated>2021-10-08T17:05:50Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Math Department Linux Machines ==&lt;br /&gt;
This is a list of department machines that you may use remotely.&lt;br /&gt;
All of these machines have the standard set of packages.&lt;br /&gt;
This list is not complete and it is changing constantly, so check back from time to time.&lt;br /&gt;
&lt;br /&gt;
You can connect to the machines using their complete domain name, such as squid2.math.cornell.edu&lt;br /&gt;
&lt;br /&gt;
'''Dedicated Computation Machines'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|ramsey || i7-6700 CPU @ 3.40GHz || 4/4 || 64GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|fibonacci|| i7-6700 CPU @ 3.40GHz || 4/8 || 64GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|heaviside|| Xeon  E5-2640 || 12/24 || 256GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|hopper||Xeon  E5-2640 || 16/32 ||256GB || N/A&lt;br /&gt;
|- &lt;br /&gt;
|nautilus || Xeon X5560 || 8/16 || 96GB || N/A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shared Workstations'''&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|Hostname ||Processor ||Cores / Threads ||RAM  ||GPU &lt;br /&gt;
|- &lt;br /&gt;
|blanch || i7-4790 CPU @ 3.60GHz || 4/8 || 8GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|pacioli || i7-3770 CPU @ 3.40GHz || 4/8 || 16GB || N/A&lt;br /&gt;
|-&lt;br /&gt;
|kleene || i7-3770 CPU @ 3.40GHz ||4/8 || 16GB || N/A&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:PAmodebutton.jpg&amp;diff=123</id>
		<title>File:PAmodebutton.jpg</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:PAmodebutton.jpg&amp;diff=123"/>
		<updated>2021-09-03T19:10:03Z</updated>

		<summary type="html">&lt;p&gt;Admin: Picture of the mode button on the back of the PA speaker.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Picture of the mode button on the back of the PA speaker.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Wireless_PA&amp;diff=122</id>
		<title>Wireless PA</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Wireless_PA&amp;diff=122"/>
		<updated>2021-09-03T19:08:57Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Instructions for the Wireless PA system. ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT!''' &lt;br /&gt;
&lt;br /&gt;
If you connect to this speaker using Bluetooth, you MUST remove the device from your computer when you are finished. Not just 'disconnect', but remove or 'forget' the device, otherwise the next user may have problems. (see below)&lt;br /&gt;
&lt;br /&gt;
To use the wireless PA system, turn on the power on both the speaker and the wireless microphone receiver.&lt;br /&gt;
&lt;br /&gt;
The power switch for the speaker is on the back of the unit, above the power cord.&lt;br /&gt;
&lt;br /&gt;
[[File:PAPowerButton.jpg]]&lt;br /&gt;
&lt;br /&gt;
The power switch for the receiver is on the front, on the right side.&lt;br /&gt;
To turn on the receiver, briefly hold down the power button.&lt;br /&gt;
&lt;br /&gt;
[[File:PARecPowerButton.jpg]]&lt;br /&gt;
&lt;br /&gt;
Once the speaker and the receiver are powered on, turn on the wireless microphone. &lt;br /&gt;
The power switch is on the top. Remember to turn it off when you are done, or the next person will find that the battery is dead.&lt;br /&gt;
&lt;br /&gt;
The wireless microphone will work with other microphone elements. The element may not plug in easily, but it does fit the unit, so you may need to press it in to connect it.&lt;br /&gt;
&lt;br /&gt;
To use the element that is with the unit, clip it to your shirt or to the outside of your mask.&lt;br /&gt;
&lt;br /&gt;
If you are too close to the speaker, you may get feedback. Adjust the microphone volume on the front of the receiver to get the best level. There is also another volume control on the back of the speaker, but avoid adjusting that. If you are not getting any sound you may want to check the second volume control on the back.&lt;br /&gt;
&lt;br /&gt;
== Using Bluetooth ==&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT!''' &lt;br /&gt;
&lt;br /&gt;
If you connect to this speaker using Bluetooth, you MUST remove the device from your computer when you are finished. Not just 'disconnect', but remove or 'forget' the device, otherwise the next user may have problems.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>