<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://e.math.cornell.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sl2625</id>
	<title>mathpub - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://e.math.cornell.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sl2625"/>
	<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php/Special:Contributions/Sl2625"/>
	<updated>2026-04-28T03:55:13Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.7</generator>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=282</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=282"/>
		<updated>2022-11-29T16:08:20Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* Run Script and Check Output */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has two entities: a slurmctld controller node and multiple slurmd host nodes on which we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the Slurm controller on a machine, run this command as root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that the Slurm controller is running by viewing its log:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
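The slurmd daemon must also be running on each compute node. On a typical systemd-based installation it can be started and checked in the same way (the unit name slurmd.service and the log path below are common defaults; adjust them to match your own configuration):&lt;br /&gt;
 $ systemctl start slurmd.service&lt;br /&gt;
 $ tail /var/log/slurmd.log&lt;br /&gt;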
The compute nodes should already be configured to work with the controller. You can see information about the available nodes with this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|alt=|frameless|451x451px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the nodes show up with STATE &amp;quot;down&amp;quot;, run the following command to put them in the idle state (then check again with sinfo):&lt;br /&gt;
 $ scontrol update nodename=pnode[01-64] state=idle&lt;br /&gt;
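If a node stays down, scontrol can also show why before we force it back to idle; for example, for a single node from the range above (the Reason= field in the output records why Slurm marked it DOWN):&lt;br /&gt;
 $ scontrol show node pnode01&lt;br /&gt;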
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create Python scripts and run them in parallel on the cluster nodes. Here is a simple example in which each task prints its task number.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First, we create a Python script that prints the task number passed to it as a command line argument:&lt;br /&gt;
 #!/usr/bin/env python3&lt;br /&gt;
 # import sys to read command line arguments&lt;br /&gt;
 import sys&lt;br /&gt;
 # print the task number passed as the first argument&lt;br /&gt;
 print('Hello! I am task number:', sys.argv[1])&lt;br /&gt;
We save this Python script as hello-parallel.py.&lt;br /&gt;
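Before submitting anything to the cluster, we can sanity-check the script locally by passing a task number by hand (this is just an illustrative test; it is not part of the Slurm workflow):&lt;br /&gt;
 $ python3 hello-parallel.py 1&lt;br /&gt;
 Hello! I am task number: 1&lt;br /&gt;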
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we create a Slurm batch script to run the Python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello                        # job name&lt;br /&gt;
 #SBATCH -p debug                        # partition to submit to&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00                        # time limit of 10 minutes per task&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out              # output file name (%j = job ID, %a = array task ID)&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We save this Slurm script as hello-parallel.slurm.&lt;br /&gt;
&lt;br /&gt;
The lines at the top of this file that begin with #SBATCH configure the parameters we want for running the Python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies the job name, -p specifies the partition the cluster nodes belong to (for us the partition is named debug), --array specifies how many tasks we want, and -c specifies the number of CPU cores per task.&lt;br /&gt;
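&lt;br /&gt;
A few other directives that often appear in batch scripts are sketched below; the values are only illustrative and should be adjusted for a real job:&lt;br /&gt;
 #SBATCH -N 2                            # request two nodes&lt;br /&gt;
 #SBATCH -n 4                            # run four tasks in total&lt;br /&gt;
 #SBATCH --mem=4G                        # memory per node&lt;br /&gt;
 #SBATCH --mail-type=END,FAIL            # send email when the job ends or fails&lt;br /&gt;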
&lt;br /&gt;
For more details on the other command line options sbatch accepts, please see the [https://slurm.schedmd.com/sbatch.html sbatch documentation].&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster as an array of 10 tasks. To submit the Slurm script, give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
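sbatch replies with a line like &amp;quot;Submitted batch job&amp;quot; followed by the job ID. While the job is queued or running, its state can be checked with squeue, for example for the current user:&lt;br /&gt;
 $ squeue -u $USER&lt;br /&gt;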
Submitting the script runs our Python script as 10 array tasks, which the scheduler distributes across the cluster nodes, and generates one .out output file per task in the directory the job was submitted from.&lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
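For example, one task's output can be opened with a command like the one below, where hello-12345-1.out stands in for one of the actual .out file names created in the directory:&lt;br /&gt;
 $ less hello-12345-1.out&lt;br /&gt;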
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;br /&gt;
&lt;br /&gt;
For more information: [https://slurm.schedmd.com/ Slurm Documentation]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=273</id>
		<title>Test Page</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=273"/>
		<updated>2022-11-29T15:55:17Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* Handy Links */  Added link for Slurm configuration and example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Test Page ==&lt;br /&gt;
&lt;br /&gt;
This is a place to doodle and test out formatting without having to temporarily mess up an actual page.&lt;br /&gt;
&lt;br /&gt;
= Handy Links =&lt;br /&gt;
&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Slurm_Quick_Start Slurm Quick Start Guide]&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Mathematica_Parallel_Computing_Configuration Mathematica Remote Kernel Configuration]&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Configuring_Ipython_for_Parallel_Computing Configuring IPython] for Parallel Computing&lt;br /&gt;
*The [https://math.cornell.edu Math Department] Homepage.&lt;br /&gt;
*The Math Department [https://math.cornell.edu/people People Pages].&lt;br /&gt;
*[https://people.as.cornell.edu/saml_login Link to edit] your People page.&lt;br /&gt;
*The [https://webwork2.math.cornell.edu/ WeBWorK] Math Homework System.&lt;br /&gt;
&lt;br /&gt;
For instructors and researchers:&lt;br /&gt;
*Log in to [http://outlook.cornell.edu Cornell Email] on the web.&lt;br /&gt;
*Reset your [https://accounts.math.cornell.edu/panel/ Math Account] Password.&lt;br /&gt;
*Instructors can [https://accounts.math.cornell.edu/panel/invite.php send an invitation] to set up a Math Account to any NetID.&lt;br /&gt;
*[https://pi.math.cornell.edu View] the old server, pi.&lt;br /&gt;
*[https://pi.math.cornell.edu/m/ADMIN/Protected Log in] to pi.&lt;br /&gt;
*The 'Syllabus File' [https://e.math.cornell.edu/apps/courseinfo/ Course Materials] database.&lt;br /&gt;
*[https://e.math.cornell.edu/webdisk Access your files] on the Math system Webdisk.&lt;br /&gt;
*Other ways to access your Math files.&lt;br /&gt;
*Use the Math Department computation machines.&lt;br /&gt;
*How to print at the Math department.&lt;br /&gt;
*Printer activity and availability.&lt;br /&gt;
*How to scan.&lt;br /&gt;
* View the status of the Math systems.&lt;br /&gt;
&lt;br /&gt;
Links for Staff:&lt;br /&gt;
*The [https://dynomite.math.cornell.edu Department Database].&lt;br /&gt;
*How to connect to the staff file share.&lt;br /&gt;
*How to connect to your work computer from home.&lt;br /&gt;
* Math Department Student [https://e.math.cornell.edu/apps/emp Employment] Site.&lt;br /&gt;
**Student Employee [https://e.math.cornell.edu/apps/emp-review/ Performance Review] Site.&lt;br /&gt;
**[https://e.math.cornell.edu/apps/emp/admin Administrative] Login&lt;br /&gt;
&lt;br /&gt;
[[Math How Tos]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=272</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=272"/>
		<updated>2022-11-29T15:53:47Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 2. Running a Slurm Batch Job on Multiple Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to get the configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|alt=|frameless|451x451px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=271</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=271"/>
		<updated>2022-11-29T15:50:43Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 2. Running a Slurm Batch Job on Multiple Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to get the configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|alt=|frame]]&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=270</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=270"/>
		<updated>2022-11-29T15:49:58Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 2. Running a Slurm Batch Job on Multiple Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to get the configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster as an array of 10 tasks spread across the cluster nodes. To submit the Slurm script, simply run:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our Python script as 10 tasks on the cluster nodes and generate one output file per task in the same directory, each with a .out extension.&lt;br /&gt;
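While the array is running you can check its state in the queue and then list the generated output files (the job IDs and file names will differ on your run):&lt;br /&gt;
 $ squeue -u $USER&lt;br /&gt;
 $ ls hello-*.out&lt;br /&gt;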
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
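For example (the job ID and task number in the file name below are hypothetical; use one of the files produced by your own run):&lt;br /&gt;
 $ less hello-123-1.out&lt;br /&gt;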
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=269</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=269"/>
		<updated>2022-11-29T15:49:33Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=268</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=268"/>
		<updated>2022-11-29T15:47:51Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 2. Running a Slurm Batch Job on Multiple Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=267</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=267"/>
		<updated>2022-11-29T15:47:38Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 2. Running a Slurm Batch Job on Multiple Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=266</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=266"/>
		<updated>2022-11-29T15:47:17Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* 1. Configuring SLURM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=265</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=265"/>
		<updated>2022-11-29T15:45:42Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: Added description of changes to configure Slurm for running batch jobs using the cluster&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;br /&gt;
We will save this python script as hello-parallel.py&lt;br /&gt;
&lt;br /&gt;
=== Create Slurm Script ===&lt;br /&gt;
Next, we need to create a Slurm script to run the python program we just created:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 # Example of running python script with a job array&lt;br /&gt;
 &lt;br /&gt;
 #SBATCH -J hello&lt;br /&gt;
 #SBATCH -p debug&lt;br /&gt;
 #SBATCH --array=1-10                    # how many tasks in the array&lt;br /&gt;
 #SBATCH -c 1                            # one CPU core per task&lt;br /&gt;
 #SBATCH -t 10:00&lt;br /&gt;
 #SBATCH -o hello-%j-%a.out&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # Run python script with a command line argument&lt;br /&gt;
 srun python3 hello-parallel.py $SLURM_ARRAY_TASK_ID&lt;br /&gt;
We will save this Slurm script as hello-parallel.slurm&lt;br /&gt;
&lt;br /&gt;
The first few lines of this file (with #SBATCH) are used to configure different parameters we want for the execution of the python script on the cluster.&lt;br /&gt;
&lt;br /&gt;
For example, -J specifies job name, -p specifies the partition on which the cluster nodes are (for us the partition is named debug), --array specifies how many tasks we want, -c specifies number of CPU cores per task.&lt;br /&gt;
&lt;br /&gt;
For more details on other command line options for sbatch for configuring the cluster, please visit [https://slurm.schedmd.com/sbatch.html Slurm Sbatch Documentation]&lt;br /&gt;
&lt;br /&gt;
=== Run Script and Check Output ===&lt;br /&gt;
Now we are ready to run the script on the cluster, specifically on 10 nodes of the cluster. To run the slurm script, simply give the command:&lt;br /&gt;
 $ sbatch hello-parallel.slurm&lt;br /&gt;
This should run our python script on 10 different nodes and generate output files in the same location. The output files will have an extension .out. &lt;br /&gt;
[[File:Slurm op.png|left|frameless|709x709px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can view the output using the less command:&lt;br /&gt;
[[File:Slrm.png|left|frameless|447x447px]]&lt;br /&gt;
[[File:Slurm output.png|left|frameless]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Slrm.png&amp;diff=264</id>
		<title>File:Slrm.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Slrm.png&amp;diff=264"/>
		<updated>2022-11-29T15:43:54Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;slrm&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Slurm_output.png&amp;diff=263</id>
		<title>File:Slurm output.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Slurm_output.png&amp;diff=263"/>
		<updated>2022-11-29T15:42:27Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Slurm output&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Slurm_op.png&amp;diff=262</id>
		<title>File:Slurm op.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Slurm_op.png&amp;diff=262"/>
		<updated>2022-11-29T15:40:37Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Slurm output&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=261</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=261"/>
		<updated>2022-11-29T15:24:09Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: /* Slurm Cluster Quick Start Guide */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page explains how to configure and run a batch job on the SLURM cluster from the head node.&lt;br /&gt;
&lt;br /&gt;
== 1. Configuring SLURM ==&lt;br /&gt;
Slurm has 2 entities: a slurmctld controller node and multiple slurmd host nodes where we can run jobs in parallel.&lt;br /&gt;
&lt;br /&gt;
To start the slurm controller on a machine we need to give this command from root:&lt;br /&gt;
 $ systemctl start slurmctld.service&lt;br /&gt;
After running this, we can verify that slurm controller is running by viewing the log with the command:&lt;br /&gt;
 $ tail /var/log/slurmctld.log&lt;br /&gt;
The nodes should be configured to run with the controller. You can see the information about available nodes using this command:&lt;br /&gt;
 $ sinfo&lt;br /&gt;
[[File:Sinfo.png|left|frameless|509x509px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Running a Slurm Batch Job on Multiple Nodes ==&lt;br /&gt;
We can create python scripts and run these scripts in parallel on the cluster nodes. This is a simple example where we will print the task number from each node.&lt;br /&gt;
&lt;br /&gt;
=== Create a Python Script ===&lt;br /&gt;
First we create a Python script which prints the system task number:&lt;br /&gt;
 #!/usr/bin/python&lt;br /&gt;
 # import sys library (needed for accepted command line args)&lt;br /&gt;
 import sys&lt;br /&gt;
 # print task number&lt;br /&gt;
 print('Hello! I am a task number: ', sys.argv[1])&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Sinfo.png&amp;diff=260</id>
		<title>File:Sinfo.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Sinfo.png&amp;diff=260"/>
		<updated>2022-11-29T15:18:08Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;sinfo output&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=259</id>
		<title>Slurm Quick Start</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Slurm_Quick_Start&amp;diff=259"/>
		<updated>2022-11-29T15:08:14Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: Created page with &amp;quot;== Slurm Cluster Quick Start Guide ==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slurm Cluster Quick Start Guide ==&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=232</id>
		<title>Configuring Ipython for Parallel Computing</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=232"/>
		<updated>2022-10-13T15:16:13Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==== Step 1. From Terminal run following command to create a parallel profile ====&lt;br /&gt;
$ ipython3 profile create --parallel --profile=myprofile&lt;br /&gt;
&lt;br /&gt;
If this completes successfully, there should be config files such as ipcontroller_config.py and ipcluster_config.py in the folder &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile. Check this folder; if these config files are missing, there may be an issue with the IPython installation on the system.&lt;br /&gt;
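&lt;br /&gt;
For example, you can list the profile folder to confirm the files are there (the path follows the &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile convention used on this page; on a typical setup this is your home directory):&lt;br /&gt;
 $ ls ~/.ipython/profile_myprofile/&lt;br /&gt;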
&lt;br /&gt;
==== Step 2. Modify ipcluster_config.py so that engines are launched on remote machines and the controller is launched on the local machine ====&lt;br /&gt;
 $ cd &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile&lt;br /&gt;
 &lt;br /&gt;
 $ vi ipcluster_config.py&lt;br /&gt;
&lt;br /&gt;
In this file, uncomment the following lines and make the changes shown (the approximate line numbers are given on the right):&lt;br /&gt;
&lt;br /&gt;
 c.Cluster.engine_launcher_class = 'ssh'                              --&amp;gt;line 528&lt;br /&gt;
 &lt;br /&gt;
 c.SSHControllerLauncher.controller_args = ['--ip=*']                 --&amp;gt;line 1689&lt;br /&gt;
 &lt;br /&gt;
 c.SSHEngineSetLauncher.engines = {'pnode10':2,'pnode11':2}           --&amp;gt;line 2423&lt;br /&gt;
&lt;br /&gt;
Save the file and exit. Note that in the c.SSHEngineSetLauncher.engines field you specify which machines (ramsey, fibonacci, etc.) or nodes (pnode01, pnode02, etc.) to use and how many engines you want on each. In this example we take 2 each from pnode10 and pnode11; an alternative is sketched below.&lt;br /&gt;
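&lt;br /&gt;
For instance, to use 4 engines each on two of the named machines instead (an illustrative sketch; pick whichever hosts you actually have access to):&lt;br /&gt;
 c.SSHEngineSetLauncher.engines = {'ramsey': 4, 'fibonacci': 4}&lt;br /&gt;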
&lt;br /&gt;
Next, in file ipcontroller_config.py in the same location:&lt;br /&gt;
&lt;br /&gt;
 c.IPController.ip = '0.0.0.0'&lt;br /&gt;
&lt;br /&gt;
Save and exit the file. This step is important because otherwise the controller won't be able to accept connections from the engines and no parallel computation will happen.&lt;br /&gt;
&lt;br /&gt;
====Step 3. Parallel computing should now be set up. Examples with IPython and MPI====&lt;br /&gt;
&lt;br /&gt;
======Only IPython======&lt;br /&gt;
Launch IPython with the profile you created and edited:&lt;br /&gt;
&lt;br /&gt;
 $ ipython3 --profile=myprofile&lt;br /&gt;
 &lt;br /&gt;
Code:&lt;br /&gt;
&lt;br /&gt;
 import time&lt;br /&gt;
 &lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def parallel_example():&lt;br /&gt;
 &lt;br /&gt;
    return f&amp;quot;Hello World!!&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; request a cluster&lt;br /&gt;
 &lt;br /&gt;
 with ipp.Cluster() as rc:&lt;br /&gt;
 &lt;br /&gt;
     # get a view on the cluster&lt;br /&gt;
 &lt;br /&gt;
    view = rc.load_balanced_view()&lt;br /&gt;
 &lt;br /&gt;
    # submit the tasks&lt;br /&gt;
 &lt;br /&gt;
    asyncresult = view.map_async(parallel_example)&lt;br /&gt;
 &lt;br /&gt;
    # wait interactively for results&lt;br /&gt;
 &lt;br /&gt;
    asyncresult.wait_interactive()&lt;br /&gt;
 &lt;br /&gt;
    # retrieve actual results&lt;br /&gt;
 &lt;br /&gt;
    result = asyncresult.get()&lt;br /&gt;
 &lt;br /&gt;
    print(&amp;quot;\n&amp;quot;.join(result))&lt;br /&gt;
 &lt;br /&gt;
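Note that ipyparallel's map_async normally also takes the sequence to map over; called with only the function, it may not submit any tasks. A minimal variant that maps a task index over a small range is sketched below (this is only a sketch, not the code used for the screenshots):&lt;br /&gt;
 def parallel_example(i):&lt;br /&gt;
     return f'Hello World from task {i}!!'&lt;br /&gt;
 &lt;br /&gt;
 asyncresult = view.map_async(parallel_example, range(8))&lt;br /&gt;
&lt;br /&gt;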
[[File:IPython_Parallel_Computing_Example.png|alt=|frameless|635x635px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On execution of this code:&lt;br /&gt;
&lt;br /&gt;
[[File:IPython_Parallel_Computing_Result_1.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%201.png|alt=|655x655px]][[File:IPython_Parallel_Computing_Result_2.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%202.png|alt=|frameless|658x658px]]&lt;br /&gt;
&lt;br /&gt;
======Example with MPI======&lt;br /&gt;
Code (taken from [https://ipyparallel.readthedocs.io/en/latest/ here]):&lt;br /&gt;
&lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def mpi_example():&lt;br /&gt;
 &lt;br /&gt;
    from mpi4py import MPI&lt;br /&gt;
 &lt;br /&gt;
    comm = MPI.COMM_WORLD&lt;br /&gt;
 &lt;br /&gt;
    return f&amp;quot;Hello World from rank {comm.Get_rank()}. total ranks={comm.Get_size()}&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; request an MPI cluster with 4 engines&lt;br /&gt;
 &lt;br /&gt;
 with ipp.Cluster(engines='mpi', n=4) as rc:&lt;br /&gt;
 &lt;br /&gt;
    # get a broadcast_view on the cluster which is best&lt;br /&gt;
 &lt;br /&gt;
    # suited for MPI style computation&lt;br /&gt;
 &lt;br /&gt;
    view = rc.broadcast_view()&lt;br /&gt;
 &lt;br /&gt;
    # run the mpi_example function on all engines in parallel&lt;br /&gt;
 &lt;br /&gt;
    r = view.apply_sync(mpi_example)&lt;br /&gt;
 &lt;br /&gt;
    # Retrieve and print the result from the engines&lt;br /&gt;
 &lt;br /&gt;
    print(&amp;quot;\n&amp;quot;.join(r))   &lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
[[File:IPython Parallel with MPI.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20with%20MPI.png|left|frameless|768x768px]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=231</id>
		<title>Configuring Ipython for Parallel Computing</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=231"/>
		<updated>2022-10-13T15:15:11Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==== Step 1. From Terminal run following command to create a parallel profile ====&lt;br /&gt;
$ ipython3 profile create --parallel --profile=myprofile&lt;br /&gt;
&lt;br /&gt;
If this completes successfully, there should be config files such as ipcontroller_config.py and ipcluster_config.py in the folder &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile. Check this folder; if these config files are missing, there may be an issue with the IPython installation on the system.&lt;br /&gt;
&lt;br /&gt;
==== Step 2. We modify ipcluster_config.py so that engines are launched in remote machines and controller is launched on local machine ====&lt;br /&gt;
 $ cd &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile&lt;br /&gt;
 &lt;br /&gt;
 $ vi ipcluster_config.py&lt;br /&gt;
&lt;br /&gt;
Now in this file we modify the following (uncomment the line and make the changes as shown):&lt;br /&gt;
&lt;br /&gt;
 c.Cluster.engine_launcher_class = 'ssh'                              --&amp;gt;line 528&lt;br /&gt;
 &lt;br /&gt;
 c.SSHControllerLauncher.controller_args = ['--ip=*']                 --&amp;gt;line 1689&lt;br /&gt;
 &lt;br /&gt;
 c.SSHEngineSetLauncher.engines = {'pnode10':2,'pnode11':2}           --&amp;gt;line 2423&lt;br /&gt;
&lt;br /&gt;
Save the file and exit. Note that in the c.SSHEngineSetLauncher.engines field, you need to put which machines (ramsey,fibonacci,etc) or nodes (pnode01,02,etc) and how many kernels you want from each. In my example I have taken 2 each from pnode10 and pnode11&lt;br /&gt;
&lt;br /&gt;
Next, in file ipcontroller_config.py in the same location:&lt;br /&gt;
&lt;br /&gt;
 c.IPController.ip = '0.0.0.0'&lt;br /&gt;
&lt;br /&gt;
Save and exit file. This step is important because otherwise the controller won't be able to listen on the engines and no parallel computation will happen.&lt;br /&gt;
&lt;br /&gt;
====Step 3. With this you should have parallel computing set up. Examples with IPython and MPI====&lt;br /&gt;
&lt;br /&gt;
======Only IPython======&lt;br /&gt;
Launch IPython with the profile you created and edited:&lt;br /&gt;
&lt;br /&gt;
 $ ipython3 --profile=myprofile&lt;br /&gt;
 &lt;br /&gt;
Code:&lt;br /&gt;
&lt;br /&gt;
 import time&lt;br /&gt;
 &lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def parallel_example():&lt;br /&gt;
 &lt;br /&gt;
    return f&amp;quot;Hello World!!&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; request a cluster&lt;br /&gt;
 &lt;br /&gt;
 with ipp.Cluster() as rc:&lt;br /&gt;
 &lt;br /&gt;
     # get a view on the cluster&lt;br /&gt;
 &lt;br /&gt;
    view = rc.load_balanced_view()&lt;br /&gt;
 &lt;br /&gt;
    # submit the tasks&lt;br /&gt;
 &lt;br /&gt;
    asyncresult = view.map_async(parallel_example)&lt;br /&gt;
 &lt;br /&gt;
    # wait interactively for results&lt;br /&gt;
 &lt;br /&gt;
    asyncresult.wait_interactive()&lt;br /&gt;
 &lt;br /&gt;
    # retrieve actual results&lt;br /&gt;
 &lt;br /&gt;
    result = asyncresult.get()&lt;br /&gt;
 &lt;br /&gt;
    print(&amp;quot;\n&amp;quot;.join(result))&lt;br /&gt;
 &lt;br /&gt;
[[File:IPython_Parallel_Computing_Example.png|alt=|frameless|635x635px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On execution of this code:&lt;br /&gt;
&lt;br /&gt;
[[File:IPython_Parallel_Computing_Result_1.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%201.png|alt=|655x655px]][[File:IPython_Parallel_Computing_Result_2.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%202.png|alt=|frameless|658x658px]]&lt;br /&gt;
&lt;br /&gt;
======Example with MPI======&lt;br /&gt;
Code (taken from &amp;lt;nowiki&amp;gt;https://ipyparallel.readthedocs.io/en/latest/&amp;lt;/nowiki&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def mpi_example():&lt;br /&gt;
 &lt;br /&gt;
    from mpi4py import MPI&lt;br /&gt;
 &lt;br /&gt;
    comm = MPI.COMM_WORLD&lt;br /&gt;
 &lt;br /&gt;
    return f&amp;quot;Hello World from rank {comm.Get_rank()}. total ranks={comm.Get_size()}&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; request an MPI cluster with 4 engines&lt;br /&gt;
 &lt;br /&gt;
 with ipp.Cluster(engines='mpi', n=4) as rc:&lt;br /&gt;
 &lt;br /&gt;
    # get a broadcast_view on the cluster which is best&lt;br /&gt;
 &lt;br /&gt;
    # suited for MPI style computation&lt;br /&gt;
 &lt;br /&gt;
    view = rc.broadcast_view()&lt;br /&gt;
 &lt;br /&gt;
    # run the mpi_example function on all engines in parallel&lt;br /&gt;
 &lt;br /&gt;
    r = view.apply_sync(mpi_example)&lt;br /&gt;
 &lt;br /&gt;
    # Retrieve and print the result from the engines&lt;br /&gt;
 &lt;br /&gt;
    print(&amp;quot;\n&amp;quot;.join(r))   &lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
[[File:IPython Parallel with MPI.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20with%20MPI.png|left|frameless|768x768px]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=230</id>
		<title>Test Page</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=230"/>
		<updated>2022-10-13T15:11:03Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Test Page ==&lt;br /&gt;
&lt;br /&gt;
This is a place to doodle, and test out formatting without having to temporarily mess up an actual page.&lt;br /&gt;
&lt;br /&gt;
= Handy Links =&lt;br /&gt;
&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Mathematica_Parallel_Computing_Configuration Mathematica Remote Kernel Configuration]&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Configuring_Ipython_for_Parallel_Computing Configuring IPython] for Parallel Computing&lt;br /&gt;
*The [https://math.cornell.edu Math Department] Homepage.&lt;br /&gt;
*The Math Department [https://math.cornell.edu/people People Pages].&lt;br /&gt;
*[https://people.as.cornell.edu/saml_login Link to edit] your People page.&lt;br /&gt;
*The [https://webwork2.math.cornell.edu/ WeBWorK] Math Homework System.&lt;br /&gt;
&lt;br /&gt;
For instructors and researchers:&lt;br /&gt;
*Log in to [http://outlook.cornell.edu Cornell Email] on the web.&lt;br /&gt;
*Reset your [https://accounts.math.cornell.edu/panel/ Math Account] Password.&lt;br /&gt;
*Instructors can [https://accounts.math.cornell.edu/panel/invite.php send an invitation] to set up a Math Account to any NetID.&lt;br /&gt;
*[https://pi.math.cornell.edu View] the old server, pi.&lt;br /&gt;
*[https://pi.math.cornell.edu/m/ADMIN/Protected Log in] to pi.&lt;br /&gt;
*The 'Syllabus File' [https://e.math.cornell.edu/apps/courseinfo/ Course Materials] database.&lt;br /&gt;
*[https://e.math.cornell.edu/webdisk Access your files] on the Math system Webdisk.&lt;br /&gt;
*Other ways to access your Math files.&lt;br /&gt;
*Use the Math Department computation machines.&lt;br /&gt;
*How to print at the Math department.&lt;br /&gt;
*Printer activity and availability.&lt;br /&gt;
*How to scan.&lt;br /&gt;
* View the status of the Math systems.&lt;br /&gt;
&lt;br /&gt;
Links for Staff:&lt;br /&gt;
*The [https://dynomite.math.cornell.edu Department Database].&lt;br /&gt;
*How to connect to the staff file share.&lt;br /&gt;
*How to connect to your work computer from home.&lt;br /&gt;
* Math Department Student [https://e.math.cornell.edu/apps/emp Employment] Site.&lt;br /&gt;
**Student Employee [https://e.math.cornell.edu/apps/emp-review/ Performance Review] Site.&lt;br /&gt;
**[https://e.math.cornell.edu/apps/emp/admin Administrative] Login&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Parallel_Computing_Configuration&amp;diff=229</id>
		<title>Mathematica Parallel Computing Configuration</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Mathematica_Parallel_Computing_Configuration&amp;diff=229"/>
		<updated>2022-10-13T15:08:24Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: Added description of changes to configure Mathematica for parallel computing using the cluster nodes as Remote Kernels&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes several methods for setting up nodes in the cluster as Remote Kernels in order to run parallel computations.&lt;br /&gt;
&lt;br /&gt;
===First Method : Launch the kernels using commands in the notebook===&lt;br /&gt;
Use the code given below in a Mathematica notebook. Change the &amp;quot;kernels&amp;quot; variable to the number of kernels you want on each machine for the parallel computation.&lt;br /&gt;
 Needs[&amp;quot;SubKernels`RemoteKernels`&amp;quot;]&lt;br /&gt;
 kernels = 4; (*Number of kernels on each machine*)&lt;br /&gt;
 configureKernel[host_] := &lt;br /&gt;
  SubKernels`RemoteKernels`RemoteMachine[host, kernels];&lt;br /&gt;
 LaunchKernels[&lt;br /&gt;
  Map[configureKernel, {&amp;quot;&amp;lt;nowiki&amp;gt;ssh://ramsey/&amp;lt;/nowiki&amp;gt;&amp;quot;, &amp;quot;&amp;lt;nowiki&amp;gt;ssh://fibonacci/&amp;lt;/nowiki&amp;gt;&amp;quot;, &lt;br /&gt;
  &amp;quot;&amp;lt;nowiki&amp;gt;ssh://hopper/&amp;lt;/nowiki&amp;gt;&amp;quot;, &amp;quot;&amp;lt;nowiki&amp;gt;ssh://boole/&amp;lt;/nowiki&amp;gt;&amp;quot;, &amp;quot;&amp;lt;nowiki&amp;gt;ssh://heaviside/&amp;lt;/nowiki&amp;gt;&amp;quot;, &lt;br /&gt;
  &amp;quot;&amp;lt;nowiki&amp;gt;ssh://squid1/&amp;lt;/nowiki&amp;gt;&amp;quot;, &amp;quot;&amp;lt;nowiki&amp;gt;ssh://squid2/&amp;lt;/nowiki&amp;gt;&amp;quot;}]]; (*all the available machines, change to whatever is needed*)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Printing &amp;quot;Hello World&amp;quot; in parallel after setting up 4 kernels per machine with above code &lt;br /&gt;
&lt;br /&gt;
[[File:Image1.png|frameless|663x663px]]&lt;br /&gt;
&lt;br /&gt;
[[File:Image.png|frameless|667x667px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Second Method: Configure Remote Kernels in Mathematica Menus===&lt;br /&gt;
In this method, we set up the nodes in the cluster as Remote Kernels from Mathematica's Parallel Kernel Configuration menu. This setup is required only once; on every subsequent occasion, simply running the LaunchKernels command sets up the remote kernels for parallel evaluation.&lt;br /&gt;
&lt;br /&gt;
====Step 1====&lt;br /&gt;
Open Mathematica and open Parallel Kernel Configuration from the Evaluation tab&lt;br /&gt;
&lt;br /&gt;
[[File:Mathematica parallel.png|frameless|660x660px]]&lt;br /&gt;
&lt;br /&gt;
====Step 2====&lt;br /&gt;
Under Parallel, choose Remote Kernels, click on add hosts, and add the machines as shown below. Tick the enable box next to the number of kernels. For each host, you can configure the number of kernels you want running on that machine.&lt;br /&gt;
&lt;br /&gt;
[[File:Image2.png|frameless|669x669px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add all the machines and the number of kernels you want on each. This is a one-time setup; you won't need to do it again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Image3.png|frameless|406x406px]]&lt;br /&gt;
&lt;br /&gt;
====Step 3====&lt;br /&gt;
Disable RemoteKernel Object here (if it is enabled)&lt;br /&gt;
&lt;br /&gt;
[[File:Image4.png|frameless|699x699px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you don't disable this, subkernels will be created on the local kernel and not on the remote hosts.&lt;br /&gt;
&lt;br /&gt;
====Step 4====&lt;br /&gt;
Once all these settings are done, running LaunchKernels[] from any notebook will launch the kernels specified in the menu.&lt;br /&gt;
&lt;br /&gt;
In this example, 3 machines were configured in the menu as shown above:&lt;br /&gt;
&lt;br /&gt;
[[File:Image6.png|frameless|732x732px]]&lt;br /&gt;
&lt;br /&gt;
[[File:Image5.png|frameless|732x732px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Third Method: Launching the kernels in a loop===&lt;br /&gt;
For this method, the kernels will be launched by running the below code:&lt;br /&gt;
&lt;br /&gt;
 Needs[&amp;quot;SubKernels`RemoteKernels`&amp;quot;]&lt;br /&gt;
 &lt;br /&gt;
 kernels = 1;(*number of kernels on each machine*)&lt;br /&gt;
 &lt;br /&gt;
 configureKernel[host_] := &lt;br /&gt;
 &lt;br /&gt;
 SubKernels`RemoteKernels`RemoteMachine[host, kernels];&lt;br /&gt;
 &lt;br /&gt;
 For[i = 0, i &amp;lt; 15, i++; &lt;br /&gt;
 &lt;br /&gt;
 If[i &amp;lt; 10, &lt;br /&gt;
 &lt;br /&gt;
  LaunchKernels[&lt;br /&gt;
 &lt;br /&gt;
   Map[configureKernel, {StringJoin[{&amp;quot;&amp;lt;nowiki&amp;gt;ssh://pnode0&amp;lt;/nowiki&amp;gt;&amp;quot;, ToString[i], &lt;br /&gt;
 &lt;br /&gt;
       &amp;quot;/&amp;quot;}]}]], &lt;br /&gt;
 &lt;br /&gt;
  LaunchKernels[&lt;br /&gt;
 &lt;br /&gt;
    Map[configureKernel, {StringJoin[{&amp;quot;&amp;lt;nowiki&amp;gt;ssh://pnode&amp;lt;/nowiki&amp;gt;&amp;quot;, ToString[i], &lt;br /&gt;
 &lt;br /&gt;
       &amp;quot;/&amp;quot;}]}]]]]&lt;br /&gt;
We are launching 15 kernels here (one on each of pnode01 through pnode15). Change the bound in the loop condition to launch a different number of kernels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Parallel Evaluation Example: Generating and Plotting Mandelbrot Set===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this section we look at how much the time to generate the Mandelbrot set improves when we run on parallel remote kernels.&lt;br /&gt;
&lt;br /&gt;
Code for generating Mandelbrot set without parallel kernels:&lt;br /&gt;
&lt;br /&gt;
[[File:Image7.png|frameless|527x527px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output and Timing:&lt;br /&gt;
&lt;br /&gt;
[[File:Image8.png|frameless|547x547px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To parallelize this, we launch 15 kernels using the third method:&lt;br /&gt;
&lt;br /&gt;
[[File:Image9.png|frameless|557x557px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output and Timing for Parallel Evaluation:&lt;br /&gt;
&lt;br /&gt;
[[File:Image10.png|frameless|566x566px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Code for running parallel evaluation of Mandelbrot set:&lt;br /&gt;
&lt;br /&gt;
 f[c_, maxiter_Integer : 10^4] := &lt;br /&gt;
 &lt;br /&gt;
 Abs[NestWhile[#^2 + c &amp;amp;, 0, Abs[#] &amp;lt; 2 &amp;amp;, 1, maxiter]] &amp;lt; 2&lt;br /&gt;
 &lt;br /&gt;
 f[0]&lt;br /&gt;
 &lt;br /&gt;
 (*True*)&lt;br /&gt;
 &lt;br /&gt;
 f[1]&lt;br /&gt;
 &lt;br /&gt;
 (*False*)&lt;br /&gt;
 &lt;br /&gt;
 f[I]&lt;br /&gt;
 &lt;br /&gt;
 (*True*)&lt;br /&gt;
 &lt;br /&gt;
 f[I, 10^6]&lt;br /&gt;
 &lt;br /&gt;
 (*True*)&lt;br /&gt;
 &lt;br /&gt;
 ArrayPlot[&lt;br /&gt;
 &lt;br /&gt;
 ParallelTable[&lt;br /&gt;
 &lt;br /&gt;
  Boole[f[N[x + I y], 10^3]], {y, -2, 2, 1/50}, {x, -2, 2, 1/50}]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image10.png&amp;diff=228</id>
		<title>File:Image10.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image10.png&amp;diff=228"/>
		<updated>2022-10-13T15:04:57Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mandelbrot Parallel&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image9.png&amp;diff=227</id>
		<title>File:Image9.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image9.png&amp;diff=227"/>
		<updated>2022-10-13T15:03:22Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Parallel Kernels&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image8.png&amp;diff=226</id>
		<title>File:Image8.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image8.png&amp;diff=226"/>
		<updated>2022-10-13T15:01:52Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mandelbrot output and timing&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image7.png&amp;diff=225</id>
		<title>File:Image7.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image7.png&amp;diff=225"/>
		<updated>2022-10-13T15:00:05Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mandelbrot Code&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image6.png&amp;diff=224</id>
		<title>File:Image6.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image6.png&amp;diff=224"/>
		<updated>2022-10-13T14:46:41Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Res&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image5.png&amp;diff=223</id>
		<title>File:Image5.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image5.png&amp;diff=223"/>
		<updated>2022-10-13T14:46:01Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Result&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image4.png&amp;diff=222</id>
		<title>File:Image4.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image4.png&amp;diff=222"/>
		<updated>2022-10-13T14:43:29Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RemoteKernel Object&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image3.png&amp;diff=221</id>
		<title>File:Image3.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image3.png&amp;diff=221"/>
		<updated>2022-10-13T14:41:44Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Configuring settings&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image2.png&amp;diff=220</id>
		<title>File:Image2.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image2.png&amp;diff=220"/>
		<updated>2022-10-13T14:40:20Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Configuring&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Mathematica_parallel.png&amp;diff=219</id>
		<title>File:Mathematica parallel.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Mathematica_parallel.png&amp;diff=219"/>
		<updated>2022-10-13T14:38:12Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mathematica Parallel&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image.png&amp;diff=218</id>
		<title>File:Image.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image.png&amp;diff=218"/>
		<updated>2022-10-13T14:35:44Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hello World Example&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:Image1.png&amp;diff=217</id>
		<title>File:Image1.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:Image1.png&amp;diff=217"/>
		<updated>2022-10-13T14:31:34Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Launch Remote Kernels&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=216</id>
		<title>Configuring Ipython for Parallel Computing</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Configuring_Ipython_for_Parallel_Computing&amp;diff=216"/>
		<updated>2022-10-13T14:08:06Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==== Step 1. From Terminal run following command to create a parallel profile ====&lt;br /&gt;
$ ipython3 profile create --parallel --profile=myprofile&lt;br /&gt;
&lt;br /&gt;
If this completes successfully, there should be config files such as ipcontroller_config.py and ipcluster_config.py in the folder &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile. Check this folder; if these config files are missing, there may be an issue with the IPython installation on the system.&lt;br /&gt;
&lt;br /&gt;
==== Step 2. We modify ipcluster_config.py so that engines are launched in remote machines and controller is launched on local machine ====&lt;br /&gt;
$ cd &amp;lt;NetID&amp;gt;/.ipython/profile_myprofile&lt;br /&gt;
&lt;br /&gt;
$ vi ipcluster_config.py&lt;br /&gt;
&lt;br /&gt;
Now in this file we modify the following (uncomment the line and make the changes as shown):&lt;br /&gt;
&lt;br /&gt;
c.Cluster.engine_launcher_class = 'ssh'                                               --&amp;gt;line 528&lt;br /&gt;
&lt;br /&gt;
c.SSHControllerLauncher.controller_args = ['--ip=*']                            --&amp;gt;line 1689&lt;br /&gt;
&lt;br /&gt;
c.SSHEngineSetLauncher.engines = {'pnode10':2,'pnode11':2}           --&amp;gt;line 2423&lt;br /&gt;
&lt;br /&gt;
Save the file and exit. Note that in the c.SSHEngineSetLauncher.engines field, you need to put which machines (ramsey,fibonacci,etc) or nodes (pnode01,02,etc) and how many kernels you want from each. In my example I have taken 2 each from pnode10 and pnode11&lt;br /&gt;
&lt;br /&gt;
Next, in file ipcontroller_config.py in the same location:&lt;br /&gt;
&lt;br /&gt;
c.IPController.ip = '0.0.0.0'&lt;br /&gt;
&lt;br /&gt;
Save and exit file. This step is important because otherwise the controller won't be able to listen on the engines and no parallel computation will happen.&lt;br /&gt;
&lt;br /&gt;
==== Step 3. With this you should have parallel computing set up. Examples with IPython and MPI ====&lt;br /&gt;
&lt;br /&gt;
====== Only IPython ======&lt;br /&gt;
Launch IPython with the profile you created and edited:&lt;br /&gt;
&lt;br /&gt;
$ ipython3 --profile=myprofile&lt;br /&gt;
&lt;br /&gt;
Code:&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
import ipyparallel as ipp&lt;br /&gt;
&lt;br /&gt;
def parallel_example():&lt;br /&gt;
&lt;br /&gt;
    return f&amp;quot;Hello World!!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; request a cluster&lt;br /&gt;
&lt;br /&gt;
with ipp.Cluster() as rc:&lt;br /&gt;
&lt;br /&gt;
    # get a view on the cluster&lt;br /&gt;
&lt;br /&gt;
    view = rc.load_balanced_view()&lt;br /&gt;
&lt;br /&gt;
    # submit the tasks&lt;br /&gt;
&lt;br /&gt;
    asyncresult = view.map_async(parallel_example)&lt;br /&gt;
&lt;br /&gt;
    # wait interactively for results&lt;br /&gt;
&lt;br /&gt;
    asyncresult.wait_interactive()&lt;br /&gt;
&lt;br /&gt;
    # retrieve actual results&lt;br /&gt;
&lt;br /&gt;
    result = asyncresult.get()&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;\n&amp;quot;.join(result))&lt;br /&gt;
&lt;br /&gt;
[[File:IPython_Parallel_Computing_Example.png|alt=|frameless|635x635px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On execution of this code:&lt;br /&gt;
&lt;br /&gt;
[[File:IPython_Parallel_Computing_Result_1.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%201.png|alt=|655x655px]][[File:IPython_Parallel_Computing_Result_2.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%202.png|alt=|frameless|658x658px]]&lt;br /&gt;
&lt;br /&gt;
====== Example with MPI ======&lt;br /&gt;
Code (taken from &amp;lt;nowiki&amp;gt;https://ipyparallel.readthedocs.io/en/latest/&amp;lt;/nowiki&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def mpi_example():&lt;br /&gt;
     from mpi4py import MPI&lt;br /&gt;
     comm = MPI.COMM_WORLD&lt;br /&gt;
     return f&amp;quot;Hello World from rank {comm.Get_rank()}. total ranks={comm.Get_size()}&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 # request an MPI cluster with 4 engines&lt;br /&gt;
 with ipp.Cluster(engines='mpi', n=4) as rc:&lt;br /&gt;
     # get a broadcast_view on the cluster, which is best suited for MPI-style computation&lt;br /&gt;
     view = rc.broadcast_view()&lt;br /&gt;
     # run the mpi_example function on all engines in parallel&lt;br /&gt;
     r = view.apply_sync(mpi_example)&lt;br /&gt;
     # retrieve and print the result from the engines&lt;br /&gt;
     print(&amp;quot;\n&amp;quot;.join(r))&lt;br /&gt;
&lt;br /&gt;
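The mpi_example above only reports each engine's rank. As a further check that the engines can actually communicate over MPI, a collective such as allreduce can be run the same way (a minimal sketch along the lines of the example above; with n=4 the summed ranks come to 6):&lt;br /&gt;
&lt;br /&gt;
 import ipyparallel as ipp&lt;br /&gt;
 &lt;br /&gt;
 def mpi_allreduce_example():&lt;br /&gt;
     from mpi4py import MPI&lt;br /&gt;
     comm = MPI.COMM_WORLD&lt;br /&gt;
     # every rank contributes its rank number; all ranks receive the sum&lt;br /&gt;
     total = comm.allreduce(comm.Get_rank(), op=MPI.SUM)&lt;br /&gt;
     return f'rank {comm.Get_rank()}: sum of all ranks = {total}'&lt;br /&gt;
 &lt;br /&gt;
 with ipp.Cluster(engines='mpi', n=4) as rc:&lt;br /&gt;
     view = rc.broadcast_view()&lt;br /&gt;
     print('\n'.join(view.apply_sync(mpi_allreduce_example)))&lt;br /&gt;
&lt;br /&gt;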
Result of the mpi_example run above:&lt;br /&gt;
[[File:IPython Parallel with MPI.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20with%20MPI.png|left|frameless|768x768px]]&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=215</id>
		<title>Test Page</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=Test_Page&amp;diff=215"/>
		<updated>2022-10-13T14:07:25Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: Added link for IPython Config&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Test Page ==&lt;br /&gt;
&lt;br /&gt;
This is a place to doodle and test out formatting without having to temporarily mess up an actual page.&lt;br /&gt;
&lt;br /&gt;
= Handy Links =&lt;br /&gt;
&lt;br /&gt;
* [https://e.math.cornell.edu/wiki/index.php/Configuring_Ipython_for_Parallel_Computing Configuring IPython] for Parallel Computing&lt;br /&gt;
*The [https://math.cornell.edu Math Department] Homepage.&lt;br /&gt;
*The Math Department [https://math.cornell.edu/people People Pages].&lt;br /&gt;
*[https://people.as.cornell.edu/saml_login Link to edit] your People page.&lt;br /&gt;
*The [https://webwork2.math.cornell.edu/ WeBWorK] Math Homework System.&lt;br /&gt;
&lt;br /&gt;
For instructors and researchers:&lt;br /&gt;
*Log in to [http://outlook.cornell.edu Cornell Email] on the web.&lt;br /&gt;
*Reset your [https://accounts.math.cornell.edu/panel/ Math Account] Password.&lt;br /&gt;
*Instructors can [https://accounts.math.cornell.edu/panel/invite.php send an invitation] to set up a Math Account to any NetID.&lt;br /&gt;
*[https://pi.math.cornell.edu View] the old server, pi.&lt;br /&gt;
*[https://pi.math.cornell.edu/m/ADMIN/Protected Log in] to pi.&lt;br /&gt;
*The 'Syllabus File' [https://e.math.cornell.edu/apps/courseinfo/ Course Materials] database.&lt;br /&gt;
*[https://e.math.cornell.edu/webdisk Access your files] on the Math system Webdisk.&lt;br /&gt;
*Other ways to access your Math files.&lt;br /&gt;
*Use the Math Department computation machines.&lt;br /&gt;
*How to print at the Math department.&lt;br /&gt;
*Printer activity and availability.&lt;br /&gt;
*How to scan.&lt;br /&gt;
* View the status of the Math systems.&lt;br /&gt;
&lt;br /&gt;
Links for Staff:&lt;br /&gt;
*The [https://dynomite.math.cornell.edu Department Database].&lt;br /&gt;
*How to connect to the staff file share.&lt;br /&gt;
*How to connect to your work computer from home.&lt;br /&gt;
* Math Department Student [https://e.math.cornell.edu/apps/emp Employment] Site.&lt;br /&gt;
**Student Employee [https://e.math.cornell.edu/apps/emp-review/ Performance Review] Site.&lt;br /&gt;
**[https://e.math.cornell.edu/apps/emp/admin Administrative] Login&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
	<entry>
		<id>https://e.math.cornell.edu/wiki/index.php?title=File:IPython_Parallel_with_MPI.png&amp;diff=207</id>
		<title>File:IPython Parallel with MPI.png</title>
		<link rel="alternate" type="text/html" href="https://e.math.cornell.edu/wiki/index.php?title=File:IPython_Parallel_with_MPI.png&amp;diff=207"/>
		<updated>2022-10-13T13:38:25Z</updated>

		<summary type="html">&lt;p&gt;Sl2625: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;IPython Parallel with MPI&lt;/div&gt;</summary>
		<author><name>Sl2625</name></author>
	</entry>
</feed>