==== Step 2. We modify ipcluster_config.py so that engines are launched on remote machines and the controller is launched on the local machine ====
$ cd <NetID>/.ipython/profile_myprofile
$ vi ipcluster_config.py
Now modify the following in this file (uncomment the relevant lines and make the changes shown):
c.Cluster.engine_launcher_class = 'ssh'
c.SSHControllerLauncher.controller_args = ['--ip=*']
c.SSHEngineSetLauncher.engines = {'pnode10': 2, 'pnode11': 2}  # around line 2423 in the default config file
Save the file and exit. In the c.SSHEngineSetLauncher.engines field you need to specify which machines (ramsey, fibonacci, etc.) or nodes (pnode01, pnode02, etc.) to use and how many engines you want from each. In my example I have taken 2 each from pnode10 and pnode11.
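For instance, a hypothetical alternative allocation (host names taken from the machines mentioned above; adjust the counts to your needs) would look like:

c.SSHEngineSetLauncher.engines = {'ramsey': 4, 'fibonacci': 4}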
Next, in the file ipcontroller_config.py in the same directory, set:
c.IPController.ip = '0.0.0.0'
Save and exit. This step is important: without it the controller will not listen for connections from the engines, and no parallel computation will happen.
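Optionally, you can verify the configuration before running any code by starting the cluster manually with the ipcluster command that ships with ipyparallel; the log output should show engines registering from pnode10 and pnode11:

$ ipcluster start --profile=myprofile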
==== Step 3. Launch IPython with the profile and run a parallel example ====
Launch IPython with the profile you created and edited:
$ ipython3 --profile=myprofile
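If you start a plain ipython3 session instead, you can point ipyparallel at the profile explicitly; the Cluster constructor accepts a profile argument, so a minimal sketch is:

 import ipyparallel as ipp
 
 # request a cluster using the SSH profile configured above
 with ipp.Cluster(profile='myprofile') as rc:
     print(rc.ids)  # the ids of the connected engines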
Code (completed here along the lines of the basic example in the ipyparallel documentation; the screenshots below show the code and its output):

 import time
 import ipyparallel as ipp
 
 def parallel_example():
     # request a cluster
     with ipp.Cluster() as rc:
         # get a view on the cluster
         view = rc.load_balanced_view()
         # submit 25 one-second tasks and collect the results
         result = view.map_sync(time.sleep, [1] * 25)
 
 parallel_example()
[[File:IPython_Parallel_Computing_Example.png|alt=|frameless|635x635px]]
[[File:IPython_Parallel_Computing_Result_1.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%201.png|alt=|655x655px]][[File:IPython_Parallel_Computing_Result_2.png|link=https://e.math.cornell.edu/wiki/index.php/File:IPython%20Parallel%20Computing%20Result%202.png|alt=|frameless|658x658px]]
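Once a cluster for this profile is running (for example via ipcluster start --profile=myprofile), you can also connect to it from a separate session and check where the engines actually live. This is a minimal sketch using the standard ipyparallel Client API; the expected host names follow from the engine configuration above:

 import socket
 import ipyparallel as ipp
 
 # connect to the running cluster for this profile
 rc = ipp.Client(profile='myprofile')
 # a DirectView on all engines
 dview = rc[:]
 # each engine reports its host name; expect pnode10 and pnode11
 print(dview.apply_sync(socket.gethostname))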
==== Step 4. Run an MPI example with ipyparallel ====
Code (taken from the ipyparallel documentation):

 import ipyparallel as ipp
 
 def mpi_example():
     from mpi4py import MPI
     comm = MPI.COMM_WORLD
     return f"Hello World from rank {comm.Get_rank()}. total ranks={comm.Get_size()}"
 
 # request an MPI cluster with 4 engines
 with ipp.Cluster(engines='mpi', n=4) as rc:
     # get a broadcast_view on the cluster which is best
     # suited for MPI style computation
     view = rc.broadcast_view()
     # run the mpi_example function on all engines in parallel
     r = view.apply_sync(mpi_example)
     # Retrieve and print the result from the engines
     print("\n".join(r))
Result: one line per engine, of the form Hello World from rank 0. total ranks=4.
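Since the engines form a real MPI world, point-to-point communication works as well. The following is a hypothetical extension (not part of the original example) that passes each rank's number around a ring using mpi4py's sendrecv:

 import ipyparallel as ipp
 
 def mpi_ring():
     from mpi4py import MPI
     comm = MPI.COMM_WORLD
     rank, size = comm.Get_rank(), comm.Get_size()
     # send our rank to the next engine and receive from the previous one
     token = comm.sendrecv(rank, dest=(rank + 1) % size, source=(rank - 1) % size)
     return f"rank {rank} received {token} from rank {(rank - 1) % size}"
 
 # request an MPI cluster with 4 engines, as in the example above
 with ipp.Cluster(engines='mpi', n=4) as rc:
     print("\n".join(rc.broadcast_view().apply_sync(mpi_ring)))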