KaPy:ParallelGameOfLife
== 15 June 2012: IPython + mpi4py = a parallel Game of Life ==

Timo presents a parallel implementation of the [http://de.wikipedia.org/wiki/Game_of_Life Game of Life], built with [http://ipython.org IPython] and [http://code.google.com/p/mpi4py mpi4py]. The idea: partition the playing field, a 2D [http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html NumPy array], and distribute it across several processors.
=== mpi4py ===
'''Message Passing Interface (MPI)''' is a standard that describes message passing for parallel computations on distributed computer systems. An MPI application usually consists of several communicating processes, all of which are started in parallel at the beginning of program execution. These processes then work together on one problem and exchange data via messages that are sent explicitly from one process to another. One advantage of this approach is that the message passing also works across machine boundaries.
'''[http://code.google.com/p/mpi4py/ mpi4py]''' provides Python bindings for the MPI standard. Object serialization is handled by [http://docs.python.org/library/pickle.html pickle].
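To give a rough idea of what this looks like in practice, here is a minimal point-to-point sketch (not from the talk); the script name and the two-process launch via mpiexec are assumptions for illustration:

<syntaxhighlight lang="python">
# Minimal mpi4py sketch (assumption: launched with "mpiexec -n 2 python demo.py").
# Rank 0 sends an arbitrary Python object to rank 1; pickle handles the serialization.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {"step": 1, "cells": [0, 1, 1, 0]}
    comm.send(data, dest=1, tag=0)    # lowercase send/recv go through pickle
elif rank == 1:
    data = comm.recv(source=0, tag=0)
    print "rank 1 received:", data
</syntaxhighlight>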
=== IPython.parallel ===
IPython has long since stopped being "just" an alternative Python shell; these days it is also a "tool for high level and interactive parallel computing", as its website puts it. The architecture is described [http://ipython.org/ipython-doc/stable/parallel/parallel_intro.html here]. A few basic terms:
* '''Engine''' The IPython engine is a Python instance that takes Python commands over a network connection. Eventually, the IPython engine will be a full IPython interpreter, but for now, it is a regular Python interpreter.
* '''Client''' There is one primary object, the Client, for connecting to a cluster.
* '''View''' For each execution model, there is a corresponding View. These views allow users to interact with a set of engines through the interface.
A few functions to get started with (a short usage sketch follows the list):
* ''View.activate'' Activate IPython magics associated with this View.
* ''View.scatter'' Partition a Python sequence and send the partitions to a set of engines.
* ''View.execute'' Executes code on targets in blocking or nonblocking manner.
* ''View.__getitem__'' Get object(s) by key_s from remote namespace.
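As a warm-up for the full example below, here is a minimal sketch (not from the talk) of how these pieces fit together; it assumes a cluster has already been started, e.g. with ''ipcluster start'':

<syntaxhighlight lang="python">
# Minimal IPython.parallel sketch (assumption: engines already running, e.g. via "ipcluster start -n 4").
from IPython.parallel import Client

c = Client()      # connect to the running cluster
v = c[:]          # a DirectView on all engines
v.block = True    # wait for results instead of returning AsyncResult objects

v.scatter("chunk", range(16))          # partition the list across the engines
v.execute("local_sum = sum(chunk)")    # run code in every engine's namespace
print v["local_sum"]                   # __getitem__ fetches one value per engine
</syntaxhighlight>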
=== Game of Life - Code Example ===
<syntaxhighlight lang="python">
# -*- coding: utf-8 -*-
# <nbformat>2</nbformat>

# <codecell>

from IPython.parallel import Client
c = Client()
v = c[:]
with v.sync_imports():
    import numpy as np
    from mpi4py import MPI
    from itertools import product
v.block = True

# shuffle ids around so that view ids match up with mpi ranks
v.execute("my_rank = MPI.COMM_WORLD.Get_rank()")
ranks = v["my_rank"]
v = c[list(reversed(ranks))]
v.block = True

# enable ipython magic commands for use with this view
v.activate()

# use zasim's display functionality to display a state to ipython
from zasim.display.qt import render_state_array, qimage_to_pngstr
from IPython.core.display import publish_png

def display_state(config, scale=5):
    img = render_state_array(config).scaled(config.shape[0] * scale, config.shape[1] * scale)
    pngstr = qimage_to_pngstr(img)
    publish_png(pngstr)

def build_border(config):
    new = np.zeros((config.shape[0] + 2, config.shape[1] + 2), config.dtype)
    new[1:-1, 1:-1] = config
    return new

# <codecell>
def offset_pos((y, x), (a, b)):
    return (y + a, x + b)

# calculate the standard game of life
def game_of_life(cconf, nconf):
    result = None
    W, H = cconf.shape
    for pos in product(xrange(1, W - 1), xrange(1, H - 1)):
        lu = cconf[offset_pos(pos, (-1, -1))]
        u = cconf[offset_pos(pos, (-1, 0))]
        ru = cconf[offset_pos(pos, (-1, 1))]
        l = cconf[offset_pos(pos, (0, -1))]
        m = cconf[offset_pos(pos, (0, 0))]
        r = cconf[offset_pos(pos, (0, 1))]
        ld = cconf[offset_pos(pos, (1, -1))]
        d = cconf[offset_pos(pos, (1, 0))]
        rd = cconf[offset_pos(pos, (1, 1))]
        nonzerocount = lu + u + ru + l + r + ld + d + rd
        result = m
        if m == 0:
            if nonzerocount == 3:
                result = 1
        else:
            if not (2 <= nonzerocount <= 3):
                result = 0
        nconf[pos] = result

# copy the sides where the opposite side is in the same array
def copy_up_down(conf):
    W, H = conf.shape
    conf[Ellipsis, 0] = conf[Ellipsis, H - 2]
    conf[Ellipsis, H - 1] = conf[Ellipsis, 1]

# copy the sides where the opposite side is on another engine
# this uses MPI "message passing"
def distribute_borders(config):
    W, H = config.shape
    r = MPI.COMM_WORLD.Get_rank()
    size = MPI.COMM_WORLD.Get_size()
    # first, send our left side to the previous engine
    # and receive our right outside from the next engine
    # rows 0 and W-1 are the ghost rows, rows 1 and W-2 the outermost real rows
    MPI.COMM_WORLD.Sendrecv(config[1], (r - 1) % size, 0, config[W - 1], (r + 1) % size, 0)
    # then, send our right side to the next engine and
    # receive our left outside from the previous engine
    MPI.COMM_WORLD.Sendrecv(config[W - 2], (r + 1) % size, 0, config[0], (r - 1) % size, 0)
# <codecell>

# generate a random start configuration
za_d = np.random.randint(0, 2, (40, 40))
#za_d = np.zeros((40, 40))
#za_d[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]

# distribute the starting configuration among all engines
v.scatter("cconf", za_d)

# push the functions we need for calculation to all engines
v.push({"build_border": build_border,
        "game_of_life": game_of_life,
        "distribute_borders": distribute_borders,
        "copy_up_down": copy_up_down,
        "offset_pos": offset_pos})

# generate borders for all stripes on all engines
v.execute("cconf = build_border(cconf)")
# and we need a "new configuration" array, so that we can
# switch nconf and cconf after each step.
v.execute("nconf = cconf.copy()")

# <codecell>

def step():
    # first we run the game of life step reading from cconf writing into nconf
    v.execute("game_of_life(cconf, nconf)")
    # then we copy the borders locally
    v.execute("copy_up_down(nconf)")
    # and then distribute borders among engines
    v.execute("distribute_borders(nconf)")
    # and finally we switch cconf and nconf
    v.execute("cconf, nconf = nconf, cconf")

# <codecell>
for i in range(10):
    step()

# after the steps, we strip the local border on each engine
v.execute("view = cconf[1:-1, 1:-1]")
# and gather all stripes into a big array
conf_all = v.gather("view")
# and display it
display_state(conf_all)
</syntaxhighlight>
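One note on the setup: for the MPI calls in the example to work across engines, the engines themselves have to be launched under MPI (for instance via IPython's MPI engine launcher), so that they all share a single MPI.COMM_WORLD. A hypothetical sanity check, assuming such a cluster is running:

<syntaxhighlight lang="python">
# Sanity-check sketch (assumption: engines started under MPI, e.g. "ipcluster start -n 4 --engines=MPI").
# Every engine should report the same world size and a distinct rank.
from IPython.parallel import Client

c = Client()
v = c[:]
v.block = True
with v.sync_imports():
    from mpi4py import MPI

v.execute("rank = MPI.COMM_WORLD.Get_rank(); size = MPI.COMM_WORLD.Get_size()")
print zip(v["rank"], v["size"])    # e.g. [(0, 4), (1, 4), (2, 4), (3, 4)]
</syntaxhighlight>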
=== Further Reading ===
* [http://ipython.org/ipython-doc/dev/parallel/index.html Using IPython for parallel computing]