mpi4py 2.0.0 / LAMMPS

I just noticed that mpi4py 2.0.0 was released in mid-October:
https://bitbucket.org/mpi4py/mpi4py/downloads
https://groups.google.com/d/topic/mpi4py/yUYn-yCf4x4/discussion

I believe this should support passing a communicator from Python to C as per Steve’s July '14 discussion on mpi4py’s google groups:
https://groups.google.com/d/topic/mpi4py/jPqNrr_8UWY/discussion

Has anyone tried 2.0.0 with LAMMPS? Any unusual experiences?

cheers,

Brian

I just noticed that mpi4py 2.0.0 was released in mid-October:

thanks for reporting this. i would expect that there are several
people on this list looking forward to using it.

https://bitbucket.org/mpi4py/mpi4py/downloads
https://groups.google.com/d/topic/mpi4py/yUYn-yCf4x4/discussion

I believe this should support passing a communicator from Python to C as per
Steve's July '14 discussion on mpi4py's google groups:
https://groups.google.com/d/topic/mpi4py/jPqNrr_8UWY/discussion

Has anyone tried 2.0.0 with LAMMPS? Any unusual experiences?

just tried it and it seems to work fine.

of course, in order to pass a communicator to the LAMMPS object, some
modifications to the LAMMPS python wrapper need to be made.
please try out the attached version of lammps.py (after uncompressing
it). it contains some additional code that tests whether mpi4py is
available (not whether it is loaded) and whether it is version 2.x.y;
if so, you can pass a communicator with comm=XXX.
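
for illustration, here is a minimal sketch of the kind of check and
handle conversion this involves (hypothetical; the attached lammps.py
may do it differently), using the MPI._sizeof() and MPI._addressof()
helpers available in mpi4py 2.x:

from ctypes import c_int, c_void_p, sizeof

try:
    import mpi4py
    from mpi4py import MPI
    # only consider mpi4py 2.x usable for handle passing
    has_mpi4py_v2 = int(mpi4py.__version__.split('.')[0]) >= 2
except ImportError:
    has_mpi4py_v2 = False

def as_c_comm(comm):
    # turn an mpi4py communicator into the ctypes value that can be
    # handed to lammps_open() through the C library interface
    if MPI._sizeof(MPI.Comm) == sizeof(c_int):
        MPI_Comm = c_int        # MPICH-style integer handles
    else:
        MPI_Comm = c_void_p     # Open MPI-style pointer handles
    return MPI_Comm.from_address(MPI._addressof(comm))

if the check fails, the wrapper would simply ignore the comm=
argument and fall back to MPI_COMM_WORLD.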

some trivial test code:

from mpi4py import MPI
from lammps import lammps

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
# split the world communicator into two equal halves
if rank < size // 2:
    color = 0
else:
    color = 1
split = comm.Split(color, key=0)
size = split.Get_size()
rank = split.Get_rank()

# each half creates its own LAMMPS instance on its sub-communicator
lmp = lammps(comm=split)
lmp.command("lattice fcc 0.8442")
lmp.command("region box block 0 4 0 4 0 4")
lmp.command("create_box 1 box")

when running: mpirun -np 4 python example.py

with mpi4py v2.0.0 you should get something like this:

LAMMPS (6 Nov 2015-ICMS)
WARNING: OMP_NUM_THREADS environment is not set. (../comm.cpp:90)
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
  1 by 1 by 2 MPI processor grid
LAMMPS (6 Nov 2015-ICMS)
WARNING: OMP_NUM_THREADS environment is not set. (../comm.cpp:90)
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
  1 by 1 by 2 MPI processor grid
Total wall time: 0:00:00
Total wall time: 0:00:00

otherwise, you should get:

LAMMPS (6 Nov 2015-ICMS)
WARNING: OMP_NUM_THREADS environment is not set. (../comm.cpp:90)
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
  1 by 2 by 2 MPI processor grid
Total wall time: 0:00:00

so, in the unsupported case, the passed-in communicator is ignored.
the underlying heuristic implicitly assumes that you are using mpi4py
when passing a communicator to the lammps object constructor.

axel.

lammps.py.gz (2.33 KB)

Axel’s hooks to support this, and some example scripts
will be in a patch later today. It should allow a Python script
to launch multiple instances of LAMMPS, e.g. to run
multiple jobs on subsets of allocated procs.

Steve

Axel's hooks to support this, and some example scripts
will be in a patch later today. It should allow a Python script
to launch multiple instances of LAMMPS, e.g. to run
multiple jobs on subsets of allocated procs.

... or concurrently run some other (parallel) computation on the MPI
ranks not used by the LAMMPS instance.
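
as a rough sketch of that pattern (assuming the comm= hook from the
modified lammps.py above; "in.melt" is just a placeholder input file):

from mpi4py import MPI
from lammps import lammps
import numpy as np

world = MPI.COMM_WORLD
# lower half of the ranks runs LAMMPS, upper half does other work
color = 0 if world.Get_rank() < world.Get_size() // 2 else 1
sub = world.Split(color, key=0)

if color == 0:
    # LAMMPS instance restricted to this sub-communicator
    lmp = lammps(comm=sub)
    lmp.file("in.melt")
else:
    # unrelated parallel work on the remaining ranks, running
    # concurrently with the LAMMPS partition
    local = np.random.rand(1000000).sum()
    total = sub.allreduce(local, op=MPI.SUM)

world.Barrier()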

axel.