Question about parallel Python inside LAMMPS

Dear all,

I am trying to use parallel Python inside LAMMPS. For example, inside a LAMMPS input script I would like to execute a file containing parallelized Python code (using mpi4py). I have tried to do so, but I get many error messages from different cores:

Rank 0 [Mon Jan 28 11:32:09 2019] [c0-0c0s2n0] application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

From the manual, every core executes the Python code. But I am wondering: does LAMMPS support the execution of parallelized Python code using mpi4py, or is this just because LAMMPS is not properly installed?

Many thanks,
Yafan

yes, it is technically possible, but there are constraints: python and
LAMMPS have to use the same MPI library, you must not call MPI_Init()
a second time, and you have to be very careful with your MPI
programming in python.

the cleanest way to do this, in my personal opinion, would be to use
the LAMMPS python module: launch (and initialize) parallel python
first and then execute LAMMPS commands through the library interface
from python.
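as a minimal sketch of that approach (assuming mpi4py and the LAMMPS
python module are installed and built against the same MPI library;
the input file name is a placeholder):

    # driver sketch: mpi4py initializes MPI once, then LAMMPS is driven
    # through its library interface; launch with e.g.
    #   mpirun -np 4 python driver.py

    def run_lammps(infile):
        # imports live inside the function only so the sketch can be read
        # without an MPI stack present; mpi4py calls MPI_Init() exactly once
        # on first import, so LAMMPS must not initialize MPI a second time
        from mpi4py import MPI
        from lammps import lammps

        lmp = lammps()      # no communicator given -> runs on MPI_COMM_WORLD
        lmp.file(infile)    # every rank executes the same input in parallel
        lmp.close()

    # under mpirun, call e.g.: run_lammps("in.melt")  # placeholder file name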

axel.

Dear Axel,

Thank you so much for your help. It was indeed an installation problem. After making the MPI libraries the same, the parallel Python code runs perfectly inside my LAMMPS input script.

I am now trying to use the LAMMPS Python module for the entire simulation in parallel, as you suggested. However, I ran into a problem when trying to run different LAMMPS objects on different cores, for example:

  if rank == 0:
      lmp1 = lammps()
      print(lmp1.version())
      lmp1.file('input_A')
      print(lmp1.version())
      lmp1.close()

  if rank == 1:
      lmp2 = lammps()
      print(lmp2.version())
      lmp2.file('input_B')
      print(lmp2.version())
      lmp2.close()

It seems lmp1.file('input_A') and lmp2.file('input_B') are not executed and the simulation freezes at that point. Is it possible to run it this way?

yes, but as i wrote before, you have to be very careful with your MPI
programming, and here you are not.
since you don't pass a specific MPI communicator to each LAMMPS
instance you create, each of them will assume it is operating on the
global MPI communicator MPI_COMM_WORLD.
that is going to create conflicts and problems. what you have to do
is call MPI_Comm_split() with suitable settings and then create the
LAMMPS instances, passing the corresponding sub-communicator to the
constructor.
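for example, a sketch of such a split (the two-group assignment and
the input file names are just for illustration):

    # split MPI_COMM_WORLD into two groups so that each group runs its
    # own independent LAMMPS instance on its own sub-communicator

    def group_of(rank):
        # even ranks form group 0 (runs input_A), odd ranks group 1 (input_B)
        return rank % 2

    def run_split():
        # imports inside the function so the sketch reads without MPI present
        from mpi4py import MPI
        from lammps import lammps

        world = MPI.COMM_WORLD
        me = world.Get_rank()
        color = group_of(me)
        subcomm = world.Split(color=color, key=me)  # MPI_Comm_split()

        lmp = lammps(comm=subcomm)  # pass sub-communicator to the constructor
        lmp.file('input_A' if color == 0 else 'input_B')
        lmp.close()
        subcomm.Free()

    # launch with e.g.: mpirun -np 2 python split_run.py  (calling run_split())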

axel.

Got it, many thanks. I just saw that there is a split.py example available…