Disabling PyLammps' native parallelization

Hello all,

I’m trying to develop Python Monte Carlo software that takes advantage of LAMMPS’ efficient calculations and wide range of potential implementations. However, for this specific Monte Carlo technique, I need a number of parallel calculations happening, each with its own instance of LAMMPS.

When I try to do parallel calculations in LAMMPS with mpi4py, the results from every processor match the head-node results. It looks as if the LAMMPS Python script has already been parallelized under the hood, but it is far more efficient for me to have multiple instances of the library, with each process owning its own. Is there a way to disable the built-in parallelization of the LAMMPS Python library in favour of having one instance of LAMMPS on each processor?

I’ve been stuck for a while now and would appreciate any insights available.

Thank you,
Collin Wilkinson
Penn State

Yes, you can use standard MPI procedures for this. When creating the LAMMPS instance, you can pass in a communicator. If you create custom communicators with MPI_Comm_split() and hand one to each LAMMPS instance, you can chop the default parallelization over MPI_COMM_WORLD into as many slices as you have ranks on the world communicator, or any subset you like.
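A minimal sketch of this in Python, assuming mpi4py and the LAMMPS Python module are installed and built with MPI support; the input file name "in.melt" is just a placeholder:

```python
from mpi4py import MPI
from lammps import lammps

world = MPI.COMM_WORLD
me = world.Get_rank()

# Split MPI_COMM_WORLD into single-rank sub-communicators: using each
# rank's own number as the color puts exactly one rank in each slice.
my_comm = world.Split(color=me, key=0)

# Hand the sub-communicator to the LAMMPS constructor. This instance
# parallelizes only over my_comm, so it runs independently on this rank
# and no longer shares work (or results) with the other ranks.
lmp = lammps(comm=my_comm)
lmp.file("in.melt")   # each rank may run its own input script here
lmp.close()

my_comm.Free()
```

To split into groups of, say, 4 ranks instead of single ranks, you would use `color=me // 4` so consecutive blocks of four ranks share one LAMMPS instance.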

Axel.

Thank you for your help. This solution was easy and efficient, and I very much appreciate it.

Thank you,
Collin