Change_box with MPI

Dear LAMMPS users,

Hi everyone, I’m writing because I have a practical question about the change_box command.
I’m running the LAMMPS distribution of 23 Jun 2022.
In short, after an equilibration I want to elongate the box along one axis while keeping the atom coordinates unchanged, i.e., without remapping.
To do this I use the change_box command:

change_box all z final <zlo> <zhi>

and everything works fine.
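For concreteness, this is the kind of call I am making (the bounds are placeholder numbers; without the remap keyword, change_box leaves the atom coordinates untouched):

change_box all z final -20.0 60.0 units box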

The problem arises when I run it with MPI.
I believe this is because the box changes instantaneously and the cores do not have time to communicate the change, thus leading to the error:

Bond atom missing in image check

On the other hand, if I run the same script with OPENMP everything works fine; I believe that is because all the cores are treated as a single process.

My first question is: am I understanding this correctly?

Now my problem is that I also want to run my simulation with GPU acceleration. Initially I was running it with MPI and the GPU package, but since this problem arose I have been running with OPENMP and the GPU package.
To do this I set the environment variable OMP_NUM_THREADS to the number of cores.
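For reference, this is roughly how I launch the two variants (the core count, GPU count, and input file name are hypothetical):

# MPI run with the GPU package (the one that fails for me):
mpirun -np 8 lmp -sf gpu -pk gpu 1 -in in.script

# thread-only run: a single process, OPENMP threads plus the GPU package:
export OMP_NUM_THREADS=8
lmp -sf hybrid gpu omp -pk gpu 1 -pk omp 8 -in in.script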

It seems to work, but my concern is about efficiency. Will it be as fast as using mpirun, or will it be slower?

If it is slower, is there a way to use change_box with MPI?

Best regards

Daniele

It is difficult to comment on your observations without seeing your complete input file. It would be even better if you could construct a simplified and small(!) test system that can be run very fast and exhibits the same behavior, and then post it here (note that it does not have to produce a meaningful simulation, so the equilibration can be very short, and so on).

Without the input, I have to guess. Given your observations, it is likely that the issue is related to the domain decomposition. When using (only) thread parallelization, you cannot get the error you are reporting, assuming you have fully periodic boundaries.

In principle, the change_box command should be suitable to handle arbitrary box changes and communicate atoms properly. Please see the (many) notes in the documentation for the command. So having a small reproducer input deck would be helpful to determine if this is due to a bug in the code that may need correcting.

That said, I can think of a couple of workarounds:

  1. Since domain decomposition is not an issue when using a single MPI rank, you can break your simulation into three inputs: write out a restart or data file (the latter is more generic) at the end of one input, read it back at the beginning of the following input, and run the box-change part with just one processor (see the first sketch after this list). This also addresses the possible need to change the distribution of subdomains when changing the box, as noted in the change_box documentation.
  2. You can try to perform the box change in a more gradual way using fix deform and a run (see the second sketch after this list). If you unfix your time integration fixes before the run, atoms will not move, and the reneighboring will redistribute the atoms across subdomains. After that you can remove fix deform with unfix and reissue the time integration fix commands.
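Here is a minimal sketch of the first workaround; the file names and box bounds are hypothetical:

# end of input 1 (equilibration, run in parallel with mpirun)
write_data equilibrated.data

# input 2, run on a single processor only
read_data equilibrated.data
change_box all z final -20.0 60.0 units box
write_data elongated.data

# input 3 (production, run in parallel with mpirun again)
read_data elongated.data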
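And a minimal sketch of the second workaround, assuming a hypothetical fix-ID for your integrator and hypothetical bounds and thermostat settings; remap none keeps the atom coordinates fixed while the box grows:

# suspend time integration so atoms do not move during the box change
unfix mynvt

# grow the box gradually over a short run; atoms are not remapped
fix stretch all deform 1 z final -20.0 60.0 units box remap none
run 1000
unfix stretch

# restore time integration
fix mynvt all nvt temp 300.0 300.0 0.1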