rigid/small

Hello LAMMPS users,

In an attempt to improve the parallel efficiency of our workload, we
decided to try 'fix rigid/small' instead of 'fix rigid'. However, we get
multiple errors of the form "ERROR on proc #: Rigid body atoms # # missing
on proc # at step 0 (src/RIGID/fix_rigid_small.cpp:3312)". We are simulating
large numbers of rigid bodies, each made of a large number of atoms.

-What does this really mean?
-Should we not use 'rigid/small' and use just 'rigid'?

Regards,
Luis



-What does this really mean?

It means that not all the constituent atoms of a rigid body could be found in a processor's local subdomain (its owned atoms plus its ghost atoms) when running in parallel.
Please also note the following, which is stated in the fix rigid/small documentation:
To use the rigid/small styles the ghost atom cutoff must be large enough to span the distance between the atom that owns the body and every other atom in the body. This distance value is printed out when the rigid bodies are defined. If the pair_style cutoff plus neighbor skin does not span this distance, then you should use the comm_modify cutoff command with a setting epsilon larger than the distance.
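As a rough illustration (all numbers below are made up, not taken from your system): suppose the rigid body setup reports a maximum body extent of 14.0 distance units, while the pair cutoff is 10.0 and the neighbor skin is 2.0, so ghost atoms are only communicated out to 12.0. Then something along these lines in the input script should make the missing-atom errors go away:

  pair_style   lj/cut 10.0            # ghost cutoff from pair style ...
  neighbor     2.0 bin                # ... plus skin = 12.0 < body extent
  comm_modify  cutoff 14.5            # a bit larger than the reported 14.0
  fix          1 all rigid/small molecule

The comm_modify line is the only essential addition; the other lines are just there to show where the 12.0 comes from.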

-Should we not use ‘rigid/small’ and use just ‘rigid’?

As with so many things in science, the answer is "it depends". Your rigid objects are obviously not small enough for your current settings. You can try to increase the communication cutoff (and thus the number of ghost atoms in each subdomain) in the hope of making every body fully visible to the subdomain that owns it, but that adds overhead at every step, since the position data for the extra ghost atoms has to be communicated and updated. On the other hand, fix rigid requires collective communication operations across all MPI ranks, which can also reduce parallel efficiency. So you need to figure out the relative cost of one versus the other; it depends on how large your rigid objects are (they are obviously not "small") and on how many MPI ranks you are using.
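For what it is worth, the two alternatives would look something like the following in an input script (the fix ID, group, and the molecule bodystyle are placeholders for whatever you use now):

  # option A: fix rigid/small, needs a large enough ghost atom cutoff
  comm_modify  cutoff 14.5
  fix          1 all rigid/small molecule

  # option B: fix rigid, uses collective communication instead
  fix          1 all rigid molecule

Timing a short run with each variant and comparing the loop time and communication breakdown printed at the end of the log is the quickest way to see which one wins for your body sizes and rank counts.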

BTW: another step to improve parallel efficiency would be to check whether your pair styles are supported by the USER-OMP (or USER-INTEL) package. Then you could compare a combination of OpenMP (say 2-3 threads per rank) plus MPI against pure MPI. By reducing the number of MPI ranks, you also reduce the overhead of the collective MPI communication operations.
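In case it helps, a hybrid run with USER-OMP typically looks something like this (the executable name, rank and thread counts, and input file name are placeholders for your setup):

  # pure MPI, 16 ranks
  mpirun -np 16 lmp_mpi -in in.rigid

  # hybrid: 8 MPI ranks x 2 OpenMP threads each
  env OMP_NUM_THREADS=2 mpirun -np 8 lmp_mpi -sf omp -pk omp 2 -in in.rigid

The -sf omp switch substitutes the /omp variants of styles where they exist, and -pk omp 2 sets the number of threads per rank (the same as putting "package omp 2" into the input script).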

axel.