Error in running with mpi with brownian/poly

We are working towards simulating bi-disperse dense suspensions in which the particles interact via hydrodynamics (lubrication), contact (friction) and pairwise Brownian motion. As the initial condition we use a well-relaxed, energy-minimized bi-disperse system of 1000 spherical particles with a 1:1.4 size ratio, at volume fractions ranging from 0.35 to 0.55, with the number of particles balanced by the volume fraction (data.file). We combine lubricate/poly (in our code we use bmpoly instead, which simply adds a torque calculation to the force calculation) with brownian/poly to include both hydrodynamic interactions and Brownian motion, and then shear the simulation box (with Lees-Edwards periodic boundary conditions) at different shear rates. When we match the cutoff values for both force calculations and run the simulation (run file attached at the end) with the mpirun command below (note: the LAMMPS version used is 23Jun2022)

mpirun -n 8 /home/kxl785/apps/lammps-23Jun2022/src/lmp_mpi -in in.run

we start to lose atoms in the simulation and get the following error.

ERROR: Lost atoms: original 1000 current 123 (…/thermo.cpp:481)

We did not have this issue with a monodisperse system (using lubricate and brownian), where the simulation runs without any error. We have also tried running the bi-disperse system with the shear turned off (which excludes much of the hydrodynamics), and it then runs without errors. The simulations also run fine if the Brownian motion is turned off. This suggests that the error comes from the brownian/poly part when hydrodynamic forces are taken into account, but we are not sure exactly where it comes from.

All of the failing simulations above were run on 8 MPI processes (running the monodisperse system on 8 processes was still fine). However, when we reduce the run to a single process, the simulation runs without error. Is there any reason why we are losing particles when running on a larger number of MPI processes?
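For reference, these are the kinds of diagnostics we could add before the run command to narrow down where the atoms are lost; the cutoff value and dump interval below are placeholders, not settings from our actual run:

# diagnostics sketch (placeholder values)
thermo_modify lost warn                          # warn instead of aborting, to watch the lost-atom count
comm_modify mode single vel yes cutoff 3.0       # enlarge the ghost-atom communication cutoff as a test
dump dbg all custom 1000 debug.dump id type x y z vx vy vz fx fy fz radius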

in.run file (input script):

#SETTINGS
atom_style sphere
comm_modify mode single vel yes
newton off

#READ THE PARTICLE CONFIGURATION
read_data data.file

#SPECIFY THE PARTICLE-PARTICLE INTERACTION
pair_style hybrid/overlay granular lubricate/bmpoly 0.1 1 1 0.001 0.05 1 0 brownian/poly 0.1 1 1 1.001 1.05 2.0 1234123 1 1
pair_coeff * * granular hooke 10000 0 tangential linear_history 7000 0 0.01
pair_coeff * * lubricate/bmpoly
pair_coeff * * brownian/poly 1.001 1.05

#DO THE STRESS CALC
compute Temp all temp/profile 1 1 1 xyz 10 10 10
compute str all pressure Temp

#SPECIFY THE OUTPUTS
thermo_style custom time c_str[1] c_str[2] c_str[3] c_str[4] c_str[5] c_str[6]
thermo 10000
thermo_modify format line "%.0f %12.4e %12.4e %12.4e %12.4e %12.4e %12.4e"
dump id all custom 10000 run.dump id x y z radius
log run.log

#SPECIFY THE TIMESTEP, THE INTEGRATION SCHEME AND RUN
timestep 0.00001
fix 1 all nve/sphere
fix 2 all deform 1 xy erate 0.0001 remap v
run 30000000

Reply:

This LAMMPS version (23Jun2022) is over two years old. You should upgrade to make certain that you have the latest bug fixes and improvements.

This usually means that you have atoms that accelerate a lot, which is commonly the result of either bad force field parameters or bad choices of simulation settings (e.g. too large a timestep).
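As one possible sketch (the output interval is a placeholder), you could track the largest per-atom velocity and force in the thermo output, so a runaway atom shows up before the run crashes:

# sketch: report the maximum squared velocity and force over all atoms
variable vsq atom vx*vx+vy*vy+vz*vz
variable fsq atom fx*fx+fy*fy+fz*fz
compute vmax all reduce max v_vsq
compute fmax all reduce max v_fsq
thermo_style custom step time c_vmax c_fmax
thermo 100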

It is your job to determine which, either by setting up a small enough test system where you can compute the forces (and velocities) manually and compare them to the results computed by LAMMPS, or by making empirical tests varying the relevant parameters.
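As a sketch of what such a minimal test could look like (all numbers below are placeholders, and the pair_style/pair_coeff lines from your in.run would go where indicated):

# minimal two-particle test sketch (placeholder values)
units lj
atom_style sphere
newton off
comm_modify mode single vel yes
region box block -4 4 -4 4 -4 4
create_box 2 box
create_atoms 1 single -0.65 0.0 0.0 units box
create_atoms 2 single  0.65 0.0 0.0 units box
set type 1 diameter 1.0 density 1.0
set type 2 diameter 1.4 density 1.0
group small type 1
group large type 2
velocity small set  0.01 0.0 0.0
velocity large set -0.01 0.0 0.0
# insert the exact pair_style / pair_coeff lines from in.run here
fix 1 all nve/sphere
dump chk all custom 1 check.dump id type x y z vx vy vz fx fy fz radius
run 0 post no

With run 0 the forces for the initial configuration are computed and dumped without moving the atoms, so they can be compared directly against a hand calculation.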

Please note that for extended particles the mass changes with the diameter, and smaller masses will require shorter timesteps, something that cannot happen in a monodisperse system.
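As a rough illustration of the scaling (my numbers, not a statement about your specific system): for spheres of equal density the mass grows with the cube of the diameter, and for a linear (hooke-type) contact the stable timestep scales roughly with the square root of the smallest mass over the stiffness,

\[
m = \frac{\pi}{6}\,\rho\, d^{3}, \qquad
\Delta t_{\text{stable}} \propto \sqrt{\frac{m_{\min}}{k_n}}, \qquad
\frac{m_{\text{small}}}{m_{\text{large}}} = \left(\frac{1}{1.4}\right)^{3} \approx 0.36 ,
\]

so with a 1:1.4 size ratio the smaller species is roughly 2.7 times lighter, and a timestep that was stable for the monodisperse case may need to be reduced by roughly a factor of \(\sqrt{2.7} \approx 1.7\).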

Are you sure that adding these three pair styles results in meaningful forces and velocities?
Are you following some publication using exactly this combination?

There is no such pair style (lubricate/bmpoly) in the LAMMPS distribution.

Doesn’t your system need any equilibration?