Dear LAMMPS users,
I am using LAMMPS (stable version, 3 Mar 2020) to model the evaporation of an n-alkane nano-droplet (OPLS-AA force field; 55,200 atoms in total in the liquid phase, placed at the center of a box with an edge length of 50 nm) in nitrogen (95,500 atoms in total in the gas phase, zero charge, constrained with fix shake). The problem is that the PPPM kspace time is too large (82.17%), as shown in the log file below, although I know it is normal for the long-range Coulomb calculation to take a significant share of the time.

I have tried some of the methods suggested in previous mailing-list threads to reduce the kspace time. For example, I adjusted the mesh size and interpolation order with kspace_modify order 2/4/6/7. Comparing the different orders over 5000-step test runs, the kspace time still stayed around 85%. I also tried fix tune/kspace 100, but it stopped with the error: Fatal error in MPI_Sendrecv: Message truncated, error stack. Message from rank 257 and tag 0 truncated; 2560 bytes received but buffer size is 4.
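For reference, the order variants I compared looked like the following (each tested one at a time, in a separate 5000-step run; everything else in the script was left unchanged):

```
# tested one at a time, 5000 steps each;
# kspace time stayed around 85% in every case
kspace_modify order 2
kspace_modify order 4
kspace_modify order 6
kspace_modify order 7
```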
I would really appreciate it if anyone could give me suggestions on how to reduce the kspace time and accelerate the simulation of this kind of system.
The parameters of my input script:
pair_style hybrid lj/cut/coul/long 12.0 12.0
pair_modify mix geometric tail yes
kspace_style pppm 1.0e-5
minimize 1.0e-4 1.0e-6 100 1000
fix SHAKE all shake 0.0001 20 0 b 19 20 21……
velocity all create 360 12345
neighbor 2.0 bin
neigh_modify delay 0 every 1 check yes
fix 1 all nvt temp 360 360 200
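For completeness, one variant I have not yet tested (a hypothetical adjustment, not part of the script above) would be to enlarge the real-space Coulomb cutoff, which should shift work from the PPPM kspace part to the pair computation, at the cost of a more expensive neighbor list:

```
# hypothetical, untested variant: larger real-space cutoff (14.0 instead
# of 12.0) moves work from kspace (PPPM) to the real-space pair part
pair_style   hybrid lj/cut/coul/long 14.0 14.0
kspace_style pppm 1.0e-5
```

Whether this actually improves the overall time balance on 480 MPI tasks is exactly what I am unsure about, so any advice on this trade-off would also be welcome.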
The log file:
Loop time of 63457.5 on 480 procs for 1000000 steps with 104430 atoms
Performance: 2.723 ns/day, 8.814 hours/ns, 15.759 timesteps/s
99.7% CPU use with 480 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total