Ghost atom cutoff

Dear all,

Hi, I’m running a simulation with LAMMPS of a system that has a very short pairwise interaction cutoff. This is intentional, and I have to simulate it this way.

I’m doing an NPT equilibration of the system, and after a while I get the warning

Communication cutoff is shorter than a bond length based estimate. This may lead to errors.

and shortly afterwards the simulation stops. I understand why this happens, so I tried to set the ghost atom cutoff with the command

comm_modify mode single cutoff 30

but the problem persists, without any error or warning in either the output or the log.lammps file.

Does anyone have any advice?

Thanks in advance

Daniele

Apparently, the issue you are seeing is independent of the communication cutoff.
Without a small(!) test input deck that can reproduce it, it will be difficult to determine.

You are absolutely right, sorry. Here is an example input:

package gpu 1 neigh yes


boundary    p p p            # boundary condition along x, y, z



units           real                 #  units
atom_style      molecular                #  style: atomic, molecular, ..




read_data random_position.data                   # data of starting configuration

include force.data


#   bond
bond_style               harmonic
bond_coeff          1 1.20 5
bond_coeff          2 20 8.305
bond_coeff          3 20 8.365
bond_coeff      4 1.20 3.8

angle_style harmonic

angle_coeff         1 20 180

comm_modify mode single cutoff 30

minimize 1.0e-6 1.0e-6 1000 1000

write_data minimize.data


# set starting velocity
velocity                all create 150 87287  mom yes rot yes dist gaussian

#  check of neighbor list
neighbor            10 bin              # skin length and algorithm used [bin]
neigh_modify        one 5000 every 1 delay 10 check yes  # build neighbor list every ... ts


################# NPT ###########################################


fix         1 all langevin 150 150 1000 34567      # Langevin thermostat: T_i T_f damp seed
fix         2 all nph iso 0.986923 0.986923 1000   # barostat: P_i P_f damp (with fix langevin this samples NPT)

timestep               1

dump    1 all dcd 1000 NPT.dcd     # save coordinates in *.dcd file every ... ts

thermo  1000

run     10000

unfix   1
unfix   2
undump  1

where force.data is just a file with the pair_style and pair_coeff commands, like:

#data file for lammps simulation with salt = 800 mM and T = 303.15 K 

pair_style  table/gpu linear 19642 
pair_coeff 1 1 ./tabelle/LYS_LYS.txt LYS_LYS 11
pair_coeff 1 2 ./tabelle/LYS_ADE.txt LYS_ADE 11
pair_coeff 1 3 ./tabelle/LYS_CYT.txt LYS_CYT 11
pair_coeff 1 4 ./tabelle/LYS_GUA.txt LYS_GUA 11
pair_coeff 1 5 ./tabelle/LYS_THY.txt LYS_THY 11
pair_coeff 2 2 ./tabelle/ADE_ADE.txt ADE_ADE 11
pair_coeff 2 3 ./tabelle/ADE_CYT.txt ADE_CYT 11
pair_coeff 2 4 ./tabelle/ADE_GUA.txt ADE_GUA 11
pair_coeff 2 5 ./tabelle/ADE_THY.txt ADE_THY 11
pair_coeff 3 3 ./tabelle/CYT_CYT.txt CYT_CYT 11
pair_coeff 3 4 ./tabelle/CYT_GUA.txt CYT_GUA 11
pair_coeff 3 5 ./tabelle/CYT_THY.txt CYT_THY 11
pair_coeff 4 4 ./tabelle/GUA_GUA.txt GUA_GUA 11
pair_coeff 4 5 ./tabelle/GUA_THY.txt GUA_THY 11
pair_coeff 5 5 ./tabelle/THY_THY.txt THY_THY 11
variable final_T equal 303.15 

As said before, if I remove the comm_modify mode single cutoff 30 command I don’t get any warning in the log or output, but the simulation still crashes. The only error message is in the error file and refers to a CUDA call:

Cuda driver error 700 in call at file '/dev/shm/slurm_job.1760636/propro01/spack-stage-lammps-20220623-olnugcpph5bhx6hmulgwngc3jme4j4aw/spack-src/lib/gpu/geryon/nvd_timer.h' in line 76. 

Many thanks for your time.

Daniele

So I would then run the system without the GPU to see if the error can be reproduced. There you would get a more useful error message and could do some debugging.
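A minimal sketch of the changes for a CPU-only test run (assuming the same table files work with the plain pair_style table):

```
# package gpu 1 neigh yes          # disabled for CPU-only debugging
pair_style  table linear 19642     # plain CPU version of table/gpu; pair_coeff lines unchanged
```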

GPU kernels assume that the input is valid and can be computed without failure, since testing for and reporting errors can be very detrimental to performance. You also need to check whether your GPU support was compiled with single or mixed precision: those configurations can sometimes produce numerical errors that do not occur with the (slower) double-precision configuration.

A common problem with tabulated potentials is that you may have to compute forces for distances outside the tabulated range.
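For pair_style table, the tabulated distance range is declared in the header of each table file. A sketch with hypothetical values (the section keyword must match the one given in the pair_coeff command):

```
# tabelle/LYS_LYS.txt (header sketch, hypothetical values)
# 19642 points from r = 1.0 to r = 11.0; any pair closer than 1.0 falls outside the table

LYS_LYS
N 19642 R 1.0 11.0
```

If the minimization or the early NPT dynamics push two atoms inside the inner table bound, the GPU kernel can silently produce invalid results, whereas the CPU code would stop with a clearer error message.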

I tried the neigh no option of the package gpu command, and I didn’t need to set comm_modify mode single cutoff 30; the simulation is now running. Many thanks!
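For reference, the only change relative to the original input was the neighbor-list option of the package command:

```
package gpu 1 neigh no    # build neighbor lists on the CPU instead of the GPU
```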

Daniele

@ampharos This sounds like a bug in the GPU library in LAMMPS.
Which version of LAMMPS do you use?
If it is not the latest version (7 Feb 2024), can you please confirm that the issue still exists?
What is your GPU hardware and what compilation settings do you use?

Furthermore, your example does not contain the data file and the table file, so it cannot be reproduced independently.

If the issue is still present in the latest release, would you mind submitting a full bug report at the LAMMPS GitHub project page, or at least providing the missing information here, so that @ndtrung and I can have a closer look?

Thanks in advance.

Hi @akohlmey. I would be more than happy to help!

Unfortunately, I’m a bit busy with work these days, but as soon as I can I will gather all the information (I’m working on an external cluster, so I’m not sure about the GPU hardware and the LAMMPS version) and write back to you.

Many thanks

Daniele