Hello LAMMPS Users Mailing List,
I have been trying to accelerate coarse-grained MD simulations using the GPU package. We use the SDK (Shinoda, DeVane, Klein) CG model, which is part of the CG/CMM user package. The simulations run correctly without GPU acceleration. With the GPU enabled, however, the E_vdwl energy drops to zero after just a single time step and the system becomes unstable, and I often get CUDA errors (all of my machines have NVIDIA GPUs). I reduced the "thermo" output interval to 1; below is the output for the first two steps, starting from a configuration produced by a prior energy-minimization run:
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = -305096.5698 KinEng = 36126.5146 Temp = 303.0000
PotEng = -341223.0844 E_bond = 57.3538 E_angle = 501.6453
E_dihed = 0.0000 E_impro = 0.0000 E_vdwl = -341782.0835
E_coul = 0.0000 E_long = 0.0000 Press = -2400.6705
Volume = 4018921.0727
---------------- Step 1 ----- CPU = 0.0077 (sec) ----------------
TotEng = 36706.3820 KinEng = 36106.6905 Temp = 302.8337
PotEng = 599.6915 E_bond = 96.9535 E_angle = 502.7381
E_dihed = 0.0000 E_impro = 0.0000 E_vdwl = 0.0000
E_coul = 0.0000 E_long = 0.0000 Press = 434.3639
Volume = 4015399.2973
All subsequent time steps also show E_vdwl = 0.0, and often after several steps the run aborts with an error like: "Cuda driver error 700 in call at file 'geryon/nvd_timer.h' in line 98."
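For reference, I enable GPU acceleration in the input script roughly like this (a sketch from memory; the exact package options, the cutoff value, and the accelerated pair style name may differ in my actual script and across LAMMPS versions):

    package gpu 1            # use 1 GPU per node
    suffix gpu               # substitute GPU-accelerated variants of styles where available
    pair_style lj/sdk 15.0   # SDK CG pair style; 15.0 cutoff is a placeholder here

The same can also be requested on the command line with the "-sf gpu -pk gpu 1" switches, without editing the input script.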
The header of my PARM.FILE is as follows: