Hi people,
I’m currently working on a project where I want to simulate a graphene sheet on top of a Si-crystal substrate. I use the Tersoff potential for the interatomic forces in the graphene sheet, since this should be able to run with KOKKOS on a computer cluster available at my university. Unfortunately, it doesn’t work when the Tersoff potential is used with pair_style hybrid/kk. If I simulate the graphene sheet alone with the standard pair_style tersoff/kk, everything looks fine though.
I have written a simple LAMMPS script which reproduces the problem. Note that if I remove the “/kk” suffixes, the script runs perfectly fine on CPU. The script creates two graphene sheets, where the interatomic forces in the first sheet are modeled with the Tersoff potential and those in the second with a dummy Lennard-Jones (LJ) potential. The interactions between the sheets are also modeled by an LJ potential. The script (simple_reproduce.in) reads as follows:
######################################
units metal
newton on
boundary p p m
atom_style atomic
# Graphene lattice
lattice custom 2.419 &
a1 0 1.0 0 &
a2 $(sqrt(3)/2) 0.5 0 &
a3 $(1/(2*sqrt(3))) 0.5 0.83 &
basis 0 0 0 &
basis $(1/3) $(1/3) 0.0
# Simulation box
region simreg1 block 0 20 0 20 0 0.5
region simreg2 block 0 20 0 20 4.5 5
region merge union 2 simreg1 simreg2
create_box 2 merge
create_atoms 1 region simreg1 basis 1 1
create_atoms 2 region simreg2 basis 2 2
# Dynamics
variable temp equal 50.0 # Kelvin
mass 1 12.0107
mass 2 12.0107
velocity all create ${temp} 5432373 dist gaussian
pair_style hybrid/kk tersoff/kk lj/cut/kk 2.0
pair_coeff * * tersoff/kk C.tersoff C NULL # <----The line that causes issues
pair_coeff 1 2 lj/cut/kk 1 1
pair_coeff 2 2 lj/cut/kk 1 1
timestep 0.001
fix nve all nve
# Output
thermo 100
run 1000
######################################
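As a sanity check on the geometry, the lattice spacings that LAMMPS prints in the log further down can be reproduced by hand. This is a plain-Python sketch assuming (as I understand the docs) that for lattice custom the per-axis spacing is the bounding-box extent of the scaled unit cell:

```python
import itertools
import math

# Lattice constant and custom cell vectors from simple_reproduce.in
a = 2.419
a1 = (0.0, 1.0, 0.0)
a2 = (math.sqrt(3) / 2, 0.5, 0.0)
a3 = (1 / (2 * math.sqrt(3)), 0.5, 0.83)

# All 8 corners of the unit cell: i*a1 + j*a2 + k*a3 for i,j,k in {0,1}
corners = [
    tuple(sum(c * v[d] for c, v in zip(coeff, (a1, a2, a3))) for d in range(3))
    for coeff in itertools.product((0, 1), repeat=3)
]

# Spacing per axis = scaled extent (max - min) of the cell along that axis
spacing = [
    a * (max(p[d] for p in corners) - min(p[d] for p in corners))
    for d in range(3)
]
print(spacing)
```

This gives approximately 2.7932, 4.8380, and 2.0078 in x, y, z, matching the “Lattice spacing” line in the log below, so I believe the geometry itself is set up as intended.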
I run the LAMMPS script with the following Slurm job script:
######################################
#!/bin/bash
#SBATCH --job-name=Debug
#
#SBATCH --partition=normal
#
#SBATCH --ntasks=1
#
#SBATCH --cpus-per-task=2
#
#SBATCH --gres=gpu:1
#
#SBATCH --output=slurm.out
#
mpirun -n 1 lmp -pk kokkos newton on neigh half -k on g 1 -sf kk -in simple_reproduce.in
######################################
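For reference, here are the two variants I can use to narrow things down. The first is the CPU case mentioned above, which works; the second (which I have not fully explored yet) assumes my build has a Kokkos host backend enabled, and would show whether the crash is GPU-specific:

```shell
# CPU only, no Kokkos: works fine with the /kk suffixes removed
# from the pair_style/pair_coeff lines in the input script
mpirun -n 1 lmp -in simple_reproduce.in

# Kokkos on 2 host threads instead of the GPU, same input script
# (assumes the build enables a Kokkos host backend such as OpenMP)
mpirun -n 1 lmp -k on t 2 -sf kk -pk kokkos newton on neigh half -in simple_reproduce.in
```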
It crashes immediately, yielding the following message:
######################################
LAMMPS (10 Feb 2021)
KOKKOS mode is enabled (src/KOKKOS/kokkos.cpp:92)
will use up to 1 GPU(s) per node
using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 2.7932206 4.8380000 2.0077700
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (55.864412 96.760000 10.038850)
1 by 1 by 1 MPI processor grid
Created 2159 atoms
create_atoms CPU = 0.002 seconds
Created 2118 atoms
create_atoms CPU = 0.002 seconds
[bigfacet:515305] *** Process received signal ***
[bigfacet:515305] Signal: Segmentation fault (11)
[bigfacet:515305] Signal code: Address not mapped (1)
[bigfacet:515305] Failing at address: 0x21
[bigfacet:515305] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7fccd3dca420]
[bigfacet:515305] [ 1] lmp(+0x13e9d27)[0x562d1d413d27]
[bigfacet:515305] [ 2] lmp(+0x93a91b)[0x562d1c96491b]
[bigfacet:515305] [ 3] lmp(+0x777803)[0x562d1c7a1803]
[bigfacet:515305] [ 4] lmp(+0x19e026)[0x562d1c1c8026]
[bigfacet:515305] [ 5] lmp(+0x1a5c8e)[0x562d1c1cfc8e]
[bigfacet:515305] [ 6] lmp(+0x1a5f05)[0x562d1c1cff05]
[bigfacet:515305] [ 7] lmp(+0x14170e)[0x562d1c16b70e]
[bigfacet:515305] [ 8] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fccd3618083]
[bigfacet:515305] [ 9] lmp(+0x181f9e)[0x562d1c1abf9e]
[bigfacet:515305] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 515305 on node bigfacet exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
######################################
Judging from earlier posts, you are probably going to ask me for more information about my installation on the cluster, but I’m not really sure what to share here. While I wait for a response, I am going to try to build LAMMPS with the GPU package instead and see if I’m able to run it that way.
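In the meantime, perhaps the output of the following commands would be useful to include? I believe these print the build details usually asked for (LAMMPS version and installed packages, GPU driver, and MPI flavour), but let me know if something else is needed:

```shell
# LAMMPS version, installed packages, and compile-time settings
lmp -h

# GPU model and driver/CUDA version on the compute node
nvidia-smi

# MPI implementation and version
mpirun --version
```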
I hope that you might be able to give me some hints on this one.
Best regards
Mikkel