I’m trying to create a new analytic potential that represents a Yukawa potential with the screening shifted by the diameter of a particle. I downloaded the most recent stable release from GitHub by using:
git clone -b release https://github.com/lammps/lammps.git stable-lammps
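For reference, the functional form I’m aiming for is the ordinary Yukawa interaction with the distance shifted by an offset Delta equal to the particle diameter, by analogy with how lj/expand shifts lj/cut; roughly:
$$ E(r) = A\,\frac{e^{-\kappa (r - \Delta)}}{r - \Delta}, \qquad r < r_c + \Delta $$
(the exact placement of the shift is my own choice for the new style).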
In this LAMMPS tree I then copied 7 files to build an expanded (shifted) Yukawa potential:
cp src/pair_yukawa.cpp src/pair_yukawa_expand.cpp
cp src/pair_yukawa.h src/pair_yukawa_expand.h
cp src/GPU/pair_yukawa_gpu.cpp src/GPU/pair_yukawa_expand_gpu.cpp
cp src/GPU/pair_yukawa_gpu.h src/GPU/pair_yukawa_expand_gpu.h
cp lib/gpu/lal_yukawa.cpp lib/gpu/lal_yukawa_expand.cpp
cp lib/gpu/lal_yukawa.cu lib/gpu/lal_yukawa_expand.cu
cp lib/gpu/lal_yukawa.h lib/gpu/lal_yukawa_expand.h
I then edited the new files to implement my potential, using the differences between pair_lj_cut and pair_lj_expand as a model for updating the header information and class names in my new yukawa_expand files.
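To give a concrete idea of what I changed, here is roughly how the style registration at the top of my new headers looks (class names and the extra member are my own choices, modeled on pair_lj_expand.h and pair_yukawa_gpu.h; I’m only sketching the parts I touched):

// src/pair_yukawa_expand.h (sketch of my edits)
#ifdef PAIR_CLASS
// clang-format off
PairStyle(yukawa/expand, PairYukawaExpand);
// clang-format on
#else

#ifndef LMP_PAIR_YUKAWA_EXPAND_H
#define LMP_PAIR_YUKAWA_EXPAND_H

#include "pair.h"

namespace LAMMPS_NS {

class PairYukawaExpand : public Pair {
 public:
  PairYukawaExpand(class LAMMPS *);
  // same interface as PairYukawa (compute, settings, coeff, init_one, ...)
 protected:
  // same members as PairYukawa plus the per-type-pair shift read from pair_coeff,
  // e.g. double **delta;
};

}    // namespace LAMMPS_NS

#endif
#endif

// src/GPU/pair_yukawa_expand_gpu.h registers the /gpu suffix the same way,
// with the new class deriving from the CPU one, mirroring pair_yukawa_gpu.h:
//   PairStyle(yukawa/expand/gpu, PairYukawaExpandGPU);
//   class PairYukawaExpandGPU : public PairYukawaExpand { ... };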
When I compiled this on a node of my local cluster (which has GPUs), the potential worked perfectly on the CPU. Compiling with GPU support also went fine (how I did that is at the end of this post), but when I actually went to run yukawa/expand/gpu with:
mpirun -np 8 pathtolammps/lmp_mpi -sf gpu -in colloid.inp
I get this error:
ERROR: Unrecognized pair style 'yukawa/expand/gpu' is part of the GPU package, but seems to be missing because of a dependency (../force.cpp:275)
I’m 99% sure that error is happening because the code is inside src/GPU, so I assume ../force.cpp refers to src/force.cpp, which has this at line 275:
error->all(FLERR, utils::check_packages_for_style("pair", style, lmp));
I thus think my compilation needs some kind of package step for my potential to run on the GPU. I looked through the documentation for how to handle this and found 4.8.1. Writing new pair styles, which helped me make sure the potential code works, but I wasn’t sure about compiling it with GPU support. I also checked the package information in section 3 of the documentation, but wasn’t sure where to go from there.
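One sanity check I plan to rely on (assuming I’m reading the docs right that the -h flag prints, among other things, the pair styles compiled into the binary) is to list what actually made it into the executable:
pathtolammps/lmp_mpi -h | grep -i yukawa
That should at least tell me whether yukawa/expand/gpu got compiled in at all.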
Essentially, I’ve been able to write the code so the potential works on the CPU (and I think the code looks sound for the GPU as well), but I’m having trouble getting the compilation right for running my new potential on the GPU. Everything compiles fine; the problem is the runtime error above, which suggests things aren’t being linked together correctly.
Do you have any ideas what might be the next step for fixing that error and getting my potential to work?
Thanks in advance!
The following is the txt file of instructions I’ve been following:
#ssh into the node with gpu capabilities
ssh n79
#run these exports (I think these are specific to my local cluster)
export PATH=/share/apps/CENTOS7/gcc/6.5.0/bin/:$PATH
export LD_LIBRARY_PATH=/share/apps/CENTOS7/gcc/6.5.0/lib64:$LD_LIBRARY_PATH
export PATH=/share/apps/CENTOS7/python/3.8.3/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/CENTOS7/python/3.8.3/lib:$LD_LIBRARY_PATH
export PATH=/share/apps/CENTOS7/openmpi/4.0.4/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/CENTOS7/openmpi/4.0.4/lib:$LD_LIBRARY_PATH
which mpirun mpicc python gcc
cd ~/lammps/src/
#install some packages (not sure whether this step is strictly needed here)
make yes-colloid
make yes-misc
cd STUBS
make
cd ..
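#(the STUBS build makes the dummy MPI library, libmpi_stubs.a, that the serial target links against)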
vi MAKE/Makefile.serial
# in this file replace the empty initialization of these variables with this
LMP_INC = -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLAMMPS_JPEG # -DLAMMPS_CXX98
JPG_INC = -I/usr/include
JPG_PATH = -L/usr/lib64
JPG_LIB = -ljpeg
#exit the file
make serial
vi MAKE/Makefile.mpi
#make the same changes to those 4 lines as before
make mpi
make yes-gpu
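#(as I understand it, this copies the GPU package sources from src/GPU into src/ so they get compiled into the next build)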
#now run the following exports
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib:$LD_LIBRARY_PATH
which nvcc
#go back up
cd ..
#save the original version of lib/gpu
scp -rp lib/gpu lib/gpu-orig
cd lib/gpu
vi Makefile.linux
#make sure CUDA_HOME points at the CUDA install, i.e. it should read
CUDA_HOME = /usr/local/cuda
#and change the architecture line from this to that
CUDA_ARCH = -arch=sm_60 > CUDA_ARCH = -arch=sm_75
#save the file and run this
make -f Makefile.linux
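#(this should produce the GPU library, libgpu.a, plus a Makefile.lammps that the src build links against;
# I’m assuming my new lal_yukawa_expand.* files get picked up here as well)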
#now go back to src
cd ../../src
#then again run:
make mpi
#then scp the lmp_mpi and lmp_serial into your directory (be in src when you do this)
scp lmp_mpi /zfshomes/saronow/new-lammps/test-lammps/lmp_mpi
scp lmp_serial /zfshomes/saronow/new-lammps/test-lammps/lmp_serial
# THEN FINALLY to run things:
#make sure you still have the same exports and run
mpirun -np 8 pathtolammps/lmp_mpi -in your_file.inp
#or for gpu
mpirun -np 8 pathtolammps/lmp_mpi -sf gpu -in colloid.inp