<< system stuck/reboot during LAMMPS compilation >>

Hello,

I am a newbie to LAMMPS.

During the LAMMPS compilation, I did not receive any error message, but my computer got stuck and then rebooted.

My box configuration is a Dell G3 laptop with:

  • GPU Nvidia GTX1050Ti

  • Intel CPU with 12 cores

  • Intel compilers (with MPI and MKL libraries)

  • CUDA libraries installed

  • Ubuntu 19.04

Basically, I followed the instructions on https://lammps.sandia.gov/doc/Manual.html:

1- I cloned the Git repository (first for the unstable and then also for the stable release)

2- Tried to build using the CMake recommendations in https://github.com/lammps/lammps/blob/master/cmake/README.md
2.1 In this case I used a modified “all_on.cmake” preset (I only removed the USER-ADIOS package)
3- Tried to compile using:

cmake -D CMAKE_C_COMPILER=mpiicc -D CMAKE_CXX_COMPILER=mpiicpc -D CMAKE_Fortran_COMPILER=mpifort -C ../cmake/presets/all_on-camps.cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_ARCH=sm_60 -D MKL_INCLUDE_DIRS=/opt/intel/mkl/include -D MKL_LIBRARIES=/opt/intel/mkl/lib/intel64 -D FFTW3_INCLUDE_DIRS=/opt/intel/mkl/include/fftw -D FFTW3_LIBRARIES=/opt/intel/compilers_and_libraries_2019.0.117/linux/mkl/interfaces/fftw3xf -D BUILD_MPI=on -D BUILD_OMP=off ../cmake

Then my PC got stuck, with the output from this command only showing:

loading initial cache file ../cmake/presets/all_on-camps.cmake
-- The CXX compiler identification is Intel 19.0.0.20180804
-- Check for working CXX compiler: /opt/intel/compilers_and_libraries_2019.0.117/linux/mpi/intel64/bin/mpiicpc
-- Check for working CXX compiler: /opt/intel/compilers_and_libraries_2019.0.117/linux/mpi/intel64/bin/mpiicpc -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.20.1")
-- Running check for auto-generated files from make-based build system

Looking at the running processes, I found more than 250 cmake-spawned processes related to mpiicpc.
I did not identify any error in the CMakeOutput.log (attached here).
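A quick way to count them (a sketch):

ps -ef | grep -c '[m]piicpc'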

CMakeOutput.log (55.1 KB)

wouldn’t it be smarter to start compiling with default settings and no additional packages enabled first?
… and then gradually enable packages and/or other settings/compilers?
that way you get to see more clearly at what step things go south.
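e.g., a sketch of such a minimal first build (directory names as in your logs):

cd mylammps
mkdir build && cd build
cmake ../cmake
make -j 4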

axel.

Hello Axel,

Thank you very much for your advice.

I followed it and figured out that the error was due to the Intel C++ compiler not finding the gcc headers. Adding export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/usr/include/x86_64-linux-gnu/c++/8 solved the problem.
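A quick sanity check that the compiler now finds the C++ headers (a sketch; preprocesses a trivial include from stdin):

echo '#include <cstdio>' | mpiicpc -x c++ -E - > /dev/null && echo OK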

But now, at the end of make, I am facing the following errors:

[ 95%] Building C object examples/simulators/ex_test_Ar_fcc_cluster/CMakeFiles/ex_test_Ar_fcc_cluster.dir/ex_test_Ar_fcc_cluster.c.o
[ 95%] Linking C executable ex_test_Ar_fcc_cluster
ld: warning: libmpi.so.40, needed by ../../../libkim-api.so.2, may conflict with libmpi.so.12
[ 95%] Built target ex_test_Ar_fcc_cluster
Scanning dependencies of target ex_test_Ar_fcc_cluster_cpp
[ 96%] Building CXX object examples/simulators/ex_test_Ar_fcc_cluster_cpp/CMakeFiles/ex_test_Ar_fcc_cluster_cpp.dir/ex_test_Ar_fcc_cluster_cpp.cpp.o
remark #11074: Inlining inhibited by limit max-size
remark #11076: To get full report use -qopt-report=4 -qopt-report-phase ipo
[ 96%] Linking CXX executable ex_test_Ar_fcc_cluster_cpp
ld: warning: libmpi.so.40, needed by ../../../libkim-api.so.2, may conflict with libmpi.so.12
[ 96%] Built target ex_test_Ar_fcc_cluster_cpp
Scanning dependencies of target ex_test_Ar_fcc_cluster_fortran

[ 97%] Building Fortran object examples/simulators/ex_test_Ar_fcc_cluster_fortran/CMakeFiles/ex_test_Ar_fcc_cluster_fortran.dir/ex_test_Ar_fcc_cluster_fortran.f90.o
[ 98%] Linking Fortran executable ex_test_Ar_fcc_cluster_fortran
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpicxx.so.12: undefined reference to `MPII_Errhandler_set_cxx'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `PMPI_Aint_diff'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_F_NeedInit'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_F_MPI_WEIGHTS_EMPTY'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIX_Comm_revoke'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIX_Comm_failure_get_acked'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Comm_get_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_F_MPI_UNWEIGHTED'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Win_set_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIX_Comm_shrink'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `PMPI_Aint_add'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_F_ARGVS_NULL'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIX_Comm_failure_ack'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_Aint_add'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIX_Comm_agree'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpicxx.so.12: undefined reference to `MPII_Op_set_cxx'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Comm_set_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_WEIGHTS_EMPTY'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_Err_create_code'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpicxx.so.12: undefined reference to `MPII_Keyval_set_proxy'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_F_MPI_IN_PLACE'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_Aint_diff'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Type_get_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Win_get_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_F_FALSE'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `mpirinitf_'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Grequest_set_lang_f77'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_F_MPI_BOTTOM'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Comm_get_attr_fort'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_UNWEIGHTED'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPIR_Err_return_comm'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPI_F_ERRCODES_IGNORE'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_Type_set_attr'
/usr/bin/ld: /opt/intel//compilers_and_libraries_2019.0.117/linux/mpi/intel64/lib/libmpifort.so.12: undefined reference to `MPII_F_TRUE'
collect2: error: ld returned 1 exit status
make[5]: *** [examples/simulators/ex_test_Ar_fcc_cluster_fortran/CMakeFiles/ex_test_Ar_fcc_cluster_fortran.dir/build.make:85: examples/simulators/ex_test_Ar_fcc_cluster_fortran/ex_test_Ar_fcc_cluster_fortran] Error 1
make[4]: *** [CMakeFiles/Makefile2:1450: examples/simulators/ex_test_Ar_fcc_cluster_fortran/CMakeFiles/ex_test_Ar_fcc_cluster_fortran.dir/all] Error 2
make[3]: *** [Makefile:141: all] Error 2
make[2]: *** [CMakeFiles/kim_build.dir/build.make:114: kim_build-prefix/src/kim_build-stamp/kim_build-build] Error 2
make[1]: *** [CMakeFiles/Makefile2:303: CMakeFiles/kim_build.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

I followed it and figured out that the error was due to the Intel C++ compiler not finding the gcc headers. Adding export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/usr/include/x86_64-linux-gnu/c++/8 solved the problem.

that sounds like you have a broken intel compiler installation. i have been using intel compilers (on occasion) for many years and never needed such an ugly hack.

But now, at the end of make, I am facing the following errors:

[ 95%] Building C object examples/simulators/ex_test_Ar_fcc_cluster/CMakeFiles/ex_test_Ar_fcc_cluster.dir/ex_test_Ar_fcc_cluster.c.o
[ 95%] Linking C executable ex_test_Ar_fcc_cluster
ld: warning: libmpi.so.40, needed by ../../../libkim-api.so.2, may conflict with libmpi.so.12

…and this warning doesn’t bother you??

again, why try to compile LAMMPS in the most complicated way with a non-standard (and apparently not properly configured) compiler? you are just asking for trouble and seem ill-prepared to deal with those kinds of problems.
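for the record, the conflict is easy to see with ldd (a sketch; the executable path is taken from your log). libmpi.so.40 is the open-mpi soname and libmpi.so.12 the intel mpi one, so the auto-built kim-api library was linked against a different MPI than the rest of the build:

ldd examples/simulators/ex_test_Ar_fcc_cluster/ex_test_Ar_fcc_cluster | grep libmpi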

axel.

Hello Axel,

I changed my compilation setup to GCC and OpenMP.

Compiling LAMMPS in serial, with MPI and with MPI & CUDA (without any explicit package definition) works fine.
(for MPI & CUDA I used: cmake -D CMAKE_C_COMPILER=mpicc -D CMAKE_CXX_COMPILER=mpic++ -D CMAKE_Fortran_COMPILER=mpif90 -D BUILD_MPI=yes -D BUILD_OMP=yes -D PKG_GPU=yes -D GPU_API=cuda -D GPU_ARCH=sm_60 -D PKG_KSPACE=yes …/cmake)

As I do not know which packages I might need in the future, I am trying to compile with all of them using a modification of all_on.cmake (first without USER-ADIOS, USER-QMMM, USER-QUIP, and LATTE, and then without KOKKOS too). In both cases, the compilation went fine, without warnings or errors.
(using: cmake -D CMAKE_C_COMPILER=mpicc -D CMAKE_CXX_COMPILER=mpic++ -D CMAKE_Fortran_COMPILER=mpif90 -D BUILD_MPI=yes -D BUILD_OMP=yes -D PKG_GPU=yes -D GPU_API=cuda -D GPU_ARCH=sm_60 -D PKG_KSPACE=yes -C ../cmake/presets/all_on-camps.cmake ../cmake)

At the end of the compilation I got:
[ 97%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/fix_gpu.cpp.o
[ 97%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pppm_gpu.cpp.o
[ 97%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_beck_gpu.cpp.o
[ 97%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_born_gpu.cpp.o
[ 97%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_born_coul_wolf_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_buck_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_buck_coul_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_coul_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_coul_debye_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_coul_dsf_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_dpd_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_dpd_tstat_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_gauss_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj96_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cubic_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_coul_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_coul_debye_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_coul_dsf_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_expand_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_gromacs_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_mie_cut_gpu.cpp.o
[ 98%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_morse_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_soft_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_table_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_ufm_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_yukawa_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_zbl_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_gayberne_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_resquared_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_class2_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_class2_coul_long_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_colloid_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_yukawa_colloid_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_born_coul_long_cs_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_born_coul_wolf_cs_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_coul_long_cs_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_dipole_cut_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_dipole_long_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_born_coul_long_gpu.cpp.o
[ 99%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_buck_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_charmm_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_cut_coul_msm_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_eam_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_eam_alloy_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_eam_fs_gpu.cpp.o

[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_sw_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_tersoff_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_tersoff_mod_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_tersoff_zbl_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_vashishta_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_sdk_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_sdk_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_expand_coul_long_gpu.cpp.o
[100%] Building CXX object CMakeFiles/lmp.dir/home/icamps/Downloads/LAMMPS/mylammps/src/GPU/pair_lj_sf_dipole_sf_gpu.cpp.o
[100%] Linking CXX executable lmp
[100%] Built target lmp
[100%] Building CXX object CMakeFiles/nvc_get_devices.dir/home/icamps/Downloads/LAMMPS/mylammps/lib/gpu/geryon/ucl_get_devices.cpp.o
[100%] Linking CXX executable nvc_get_devices
[100%] Built target nvc_get_devices

I ran some examples with this latter executable on the CPU (even in parallel) without a problem, but when running with “-sf gpu” I got the error below:
(this executable is 150 MB whereas the MPI & CUDA version is only 46 MB)

lmp -in in.phosphate -sf gpu
LAMMPS (18 Jun 2019)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:88)
using 1 OpenMP thread(s) per MPI task
ERROR: GPU library not compiled for this accelerator (src/GPU/gpu_extra.h:40)
Last command: package gpu 1
Cuda driver error 4 in call at file '/home/icamps/Downloads/LAMMPS/mylammps/lib/gpu/geryon/nvd_device.h' in line 135.

The OMP_NUM_THREADS message is expected, since I did not set that environment variable anywhere.
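To silence that message one can set the variable explicitly, e.g.:

export OMP_NUM_THREADS=1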

Is there any package that is, in some way, incompatible with GPU compilation?

Regards,

Camps

I think the issue is that the binary lmp has the GPU package installed, but the GPU library was not built for the GPU you are using. Packages are all compatible with each other because they never overwrite parts of other packages (or of the base); they only add features.

Let me clarify that. There is only one GPU package, but you have to make sure it is set up properly for your GPU: the CUDA architecture (GPU_ARCH) setting should match the compute capability of your card.
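For example, a GTX 1050 Ti has compute capability 6.1, so reconfiguring in the build folder with sm_61 instead of sm_60 should produce a matching GPU library (a sketch, keeping your other settings):

cmake -D PKG_GPU=yes -D GPU_API=cuda -D GPU_ARCH=sm_61 .
make -j 4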

Hello Axel,

I changed my compilation setup to GCC and OpenMP.

[…]

As I do not know which packages I might need in the future, I am trying to compile with all of them using a modification of all_on.cmake (first without USER-ADIOS, USER-QMMM, USER-QUIP, and LATTE, and then without KOKKOS too). In both cases, the compilation went fine, without warnings or errors.

since you are compiling yourself and since it is rather simple to add a package and recompile LAMMPS, i would go the opposite route. configure LAMMPS with a minimal set of packages and then enable packages as you need them. with a recent version of LAMMPS you should get a message telling you which package is missing when you are using a style that is not available in the executable.

this avoids a lot of the complications of compiling packages that have external dependencies or having to deal with the limitations of auto-downloaded libraries. going for the one-size-fits-all approach is really only meaningful for people creating packaged binaries.
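in practice that is just a re-run of cmake in the build folder with the package you want, plus a recompile, e.g. (a sketch; MOLECULE is just an example):

cmake -D PKG_MOLECULE=yes .
make -j 4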

axel.

theoretically, when compiling using the CMake build toolchain, the compilation should use built-in heuristics to determine which GPU architectures are supported by a given CUDA toolkit and then try to build “fat” binaries with support for all of them.

for debugging this we would need to know the version of the CUDA toolkit, the version of the CUDA driver and the output of nvc_get_devices.
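e.g. (a sketch of how to collect that, run from the build folder):

nvcc --version      # CUDA toolkit version
nvidia-smi          # driver version and detected GPU
./nvc_get_devices   # what the GPU package library sees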

alternatively, you could compile the GPU package with OpenCL support, which delegates the GPU compatibility issue completely to your local OpenCL setup. the performance of OpenCL is not always fully on par with CUDA, but it is still massively better than not having GPU support at all, assuming that you have a capable GPU with a sufficient number of cores and a suitable amount of RAM. trying to use an entry-level or lower mid-range GPU is rarely worth the effort and often results in a slowdown instead of a speedup.
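that would be something like (a sketch, other settings as before):

cmake -D PKG_GPU=yes -D GPU_API=opencl ../cmake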

axel.

i looked up the specs of your machine from the first post in this thread.

you have a laptop GPU, so your performance will be limited compared to the corresponding desktop version by having slower memory access (on the GPU and to the host memory) and by the need to cool things and corresponding limitations to the clock rate. since the hardware has only about 20% of the CUDA cores of a high-end GPU of the same generation (GTX 1050Ti vs GTX 1080Ti), you can expect only moderate speedups from this kind of GPU, if any. you have to be careful to only offload the pair calculations, which have the best acceleration.
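in the input that means loading the gpu package and giving only the pair style the /gpu suffix, e.g. (a sketch with a hypothetical pair style; your in.phosphate will differ):

package gpu 1
pair_style lj/cut/gpu 2.5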

axel.

Thanks to both of you for your comments.

From my limited experience with molecular dynamics (I used GROMACS for ligand/protein dynamics): a run with limited CPU resources (10-20 cores) took around one and a half months, whereas running with this GPU took only 22 hours.