Compiling LAMMPS with CMake

Hi, I am new to LAMMPS and trying to compile it from source on my institute's HPC cluster. I want to compile it with MPI support so that I can use more CPU cores (60). Initially I used make, but the resulting binary ran on only 1 MPI task, so I tried again with CMake and MPI.
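For reference, here is roughly the command sequence I used (a minimal sketch; the source directory name is from my setup, and BUILD_MPI defaults to on when an MPI library is detected anyway):

    cd lammps-stable_22Jul2025_update2    # unpacked source tree
    mkdir build && cd build
    # configure against the CMake scripts shipped in the source tree
    cmake ../cmake -D CMAKE_BUILD_TYPE=RelWithDebInfo -D BUILD_MPI=yes
    cmake --build . -j 8                  # compile with 8 parallel jobs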

LAMMPS Version: 2025.7.22.2 stable_22Jul2025_update2-modified
Operating System: Linux CentOS 7
CMake Version: 3.23.1
Build type: RelWithDebInfo

These are the modules I have loaded:

Currently Loaded Modulefiles:

  1. pythonpackages/3.10.4/cupy/10.6.0/gnu
  2. apps/cmake/3.23.1/gnu
  3. lib/isl/0.18/gnu
  4. compiler/gcc/11.2.0
  5. compiler/gcc/11.2/openmpi/4.1.6
  6. lib/centos/libinfinipath
  7. mpi/openmpi/4.1/gnu/mpivars

Everything goes fine up to and including compilation, but when I run ‘./lmp -h’ it shows:

The library attempted to open the following supporting CUDA libraries,

but each of them failed. CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
libcuda.dylib: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.dylib: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with
--mca opal_warn_on_missing_libcuda 0 to suppress this message. If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get passed this issue.

./lmp: symbol lookup error: /home/soft/centOS/compilers/gcc/openmpi/4.1.6/lib/openmpi/mca_coll_han.so: undefined symbol: mca_coll_base_colltype_to_str

Now I am not able to understand what the problem is. Does LAMMPS work in parallel even after this message?

Thank you.

This is not an error message from LAMMPS but from your MPI library. Apparently, it has been configured to support “GPU Direct”, which in turn requires loading the environment module with the CUDA libraries, which you did not do. If the people managing your HPC cluster had set things up properly, this would happen automatically when loading this specific OpenMPI module.
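You can usually check what your cluster provides and load a matching module before building and running, for example (the exact module name is cluster-specific, so this is only a guess):

    module avail cuda            # list CUDA-related modules, if any
    module load <cuda-module>    # replace with a name shown above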

The message itself gives you instructions for suppressing it. Have you tried them?
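For example, either per run on the mpirun command line (assuming an input file named in.test):

    mpirun --mca opal_warn_on_missing_libcuda 0 -np 60 ./lmp -in in.test

or once per shell session through the equivalent OpenMPI environment variable:

    export OMPI_MCA_opal_warn_on_missing_libcuda=0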

At any rate, there is not much that can be done remotely, since this is a question of how the cluster you are using is set up, so you should contact the cluster admins for further assistance.
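As for your question about parallel operation: the CUDA warning by itself does not prevent MPI parallel runs, and you can verify the parallelization directly (again assuming an input file named in.test):

    mpirun -np 4 ./lmp -in in.test

The timing summary at the end of the run reports the number of MPI ranks actually used (“Loop time of ... on 4 procs”), and ‘./lmp -h’ also prints which MPI library the binary was linked against.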

Yes, I tried it and the message is now suppressed. Thank you.