LAMMPS parallel run using OpenMPI on macOS

Dear LAMMPS users,

Sorry for the basic question. I am trying to run LAMMPS in parallel using OpenMP on macOS, but I am getting an error. How can I solve this problem? Any advice is very much appreciated.

This is my error:

Per MPI rank memory allocation (min/avg/max) = 24.69 | 24.89 | 25.16 Mbytes
Step Temp PotEng KinEng TotEng Press Volume
0 3000 -6530.6253 795.87628 -5734.749 -11461.69 15625
10000 3062.1929 -6818.0478 812.37558 -6005.6722 1288.8264 15625
[Jongwunui-MacBookPro:63186] *** Process received signal ***
[Jongwunui-MacBookPro:63186] Signal: Segmentation fault: 11 (11)
[Jongwunui-MacBookPro:63186] Signal code: Address not mapped (1)
[Jongwunui-MacBookPro:63186] Failing at address: 0x7fec7b506a70
[Jongwunui-MacBookPro:63186] [ 0] 0 libsystem_platform.dylib 0x00007fff63b9f42d _sigtramp + 29
[Jongwunui-MacBookPro:63186] [ 1] 0 ??? 0x0000000000000008 0x0 + 8
[Jongwunui-MacBookPro:63186] [ 2] 0 lmp_omp 0x0000000104f637f1 _ZN9LAMMPS_NS10PairReaxFF7computeEii + 369
[Jongwunui-MacBookPro:63186] [ 3] 0 lmp_omp 0x0000000104d79fc2 _ZN9LAMMPS_NS6Verlet3runEi + 994
[Jongwunui-MacBookPro:63186] [ 4] 0 lmp_omp 0x0000000104d29513 _ZN9LAMMPS_NS3Run7commandEiPPc + 2563
[Jongwunui-MacBookPro:63186] [ 5] 0 lmp_omp 0x0000000104be229a _ZN9LAMMPS_NS5Input15execute_commandEv + 1770
[Jongwunui-MacBookPro:63186] [ 6] 0 lmp_omp 0x0000000104be142e _ZN9LAMMPS_NS5Input4fileEv + 878
[Jongwunui-MacBookPro:63186] [ 7] 0 lmp_omp 0x00000001049d486d main + 77
[Jongwunui-MacBookPro:63186] [ 8] 0 libdyld.dylib 0x00007fff639a67fd start + 1
[Jongwunui-MacBookPro:63186] [ 9] 0 ??? 0x0000000000000003 0x0 + 3
[Jongwunui-MacBookPro:63186] *** End of error message ***

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun noticed that process rank 3 with PID 0 on node Jongwunui-MacBookPro exited on signal 11 (Segmentation fault: 11).

lmp_omp information

OS: Darwin 19.2.0 x86_64
Compiler: Clang C++ Homebrew Clang 13.0.0 with OpenMP 5.0
C++ standard: C++11
MPI v3.1: Open MPI v4.1.2, package: Open MPI brew@iMac-Pro Distribution, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021

Accelerator configuration:
OPENMP package API: OpenMP
OPENMP package precision: double

Build information
cmake -C ../cmake/presets/basic.cmake -D PKG_OPENMP=on -D CMAKE_INSTALL_PREFIX='/Users/jongwuni/opt/anaconda3/envs/MD' -D LAMMPS_MACHINE=omp ../cmake

-- <<< Build configuration >>>
Operating System: Darwin
Build type: RelWithDebInfo
Install path: /Users/jongwuni/opt/anaconda3/envs/MD
Generator: Unix Makefiles using /usr/bin/make
-- Enabled packages: KSPACE;MANYBODY;MOLECULE;OPENMP;REAXFF;RIGID
-- <<< Compilers and Flags: >>>
-- C++ Compiler: /usr/local/opt/llvm/bin/clang++
Type: Clang
Version: 13.0.0
C++ Flags: -O2 -g -DNDEBUG
Defines: LAMMPS_SMALLBIG;LAMMPS_MEMALIGN=64;LAMMPS_OMP_COMPAT=4;LAMMPS_JPEG;LAMMPS_PNG;LAMMPS_GZIP;FFT_FFTW3;FFT_FFTW_THREADS;LMP_OPENMP
-- <<< Linker flags: >>>
-- Executable name: lmp_omp
-- Executable linker flags: -L/usr/local/opt/llvm/lib
-- Static library flags:
-- <<< MPI flags >>>
-- MPI_defines: MPICH_SKIP_MPICXX;OMPI_SKIP_MPICXX;_MPICC_H
-- MPI includes: /usr/local/Cellar/open-mpi/4.1.2/include
-- MPI libraries: /usr/local/Cellar/open-mpi/4.1.2/lib/libmpi.dylib;
-- <<< FFT settings >>>
-- Primary FFT lib: FFTW3
-- Using double precision FFTs
-- Using threaded FFTs
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/jongwuni/Documents/LAMMPS/lmp_source/lammps/build

Best regards,
Jongwun

Two important pieces of information are still missing:

  • What is your LAMMPS version? Does it have any customizations?
  • What is your command line?

It also looks like you may be confusing OpenMPI and OpenMP. I see that you are using MPI via OpenMPI and that you have installed the OPENMP package, but from the stack trace it does not look like you have enabled any OpenMP styles during your run; from the memory output, though, it still looks like you are running in parallel with MPI.
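To make the distinction concrete, here is a minimal sketch of the two kinds of parallel runs (assuming an executable named lmp_omp, as in your build output, and an input file named input; -sf omp and -pk omp N are the standard LAMMPS command-line switches for activating OPENMP package styles):

  # MPI-only run via OpenMPI: 4 MPI ranks, no OpenMP threading
  mpirun -np 4 lmp_omp -in input

  # hybrid MPI + OpenMP run: 2 MPI ranks with 2 OpenMP threads each,
  # applying the /omp suffix to all styles that support it
  mpirun -np 2 lmp_omp -sf omp -pk omp 2 -in input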

Thank you for your reply!

Sorry, I should rephrase my question:
my question is about running LAMMPS in parallel using 'OpenMPI'.

My LAMMPS version is '14 Dec 2021'.
I don't have any customizations, and my command line is mpirun -np 4 lmp_omp -in input.

OK, thanks for the update. This looks like you are running into a limitation of the ReaxFF implementation in LAMMPS: its internal memory management assumes that the environment of the atoms doesn't change much. However, it looks like your system may have changed too much after it successfully completed the first 10,000 steps.

You can now do the following:

  • Add a dump command that outputs the coordinates frequently, say every 100 steps. Visualize the resulting trajectory and assess whether the system does indeed change a lot, as suspected, and also check whether you see the desired behavior. You are running at a very high temperature, so I would assume it does. (A minimal dump command is sketched after this list.)
  • Run for only 10,000 steps, then write out a restart file and continue your simulation in a second input with a second run from the restart file. Does it still crash before reaching 20,000 steps? Does it crash at all? Note: no frequent dump output is needed for this one. (See the restart sketch after this list.)
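For illustration, here are both suggestions as minimal LAMMPS input sketches; the dump ID, filenames, and step counts are placeholders to adapt to your own script:

  # suggestion 1: dump all coordinates every 100 steps for visualization
  dump traj all atom 100 dump.lammpstrj
  run 20000

  # suggestion 2, first input: stop after 10,000 steps and write a restart file
  run 10000
  write_restart restart.10000

  # suggestion 2, second input: continue from the restart file;
  # ReaxFF does not store its settings in binary restart files, so
  # pair_style reaxff, pair_coeff, and the charge-equilibration fix
  # must be re-specified here before the run command
  read_restart restart.10000
  run 10000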