Signal: Segmentation fault 11 - Address not mapped

Dear LAMMPS users,

I am using the 29 September 2021 version of LAMMPS. I have looked at similar errors discussed in this mailing list, but my problem seems different. The errors are shown below.

[n37] *** Process received signal ***
[n37] Signal: Segmentation fault (11)
[n37] Signal code: Address not mapped (1)
[n37] Failing at address: 0x149c66f57008
[n37] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x149c68846730]
[n37] [ 1] /usr/lib/x86_64-linux-gnu/pmix/lib/pmix/mca_gds_ds21.so(+0x2936)[0x149c66f5a936]
[n37] [ 2] /lib/x86_64-linux-gnu/libmca_common_dstore.so.1(pmix_common_dstor_init+0x9d3)[0x149c66f4d733]
[n37] [ 3] /usr/lib/x86_64-linux-gnu/pmix/lib/pmix/mca_gds_ds21.so(+0x25b4)[0x149c66f5a5b4]
[n37] [ 4] /lib/x86_64-linux-gnu/libpmix.so.2(pmix_gds_base_select+0x12e)[0x149c670ac46e]
[n37] [ 5] /lib/x86_64-linux-gnu/libpmix.so.2(pmix_rte_init+0x8cd)[0x149c6706488d]
[n37] [ 6] /lib/x86_64-linux-gnu/libpmix.so.2(PMIx_Init+0xdc)[0x149c67020d7c]
[n37] [ 7] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_pmix_ext2x.so(ext2x_client_init+0xc4)[0x149c670f8fe4]
[n37] [ 8] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_ess_pmi.so(+0x2656)[0x149c677c5656]
[n37] [ 9] /lib/x86_64-linux-gnu/libopen-rte.so.40(orte_init+0x29a)[0x149c678c711a]
[n37] [10] /lib/x86_64-linux-gnu/libmpi.so.40(ompi_mpi_init+0x252)[0x149c68301e62]
[n37] [11] /lib/x86_64-linux-gnu/libmpi.so.40(MPI_Init+0x6e)[0x149c6833017e]
[n37] [12] *****/lammps/bin/lmp_alisha[0x410913]
[n37] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x149c67fea09b]
[n37] [14] *****/lammps/bin/lmp_alisha[0x41082a]
[n37] *** End of error message ***

The problem is that sometimes the simulations run fine with the same input script, while at other times they crash with this error and exit.

Thank you in advance.

Segmentation faults typically have one of three origins:

  • a bug in the software
  • a hardware problem (usually faulty RAM, but sometimes also a faulty CPU)
  • insufficient cooling

The latter two are often related: hardware faults become more likely as the temperature inside the computer, or of the CPU or RAM, rises. You can check for them by testing your hardware under load, e.g. with a memory test (such as the one provided by https://www.memtest86.com/) or a CPU stress test like "mprime" (the Linux version of Prime95, available from the GIMPS/PrimeNet site). There is lots of information about this on the internet, too.
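As a quick first check before running a full stress test, you can look for hardware errors that the kernel has already logged. This is a minimal sketch assuming Linux compute nodes; reading the kernel log may require root privileges on some systems, and an empty result does not prove the hardware is healthy.

```shell
# Search the kernel log for machine-check (MCE) or memory-controller (EDAC)
# error reports, which often accompany failing RAM or an overheating CPU.
# Note: may require root; prints a fallback message if nothing is found.
dmesg 2>/dev/null | grep -iE 'mce|machine check|edac' \
  || echo "no hardware errors found in kernel log"
```

If this turns up machine-check or EDAC messages on the node where the segfault occurred, that points strongly at hardware rather than software.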

If it is a software bug, then the stack trace you quoted shows that it is not a LAMMPS issue but an issue inside your MPI library (which is OpenMPI version 3.x): the crash happens in the PMIx component during MPI_Init, before any LAMMPS code runs. You may want to check whether there is an updated package for your Linux distribution, install it, and recompile LAMMPS to cover your bases.
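To see which OpenMPI version your LAMMPS binary is actually using, something like the following can help. This is a sketch assuming a Debian/Ubuntu-style system (as your library paths suggest); adjust the package-manager command for your distribution.

```shell
# Report the MPI runtime version in use; fall back gracefully if
# mpirun is not on the PATH.
mpirun --version 2>/dev/null | head -n 1 || echo "mpirun not found"

# List installed OpenMPI-related packages (Debian/Ubuntu; an assumption --
# use your own distribution's package manager otherwise).
dpkg -l 2>/dev/null | grep -i openmpi || true
```

If the reported version is still 3.x, upgrading to a newer OpenMPI package (or building a current OpenMPI yourself) and then recompiling LAMMPS against it is the cleanest way to rule out this class of bug.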

TL;DR: this is very unlikely to be caused by LAMMPS itself, so you need to look for the cause elsewhere.