ERROR: LAMMPS provides no message, while the supercomputer reports a segmentation fault.

Hello everyone,

I am a beginner in LAMMPS, and I am attempting to simulate the pyrolysis of a PVC-DOP system in air. However, LAMMPS itself displays no error; a segmentation fault only appears in the supercomputer’s err file. I checked the output trajectory files and did not find any obvious issues. I reproduced the same error on my own computer, where the MPI processes were terminated, yet again no error message was provided by LAMMPS.

I built the model in Materials Studio, creating a simulation box of PVC and DOP (in a ratio of 3:19) with the Amorphous Cell (AC) module. I optimized the structure and ran dynamics on the box with Forcite. At that stage, importing the optimized data into LAMMPS worked, and the simulation ran smoothly. My goal is to explore the pyrolysis behavior of PVC and DOP in an air environment, so I exported the PVC-DOP model (ignoring the periodic boundary conditions) and used Packmol to assemble PVC-DOP, oxygen, and nitrogen, ensuring that all molecules were contained within the box. When I fed this result into LAMMPS for computation, an MPI error occurred:

Job aborted:
[ranks] message
[0] process exited without calling finalize
[1-15] terminated
---- error analysis -----
[0] on DESKTOP 
lmp ended prematurely and may have crashed. Exit code 0xc0000005
---- error analysis -----

I have two questions now: Could the error be caused by ignoring the periodic boundary conditions? Or could it be due to an incorrect method used to build the model with Packmol?
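For reference, my Packmol input followed this general pattern (the file names, molecule counts, and box dimensions below are placeholders rather than my exact values):

tolerance 2.0
filetype pdb
output pvc_dop_air.pdb

# pre-assembled PVC-DOP cell exported from Materials Studio (placeholder name)
structure pvc_dop.pdb
  number 1
  inside box 0. 0. 0. 80. 80. 80.
end structure

# oxygen and nitrogen molecules (placeholder counts, roughly a 1:4 ratio)
structure o2.pdb
  number 60
  inside box 0. 0. 0. 80. 80. 80.
end structure

structure n2.pdb
  number 240
  inside box 0. 0. 0. 80. 80. 80.
end structure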

The version I am using is LAMMPS 64-bit 29Aug2024 with OpenMPI 4.1.6.

Looking forward to your feedback. Thank you.

Without any information about what your input actually does, the details of your error message and what leads up to it or a stack trace, and a way to try to reproduce your issue, there is very little advice that can be given. One would have to be able to read minds :slightly_frowning_face:

The only other thing I can think of is that LAMMPS has a -nb command line flag that turns off buffering and thus should provide more output before the crash happens.
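For example (the process count and input file name here are placeholders for your actual setup):

mpiexec -np 16 lmp -nb -in in.pyrolysis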

Dear akohlmey,
Thank you very much for your reply. Since I am unable to upload attachments, I will add my information in text form below. My system involves the pyrolysis of PVC-DOP in air, and the input script is as follows:

units               real
atom_style          full

read_data           in.data

pair_style          reaxff NULL
pair_coeff          * * ffield.reax C Cl H N O
neighbor            2.0 bin
neigh_modify        every 10 delay 0 check no
fix                 reax_qeq all qeq/reax 10 0.0 8.0 1e-4 reaxff
timestep            0.2

minimize            1e-6 1e-6 1000 1000
velocity            all create 300 114514

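# output: thermodynamics, ReaxFF species/bond analysis, trajectory dump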
thermo              100
thermo_style        custom step temp etotal spcpu cpuremain
fix                 reax_out_species all reaxff/species 20 50 1000 species.out position 1000 pyrolysis.pos
fix                 reax_out_bonds all reaxff/bonds 1000 bonds.reaxc
dump                traj all atom 1000 pyrolysis.lammpstrj
log                 pyrolysis.log

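# stage 1: equilibrate at 300 K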
fix                 1 all nvt temp 300.0 300.0 20.0
run                 20000
unfix               1

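# stage 2: heat from 300 K to 2200 K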
fix                 2 all nvt temp 300.0 2200.0 20.0 
run                 20000
unfix               2

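# stage 3: hold at 2200 K for pyrolysis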
fix                 3 all nvt temp 2200.0 2200.0 20.0
run                 300000

write_data          pyrolysis.data

I have confirmed that this error is not related to the LAMMPS version (I tested with two versions). The most likely problem lies with my model and the force field file. The force field file was taken from the literature; could the error be related to the model file? Below is the error encountered when running on a Windows system:

job aborted:
[ranks] message
[0] process exited without calling finalize
[1-15] terminated
---- error analysis -----
[0] on DESKTOP
lmp ended prematurely and may have crashed. exit code 0xc0000005
---- error analysis -----

Could you kindly provide your insights or suggestions regarding the cause of this issue?

Best regards,
Gavin

Segmentation faults with the ReaxFF force field have been reported many times, so you can search through the existing posts discussing them for possible resolutions. The most common cause is running in parallel with significant local changes that collide with the default memory management algorithm, which assumes only small changes in the geometry and bonding topology. This can be avoided to a large degree by using the KOKKOS version of the code (with or without GPU), which has more robust memory management. Another workaround is running multiple chunks of shorter runs, so that the memory management heuristics are re-initialized before they become invalid. This is, naturally, also affected by the initial geometry, timestep, neighbor list settings, and system temperature.
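To illustrate the chunked-run workaround, the long production run can be split with a standard LAMMPS loop; the chunk size below is an arbitrary example, and the KOKKOS alternative is instead enabled at launch time (e.g. with the -k on -sf kk command line switches):

# sketch: replace "run 300000" with 30 chunks of 10000 steps, so the
# ReaxFF memory management heuristics are re-initialized between chunks
variable        i loop 30
label           chunk
run             10000
next            i
jump            SELF chunk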

It is not possible to comment on those remotely and without detailed knowledge of your research and the relevant files.

Thank you for your reply. I will first review the solutions you suggested. Thank you again!

Please also watch your simulation output carefully for warnings, and note the information in the documentation of fix reaxff/species about how its data averaging impacts the neighbor list settings.
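For illustration only, with placeholder values rather than a recommendation for this specific system: since fix reaxff/species samples bond data every Nevery steps, its intervals interact with the neigh_modify settings in the script above, and one way to keep the two consistent would be:

# hypothetical settings: allow neighbor list rebuilds to be checked rather
# than forced, and keep Nevery*Nrepeat equal to Nfreq (10*100 = 1000)
neigh_modify        every 10 delay 0 check yes
fix                 reax_out_species all reaxff/species 10 100 1000 species.out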