[lammps-users] dump custom causes segfault

Hi all.

I have an issue where a dump custom command determines whether LAMMPS segfaults before taking a single timestep. The problem only seems to happen when I initialize with a "read_restart" binary restart file; I haven't been able to reproduce it when starting from an ASCII restart file.

Included files:
init.dat (ASCII file with initial data)
input1 (LAMMPS input file which takes a single timestep and writes a binary restart file: "restart.1")
inputSegfault (LAMMPS input file which reads restart.1 which should be generated using the input1 commands)
inputWorking (same as inputSegfault with the dump custom line commented out)

Steps to reproduce error:
1) unpack tar file
2) "lmp_serial < input1"
3) "lmp_serial < inputSegfault"

The bug was produced with a pristine build of lmp_serial from the 10Nov05 distribution. I was hoping someone could reproduce this error.

Thanks much,

stuff.tar.gz (6.92 KB)

The problem is that when you restart, you begin on a timestep
that is not a multiple of the dump custom frequency, so the
special quantities you are requesting (energy, stress) are not
computed on that initial timestep.

I.e., your restart file starts at step 1, but you are dumping
every 10,000 steps. If you had written the restart file
at step 10,000 or 20,000, etc., then you would be fine.

I need to figure out how to handle that case better, but
for now, writing the restart file on a timestep that is a
multiple of the dump frequency is a quick work-around …
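For example, the first run could be arranged so the restart lands on a dump-frequency multiple (a sketch; the dump fields and filenames are illustrative, and the frequency of 10,000 matches the numbers above):

```
# dump every 10,000 steps, and write the restart on a multiple of that
dump          1 all custom 10000 dump.custom id type x y z
run           10000
write_restart restart.10000    # step 10,000 is a multiple of the dump
                               # frequency, so the restarted run starts clean
```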


Ah, I see.

I also see that there is a "reset_timestep" command which is probably useful here.
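Something like this at the top of the restart input should sidestep the mismatch (a sketch; command names are standard LAMMPS syntax, the specific files and frequency are taken from the discussion above):

```
read_restart   restart.1
reset_timestep 0         # 0 is a multiple of any dump frequency, so the
                         # requested quantities are computed on the first step
dump           1 all custom 10000 dump.custom id type x y z
run            10000
```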


Just posted a patch to fix this problem …