[lammps-users] Large dump File


I would like to know if there is a way to tell LAMMPS to write dump files of a given maximum size, to avoid generating a single large file which makes my simulation crash. I work under Linux and cannot handle a file larger than 2 GB.

Thanks in advance for your answer.


To my knowledge there is not. I would change the appropriate dump module to
make LAMMPS stop when it hits your preset maximum, by altering Dump::write()
in dump.cpp slightly so that write_data() returns the number of bytes
actually written. An alternative is to call fgetpos(fp, &pos) (see the man
pages) just before fp is closed at the end of Dump::write(); the variable
pos then holds the end position of fp, i.e. the current file size.



You can generate multiple dump files by making a loop of runs, each short enough that no single file exceeds your size limit.
e.g. using the example given in the jump command page:

variable a loop 10
label loop
dump 1 all atom 100 file.$a
run 10000
undump 1
next a

Each run should continue exactly where the previous one left off, so it is effectively a single run of 10 x 10000 steps dumped into 10 different files (file.1 through file.10).
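For completeness, the loop above also needs a jump back to its label at the end of each pass, otherwise the script runs through only once. A minimal sketch (in recent LAMMPS versions the SELF keyword re-reads the current input file; in older versions you would give the name of the input file itself):

```
variable a loop 10
label loop
dump 1 all atom 100 file.$a
run 10000
undump 1
next a
jump SELF loop
```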

I hope this helps, although I should say I did not test it.

Hi David,

Any recent version of Linux can support file sizes larger than 2 GB - you
must recompile LAMMPS with the following flags (these work with gcc;
you'll have to check the equivalent if you are using Intel's compiler):


Maybe the Linux Makefiles should have this flag added to them - I see no
harm in automatically including this feature except for a slightly larger
binary (since all file operations then use 64-bit offsets).



Thank you for the tips. I added "jump input_file.in loop" to your lines and it seems to work properly. The simulations will run this weekend. I will let you know if everything is OK.

All the best

