[lammps-users] File size limit exceeded - workaround ..

Dear all,

During a longish LAMMPS run I get an error, "File size limit exceeded",
and the run stops. The file being written out was 2147483647 bytes
(2^31 - 1, the 32-bit signed integer limit). I searched the lammps-users
archives and did not find any reference to this. It is really more of
an OS-related issue, and further searching led me to the following
workaround:
The way to create files larger than 2147483647 bytes using the GNU
Compiler Collection (gcc) is as follows:
1) You need a Linux kernel version 2.4 or later (only these support
LFS - Large File Support).
2) You need to compile with the flag -D_FILE_OFFSET_BITS=64
Some GNU utilities (tar, for example) are built with this flag and
some are not.
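For instance, the flag could be appended to the compiler flags in a
LAMMPS machine Makefile. This is only a sketch - the file name
Makefile.g++ and the variable layout are assumptions about your
particular build setup:

  # excerpt from a machine Makefile, e.g. src/MAKE/Makefile.g++
  CC =       g++
  CCFLAGS =  -O2 -D_FILE_OFFSET_BITS=64   # 64-bit file offsets (LFS)
  LINK =     g++

With 64-bit file offsets, the stdio calls used to write dump files can
seek and write past the 2 GB mark.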

My question is: what are the issues with making the compilation flag
"-D_FILE_OFFSET_BITS=64" a default flag in the Makefiles? Probably this
question should be asked on the gcc discussion list ..

Best Regards,
Manoj

I have found an easier path is to do my simulation in 'chunks' and
create several moderately large dump files. But I think this comes
naturally for the problems I am working on (I need to change the
timestep too), so it may not be a universal solution.

One reason I prefer several files is that it seems to make
post-processing easier, as well as moving the data around (I only move
what I need, etc.).

YMMV

Dave

> I have found an easier path is to do my simulation in 'chunks' and
> create several moderately large dump files. But I think this comes
> naturally for

yes. this is a very good practice, since many analysis and
visualization codes on 32-bit platforms will have trouble with 2GB
files. the problem of excessive size can also be limited by using
binary and - if high accuracy of positions is not needed - compressed
file formats like .dcd and .xtc, respectively.
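e.g. something like this in the input script (a sketch only; file
names and intervals are made up, and the dcd/xtc dump styles must be
compiled into your lammps binary):

  # binary DCD trajectory every 1000 steps
  dump 1 all dcd 1000 traj.dcd
  # compressed XTC trajectory (reduced-precision coordinates)
  dump 2 all xtc 1000 traj.xtc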

> the problems I am working on (I need to change the timestep too), so
> it may not be a universal solution.

everybody i know who does large scale runs, particularly at
supercomputing centers, is actually _forced_ to run in chunks
due to queue size limitations. if you want to run just one
input, you can program loops in lammps...
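a minimal sketch of such a loop (assuming a lammps version that
supports jump SELF; names and intervals are made up):

  variable i loop 5                      # run 5 chunks
  label chunk
  dump d1 all atom 1000 dump.chunk${i}   # new dump file per chunk
  run 200000
  undump d1
  next i                     # when i is exhausted, the jump is skipped
  jump SELF chunk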

> One reason I prefer several files is that it seems to make
> post-processing easier, as well as moving the data around (I only
> move what I need, etc.).

exactly.

cheers,
  axel.

As David suggested, you can write your output in chunks
if you use the undump command and do multiple runs, defining
a new dump file each time.
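For example (a sketch; the dump intervals and file names are
arbitrary):

  dump 1 all atom 500 dump.part1
  run 100000
  undump 1
  dump 1 all atom 500 dump.part2   # same dump ID reused after undump
  run 100000
  undump 1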

Steve

David, Axel, Steve,

I guess what you all have said about writing smaller files makes a lot
of sense. Thank you.

Manoj