Problems with 'dump_modify append no'

Hi everybody,

Due to a design limitation in LAMMPS, which does not allow the compute stress command to be invoked unless the stresses are dumped into a text file, I have to generate a dummy dump file to hold these unneeded numbers at the (frequent) steps where I need the stresses. [I am linking the LAMMPS library, so I handle these numbers internally in my own code; that's why I don't need an output file.] To keep the file size under control and save the storage system from filling up, I use the 'dump_modify' command immediately after defining the dummy dump, so that new output is not appended to the existing dummy file. But this does not seem to work, even though I invoke 'dump_modify' RIGHT AFTER the original dump, and the result is GIGABYTE-size files within just a few hours.

I can reproduce the exact same problem with the following test system even when I use the LAMMPS binary rather than the library linked into my own code. Here is the script I am using; again, the file butane100.dat keeps growing as new configurations are appended to its end despite the dump_modify command.

Regards

Amir

PS: I have changed the relative positions of dump and dump_modify in the script, but I always get the same outcome, despite what the LAMMPS documentation asserts.

in.butane:

units real
atom_style full
bond_style harmonic
angle_style harmonic
dihedral_style opls
boundary p p p

timestep 2.0

#pair_style lj/sf 3.5
pair_style lj/cut 2.5
read_data butane-single.txt
replicate 10 10 20

dump myDump all custom 5000 butane100.dat id mol type q x y z
dump_modify myDump append no sort id

velocity all create 90.0 87287

pair_coeff 1 1 0.091411415139202 3.95 9.875
pair_coeff 2 2 0.194746058340039 3.75 9.375
pair_coeff 1 2 0.133424183661151 3.85 9.625
dihedral_coeff 1 0.705517854342035 -0.135507597914496 1.572516093000421 0.0

neighbor 0.3 bin
neigh_modify every 20 delay 0 check no

#fix shakeFixStr all shake 1e-6 2000 0 b 1
fix ensembleFixStr all npt temp 100.0 100.0 200.0 iso 1.0 1.0 2000.0

thermo 1000

run 100000

and here is butane-single.txt

one butane molecule: TraPPE-UA model

0.0 10.0 xlo xhi
0.0 10.0 ylo yhi
0.0 10.0 zlo zhi
2 atom types
4 atoms
1 bond types
3 bonds
1 angle types
2 angles
1 dihedral types
1 dihedrals

Masses

1 14.02658
2 15.03452

Atoms

1 1 2 0.0 1.0 1.0 1.0
2 1 1 0.0 1.0 1.0 2.54
3 1 1 0.0 1.406860004769605 1.0 3.166374430336732
4 1 2 0.0 1.406860004769605 1.0 4.706374430336732

Velocities

1 0 0 0
2 0 0 0
3 0 0 0
4 0 0 0

Bond Coeffs

1 1000 1.54

Bonds

1 1 1 2
2 1 2 3
3 1 3 4

Angle Coeffs

1 1242.002923086986 114.0

Angles

1 1 1 2 3
2 1 2 3 4

Dihedrals

1 1 1 2 3 4

Hi everybody,

Due to a design problem in LAMMPS which does not allow for compute stress
command to be invoked unless the stresses are dumped into a text file, I

this is not entirely correct.

have to generate a dummy dump file to keep these unnecessary numbers every
few steps at which I need the stresses. [I am linking the LAMMPS library so

why don't you just write to /dev/null? and why not dump out data from only one
atom to further reduce the amount of time spent on generating formatted output?
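e.g. something like this sketch (the group and dump IDs are made up, and 'myStress' stands for whatever per-atom stress compute you have defined):

```
# sketch: dump only one atom's stress, straight to /dev/null
group oneAtom id 1
dump  dummyDump oneAtom custom 5000 /dev/null id c_myStress[1] c_myStress[2] c_myStress[3]
```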

I am dealing with these numbers in my own code internally, that's why I
don't need an output file.] In order to keep the file size in control and
save the storage system from crashing, I use the 'dump_modify' command
immediately after the dummy dump so that any new output is not appended into
the existing dummy file. But this seems not to be working although I am
invoking 'dump_modify' RIGHT AFTER invoking the original dump, and the
result is the formation of GIGABYTE-size files just in a few hours.

this is not a LAMMPS problem, but a case of PEBCAC.

all you do is tell lammps to not append to a (potentially)
existing file, but to overwrite it (which is the default behavior, btw).
the file is being written to for as long as the dump is active,
i.e. until undump is called. so the dump_modify command only
applies *before* a dump is written to for the first time.
this is also what the documentation says.
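in other words, append only matters when a *new* dump definition points at a file that already exists, roughly like this (a sketch; the IDs and file name are placeholders):

```
dump        d1 all custom 5000 traj.dat id x y z
run         50000
undump      d1                      # d1's file is now closed
dump        d2 all custom 5000 traj.dat id x y z
dump_modify d2 append yes           # continue traj.dat instead of overwriting it
run         50000
```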

I can reproduce the exact same problem for the following test system even
when I am using the LAMMPS binaries- and not the linked library through
within my own code. Here you can find a script that I am using, but again,
the file butane100.dat keeps growing as new configurations are added to its
end despite the dump_modify command.

Regards

Amir

PS, I changed the locations of dump and dump_modify in the script, but I
always get the same outcome despite what the LAMMPS documentation asserts.

sorry, but the issue is in your not reading the LAMMPS documentation
correctly. LAMMPS does behave *exactly* as documented in your input.

axel.

Hi everybody,

Due to a design problem in LAMMPS which does not allow for compute stress
command to be invoked unless the stresses are dumped into a text file, I

this is not entirely correct.

That’s EXACTLY true. This is indeed what Steve told me to do a few weeks ago. You can read for yourself if you don’t have a PEBCAC problem yourself!!!

http://lammps.sandia.gov/threads/msg32382.html

 you have to do some
output that uses the per-atom stress compute you have defined and are
requesting.

have to generate a dummy dump file to keep these unnecessary numbers every
few steps at which I need the stresses. [I am linking the LAMMPS library so

why don’t you just write to /dev/null? and why not dump out data from only one
atom to further reduce the amount of time spent on generating formatted output?

If LAMMPS did the right thing, I would not have to do this at all, as I don't need that output anyway; so you are implicitly accepting that the 'append' option in its current form is garbage that does nothing.

I am dealing with these numbers in my own code internally, that's why I
don't need an output file.] In order to keep the file size in control and
save the storage system from crashing, I use the 'dump_modify' command
immediately after the dummy dump so that any new output is not appended into
the existing dummy file. But this seems not to be working although I am
invoking 'dump_modify' RIGHT AFTER invoking the original dump, and the
result is the formation of GIGABYTE-size files just in a few hours.

this is not a LAMMPS problem, but a case of PEBCAC.

Please be respectful and non-derogatory!!!

all you do is tell lammps to not append to a (potentially)
existing file, but to overwrite it (which is the default behavior, btw).
the file is being written to for as long as the dump is active,
i.e. until undump is called. so the dump_modify command only
applies before a dump is written to for the first time.
this is also what the documentation says.

That's what the system is NOT doing; i.e., instead of overwriting the existing file, it appends the data to it. As you can see in the following script, I am calling 'dump_modify' immediately after defining the dump, and it just doesn't work, unless a dump is performed right before the dump_modify command without me invoking 'run' or 'minimize'.

I can reproduce the exact same problem for the following test system even
when I am using the LAMMPS binaries- and not the linked library through
within my own code. Here you can find a script that I am using, but again,
the file butane100.dat keeps growing as new configurations are added to its
end despite the dump_modify command.

Regards

Amir

PS, I changed the locations of dump and dump_modify in the script, but I
always get the same outcome despite what the LAMMPS documentation asserts.

sorry, but the issue is in your not reading the LAMMPS documentation
correctly. LAMMPS does behave exactly as documented in your input.

axel.

OK, so what is wrong in all I said? What is wrong with the following script? If I am doing something wrong, it must be in the following script, right? But you become derogatory instead of letting me know what is wrong with my script and/or my approach.

Hi everybody,

Due to a design problem in LAMMPS which does not allow for compute stress
command to be invoked unless the stresses are dumped into a text file, I

this is not entirely correct.

That's EXACTLY true. This is indeed what Steve told me to do a few weeks
ago. You can read for yourself if you don't have a PEBCAC problem
yourself!!!!

http://lammps.sandia.gov/threads/msg32382.html

you have to do some
output that uses the per-atom stress compute you have defined and are
requesting.

a) the PEBCAC didn't refer to this
b) you don't have to create a dump to trigger a compute.

have to generate a dummy dump file to keep these unnecessary numbers every
few steps at which I need the stresses. [I am linking the LAMMPS library so

why don't you just write to /dev/null? and why not dump out data from only one
atom to further reduce the amount of time spent on generating formatted output?

If LAMMPS would do the right thing, I would not have to do this at all as I

i didn't contest that.

don't need to do that anyway, so you are implicitly accepting that 'append'
command in its current format is a garbage not doing anything.

no, you are wrong here. the append flag works just as advertised.
you are making assumptions about how it would work that are not correct.

I am dealing with these numbers in my own code internally, that's why I
don't need an output file.] In order to keep the file size in control and
save the storage system from crashing, I use the 'dump_modify' command
immediately after the dummy dump so that any new output is not appended into
the existing dummy file. But this seems not to be working although I am
invoking 'dump_modify' RIGHT AFTER invoking the original dump, and the
result is the formation of GIGABYTE-size files just in a few hours.

this is not a LAMMPS problem, but a case of PEBCAC.

Please be respectful and non-derogatory!!!!

you didn't pay the proper respect to LAMMPS and its docs.
i'm just giving you some of your own medicine here.

all you do is tell lammps to not append to a (potentially)
existing file, but to overwrite it (which is the default behavior, btw).
the file is being written to for as long as the dump is active,
i.e. until undump is called. so the dump_modify command only
applies *before* a dump is written to for the first time.
this is also what the documentation says.

That's what the system is NOT doing; i.e., instead of overwriting the existing
file, it appends the data to it. As you can see in the following script, I
am calling 'dump_modify' immediately after defining the dump, and it just
doesn't work, unless a dump is performed right before the dump_modify command
without me invoking 'run' or 'minimize'.

as i tried to explain in my previous e-mail.
append refers to appending to a file from
a *previous* dump command that has since
been deactivated with undump or was created
in a previous lammps run. it does *not* mean
that it will write out only one frame and just
keep overwriting it. *that* in turn would be
utterly useless.

regardless of that, have you tried the suggestions
that i made that should work around the "undesired
feature" that you were so heavily complaining about?

I can reproduce the exact same problem for the following test system even
when I am using the LAMMPS binaries- and not the linked library through
within my own code. Here you can find a script that I am using, but again,
the file butane100.dat keeps growing as new configurations are added to its
end despite the dump_modify command.

Regards

Amir

PS, I changed the locations of dump and dump_modify in the script, but I
always get the same outcome despite what the LAMMPS documentation asserts.

sorry, but the issue is in your not reading the LAMMPS documentation
correctly. LAMMPS does behave *exactly* as documented in your input.

axel.

OK, so what is wrong in all I said? What is wrong with the following
script? If I am doing something wrong, it must be in the following script,
right? But you become derogatory instead of letting me know what is wrong
with my script and/or my approach.

i *already* made a suggestion (write the dump to /dev/null) and
proposed an optimization (define a group with only one atom and
dump out that info). how much more do you want?

if you are that easily offended, you should not
post provocative inquiries to a public mailing list.

axel.

Due to a design problem in LAMMPS which does not allow for compute stress
command to be invoked unless the stresses are dumped into a text file,

I don't think of this as a design "problem", but a feature. LAMMPS does
not invoke its computes (some of which are costly), except on timesteps
when they are needed. So the question is, how is LAMMPS supposed
to know when your external code, calling it thru the lib interface, is going
to want/need the stresses? Do you have a suggestion as to how that
should work? I'm open to suggestions.

Note that compute stress/atom
is a compute that cannot simply be invoked on the spur of the moment
at the end of a timestep. LAMMPS
must know at the start of the timestep that it will be invoked that
timestep, so that it can tally the virial on a per-atom basis during the
potential evaluation. You could write a library function (for library.cpp)
that made the appropriate LAMMPS calls that tells LAMMPS the future
timestep on which your driver code will want the stresses.
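A rough sketch of that idea, using the existing C library interface (lammps_command() from LAMMPS's library.h); the compute and fix IDs here are made up for illustration, and newer LAMMPS versions may require an extra temperature-compute argument (or NULL) for stress/atom:

```c
#include "library.h"  /* LAMMPS C-style library interface */

/* Sketch: the driver tells LAMMPS, via ordinary script commands, to
   tally per-atom stress every 5000 steps -- no dump file involved.
   "myStress" and "keepStress" are made-up IDs for illustration. */
void setup_stress_tally(void *lmp)
{
    lammps_command(lmp, "compute myStress all stress/atom");
    /* consuming the compute in a fix forces LAMMPS to invoke it */
    lammps_command(lmp, "fix keepStress all ave/atom 5000 1 5000 c_myStress[1]");
}
```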

What LAMMPS does now is figure this out for you, based on the various
fixes, computes, output your script defines that use the compute. Obviously,
these commands know nothing about your driver code. But it is not
true that you have to dump all the stresses to a file to invoke this logic.
You could use the compute in a fix ave/atom command that does
no output to a file. You could use it in a compute reduce command,
that outputs a single (summed) value to the screen as part of thermo
output. Or you could reference a single value from the compute in
a fix ave/time command or in thermo output. Etc, etc.
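In script form, those no-file alternatives could look roughly like this (IDs are placeholders; depending on the LAMMPS version, stress/atom may take an extra temperature-compute argument):

```
compute myStress all stress/atom
# (a) consume it in a fix that writes no file
fix     keepStress all ave/atom 5000 1 5000 c_myStress[1] c_myStress[2] c_myStress[3]
# (b) or reduce it to one number and print it with thermo output
compute sumStress all reduce sum c_myStress[1]
thermo_style custom step temp c_sumStress
```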

Steve