Question about efficient use of DUMP command

I would appreciate advice on whether I can replace the following dump command with something more efficient as part of a long molecular dynamics run:
dump dmp1 Ligroup custom 100 dmp1.dump id xu yu zu vx vy vz
dump_modify dmp1 sort id
dump_modify dmp1 format float %20.15g
Here “Ligroup” is a large group of atoms and the dump is written every 100 time steps. For large simulation cells and long runs the dmp1.dump file becomes very large. What we really need, at each time step, is the sum of xu over all of the atoms in the group (as well as the corresponding sums for yu, zu, vx, vy, and vz). I looked at fix, compute, and variable, but could not figure out how to output these sums. Advice on whether this can be done in LAMMPS, or whether it is a bad idea (for example, because it would make the run less efficient), will be very much appreciated. Thanks very much, Natalie Holzwarth

You should be able to use compute reduce (see the compute reduce command in the LAMMPS documentation) and then feed its output into fix ave/time or fix print, either of which can write the result to a file.
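
For the fix print route, a minimal sketch might look like the following (untested; the compute, variable, and fix IDs and the file name are just placeholders, and only the vx sum is shown, the other quantities would be added the same way):

compute vsum Ligroup reduce sum vx       # global scalar: sum of vx over the group
variable st equal step                   # current time step
variable svx equal c_vsum                # current value of the reduction
fix out all print 100 "${st} ${svx}" file vxsum.txt screen no title "# step sum(vx)"

The variables inside the quoted string are evaluated each time fix print writes the line, so the file receives the current sums rather than the values at the time the fix is defined.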

See also:

Thanks for two great suggestions. I will try both. Much appreciated. Natalie

The first suggestion works great. Perhaps there is a more efficient scheme, but the following does the job for the atom group Ligroup:

# sums of the velocity components over the group
compute 1 Ligroup reduce sum vx
compute 2 Ligroup reduce sum vy
compute 3 Ligroup reduce sum vz
# unwrapped coordinates are extracted per atom first, then summed
compute 1u Ligroup property/atom xu
compute 11 Ligroup reduce sum c_1u
compute 2u Ligroup property/atom yu
compute 12 Ligroup reduce sum c_2u
compute 3u Ligroup property/atom zu
compute 13 Ligroup reduce sum c_3u
# make the six sums available every time step
fix 1 all ave/time 1 1 1 c_1
fix 2 all ave/time 1 1 1 c_2
fix 3 all ave/time 1 1 1 c_3
fix 11 all ave/time 1 1 1 c_11
fix 12 all ave/time 1 1 1 c_12
fix 13 all ave/time 1 1 1 c_13

thermo_style custom step time f_1 f_2 f_3 f_11 f_12 f_13

You would only need one compute reduce and one fix ave/time command, since both can take multiple inputs. That would reduce the overhead, especially for compute reduce, since each reduction is a collective MPI communication.
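
For example, something along these lines (an untested sketch; the compute and fix IDs are arbitrary names):

compute unwrap Ligroup property/atom xu yu zu     # xu/yu/zu are not direct inputs of compute reduce
compute allsum Ligroup reduce sum c_unwrap[1] c_unwrap[2] c_unwrap[3] vx vy vz
fix avsum all ave/time 1 1 1 c_allsum[1] c_allsum[2] c_allsum[3] c_allsum[4] c_allsum[5] c_allsum[6]
thermo_style custom step time f_avsum[1] f_avsum[2] f_avsum[3] f_avsum[4] f_avsum[5] f_avsum[6]

The single compute reduce returns a global vector with all six sums, and adding the file keyword to fix ave/time (e.g. file sums.txt) would also write them to a file, replacing the large dump file from the original question.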

Thanks! That suggestion works beautifully! Much appreciated.
