problem with compute reduce and run every

Dear All,
the code below looks, every 10 time steps, at the coordination sphere of a moving atom (including only some atom types), and should calculate the dipole moment of that sphere (let's suppose it is not charged). The problem is that when I try to evaluate myvdx after the compute reduce, LAMMPS (27 Nov 2018) fails badly with a segfault (see below). Everything is fine otherwise. I cannot figure out the problem; can anyone please help?
Many thanks in advance.
Stefano Mossa

CEA Grenoble - INAC/SYMMES
17, Rue des Martyrs
38054 Grenoble Cedex 9
France

there is not enough information here to be able to identify the origin
of the segfault.

please try without the MPIIO dump. with the compute reduce, the sorted
dump, and this overall convoluted and complex processing, you are
limiting parallel efficiency so much that it is unlikely you gain any
benefit from MPIIO over the regular dump. the MPIIO version is also
much less tested and thus much more likely to misbehave under such
extremely unusual conditions. ... and you are not likely to use
thousands of MPI ranks anyway, right?
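
for example, switching from the MPIIO dump style back to a regular,
sorted dump would be something like this sketch (the dump ID, fields,
and file names below are made up, not taken from your input):

# MPIIO version, presumably similar to what you have now:
# dump traj all custom/mpiio 100 dump.mpiio id type xu yu zu
# regular dump writing the same data, with sorted output:
dump traj all custom 100 dump.custom id type xu yu zu
dump_modify traj sort id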

if that doesn't make a difference, please try to reduce your input to
the minimum required (number of commands and number of atoms) to
reproduce the segfault (it doesn't have to do anything meaningful),
and then provide this test input deck so we can have a closer look and
use common debugging tools to try and identify what is going on.

axel.

BTW: a lot of the code in the "every" segment is redundant and
invariant, so it does not need to be executed every time; it could be
set up just once before the run.

axel.

one more piece of information. from the documentation of the run command:

If your input script changes the system between 2 runs, then the
initial setup must be performed to insure the change is recognized by
all parts of the code that are affected. Examples are adding a fix or
dump or compute, changing a neighbor list parameter, or writing a
restart file which can migrate atoms between processors. LAMMPS has no
easy way to check if this has happened, but it is an error to use the
pre no option in this case.

you are creating a compute as part of the every section. that is
listed here as a modification of the system and thus requires an
initialization, which is not done during "every", as that implies
'pre no post no'.
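
to make the documentation's point concrete, here is a minimal sketch
(hypothetical commands, not from your input) of a system change between
two runs that needs the setup to be redone:

run 1000
compute myke all ke/atom   # adding a compute changes the system ...
run 1000 pre yes           # ... so setup must be redone (pre yes is the default)

inside an "every" block there is no such setup, hence the trouble.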

if i set up a test system and change your code to the following, there
is no segfault:

variable rsSOLV equal 3.6

group g1 type 1 10 11 12

shell 'rm info.dat'
shell 'rm spheres.dat'

variable iLi equal 5
variable xi equal x[${iLi}]
variable yi equal y[${iLi}]
variable zi equal z[${iLi}]
compute xd all property/atom xu
compute qd all property/atom q
variable vxd atom c_xd
variable vqd atom c_qd
variable vdx atom v_vxd*v_vqd
compute mydx all reduce sum v_vdx
variable myvdx equal c_mydx

run 30 pre no post no every 10 &
"region sphereSOLV sphere \{xi\} {yi} \{zi\} {rsSOLV} side in units box" &
"group inside1 region sphereSOLV" &
"group inatm intersect inside1 g1" &
"group inatm include molecule" &
"variable natm equal count(inatm)" &
"print \{myvdx\}" & "print '{natm}' append info.dat screen no" &
"write_dump inatm custom spheres.dat mol type xu v_vxd v_vdx modify
sort id append yes" &
"variable natm delete" &
"group inatm delete" &
"group inside1 delete" &
"region sphereSOLV delete"

Dear Axel,
thanks for all your hints. I do need the reduce inside the every block, however, and it still does not work. I attach below a minimal input that reproduces the segfault.
Thanks for your help.

there is a major difference here. in your previous example the compute
reduce was over group all, and thus could be moved outside the every
block. in this example, you use it on a group you newly define inside
the block. i don't see how there currently is a way to do this kind of
operation with LAMMPS scripting while conforming to the requirements
put forth in the documentation.

according to the documentation, you *must not* put definitions of
computes or fixes into an "every" block, since they require "run pre
yes", which triggers the initialization, whereas "run every" is
essentially a shortcut for a loop over "run pre no post no" commands.
so technically speaking, LAMMPS is working correctly, because you are
not using it correctly.
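
to make that explicit, "run 30 ... every 10 <commands>" behaves roughly
like the following loop sketch (not from your input), so anything you
create inside the per-chunk commands never goes through a full setup:

variable chunk loop 3
label runloop
  run 10 pre no post no
  # ... the commands given to the "every" keyword execute here ...
next chunk
jump SELF runloop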

i see three options here:

1) write a custom compute style that does exactly the kind of analysis
you want to do, and performs internally what is now done with explicit
script commands. this would avoid having to use "run every", is going
to be the most efficient way to do things, and will be the most useful
in the long run if you need to do this a lot. it is also the most
effort, though.

2) you modify the "region sphere" command source code so it can take
(equal-style) variables for the center coordinates, not just the
radius, and then use "compute reduce/region" on group all together with
"variable natm equal count(all,mysphere)"; the inside group does not
need to be defined at all. it looks like the root of your issue is that
you need a region that moves with a specific particle, and this would
eliminate the need for "run every": you could do a regular run and
output what you want with "fix print". this could be further optimized
by folding the count() operation into the compute reduce/region, i.e.
by also summing an atom-style variable based on rmask(mysphere).

3) you hack the "compute reduce" source code to detect when it is
being used without initialization, trigger the initialization
internally, and hope for the best. we should put a check for use
without initialization into that code anyway, in order to avoid the
segfault resulting from using the compute in an incorrect context.

axel.

FWIW, i have just implemented option 2 and prepared it for inclusion
in a future patch of LAMMPS. for good measure, and because the
implementations are so similar, i've done the same thing for cylinder
regions as well.

you can see the changes on github:
https://github.com/lammps/lammps/pull/1258/commits/deb21ad4e2643040b90e1a6e07f93b3061454644

and your example input now becomes:

units lj
atom_style atomic

atom_modify map yes

lattice fcc 0.8
region box block 0 4 0 4 0 4
create_box 1 box
create_atoms 1 box
mass 1 1.0

velocity all create 3.0 87285

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

fix 1 all nve

thermo 1

variable rs equal 2.5

compute md all property/atom mass
variable vmd atom c_md

variable xi equal x[1]
variable yi equal y[1]
variable zi equal z[1]

region mysphere sphere v_xi v_yi v_zi v_rs side in units box
variable rmask atom rmask(mysphere)
compute cmtot all reduce/region mysphere sum v_rmask v_vmd
variable natm equal c_cmtot[1]
variable mtot equal c_cmtot[2]
fix print all print 1 '${natm} ${mtot}'

run 3 pre yes post no
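
for the cylinder variant mentioned above, the analogous command should
presumably look like this (an untested sketch, reusing the xi/yi
variables from above for an open-ended cylinder along z):

region mycyl cylinder z v_xi v_yi v_rs INF INF side in units box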

Thank you very much!
Stefano