help: fix ave/atom

Hello everyone,

When I use fix ave/atom, I run into a problem with MPI. My input file contains:

fix 5 all ave/atom 1 5000 50000 x y z
dump 2 all custom 50000 dump.avepz id type f_5[1] f_5[2] f_5[3]

With mpich2-1.3.2p1 (mpiexec -n 8 ./lmp_linux < myinput) the run dies with:

ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP
APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)

With openmpi 1.4 (mpirun -np 8 ./lmp_openmpi < myinput) it dies with:

mpirun noticed that process rank 7 with PID 26565 on node matlab exited on signal 11 (Segmentation fault).

However, the serial runs ./lmp_linux < myinput and ./lmp_openmpi < myinput both work fine. And if I remove these two lines:

fix 5 all ave/atom 1 5000 50000 x y z
dump 2 all custom 50000 dump.avepz id type f_5[1] f_5[2] f_5[3]

both parallel runs also work. I tried openmpi 1.4 with the Intel 11 compiler, and icc with mpich2-1.3.2p1; the problem is the same. My .bashrc also sets: ulimit -s unlimited

Can fix ave/atom not be used with MPI? Please help me. Thank you!

2011/3/21 fayu <[email protected]...>:

this looks a lot like a version of the bug that was recently fixed
in several other fixes, which was triggered when the simulated system
was decomposed such that some processors had no atoms.

as a workaround, you may want to adjust your domain decomposition
so that every processor has atoms. in general, that is desirable anyway
for better load balancing and thus overall performance.

cheers,
    axel.

Thank you very much!

hi,

please try the following change to src/fix_ave_atom.cpp,
i.e. insert the two lines marked with a '+',
and let us know if that fixes your problem.

thanks,
    axel.

diff --git a/src/fix_ave_atom.cpp b/src/fix_ave_atom.cpp
index a6f81a7..a8ca8df 100644
--- a/src/fix_ave_atom.cpp
+++ b/src/fix_ave_atom.cpp
@@ -394,6 +394,8 @@ double FixAveAtom::memory_usage()

 void FixAveAtom::grow_arrays(int nmax)
 {
+  // make sure nmax > 0
+  if (nmax == 0) nmax = 1;
   array = memory->grow_2d_double_array(array,nmax,nvalues,
                                        "fix_ave/atom:array");
   array_atom = array;
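for context, here is a standalone sketch of why the clamp helps (not LAMMPS code; grow_2d and the names below are hypothetical stand-ins for memory->grow_2d_double_array). with nmax == 0, allocating a zero-row table can legitimately yield an empty or null block, so any later code that touches row 0 on an atom-less processor can segfault; clamping to 1 guarantees at least one valid row:

```cpp
#include <cstdlib>

// Hypothetical stand-in for a 2d grow routine: one contiguous data
// block of nmax*nvalues doubles plus a table of row pointers.
static double **grow_2d(int nmax, int nvalues) {
  double **rows = (double **) std::malloc(sizeof(double *) * nmax);
  double *data  = (double *)  std::calloc((size_t) nmax * nvalues,
                                          sizeof(double));
  for (int i = 0; i < nmax; i++) rows[i] = data + (size_t) i * nvalues;
  return rows;
}

// Sketch of the patched grow_arrays(): a processor that owns zero
// atoms still gets a valid one-row allocation.
static double **grow_arrays(int nmax, int nvalues) {
  if (nmax == 0) nmax = 1;   // the two added lines from the diff
  return grow_2d(nmax, nvalues);
}
```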

A bugfix for this will be in the next patch.

Thanks,
Steve