Hello everyone,
When I use fix ave/atom, I run into a problem with MPI.
In my input file:
fix 5 all ave/atom 1 5000 50000 x y z
dump 2 all custom 50000 dump.avepz id type f_5[1] f_5[2] f_5[3]
When I run with mpich2-1.3.2p1 (mpiexec -n 8 ./lmp_linux < myinput), it aborts with:
ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP
APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)
When I run with openmpi 1.4 (mpirun -np 8 ./lmp_openmpi < myinput), it aborts with:
mpirun noticed that process rank 7 with PID 26565 on node matlab exited on signal 11 (Segmentation fault).
But serial runs (./lmp_linux < myinput and ./lmp_openmpi < myinput) work fine.
If I remove these two lines:
fix 5 all ave/atom 1 5000 50000 x y z
dump 2 all custom 50000 dump.avepz id type f_5[1] f_5[2] f_5[3]
both parallel runs are OK as well.
I tried openmpi 1.4 with Intel compiler 11, and icc with mpich2-1.3.2p1; the problem is the same.
In my .bashrc I also set: ulimit -s unlimited
Can fix ave/atom not be used with MPI? Please help me, thank you!
This looks a lot like a version of the bug that was fixed recently in several other fixes, where the simulated system was set up such that some processors had no atoms.

As a workaround, you may want to adjust your domain decomposition so that every processor has atoms. In general, that is desirable anyway for better load balancing and thus overall performance.
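As a sketch of such an adjustment, you can set the processor grid explicitly with the LAMMPS "processors" command. The grid below is hypothetical and must match your own box geometry and rank count; the idea is that for a system that is thin along one axis (say z), splitting only along x and y keeps any rank from ending up with an empty slice of the box:

```
# hypothetical decomposition for 8 MPI ranks on a slab-like system:
# split 4x2 in the xy plane, no split along z, so no rank gets an
# empty z slice and every processor owns some atoms
processors 4 2 1
```

Place this before the read_data/create_box command; LAMMPS requires the product of the three values to equal the number of MPI ranks.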