Clarification on the communicate command (srd/in.srd.pure example)

Hi All,

In a situation where I don't need to communicate any per-atom information between processors, I used the following commands (taken from the example srd/in.srd.pure):

atom_modify first empty
group empty type 2
communicate single group empty

There are no atoms of type 2 in my system, so the group really is empty, and this way I suppose we are not communicating anything. Is there anything wrong with this logic?
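For reference, here is a minimal sketch of how I understand these three lines fitting into a complete input. This is hypothetical, not the actual in.srd.pure; the lattice, box, mass, and create_atoms lines are made up for illustration, and only the three commands above come from the example:

units        lj
atom_style   atomic
atom_modify  first empty          # store atoms of group "empty" first in per-processor lists

lattice      sc 0.4
region       box block 0 10 0 10 0 10
create_box   2 box                # two atom types are defined ...
create_atoms 1 box                # ... but only type-1 atoms are created
mass         * 1.0

group        empty type 2         # matches no atoms, so the group is empty
communicate  single group empty   # only ghost atoms of group "empty" are exchanged, i.e. none

Since group "empty" matches no atoms, the communicate command should effectively disable ghost-atom exchange, which I take to be the point of the trick.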

I actually took this logic from the example srd/in.srd.pure. That example ran fine on older versions of LAMMPS (15 Jan 2011) but does not run with newer versions: it just terminates without any error message. I suspect the communicate command. Has anything changed in newer versions of LAMMPS related to the communicate command?

Thanks,

I just ran examples/srd/in.srd.pure on my box with the
current version on 1 and 3 procs. It ran fine.

Steve

Thanks Steve,

I did everything I could over the last couple of days to find the bug, but couldn't figure out what's wrong. I ran examples/srd/in.srd.pure exactly as downloaded, without any modifications. Below is the output message; everything matches the reference output provided with the example until the last two lines. The program is terminated before the run even starts.

----------------------------------------------
Job 95264 started at Fri Jan 4 10:33:30 EST 2013
Scratch directory /scratch/avoca/95264 has been allocated
1 Blue Gene/Q compute nodes have been allocated
LAMMPS (14 Oct 2012)

this is an outdated version of LAMMPS.
please upgrade to the current version;
nobody will help you find a bug that
may already have been fixed.

[...]

  # of rescaled SRD velocities = 0
  ave/max all velocity = 13.2569 24.3562
2013-01-04 10:33:34.327 (WARN ) [0xfffae078be0] 210861:ibm.runjob.client.Job: terminated by signal 11
2013-01-04 10:33:34.327 (WARN ) [0xfffae078be0] 210861:ibm.runjob.client.Job: abnormal termination by signal 11 from rank 0

a segmentation fault can mean multiple things:
a) a bug in the code (unlikely if it works well on other platforms)
b) running out of memory or stack space (unlikely for an example input)
c) a miscompiled executable (not unheard of with IBM XL compilers)

in any case, you should track down the location of the segfault
using the instructions that IBM provides and see if it is a bug in
the code or whether the compiler messed things up. this is
where the rubber meets the road. since few of us have an ibm
bg/q to play around with, you have to do the dirty deed yourself.

axel.

p.s.: isn't it a bit pointless to run with a single MPI task on such a machine??

Thanks Steve and Axel,

The error is due to the optimization level.

I get the abnormal termination when LAMMPS is compiled at optimization level -O3; the program runs fine when it is compiled at -O2 or below.

The machine is an IBM Blue Gene/Q.

Best,

Then that's an IBM problem, not a LAMMPS problem.

Steve