Bug in latest dump_cfg?

Hi developers,

Is there a bug in the latest dump_cfg routine?

If I run in serial there is no problem, but if I run in parallel (mpirun -np 12 lammps -in in) with the dump_cfg command, the program always hangs just after "Setting up run ..."

I have tracked the offending code down to the lines:

   if (multiproc) nchosen = nme;
   else MPI_Reduce(&nme,&nchosen,1,MPI_INT,MPI_SUM,0,world);

from the write_header() routine of dump_cfg.cpp. However, this is the same code as in the 5Mar12 release, so I believe the problem may instead lie in the recent rewrite of the write() routine in dump.cpp.
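To illustrate the failure mode I suspect: if the rewritten write() in dump.cpp causes some processors to skip write_header() (or to enter a different collective first), the processors that do reach the MPI_Reduce above will block forever waiting for the missing contributions, which would match a hang right after "Setting up run ...". Below is a minimal standalone MPI sketch of that kind of mismatch; it is not LAMMPS code, and the file name and the rank-0-only branch are only stand-ins for the suspected code path.

   // mismatched_reduce.cpp -- standalone sketch of a mismatched collective,
   // not LAMMPS code.  If only some ranks call MPI_Reduce, the ranks that do
   // call it wait forever for the others' contributions and the job hangs.
   //
   // Build: mpicxx mismatched_reduce.cpp -o mismatched_reduce
   // Run:   mpirun -np 4 ./mismatched_reduce    (deadlocks by design)
   #include <mpi.h>
   #include <cstdio>

   int main(int argc, char **argv)
   {
     MPI_Init(&argc, &argv);

     int me, nprocs;
     MPI_Comm_rank(MPI_COMM_WORLD, &me);
     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

     int nme = 10;        // per-rank count, like nme in dump_cfg.cpp
     int nchosen = 0;     // reduced total, like nchosen in dump_cfg.cpp

     // Stand-in for the suspected bug: only rank 0 takes the code path that
     // calls the collective, so the reduction can never complete.
     if (me == 0)
       MPI_Reduce(&nme, &nchosen, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

     // Rank 0 never gets here; the other ranks print and then stall in the
     // barrier below waiting for rank 0, so the whole job hangs.
     printf("rank %d of %d past the reduce, nchosen = %d\n", me, nprocs, nchosen);

     MPI_Barrier(MPI_COMM_WORLD);
     MPI_Finalize();
     return 0;
   }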

Below is a minimal input script which reproduces the problem:

echo log
units metal
dimension 3
boundary p p p
atom_style atomic

lattice fcc 3.615
region whole block 0 10 0 10 0 10
create_box 1 whole
create_atoms 1 box

pair_style eam/alloy
pair_coeff * * /home/mmcphie/potentials/Cu_mishin1.eam.alloy Cu
timestep 0.001
thermo 100

dump dump_cfg all cfg 1000 dump.*.cfg id type xs ys zs
dump_modify dump_cfg element Cu

velocity all create 100.0 482748 dist gaussian
fix dynamics all nve
run 5000

Yes, CFG got broken by changes to higher-level LAMMPS code.
I think I fixed it; there will be a patch later today.

Thanks,
Steve