[lammps-users] Compute reduce multiplies by number of processors

Long (well not that long I guess) time listener, first time caller . . .

I am having a problem with compute reduce using the keywords ave or sum. I give compute reduce a global vector. The max keyword gives back the expected result, the maximum value in the vector. However, either ave or sum gives the result I would expect multiplied by the number of processors used (I have checked with various numbers of processors, and by printing out all the values in the vector and summing/averaging them by hand). I change only the number I type after the -np flag when calling LAMMPS and get a different result out of compute reduce. I don't particularly need compute reduce, but I am wondering whether I am having a larger MPI problem.
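To give a concrete picture of the kind of usage I mean, a minimal sketch looks something like the lines below (this is only an illustration, not my actual in.test; the ke/atom compute is just an arbitrary per-atom quantity to reduce):

  # sketch: reduce a vector three ways and compare results across -np values
  compute   ke    all ke/atom              # per-atom kinetic energy
  compute   sumke all reduce sum c_ke      # should match the hand-summed total
  compute   aveke all reduce ave c_ke      # should match the hand-averaged value
  compute   maxke all reduce max c_ke      # max comes out as expected
  thermo_style custom step c_sumke c_aveke c_maxke

Running the same script with, say, mpirun -np 1 and mpirun -np 4 is enough to see the sum/ave values change while max stays put.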

I have a Mac Pro running Leopard. I thought I was lucky because I simply compiled using make mac_mpi (without changing the makefile) and didn’t get an error. I was using the Jan10 version of LAMMPS but have now updated to the most recent version (Jan24), which did not help.

I am attaching a very small test input script and data file in case people want to see if they can reproduce these results.

Lisa

in.test (776 Bytes)

input.lammps (2.15 KB)

This was a bug - see the 1Feb10 patch. Please try it out
and see if the results are now what you expect.

Thanks,
Steve

Works great—thanks!!
Lisa