Hi Steve,
I think I have found a shortcoming in the scatter_atoms() coupling
routine (library.cpp), which is why I was getting problems with MPI computations.
To demonstrate it, I use a very simple example: after a few LAMMPS steps, I
build an array of constant velocities (v = 0.005) in Python and then pass
it to LAMMPS with scatter_atoms():
from ctypes import c_double  # me = MPI rank, lmp = LAMMPS handle, natoms = number of atoms

n3 = 3*natoms
f = (n3*c_double)()
# Impulse: constant velocity on every atom, filled on rank 0 only
if me == 0:
    for i in range(natoms):
        f[3*i]   = 0.005
        f[3*i+1] = 0.005
        f[3*i+2] = 0.005
lmp.scatter_atoms("v", 1, 3, f)
When I run the Python program on a single processor, the velocity is
correctly transmitted to all atoms. However, when I run an MPI
computation, for instance

mpiexec -np 4 plot.py in.lammps

only the atoms owned by the first processor actually receive the scattered
velocity (I checked it with ParaView afterwards)! The same thing happens with my
scatter_property() function, which is expected since it is based on scatter_atoms().
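For what it's worth, this can also be checked directly in the script
instead of with ParaView; here is a rough sketch reusing lmp, me and
natoms from above, via the wrapper's gather_atoms() call:

# Gather velocities (ordered by atom ID) and print them on rank 0.
# If the problem occurs, atoms owned by ranks != 0 keep their old values.
v = lmp.gather_atoms("v", 1, 3)
if me == 0:
    for i in range(int(natoms)):
        print("%d %g %g %g" % (i, v[3*i], v[3*i+1], v[3*i+2]))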
I wonder whether an MPI_Allreduce (or broadcast) call is missing at the end
of the scatter_atoms() routine to send the data to all processors?
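
In the meantime, here is a minimal sketch of a possible workaround,
assuming mpi4py is available and that scatter_atoms() expects every rank
to pass the same full array: broadcast the array from rank 0 before
scattering.

from mpi4py import MPI
from ctypes import c_double
from lammps import lammps

comm = MPI.COMM_WORLD
me = comm.Get_rank()

lmp = lammps()
lmp.file("in.lammps")
natoms = int(lmp.get_natoms())

n3 = 3*natoms
f = (n3*c_double)()
if me == 0:
    for i in range(natoms):
        f[3*i] = f[3*i+1] = f[3*i+2] = 0.005
# make the data identical on all ranks before scattering
comm.Bcast([f, MPI.DOUBLE], root=0)
lmp.scatter_atoms("v", 1, 3, f)

With the Bcast in place, each processor should be able to copy the
values for the atoms it owns.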
Thanks,
Joris