MPI issue

Dear LAMMPS users,

When I use 12 cores to run a job, I find there is no parallelization; all cores are doing the same thing, and every line of the screen output appears 12 times.
And the processor grid was:
1 by 1 by 1 MPI processor grid

My job script is shown below:
#!/bin/bash -l

#SBATCH --job-name=test
#SBATCH --time=90000200:00:0
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12

mpirun -np 12 lmp_7Aug19 -echo screen -in in.cool > screen.log

My test LAMMPS inputs and job script are copies of ones I used before (which ran normally at the time), so I believe there is no problem with my input.

Other users have tested LAMMPS on the same machine, using executables they compiled themselves, so the machine itself is not the problem.

But I tested three versions of LAMMPS that I compiled earlier and saw the same issue with all of them. All of these executables ran normally before, and I have not recompiled them recently.

Could you please give me some suggestions to fix this issue? Thanks in advance!

Best regards,
Zhao

you MUST compile and run your LAMMPS executable with the SAME MPI library.
if there is a mismatch, mpirun will just launch many independent serial copies, since those do not know about each other. the way that the MPI programs "connect" to each other is specific to the MPI library, so the library used at compile time has to be consistent with the mpirun/mpiexec program used at launch time.
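As a rough sketch of what "consistent" means here: the install prefix of the MPI library linked into the binary should match the install prefix of the mpirun that launches it. In practice one would compare the output of `ldd ./lmp_7Aug19 | grep libmpi` against `command -v mpirun`; the paths below are made-up examples, not values from this thread.

```shell
# Derive the installation prefix from a path such as
#   /opt/openmpi-4.1/lib/libmpi.so.40  or  /opt/openmpi-4.1/bin/mpirun
mpi_prefix() {
    case $1 in
        */bin/*) printf '%s\n' "${1%/bin/*}" ;;
        */lib/*) printf '%s\n' "${1%/lib/*}" ;;
        *)       printf '%s\n' "$1" ;;
    esac
}

# Compare the two prefixes and report whether they agree.
check_mpi_match() {
    lib_prefix=$(mpi_prefix "$1")    # e.g. from: ldd ./lmp_7Aug19 | grep libmpi
    run_prefix=$(mpi_prefix "$2")    # e.g. from: command -v mpirun
    if [ "$lib_prefix" = "$run_prefix" ]; then
        echo "OK: executable and mpirun both come from $lib_prefix"
    else
        echo "MISMATCH: executable uses $lib_prefix, mpirun is from $run_prefix"
    fi
}

# Made-up example paths:
check_mpi_match /opt/openmpi-4.1/lib/libmpi.so.40 /opt/openmpi-4.1/bin/mpirun
check_mpi_match /opt/openmpi-4.1/lib/libmpi.so.40 /usr/lib64/mpich/bin/mpirun
```

A mismatch like the second call is exactly the situation that produces N serial copies and a "1 by 1 by 1 MPI processor grid" in the LAMMPS output.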

axel.

Thanks, Axel, very much as always.

Yes, my mpirun had recently been modified unintentionally. Now that I use the original mpirun, the issue has disappeared.

Best,
Zhao