I compiled the latest version of LAMMPS (8Feb2023) today and wanted to run it the same way I did with the version I used before, namely 23Jun2022, like:
mpirun -np nproc lmp -in input
This worked fine for the former version, but with the new version the command apparently runs the input in serial, nproc times. Executing just:
mpirun -np 8 lmp
without an input specified gives for the 23Jun2022 version:
LAMMPS (23 Jun 2022 - Update 3)
WARNING: Using I/O redirection is unreliable with parallel runs. Better use -in switch to read input file. (src/lammps.cpp:530)
using 1 OpenMP thread(s) per MPI task
and for the 8Feb2023 version:
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
LAMMPS (8 Feb 2023)
using 1 OpenMP thread(s) per MPI task
Total wall time: 0:00:00
To double-check, I recompiled both versions just now in exactly the same fashion, using CMake and the “most” preset, but the strange behaviour remains. Is this a problem with the version, or might my PC be the issue?
There are two common causes:
- When configuring LAMMPS, CMake could not find the MPI library automatically and fell back to the MPI STUBS library. That means you have a serial LAMMPS executable, and starting several of them runs several independent serial calculations, since they do not try to communicate with each other.
- When running LAMMPS, you are using an mpirun/mpiexec command from a different MPI library than the one LAMMPS was compiled with (e.g. MPICH vs. OpenMPI). Then mpirun cannot communicate the rank assignment, and each process runs as if it were launched on its own.
Please have a look at the output of lmp -h. At some point it prints the MPI library and version that were used to compile LAMMPS, which will tell you which of the two cases above applies.
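If it helps to automate that check, here is a small sketch that classifies a line from the lmp -h output (the sample strings below are illustrative stand-ins for real output; on an actual system you would pipe `lmp -h | grep MPI` into it):

```shell
# Classify an "lmp -h" MPI line: STUBS means a serial-only build (case 1).
check_build() {
  if printf '%s\n' "$1" | grep -q 'MPI STUBS'; then
    echo "serial build (case 1: CMake fell back to the STUBS library)"
  else
    echo "parallel build (if runs still split, suspect case 2: mismatched mpirun)"
  fi
}

# Sample line mimicking a STUBS build (not captured from a real machine):
check_build "MPI v1.0: LAMMPS MPI STUBS for LAMMPS version 8 Feb 2023"
# -> serial build (case 1: CMake fell back to the STUBS library)
```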
I compiled my version of lmp using CMake after loading MPICH, but after compilation lmp -h still shows the MPI STUBS library.
Is there more to making MPICH available than running ‘module load mpich’?
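Usually not, but a common pitfall is a stale build directory: if CMake was configured before the module was loaded, the cached (failed) MPI detection persists. A hedged sketch of a clean reconfigure, assuming the source tree layout and the “most” preset from this thread; BUILD_MPI=yes makes the configure step fail loudly instead of silently falling back to the STUBS library:

```shell
# Start from a fresh build directory so no stale CMakeCache.txt masks
# the MPI detection, and force MPI on.
module load mpich
cd lammps
rm -rf build && mkdir build && cd build
cmake -C ../cmake/presets/most.cmake \
      -D BUILD_MPI=yes \
      -D CMAKE_CXX_COMPILER=mpicxx \
      ../cmake
cmake --build . -j 8
```

Pointing CMAKE_CXX_COMPILER at the MPICH compiler wrapper is one way (an assumption about your setup, not the only way) to make sure the detected MPI matches the loaded module.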
Sorry for reviving this topic, but I am experiencing a very similar problem. After launching a simulation with mpirun -np nproc lmp -in input I get output that looks like:
LAMMPS (10 Dec 2025)
LAMMPS (10 Dec 2025)
LAMMPS (10 Dec 2025)
LAMMPS (10 Dec 2025)
LAMMPS (10 Dec 2025)
...
The LAMMPS version line appears as many times as nproc, so instead of splitting one simulation across tasks, nproc independent simulations are probably launched. The main difference is that LAMMPS is recognising the MPI library. If I check with lmp -h:
I cannot comment on WSL, but I know for a fact that the native pre-compiled version of LAMMPS using the MS-MPI package works for me as it should, and compiling LAMMPS with MSVC++ and the MS-MPI SDK has also not given me problems when I tested it. Since I use Windows in a virtual machine, it seemed overkill to also install WSL.
@chrisp Here is a suggestion for some additional debugging: take the bench/in.lj file, append the line “info config comm”, and report that output back.
Are you sure that your mpirun command is from MPICH and not OpenMPI?
Ubuntu usually installs OpenMPI as the default MPI, and unless you are doing LAMMPS development and have to use valgrind a lot, I would recommend using that.
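One way to tell which launcher you have is its version banner: OpenMPI’s mpirun identifies itself as “Open MPI”, while MPICH’s launcher reports HYDRA/MPICH. A small sketch that classifies such a banner (the sample strings are illustrative; on a real system you would feed in `mpirun --version` output):

```shell
# Classify an "mpirun --version" banner by MPI implementation.
mpi_flavor() {
  case "$1" in
    *"Open MPI"*)    echo "OpenMPI" ;;
    *HYDRA*|*MPICH*) echo "MPICH" ;;
    *)               echo "unknown" ;;
  esac
}

mpi_flavor "mpirun (Open MPI) 4.1.4"   # -> OpenMPI
mpi_flavor "HYDRA build details:"      # -> MPICH
```

`which mpirun` additionally shows the path, which is useful when several MPI installations are on the machine.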
I don’t know what is going on here; it does not seem to make sense. There must be an explanation, but without physical access to the machine there is little else I can do.
I reinstalled with OpenMPI and it seems to work as intended. I am just curious why running with in.lj once again gives the output:
Communication information:
MPI library level: MPI v4.1
MPI version: MPICH Version: 4.2.0
MPICH Release date: Fri Feb 9 12:29:21 CST 2024
MPICH ABI: 16:0:4
Comm style = brick, Comm layout = uniform
Communicate velocities for ghost atoms = no
Communication mode = single
Communication cutoff = 2.8
Nprocs = 8, Nthreads = 1
Processor grid = 2 x 2 x 2
Does this mean that LAMMPS for some reason still recognizes MPICH as the available MPI implementation? I have confirmed from the CMake configuration that the OpenMPI directories are used.
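For what it’s worth, the “MPI version:” line from info config comm reports the implementation the executable was compiled against, independent of which launcher started it. A small helper to pull that name out of a saved log line (the sample is taken from the output above):

```shell
# Extract the MPI implementation name from an "MPI version:" log line.
mpi_impl() {
  printf '%s\n' "$1" | grep -oE 'MPICH|Open MPI' | head -n1
}

mpi_impl "MPI version: MPICH Version: 4.2.0"   # -> MPICH
```

If the log still says MPICH after an OpenMPI rebuild, a likely suspect is a stale lmp executable earlier in the PATH; `command -v lmp` shows which binary is actually being run.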