First of all, thanks a lot for your answer.
You’re right, I meant that I was re-compiling while changing one small thing at a time, to see what the effect of BUILD_MPI=yes is, whether including Python has an impact, etc. I don’t have a lot of experience with this stuff, so sometimes I just try things!
In terms of modules, I load the exact same environment module in my batch script (openmpi-4.0.5).
The run produces two outputs: an .err file and an .out file; the .err file reads:
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[50048,1],13]
Exit code: 1
--------------------------------------------------------------------------
and the .out file reads:
ERROR: Processor partitions do not match number of allocated processors (src/lammps.cpp:451)
ERROR: Processor partitions do not match number of allocated processors (src/lammps.cpp:451)
with the number of repeated lines depending on how I try to partition the cores. I would not compare this LAMMPS version with the one I use on my laptop, because many things differ between the two setups (including the potential I am using). I only mentioned it because my partition command looks syntactically correct to me, since it works in other cases.
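To make sure I understand the error, here is the consistency I believe LAMMPS is enforcing, written as a little shell check (the values below are hypothetical, just to illustrate the simple `-partition NxM` case, not taken from my actual run): with `-partition NxM`, the product N*M has to equal the total number of MPI ranks given to mpirun, otherwise LAMMPS aborts with exactly the error above.

```shell
# Hypothetical values, only for illustration:
NP=16          # total ranks requested, e.g. mpirun -np 16
PART="4x4"     # value passed to lmp -partition (N partitions of M ranks)

NPARTS=${PART%x*}   # number of partitions -> 4
PER=${PART#*x}      # ranks per partition  -> 4

# LAMMPS requires NPARTS * PER == NP for a single NxM partition spec
if [ $((NPARTS * PER)) -eq "$NP" ]; then
    RESULT="partition matches"
else
    RESULT="mismatch: $NPARTS*$PER != $NP"
fi
echo "$RESULT"
```

So if I ask for, say, `-np 12` with `-partition 4x4`, every rank would report the mismatch, which would also explain why the number of error lines changes with the partitioning I try.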
If I understand correctly, oversubscribing MPI (with --oversubscribe) could help, but I’m not sure that’s the problem here, because when I try it, it still fails. If you have any other advice, I’d be very grateful.
Thanks again for your help, I appreciate it very much!