Hello,
We are using PLUMED to run metadynamics simulations with LAMMPS. The simulation works as expected on one processor, but when we run on multiple processors we receive the following error:
[rrnode23.internal:15639] *** An error occurred in MPI_Recv
[rrnode23.internal:15639] *** on communicator MPI_COMM_WORLD
[rrnode23.internal:15639] *** MPI_ERR_TRUNCATE: message truncated
[rrnode23.internal:15639] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
Attachments: plumed.dat (278 Bytes), log.lammps (1.85 KB)
> Hello,
>
> We are using PLUMED to run metadynamics simulations with LAMMPS. The
> simulation works as expected on one processor, but when we run on
> multiple processors we receive the following error:
>
> [rrnode23.internal:15639] *** An error occurred in MPI_Recv
> [rrnode23.internal:15639] *** on communicator MPI_COMM_WORLD
> [rrnode23.internal:15639] *** MPI_ERR_TRUNCATE: message truncated
> [rrnode23.internal:15639] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 15639 on
> node rrnode23.internal exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
>
> The log.lammps file and our plumed input file are attached. We are
> using the August 14, 2011 version of LAMMPS compiled with mpic++
> (openmpi Makefile). We were wondering if this is:
>
> 1) a bug/error on the LAMMPS side, and if it is, whether there is a
> relatively straightforward way to fix it,
no.
> 2) a bug/error on the PLUMED side, or finally,
very likely. there have been similar ones. i remember tracking one down
last december. (see the sketch after the questions for what this class of
error usually means.)
> 3) an error on our cluster's side.
no.
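
for context: MPI_ERR_TRUNCATE means a receive was posted with a buffer
smaller than the message that actually arrived, i.e. the sending and
receiving ranks disagree about the message size. the following is not code
from PLUMED or LAMMPS, just a minimal standalone sketch that reproduces the
same abort (compile with mpic++ and run on 2 processors):

  // standalone example (not from PLUMED/LAMMPS): trigger MPI_ERR_TRUNCATE
  #include <mpi.h>
  #include <vector>

  int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
      // rank 1 sends 10 doubles, but rank 0 only posts room for 5.
      // the incoming message is larger than the receive buffer, so
      // MPI_Recv fails with MPI_ERR_TRUNCATE ("message truncated"),
      // and the default MPI_ERRORS_ARE_FATAL handler aborts the job.
      std::vector<double> buf(5);
      MPI_Recv(buf.data(), 5, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
    } else if (rank == 1) {
      std::vector<double> buf(10, 1.0);
      MPI_Send(buf.data(), 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
  }

a size mismatch of that kind in the parallel communication would only show
up when running on more than one processor, which matches what you see.
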
> Thanks a lot and we look forward to your response,
axel.