Restart MD simulation error

Hi

I am having some problems with the restart command of LAMMPS. I ran a case and am now trying to restart it to run for a longer time. I read the restart file with " read_restart Au_water.400000 ", but it is not working.
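Roughly, the restart part of the input looks like this (a sketch only; the fix and run lines are illustrative, not the exact script):

```
# restart input (sketch): continue the run from the binary restart file
read_restart  Au_water.400000

# fixes are not fully restored from a restart file and must be re-declared;
# re-using the same fix ID and style lets LAMMPS restore the fix's saved
# state (cf. "Resetting global state of Fix 3 Style nvt" in the log below)
fix           3 computation nvt temp 300.0 300.0 100.0

run           400000
```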
Here is the .out file :

lammps/20140801(4):ERROR:105: Unable to locate a modulefile for 'mkl'
LAMMPS (1 Aug 2014)
WARNING: OMP_NUM_THREADS environment is not set. (../comm.cpp:79)
using 1 OpenMP thread(s) per MPI task
WARNING: Mixing forced for lj coefficients (../pair_lj_long_tip4p_long.cpp:1408)
WARNING: Using largest cutoff for pair_style lj/long/tip4p/long (../pair_lj_long_tip4p_long.cpp:1410)
Reading restart file ...
restart file = 1 Aug 2014, LAMMPS = 1 Aug 2014
orthogonal box = (0 0 7) to (326.4 326.4 250)
4 by 6 by 4 MPI processor grid
251072 atoms
65536 bonds
32768 angles
Finding 1-2 1-3 1-4 neighbors ...
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
24075 atoms in group freeze
226997 atoms in group computation
98304 atoms in group water
152768 atoms in group gold
Finding SHAKE clusters ...
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
32768 = # of frozen angles
Resetting global state of Fix 3 Style nvt from restart file info
PPPMDisp initialization ...
[gcn-2-11.sdsc.edu:mpirun_rsh][signal_processor] Caught signal 15, killing job
[gcn-2-13.sdsc.edu:mpispawn_2][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-13.sdsc.edu:mpispawn_2][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-13.sdsc.edu:mpispawn_2][handle_mt_peer] Error while reading PMI socket. MPI process died?
[gcn-2-14.sdsc.edu:mpispawn_3][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-14.sdsc.edu:mpispawn_3][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-14.sdsc.edu:mpispawn_3][handle_mt_peer] Error while reading PMI socket. MPI process died?
[gcn-2-15.sdsc.edu:mpispawn_4][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-15.sdsc.edu:mpispawn_4][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-15.sdsc.edu:mpispawn_4][handle_mt_peer] Error while reading PMI socket. MPI process died?
[gcn-2-12.sdsc.edu:mpispawn_1][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-12.sdsc.edu:mpispawn_1][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-12.sdsc.edu:mpispawn_1][handle_mt_peer] Error while reading PMI socket. MPI process died?
[gcn-2-16.sdsc.edu:mpispawn_5][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-16.sdsc.edu:mpispawn_5][read_size] Unexpected End-Of-File on file descriptor 21. MPI process died?
[gcn-2-16.sdsc.edu:mpispawn_5][handle_mt_peer] Error while reading PMI socket. MPI process died?

I would really appreciate it if someone could share a solution. Thanks!

there is not much useful information here, so before anybody can even
suggest a solution, you have to better document that there is a
problem, and in particular that it is a problem with LAMMPS and not
with your submission script or the machine you are running on (which is
quite possible based on your output: the "Caught signal 15" line means
the job was killed from outside, e.g. by the batch system, rather
than LAMMPS aborting on its own).

some hints:

1) try setting up a similar simulation (i.e. with atoms of the same
atom types and the same force field), but with *many fewer* atoms.
just run it for a few hundred steps, write a restart, and try to
restart from it. can you still restart? try this in serial and with
multiple processors. does it still produce the same restart issue? if
yes, proceed to step 2)
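as a sketch, the two stages of step 1) might look like this (the pair
style, coefficients, and file names here are placeholders; substitute
the actual force field and data file):

```
# stage1.in (sketch): short run on a downsized system, write a restart
units         real
atom_style    full
read_data     small_test.data     # same atom types / force field, far fewer atoms
pair_style    lj/cut 10.0         # placeholder; use the real pair style
pair_coeff    * * 0.1 3.0
fix           1 all nvt temp 300.0 300.0 100.0
run           200
write_restart small_test.restart

# stage2.in (sketch): try to continue from the restart,
# once in serial and once with mpirun -np N
# read_restart  small_test.restart
# fix           1 all nvt temp 300.0 300.0 100.0
# run           200
```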

2) take the same input and try it with a newer version of LAMMPS. your
version is almost two years old. many, many bugs have been fixed since
then, and you may be hitting a bug that has already been corrected. if
the problem persists, proceed to step 3)

3) go to https://github.com/lammps/lammps/issues and create a bug
report issue (have a look at
https://github.com/lammps/lammps/issues/69 for an example of what this
could look like and what information to provide) and attach your input
files (remember to rename them to have a .txt extension, or else the
upload may be blocked).

axel.