Problems with read_restart

Hello, I am stuck on the read_restart command. It fails with the following output:
Reading restart file …
restart file = 29 Sep 2021, LAMMPS = 29 Sep 2021
WARNING: Restart file used different # of processors: 128 vs. 1 (…/read_restart.cpp:658)
restoring atom style atomic from restart
orthogonal box = (0.0000000 0.0000000 0.0000000) to (27.150000 27.150000 108.60000)
1 by 1 by 1 MPI processor grid
pair style tersoff/omp stores no restart info
3722 atoms
ERROR: One or more atom IDs is zero (…/atom.cpp:894)
Last command: read_restart restartid.equil.mpiio

Even though I used the command “reset_atom_ids” before the last command “write_restart restartid.equil.mpiio”, it still shows this error message. I also used “write_data” to check the atom IDs, and none of them is 0.
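
For reference, a minimal sketch of the sequence described above (the data file name is hypothetical, used here only to inspect the IDs):

  reset_atom_ids                          # renumber atom IDs contiguously from 1 to N
  write_data check.data                   # hypothetical name; inspect the Atoms section for IDs
  write_restart restartid.equil.mpiio     # restart written with MPI-IO (.mpiio suffix)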

Help me please.

Try without using the .mpiio suffix for the restart file.
Using parallel I/O is meaningless for such a small system, and when not running in parallel.
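
A minimal sketch of that suggestion (the plain file name is assumed; LAMMPS only uses MPI-IO when the restart file name ends in “.mpiio”):

  write_restart restartid.equil.restart   # no .mpiio suffix -> conventional restart file
  read_restart  restartid.equil.restart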

Hi, thank you for your advice.
I haven’t tried that suggestion yet.
Yesterday, when I changed the command from “reset_atom_ids” to “reset_atom_ids sort yes”, read_restart succeeded.
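
For reference, the sequence that ended up working appears to be (a sketch based on the commands quoted above):

  reset_atom_ids sort yes                 # renumber IDs, assigning them in spatial order
  write_restart restartid.equil.mpiio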
My system may be small, but my current task is to run it through thermal annealing. My timestep is 0.1 fs, and I need to simulate about 50 ns. I run the job on a cluster; my setup is below. It will take about 48 hours to run this job. Do you have any suggestions?

#SBATCH --nodes=4
#SBATCH --ntasks=128
#SBATCH --cpus-per-task=1

# Wall clock limit:
#SBATCH --time=48:00:00

# Command(s) to run:
module load cmake/3.15.1
module load make/4.1.90
module load gcc/8.3.0
module add openmpi

env OMP_NUM_THREADS=64
mpirun -np 128 …/…/src/lmp_omp -sf omp -pk omp 1 -in in.annea

This setup seems efficient, but I don’t know whether there is a more efficient way. I think the current setup is costing too much.

Thank you!

Two comments on this:

  1. Please put unrelated questions in a new post with a suitable subject line.
  2. It is impossible to comment on efficiency and parallel performance without knowing more about the details of the system (input and log file) and why certain settings were chosen (e.g. the length of the timestep or the neighbor settings).

So far the only comment I can make is that your settings are contradictory.
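
For example, OMP_NUM_THREADS=64 conflicts with “-pk omp 1” (which requests one OpenMP thread per MPI task) and with “--cpus-per-task=1” (which allocates only one core per task). A self-consistent pure-MPI launch would look something like the sketch below, keeping the same 4-node allocation; this is only an illustration of consistent settings, not a tuned recommendation:

  #SBATCH --nodes=4
  #SBATCH --ntasks=128
  #SBATCH --cpus-per-task=1

  export OMP_NUM_THREADS=1                # one thread per MPI task, matching -pk omp 1
  mpirun -np 128 …/…/src/lmp_omp -sf omp -pk omp 1 -in in.annea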

Axel