Hi all,

I have several independent runs with the same input file, the same number of particles, the same time steps, and the same number of processors. In a word, everything is the same except the initial data file, which will be read by LAMMPS.

But the total running time varies from 20 hours to 40 hours. That looks strange to me.

Could anyone give me some hints on this problem? Thanks very much.

Best,

Joy

there are multiple possible issues:

some are "technical":

- the processors are not identical.
- your job is sharing nodes with other calculations (or there are "runaway" or otherwise rogue calculations on the compute nodes).
- your network for parallel calculations is overloaded, but to a different degree over time.
- your file i/o is impacted by other calculations that make heavy use of the same storage that you are using for i/o (and you use the "flush" flag for file i/o).

others are simulation-setup specific:

- the particle distribution is different and leads to load imbalances.
- the number of neighbors is different for different inputs.
- the required number of neighbor list rebuilds changes.

there are probably more possible reasons...
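to illustrate the load imbalance point: in a parallel run the slowest rank sets the pace for everyone, so what matters is roughly the ratio of the busiest rank's work to the average rank's work. here is a minimal python sketch of that ratio (illustrative only, not lammps code; the atom counts are made-up numbers):

```python
# Illustrative sketch: the cost of a parallel MD step is set by the busiest
# rank, so a useful imbalance measure is max(per-rank work) / mean(per-rank work).
# A value well above 1.0 means most ranks sit idle waiting for the busiest one.

def imbalance_factor(atoms_per_rank):
    """Return the max/mean ratio of per-rank atom counts."""
    mean = sum(atoms_per_rank) / len(atoms_per_rank)
    return max(atoms_per_rank) / mean

# Same total atom count, two different spatial distributions:
print(imbalance_factor([1000, 1000, 1000, 1000]))  # evenly spread -> 1.0
print(imbalance_factor([2500, 500, 500, 500]))     # clustered -> 2.5
```

so two data files with the same atom count can still differ a lot in wall time if one of them puts most particles into a few processors' subdomains.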

cheers,

axel.