run time increasing

Hi all,
I ran an NPT simulation for 15 ns. The first 5 ns took 25 hours, the second took 30 hours, and the third took about 40 hours to run.
My question is: is this normal?
Is an increasing run time expected, or did I make a mistake?
Thanks,
Omid

Did you have other fixes or computes running during the second and third runs?

Dear Omid, you should mention how you ran your simulation: what commands you used, and on what platform you ran it. Normally a simulation speeds up after some initial time and then stays roughly constant throughout.
It may also depend on the traffic on your working node.
Please give more details about your problem.

I used compute msd (mean-squared displacement) and compute rdf (radial distribution function), plus some thermo output (step, press, potential energy, total energy, lx, ly, lz).
Omid

Hi Syed Shuja,
I ran the simulation to equilibrate ethylene carbonate, which is a liquid electrolyte. I use only fix npt for the first 15 ns and then fix nvt for the second 15 ns,
together with compute msd, compute rdf, and some useful thermo output (step, temp, energy, lx, ly, lz). For the simulation I used an HPC cluster running Linux.
Thanks,
Omid
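
For reference, a minimal sketch of what one such 5 ns NPT segment could look like in a LAMMPS input script; the group, temperatures, pressures, damping constants, bin count, and output intervals below are illustrative placeholders, not the actual input:

# hypothetical sketch; all numerical values here are placeholders
compute      myMSD all msd
compute      myRDF all rdf 100
fix          rdfout all ave/time 100 10 1000 c_myRDF[*] file ec.rdf mode vector
thermo_style custom step temp press pe etotal lx ly lz c_myMSD[4]
thermo       1000
fix          npt1 all npt temp 300.0 300.0 100.0 iso 1.0 1.0 1000.0
timestep     1.0
run          5000000     # one 5 ns NPT segment at a 1 fs timestep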

Dear Omid, if you are using extra calculations such as computes and fixes beyond the dynamics itself, your simulation will take extra time. Also compare the parameters you used in each run, such as the output frequencies for dump, compute, fix, etc. (a sketch of typical lines is below).
Also check the traffic on your HPC cluster.
You should send the complete input file for each run…
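
For example, these are the sorts of lines whose intervals are worth comparing between the three inputs (the filenames and numbers here are made up for illustration):

# hypothetical output settings; smaller intervals mean more work per run
thermo   1000                                      # thermo output interval
dump     trj all custom 10000 ec.lammpstrj id type x y z
restart  500000 ec.restart
fix      rdfout all ave/time 100 10 1000 c_myRDF[*] file ec.rdf mode vector
# (the fix ave/time line assumes a "compute myRDF all rdf ..." defined earlier)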

differences in run performance can have many reasons. some are internal, some external.
the first step is always to look at the performance breakdown output in your log files. it can tell you which part of LAMMPS is using more time, it can tell you about load imbalances, it can tell you about the number of neighbors (more neighbors -> more work -> more time needed), and it can tell you the percentage of a CPU used (i.e. whether there are “parasitic” processes “stealing” CPU cycles from you). the neighbor list stats only describe the last step, so it would be advisable to take a restart file from the beginning (and/or middle) of a run and do a quick continuation to get the neighbor list stats for that part of the run and see whether there is a significant change.
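
as a rough sketch, a quick continuation for that purpose could look something like this (the restart filename, the re-specified fix, and the run length are placeholders for whatever the original input used):

# hypothetical continuation input, only to regenerate the timing breakdown
# and neighbor-list stats for an earlier part of the trajectory
read_restart  ec.restart.1000000
# re-specify whatever the restart file does not store for this system
# (fixes, computes, thermo settings, and for some styles the coefficients)
fix           npt1 all npt temp 300.0 300.0 100.0 iso 1.0 1.0 1000.0
thermo        1000
run           10000      # short run; the log ends with the same MPI task
                         # timing breakdown and neighbor list stats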

https://lammps.sandia.gov/doc/Run_output.html

axel.

Thanks, dear Professor Kohlmeyer.
This is the output info from my third 5 ns NPT run (I ran the equilibration as three 5 ns segments, 15 ns in total, in the NPT ensemble):

Performance: 3.483 ns/day, 6.890 hours/ns, 40.318 timesteps/s
70.4% CPU use with 30 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

You are not paying attention (again).
This is mostly useless without seeing the equivalent info from the other two runs.

the first 5 ns run:

Loop time of 89194 on 30 procs for 5000000 steps with 20214 atoms

Performance: 4.843 ns/day, 4.955 hours/ns, 56.058 timesteps/s
84.3% CPU use with 30 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
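
As a quick cross-check, the timings in this header are internally consistent and match the ~25 hours reported for the first 5 ns:

    89,194 s / 3600 s/h = ~24.8 h
    5 ns / (4.843 ns/day) = ~1.03 days = ~24.8 h
    5,000,000 steps / 56.058 steps/s = ~89,190 s
    5 ns / 5,000,000 steps = 1 fs per step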

what should worry you the most is that you do not have ~100% CPU.
you have 84.3% CPU in the first run, 79.7% in the second, and 70.4% in the third.
this explains why the second and third runs are increasingly slower.

there are presumably other processes/calculations running on the node you are using that either are not supposed to be there at all, or that use more resources than they were allocated, and are thus negatively impacting your performance.

this is also a possible reason why your calculations are spending more time in KSpace: that part involves a lot of collective MPI communication, which is slowed down disproportionately when parasitic processes cause load imbalances (MPI ranks that can run at full speed have to wait for those that have to share their CPU with other processes).

on a reasonably well managed HPC cluster this should not happen. you need to talk to your HPC sysadmins to resolve the situation. it is not something that can be dealt with on the LAMMPS side.

axel.

thanks a lot.