Hi everyone, I have a question about the correlation between timestep and computational efficiency. I just ran two simulations. The timestep used in the first simulation was 1 fs, and in the second 2 fs. All other parameters were the same for both simulations. The first simulation needed 33 hours to run 1,000,000 steps, but the second needed only 17 hours for the same 1,000,000 steps.

I don't understand why the computational efficiency changed so much. Can you answer my question? Thanks very much!


Were both calculations run under the exact same conditions?

Same hardware, same executable, same OS, no other users or calculations?

Sharing a machine with some “parasite” job is the most likely explanation for the slowdown of the first job.

Axel

Hi Axel, thanks very much for your help. Both my calculations ran under the same conditions. Yesterday I ran another simulation; it again needed about 33 hours for 1,000,000 steps. It is very strange. I will try more simulations to find the reason.

Best wishes

> Hi Axel, thanks very much for your help. Both my calculations ran under the same conditions.

how do you check it? how can you prove that there isn't something else going on on the machine(s) that you are running on that you don't control?
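one quick way to check this is to look at the node's load average while your run is active. the sketch below uses only the Python standard library; the core count is an assumption you would replace with your own job's, and the 20% slack is an arbitrary heuristic:

```python
import os

# hedged sketch: is the node busier than your own job explains?
# assumes a Linux/Unix machine where os.getloadavg() is available.

my_cores = 8  # assumed: number of cores your MD job is using

load_1min, load_5min, load_15min = os.getloadavg()
print(f"1-min load average: {load_1min:.2f}")

if load_1min > my_cores * 1.2:  # heuristic: allow ~20% slack
    print("load is higher than your job alone explains; look for other processes")
else:
    print("load looks consistent with your job running alone")
```

if the load is well above the core count of your own job, something else (a "parasite" job, a runaway process) is sharing the machine.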

> Yesterday I ran another simulation; it again needed about 33 hours for 1,000,000 steps. It is very strange. I will try more simulations to find the reason.

i would be *extremely* surprised if this were due to the simulation itself, and particularly to the choice of time step. if anything, a larger timestep will require *more* time for the same number of steps: atoms move farther per step, so the neighbor lists have to be rebuilt more frequently.
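a back-of-the-envelope sketch of why a larger timestep means more frequent neighbor-list rebuilds (not from the thread; the skin distance and atomic speed below are assumed, typical values): a rebuild is needed once the fastest atom could have moved about half the skin distance.

```python
# hedged estimate: steps between neighbor-list rebuilds in a typical MD code.
# assumed: rebuild when max displacement reaches half the skin distance.

def rebuild_interval_steps(skin_angstrom, speed_angstrom_per_fs, dt_fs):
    """Steps until an atom moving at the given speed covers half the skin."""
    displacement_per_step = speed_angstrom_per_fs * dt_fs
    return (skin_angstrom / 2.0) / displacement_per_step

skin = 2.0    # Angstrom, a common default skin distance (assumed)
speed = 0.005 # Angstrom/fs, rough thermal speed of a light atom (assumed)

for dt in (1.0, 2.0):
    steps = rebuild_interval_steps(skin, speed, dt)
    print(f"dt = {dt} fs -> rebuild roughly every {steps:.0f} steps")
# doubling dt halves the rebuild interval, so per-step cost goes slightly up,
# not down -- the opposite of the observed 2x speedup.
```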

the only way i can imagine getting that big a difference in run times for otherwise identical inputs would be if your system had a severe load imbalance, but that rarely results in a doubling of the walltime. running on a node that has a rogue runaway job or something equivalent on it is a much more likely explanation.

axel.