Hello all,
I am using two force fields in my simulation. One is REBO and the other is Lennard-Jones. I want to compare how much time LAMMPS spends computing each of these force fields in my simulation. Is there any way to do this?
Thanks,
Farshad
> Hello all,
> I am using two force fields in my simulation. One is REBO and the other is
> Lennard-Jones. I want to compare how much time LAMMPS spends computing each
> of these force fields in my simulation. Is there any way to do this?
*Always* read the documentation before asking a trivial question like this:
http://lammps.sandia.gov/doc/Section_start.html#start_8
axel.
Thanks so much, Axel, for your response.
I looked at my log file and saw that, at the end of the file, the CPU time percentage of all pair potentials (REBO + Lennard-Jones) is summed together as 57%, and Kspace as 36%. Although that is helpful, I want to know how much of the 57% goes to REBO and how much to LJ. To do so, it seems that I should modify the verlet.cpp, timer.cpp, and finish.cpp files.
I think I should start with the verlet.cpp file and edit the following lines:
if (pair_compute_flag) {
  force->pair->compute(eflag,vflag);
  timer->stamp(TIME_PAIR);
}
But I don't know how to call the timer->stamp function separately for REBO and LJ. Can anyone help me?
> Thanks so much, Axel, for your response.
> I looked at my log file and saw that, at the end of the file, the CPU time
> percentage of all pair potentials (REBO + Lennard-Jones) is summed together
> as 57%, and Kspace as 36%. Although that is helpful, I want to know how much
> of the 57% goes to REBO and how much to LJ. To do so, it seems that I should
> modify the verlet.cpp, timer.cpp, and finish.cpp files.
No.
That won't do, and even if it did work, it may not provide any more useful information than what you already have.
Please read on…
> I think I should start with the verlet.cpp file and edit the following lines:
>
> if (pair_compute_flag) {
>   force->pair->compute(eflag,vflag);
>   timer->stamp(TIME_PAIR);
> }
>
> But I don't know how to call the timer->stamp function separately for REBO
> and LJ. Can anyone help me?
You are obviously running with a hybrid pair_style, so you would have to modify that style itself and add your own accumulators.
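For illustration only, here is a rough sketch of what such per-substyle accumulators might look like around the sub-style loop in the hybrid style's compute(). The surrounding code is abbreviated, and substyle_time is a hypothetical member you would have to add, allocate, and report yourself; only nstyles and styles correspond to names used in pair_hybrid.h:

// Sketch only, not actual LAMMPS source: timing each sub-style of a
// hybrid pair style. 'substyle_time' is a hypothetical new array.
#include <mpi.h>

void PairHybrid::compute(int eflag, int vflag)
{
  // ... existing setup omitted ...

  for (int m = 0; m < nstyles; m++) {
    double t0 = MPI_Wtime();               // wall clock before this sub-style
    styles[m]->compute(eflag, vflag);      // e.g. REBO for one m, lj/cut for another
    substyle_time[m] += MPI_Wtime() - t0;  // seconds accumulated per sub-style
  }

  // ... existing energy/virial accumulation omitted ...
}

You would then still have to reduce those accumulators across MPI ranks and print them yourself at the end of the run, which brings up the next problem.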
But even then it is not so simple. If you run in parallel, each MPI rank may have a different distribution between the various sub-styles, depending on the chosen domain decomposition and the structure of your system. Plus, you may also have to account for load imbalances (which may be spread out over multiple force computations). So breaking down the timing would not be too helpful unless you collect data for each MPI rank individually and quantify the wait times for any synchronizations. It may be simpler to use a profiling tool like perf (which works on a per-process basis and is Linux-only, but doesn't require instrumentation) or a tool like TAU (which requires a lot of experience to be used well and thus is not recommended for the casual user).
axel.
You could get a simple estimate by using the rerun command and two scripts, each of which invokes only one of the two pair styles. Re-process a few snapshots and see how much time is spent in the pair style by each script.
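For example, the LJ-only script might look roughly like this; data.system, dump.lammpstrj, the cutoff, and the coefficients below are placeholders for whatever your actual system uses:

# Hypothetical rerun script: time only the Lennard-Jones contribution.
units           metal
atom_style      atomic
read_data       data.system          # same system as the original run

pair_style      lj/cut 10.0          # only one of the two pair styles
pair_coeff      * * 0.01 3.4         # placeholder epsilon/sigma

# Re-process existing snapshots; the Pair time in the timing
# breakdown now reflects lj/cut alone.
rerun           dump.lammpstrj dump x y z

A second script that sets up pair_style rebo (with its matching pair_coeff line) over the same dump file would then give the REBO-only timing for comparison.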
Steve