Hi Axel,

thanks for your answer. This is exactly what I do at the moment.

I need the Hessian for force-field fitting and have to evaluate it thousands of times in one run, so its computation is speed-critical. Doing it directly in a fix should speed up the process.

Best

Johannes


have you made timing tests to determine how much time you spend in python and how much in LAMMPS? under normal circumstances, the force computation in LAMMPS should be the dominant part, so amdahl's law will limit how much speed-up you can achieve, since that part won't be changed.

axel.

Hi, one thing that you may want to ensure is that you use NumPy array indexing (closely mimicking Matlab’s):

matrix[irow,:] = vector[:]

where the 2nd dimension of “matrix” and “vector” have the same length. This way the element-by-element assignment will be executed directly in C: unless you deal with really short arrays, the speed is indistinguishable from a compiled loop. A similar optimization, if you use NumPy functions, is to use the optional “out” parameter of those functions.
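For illustration, here is a minimal sketch of both idioms; the array names (`hessian`, `row`) and sizes are made up for the example:

```python
import numpy as np

# Hypothetical sizes for illustration: a 3N x 3N Hessian and one row of it.
natoms = 4
hessian = np.zeros((3 * natoms, 3 * natoms))
row = np.arange(3 * natoms, dtype=float)

irow = 2
hessian[irow, :] = row            # whole-row copy, executed in C, no Python loop

# NumPy ufuncs accept an `out` array, so repeated calls can reuse one
# buffer instead of allocating a temporary result on every call:
scaled = np.empty_like(row)
np.multiply(row, -0.5, out=scaled)
```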

As Axel said, for typical systems the force computation, which is O(N^2) or O(N log N) at best, dominates over the cost of creating the Python objects around the LAMMPS arrays, which does not depend on N.

Giacomo


...and since the displacements in finite difference calculations are small and no time integration is required, one can also save significant overhead by skipping a lot of extra setup work (e.g. avoid re-initializing the neighbor lists) with "run 0 pre no post no" instead of "run 0".

axel.

Hi Axel and Giacomo,

thanks to your suggestions I had another look at the timings.

My question arose from the fact that running 1632 steps of NVE on my system (6 × the number of atoms, as needed for a double-sided finite-difference Hessian) took around 13 seconds, while calling “run 0 post no” 1632 times took around 23 seconds.
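For reference, the bookkeeping behind a double-sided (central) finite-difference Hessian — two force calls per coordinate, i.e. six per atom in 3D — can be sketched as follows. The toy quadratic potential here is purely illustrative, standing in for the force evaluation that LAMMPS would perform:

```python
import numpy as np

# Toy stiffness matrix for the illustrative potential E = 0.5 x^T K x,
# standing in for a real force field.
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

def forces(x):
    return -K @ x                 # f = -dE/dx

def fd_hessian(forces, x0, h=1.0e-5):
    """Central-difference Hessian: H[:, j] = -df/dx_j, 2*n force calls total."""
    n = x0.size
    hess = np.zeros((n, n))
    for j in range(n):
        xp = x0.copy(); xp[j] += h    # displace coordinate j forward
        xm = x0.copy(); xm[j] -= h    # ...and backward
        hess[:, j] = -(forces(xp) - forces(xm)) / (2.0 * h)
    return hess

H = fd_hessian(forces, np.array([0.3, -0.7]))
```

For a quadratic potential the central difference is exact (up to rounding), so `H` reproduces `K`.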

Thanks to Axel's last post, I now know that I should use “run 0 pre no post no”, which is much faster. But then a new problem arises: I get the same energy and forces for every distortion …

In the manual I found the following sentence:

“If your input script changes the system between 2 runs, then the initial setup must be performed to insure the change is recognized by all parts of the code that are affected.”

Does this mean that I have to run with “pre yes”?

Or do I do something wrong/bad/stupid?

Best

Johannes

before getting lost in various technical details, you should establish a bottom line, i.e. how fast your calculation can get at best.

for that, take your system setup, remove any time integration, tell it to do the neighbor list build only once, and then run those 1632 time steps.

the total time of that is the cost of computing forces 1632 times. is this significantly faster than the 23 seconds you are currently seeing?

here is an example based on the melt input (with a more reasonable cutoff).

units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 5 0 5 0 3
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 3.0 87287
pair_style lj/cut 4.0
pair_coeff 1 1 1.0 1.0
neighbor 0.3 bin
neigh_modify once yes
run 1800

this takes on my desktop about 1 second.

Good morning Axel,

your example took around 2 seconds on my desktop.

Running my system plainly for 1632 steps took around 13 seconds; running it with:

for i in range(1632): lmps.command("run 0")

gives around 23 seconds. The problem with running "run 0 pre no" is that by doing so I get the same forces and energies for every displacement, since the system is not updated in between. I think the solution is to run "run 1 pre no" for every displacement, which takes around 14 seconds.

Using “run 1 pre no” during the Hessian evaluation gives the same result as doing it with “run 0”, so I think I should do it in the future with “run 1 pre no” and be fine.

Or do you have any other/further suggestions?

Best

Johannes

P.S.: Greetings from Rochus Schmid from Bochum


all suggestions and advice that i have were already included in my previous e-mail. what you do with it, what conclusions you draw, what potential improvement you can get from moving more of the computation into the C++ code, and in general how you proceed is all your choice. the steps i have outlined are straightforward, and the conclusions drawn from them should be, too.


P.S.: Greetings from Rochus Schmid from Bochum

please send him my regards. it's been a while...

axel.