Effect of Ewald/pppm splitting parameter on total energy of the system

What you are expecting is only true if your kspace accuracy (i.e. the kspace tolerance) is sufficiently tight and your real-space cutoff is sufficiently large. Try changing your tolerance from 1e-4 to 1e-10 and your real-space cutoff from 10 Angstrom to 20 Angstrom, and you will get the behavior you expect for a much wider range of splitting parameters.

When you change the Ewald splitting parameter, you are shifting the computational burden between kspace and real space, and you have to change the cutoffs accordingly to maintain the same level of accuracy in the energy calculation.
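For example, this is the kind of change in the input script (a minimal sketch; the pair style and the exact values are only placeholders, not a recommendation for your system):

  kspace_style  pppm 1.0e-10                  # tighter kspace accuracy (relative force error)
  pair_style    lj/cut/coul/long 10.0 20.0    # LJ cutoff 10 Angstrom, Coulomb (real-space) cutoff 20 Angstrom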

Stan

Thanks Stan! After I changed the tolerance from 1e-4 to 1e-6, the energies are now converged. A quick question regarding the splitting parameter though: in the command pair_style coul/long cutoff, isn't this 'cutoff' the splitting parameter? If so, where can I modify the cutoffs? I'm really confused here.

Thanks,
Doris

Normally the g_ewald (or alpha) parameter is referred to as the Ewald splitting parameter. It can be changed using the 'kspace_modify gewald' command. I thought that was what you were referring to, but you are actually changing the Coulombic cutoff instead. I'd recommend taking a look at a book or journal article on the Ewald sum for clarity.
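For example (the value is purely illustrative; g_ewald is in inverse distance units):

  kspace_style   pppm 1.0e-6
  kspace_modify  gewald 0.30    # explicitly set the Ewald splitting parameter, here in 1/Angstrom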

Stan


no, it isn't. LAMMPS contains an estimator that uses the energy convergence (the kspace accuracy) and the real-space cutoff as input parameters to determine the remaining parameters. this estimator works well for atomistic simulations of aqueous systems (or rather, systems with a similar point charge distribution).

the often-used energy convergence of 1.0e-4 is a very aggressive choice, as you have seen. this choice trades accuracy for computational efficiency and works reasonably well only for more-or-less homogeneous bulk systems, where a lot of error cancellation happens.
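for example, a quick sanity check (illustrative values only): run the same configuration twice with different kspace accuracy settings and compare the coulombic energies, e.g.

  kspace_style  pppm 1.0e-4     # run 1: the aggressive choice
  kspace_style  pppm 1.0e-8     # run 2: a much tighter reference (separate run, same configuration)

if the energies (or forces) differ noticeably between the two runs, the looser setting is not accurate enough for your system.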

as ray suggested, it is better that you look up the real physics behind this and get a well-founded understanding. it will pay off well in the future. :wink:
there are quite a few messed-up simulations happening all the time, done by people who lack this understanding or are not as careful. the fact that you *did* do checks, and noticed and asked about the inconsistencies in your tests, is a very good beginning.

axel.

Thanks a lot, Axel and Stan! Another question though: are there any suggestions on how to select the Ewald parameters and cutoffs? How can I ensure both efficiency and accuracy? Thanks again!

Best,
Doris

Stan can correct me if this is overly simplistic, but you can treat efficiency and accuracy independently.

Choose the accuracy you want. LAMMPS will try to ensure you get that accuracy for any choice of cutoff, by choosing the PPPM (or Ewald or MSM) grid sizes appropriately.

All that is left is efficiency. You can shift the Coulomb cutoff in the pair style up or down to see if the overall code runs faster or slower. The run-time changes should not be dramatic for small cutoff changes, and starting with a typical cutoff (e.g. 10 Angstroms) is usually fine. Whatever your cutoff, the accuracy is unchanged.
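As a sketch of what that tuning looks like in an input script (the pair style and values are just examples; only the second, Coulomb cutoff is varied while the accuracy setting stays fixed):

  kspace_style  pppm 1.0e-5                   # fixed accuracy target
  pair_style    lj/cut/coul/long 10.0 12.0    # try e.g. 8, 10, 12 Angstrom for the Coulomb cutoff and compare timings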

Steve

Many times, the efficiency optimum is reached when the kspace time is between 1/4 and 1/3 of the total time.

The only additional consideration is when running in parallel, especially with a very large number of processors. Then you may want to change to MPI+OpenMP parallelisation and drive the cutoff higher. Similarly, when running with GPU acceleration, try running kspace on the CPU and only the pair style on the GPU, and use a larger cutoff; I've seen cases where the optimum was at rather large cutoffs like 25 Angstrom.
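A rough sketch of that split, assuming the GPU package and an lj/cut/coul/long pair style (the values are illustrative, not a prescription):

  package       gpu 1                           # one GPU per node
  kspace_style  pppm 1.0e-5                     # kspace stays on the CPU
  pair_style    lj/cut/coul/long/gpu 10.0 20.0  # pair runs on the GPU, with a larger Coulomb cutoff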