[lammps-users] PPPM kspace settings for a very large simulation cell

hi everybody,

does somebody here have experience with using
kspace_style pppm for very large systems?

i'm trying to run an MD simulation of a system with a 1200x1200x1200
angstrom simulation cell, and that is a bit off the charts for the
default heuristics of the pppm kspace module to generate reasonable
parameters.

it is probably needless to say that those runs are a bit time
consuming, so i would appreciate it a lot if somebody could share
some alternate "rule of thumb" metrics for setting the individual
PPPM parameters (g-vector, mesh) to something reasonable to start
from when looking for an optimal combination of accuracy and speed.
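
for reference, i know the knobs exist in the input script; what i'm
missing is good numbers to put there. a minimal sketch (the accuracy,
gewald and mesh values below are placeholders, not a recommendation):

  kspace_style    pppm 1.0e-5           # requested rms force accuracy
  kspace_modify   gewald 0.1            # override the ewald parameter (inverse distance)
  kspace_modify   mesh 1200 1200 1200   # override the FFT mesh in x, y, z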

thanks in advance,
     axel.

We've run the benchmark in.rhodo problem (replicated) on
2 billion atoms (64K procs), and PPPM does fine. What
params do you think it isn't going to estimate correctly?

Steve

hi steve,

> We've run the benchmark in.rhodo problem (replicated) on
> 2 billion atoms (64K procs), and PPPM does fine.

yep. and for that system the heuristics seem to produce parameters
that look reasonable to me.

> What params do you think it isn't going to estimate correctly?

the offending code is in src/KSPACE/pppm.cpp, line 852.
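
as far as i can tell, that line inverts an rms force error estimate
for the ewald parameter G. in my notation (Q^2 = sum of squared
charges, N = number of atoms, V = cell volume, r_c = real-space
cutoff, \Delta F = requested precision), the estimate is roughly:

  \Delta F \approx \frac{2 Q^2}{\sqrt{N r_c V}} \, e^{-(G r_c)^2}
  \qquad\Longrightarrow\qquad
  G = \frac{1}{r_c} \sqrt{ -\ln\!\left( \frac{\Delta F \sqrt{N r_c V}}{2 Q^2} \right) }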

since i'm doing a coarse-grained MD with (some) charged particles,
i have a different relation between atom density and cutoff (15
angstrom). as a result, the term inside the logarithm is > 1.0
unless i use a precision parameter of 1.e-7 or lower.

in the rhodo example one gets a (maximum?) g-vector of about 0.25
and roughly one grid point per 2.5 angstrom.

with (smooth) PME in other codes (i've never had to deal with PPPM
before) i get pretty consistently good energy conservation and
reliable forces with the corresponding parameters set to 0.1 and
about one grid point per angstrom. i have the feeling that PPPM
needs a little more system-dependent tweaking, which is why i was
asking for some "rule of thumb" metrics.

for the sake of completeness: another property of the CG runs is
that there are very few charged particles and their charges are
scaled down, so i was hoping i could get good energy conservation
with a somewhat smaller grid than for a regular all-atom system...
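
(consistent with that hope: if the error estimate above is anything
to go by, it carries a prefactor of Q^2 = sum_i q_i^2, so scaled-down
charges should tolerate a coarser grid at the same absolute force
error. i haven't verified this carefully, though.)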

if the in.rhodo example works reliably up to that large a size, i
can probably just scale up those parameters, transfer them to my
system, and then see how much i can tweak them to get good
throughput at an acceptable accuracy.
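
concretely, as a first guess i would transfer the rhodo-like numbers
like this (1200 angstrom / 2.5 angstrom per grid point = 480 points
per dimension; purely a starting point to tune from, not a
recommendation):

  kspace_style    pppm 1.0e-4
  kspace_modify   gewald 0.25       # about what the heuristics pick for rhodo
  kspace_modify   mesh 480 480 480  # one grid point per ~2.5 angstrom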

thanks,
   axel.

Axel,

Line 852 of pppm.cpp comes from equation 23 of J. Chem. Phys. 109,
7694-7701 (1998), a Deserno and Holm paper on PPPM error estimation.
That paper and the one immediately preceding it are, in my opinion,
the best references on the topic. Their equation 23 in turn derives
from a Kolafa and Perram error estimate that applies to the
real-space part of any Ewald-based method. (See:
http://www.icpf.cas.cz/jiri/papers/ewalderr/default.htm )

There may be some assumptions of charge homogeneity and density
built into those error estimation heuristics. Your system is
"unusual" in the sense that you have a small number of charged
particles relative to your large system. As long as those particles
stay far apart, you'd be fine with a relatively coarse mesh and a
large value of alpha; but if you sample a configuration where those
particles clump together, you'd need a relatively fine mesh and a
smaller alpha to get the forces right. So, to be safe, you'd
probably want to use a tight precision and not necessarily trust
codes that allow looser precision for the sake of speed.

Of course, LAMMPS lets the user set the mesh spacing and g-ewald
directly, so you're free to choose whatever kspace parameters you
want.

If you find a better (or more general) method for estimating the rms
force errors for the real space portion of the Ewald sum, please let
us know.

Paul