dear brett,
> Dear Steve,
> Thanks for that.
> Dipole: so would we be able to point the "point dipole" in the same direction as the next-nearest bond vector? I.e., if I have a point dipole of magnitude d, can I pin its orientation in any direction relative to, say, the bond vector joining the dipole "atom" and the other atom of the covalent bond? There will be cases where we need to fix a particular direction, not just the direction of the bond vector. I presume this is do-able from what you said?
one of the strong points of LAMMPS is that you can modify a system
during the course of a simulation in many ways. this is done through
what is called a "fix". a fix is a class with multiple (optional) methods
that are called at different points during the velocity-verlet (or the
equivalent r-RESPA) integration loop. the regular time integration is
done through this infrastructure, but you can also write a fix that
"manages" dipoles in any way you see fit. the main limitations are:
whether the information you need is available to each (parallel) task,
since with domain decomposition only part of the total system information
is local to each task, and how much (serial) overhead or communication
the fix incurs. i would recommend starting with something simpler first
to understand how these things work. reading existing code can help
a lot, too.
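as a concrete illustration of the point that even the regular time
integration is "just a fix", here is a minimal sketch of a standard LJ
input script (parameters are the usual LJ melt example values, chosen
only for illustration) in which two fixes act on the system during the
run, one integrating the equations of motion and one modifying the
system mid-run:

```
# minimal sketch: fixes hook into the timestep loop
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

fix             1 all nve                        # time integration itself is a fix
fix             2 all momentum 100 linear 1 1 1  # a second fix that zeroes linear momentum every 100 steps

run             1000
```

a custom fix that manages dipole orientations would slot into the same
machinery, being called at the appropriate point of each timestep.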
> SSD: With regard to the implementation of SSD, do you have any idea how long it should take? We are not familiar with the code as yet. Also, what rigid-body rotator is implemented within LAMMPS?
check out the documentation for the fixes rigid, rigid/nve, and rigid/nvt.
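for example, a group of water molecules could be integrated as rigid
bodies with a thermostat like this (the group name and thermostat
parameters below are purely illustrative):

```
# illustrative only: treat each molecule in group "water" as a rigid body,
# integrated with a Nose-Hoover thermostat ramped 300 K -> 300 K, damping 100
fix rigidwater water rigid/nvt molecule temp 300.0 300.0 100.0
```

the "molecule" keyword makes each molecule its own rigid body; other
body styles (single, group) are described on the fix rigid doc page.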
> Benchmarking: Plus, do you have any benchmark information, i.e. how LAMMPS compares to, say, AMBER, CHARMM, GROMACS, or any other codes? That would be excellent.
this is _really_ hard to say. most of it is some kind of
apples-to-oranges comparison.
each of these codes has its strong and weak points. amber and charmm are
very much geared towards bio simulations (and compete there with NAMD).
amber and charmm also have special optimizations, because they can make
assumptions about their systems (e.g. which molecules are water and the
fact that there are going to be a _lot_ of them). lammps, by contrast,
supports a huge variety of very different potentials, including those
used in amber and charmm, but because of its general nature it is not
always competitive, particularly at small node counts. with the
introduction of multi-level parallelism (not yet in mainline lammps,
only in my personal branch), however, much better scaling and overall
performance can be achieved for systems with long-range electrostatics
(for point charges).
a code like DL_POLY is probably more comparable to lammps, btw.
gromacs is a code particularly focused on absolute speed, especially on
x86 architectures, through highly optimized vectorized assembly inner
loops and many other performance enhancements, some of which come at the
expense of accuracy. it also has a very flexible input file format that
makes it easy to simulate non-bio systems with any of the (simple)
supported force fields or with tabulated pairwise additive potentials.
absolute benchmark numbers can only give you part of the picture.
for a plain LJ potential, gromacs would probably be the absolute fastest
code if you run on a CPU. if you want to run on a GPU instead, it would
be HOOMD-blue, but only if your system is below a certain size, since
that code is not parallel.
in short, benchmarks can be very misleading. you first have to find out
exactly what you need to simulate and what your model is, and _then_
you can start looking into the best way to pursue it. the fastest is not
always the best, and the easiest is not always the best.
cheers,
axel.