[lammps-users] mix available or make a new style?

Hello, friends,

I found that there is no angle-angle cross-term in "dihedral_style class2".
This is really what prevents me from using the class2 styles for cvff with
cross-terms. Although cvff is not a typical class 2 force field, it does
have cross-terms implemented, and apparently Materials Studio uses them.
To get agreement in the energy calculation between LAMMPS and MS while
using cvff, I have to find a way to add the Eangle-angle term. But it
seems cvff is not fully supported in LAMMPS; e.g., Eaa is missing, and
maybe others too, like the improper dihedral functions, which I didn't
check in detail.

For the bond-bond and bond-angle terms, I have now found a solution.
Instead of using "angle_style harmonic", which is the right functional
form for the angle term in cvff, use "angle_style class2". This allows me
to use the bond-bond and bond-angle cross-terms as well as the 'harmonic'
angle term. But for the dihedral_style I was not that lucky: I couldn't
find an angle-angle term in "dihedral_style class2". Can anybody help me
with this, or just point out whether it is possible? Thanks!
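
For reference, a minimal sketch of that angle_style class2 workaround; the
coefficient values below are invented placeholders, not real cvff
parameters. Setting K3 = K4 = 0 reduces the class2 angle term to the
harmonic form, while the bb/ba keywords supply the cross-terms:

angle_style class2
angle_coeff 1 110.0 50.0 0.0 0.0   # theta0 (degrees), K2, K3 = 0, K4 = 0
angle_coeff 1 bb 10.0 1.50 1.50    # bond-bond cross-term: M, r1, r2
angle_coeff 1 ba 8.0 8.0 1.50 1.50 # bond-angle cross-term: N1, N2, r1, r2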


The class2 potentials in LAMMPS (bond, angle, dihedral, improper)
do include all the cross terms you mention. See the doc
pages for those styles. I believe these are the same cross
terms defined in the original papers, so they should match
Accelrys software.

Steve

Sorry, Steve, I didn't see the angle-angle cross-term. It's missing there.
Please have a look: http://lammps.sandia.gov/doc/dihedral_class2.html
The Eaat term is different from Eaa. Could you please check that?

Eaa is in the improper class2.

Steve

I meant something like Eaa = K (theta[ijk] - theta1) (theta[jkl] - theta2).
This functional form is used in cvff.
Where can I find the same thing in LAMMPS?
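
To spell out what I mean, here is a small sketch (my own reading of the
cvff term, in C++, not actual LAMMPS code); for a dihedral i-j-k-l,
theta_ijk and theta_jkl are the two angles sharing the central j-k bond,
and K, theta1, theta2 are force-field parameters:

// cvff-style angle-angle cross-term for a proper dihedral i-j-k-l;
// angles in radians, K in energy per radian^2
double eaa_cvff(double theta_ijk, double theta_jkl,
                double K, double theta1, double theta2)
{
    return K * (theta_ijk - theta1) * (theta_jkl - theta2);
}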


OK, now I know what is confusing.
In the improper_style of LAMMPS, the angle-angle cross-term is DIFFERENT
from the angle-angle cross-term defined in cvff!
In LAMMPS it means the improper dihedral angle-angle cross-term, whereas
in cvff the "out_of_plane-out_of_plane" cross-term is the equivalent.
In cvff there is another cross-term called "angle-angle"; it involves the
two adjacent angles in one PROPER dihedral angle.
This term is missing in LAMMPS. I think it should be implemented in
"dihedral_style class2".
Could you check it?


Others have matched CVFF to LAMMPS before, so I'm not
clear on this. I need to ask someone more familiar with CVFF.

Steve

Hi. I've posted a few times about the possibility of adding reversible
bond formation to LAMMPS, and the general opinion has been that it is
difficult and will require a lot of changes to the code in many places.
Well, it's not easy (if one wants to preserve parallelizability; doing it
in a serial version IS easy), but I think I have an idea of how it might
be done in parallel while minimizing changes to the code, and I'm at a
point where I'd like to hear further comments:

1) Problem: Dynamically changing the overall number of bonds stored (in
atom->bond_atom and neighbor->bondlist) is very difficult.

Solution: Avoid this by including the maximum number of POSSIBLE bonds in
the data (or restart) file. Treat inactive "sticky" bonds as self-bonds.
For example, if atom i is a "sticky" atom with two reversible bonding
sites, include these two in the number of bonds written in the data file,
and in the "Bonds" section, include two lines

j stickybondtype i i
j+1 stickybondtype i i

This, I think, is a workable order-N solution as long as atoms have a
limited number of sticky sites, as they should in any "physical" model.
Of course, one has to do something to prevent self-forces (from atoms
being "bonded" to themselves), but this is easy.

2) Using Monte Carlo to form/break reversible bonds is tricky for a
parallel MD simulation:

In a serial simulation, one can simply (within the bond-force routine,
such as bond_fene.cpp) loop over the sticky sites and test whether other
sticky sites are nearby (this can be made order-N by using the pair
neighbor list in the search, and there are additional ways of making it
efficient if the concentration of sticky atoms is low). For nearby sites,
use Metropolis MC to break/form bonds, and update atom->bond_atom and
neighbor->bondlist accordingly. This of course has to be done BEFORE the
forces are added up. This appears to work fine.
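
For clarity, the accept/reject step I have in mind is just the standard
Metropolis criterion; the sketch below is generic C++, not actual LAMMPS
code, and uniform01() stands in for LAMMPS's random-number generator:

#include <cmath>
#include <cstdlib>

// delta_e = energy change of forming (or breaking) the candidate bond;
// kT = temperature in energy units
static double uniform01() { return std::rand() / (RAND_MAX + 1.0); }

bool metropolis_accept(double delta_e, double kT)
{
    if (delta_e <= 0.0) return true;               // downhill: always accept
    return uniform01() < std::exp(-delta_e / kT);  // uphill: Boltzmann factor
}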

However, in a parallel simulation, this approach breaks down. Suppose you
form a bond between atoms i and j, where i is owned by processor A and j
is owned by processor B. If the MC routine running on A forms the bond,
how does B know the bond has been formed? Also, one has to worry about
whether such pairs are considered more often (specifically, twice as
often) than pairs of atoms owned by the same processor; one doesn't want
to introduce spatial inhomogeneities.

These are of course typical examples of why parallel MC is hard. I can
think of a couple ways of dealing with this:

1) When it is time to run the MC (I plan to do this infrequently, say once
every hundred timesteps), pause the parallel MD run, collect the ENTIRE
bondlist and set of coordinates into large vectors (i.e., of length natoms
and nbonds), do the MC bond forming/breaking on one processor, then
distribute the new bondlist among the processors and unpause the MD. I
think I know how to do this (the main question is whether to do it in a
fix or within the bond-force routine), but it has disadvantages, such as
the memory required for the large vectors.
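
A rough sketch of the gather/scatter plumbing I have in mind, using plain
MPI collectives (generic C++, not LAMMPS code; local_bonds stands in for
a flattened per-processor bond array, and the MC itself is elided):

#include <mpi.h>
#include <vector>

void mc_on_one_proc(std::vector<int> &local_bonds, MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    // gather per-processor bond counts, then the bonds, onto rank 0
    int nlocal = (int) local_bonds.size();
    std::vector<int> counts(nprocs), displs(nprocs);
    MPI_Gather(&nlocal, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, comm);

    int ntotal = 0;
    if (rank == 0)
        for (int p = 0; p < nprocs; p++) { displs[p] = ntotal; ntotal += counts[p]; }

    std::vector<int> all_bonds(rank == 0 ? ntotal : 0);
    MPI_Gatherv(local_bonds.data(), nlocal, MPI_INT, all_bonds.data(),
                counts.data(), displs.data(), MPI_INT, 0, comm);

    // ... rank 0 does the Metropolis bond forming/breaking on all_bonds ...

    // broadcast the updated global list; each processor then keeps the
    // entries for atoms it owns
    MPI_Bcast(&ntotal, 1, MPI_INT, 0, comm);
    all_bonds.resize(ntotal);
    MPI_Bcast(all_bonds.data(), ntotal, MPI_INT, 0, comm);
}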

2) Referring to the i-j example above, if processor A forms an i-j bond,
it sends a message IMMEDIATELY to processor B, so that B can synchronize
its bondlist and atom->bond_atom with the change. This seems more elegant
(and would require less memory) but also harder and trickier, with
"unknown unknowns" (at least to me). For example, it would seem to require
processor A to know which processor atom j belongs to. I don't know
whether LAMMPS has this functionality, or if not, whether it would be
difficult to add. In general, one would be sending a lot of small MPI
messages rather than a few huge ones; I don't know which is faster.
Actually, I don't care that much about speed; the memory issue and
"order-N-ness" are more important.
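
The messaging itself would look something like the sketch below (generic
MPI, not LAMMPS code; owner_of_j stands in for whatever owner lookup
LAMMPS may provide, which is exactly the part I'm unsure about, and
TAG_NEWBOND is an arbitrary message tag):

#include <mpi.h>

const int TAG_NEWBOND = 77;

// sender side: processor A has formed a bond between global atom IDs
// id_i and id_j, and notifies the owner of j
void notify_new_bond(int id_i, int id_j, int owner_of_j, MPI_Comm comm)
{
    int msg[2] = {id_i, id_j};
    MPI_Send(msg, 2, MPI_INT, owner_of_j, TAG_NEWBOND, comm);
}

// receiver side: processor B picks up the message and would then update
// its bondlist and atom->bond_atom for atom msg[1]
void receive_new_bond(MPI_Comm comm)
{
    int msg[2];
    MPI_Status status;
    MPI_Recv(msg, 2, MPI_INT, MPI_ANY_SOURCE, TAG_NEWBOND, comm, &status);
}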

OK, eager to hear any thoughts!

Rob