Voronoi or Delaunay in LAMMPS

Dear all,

I need to perform a Voronoi area calculation every N steps while I am running my simulation. This means that I need some C++ code that contains functions that I can run from within LAMMPS (e.g. from a fix).

Has anybody tried anything similar? Does anybody have any ideas?
It looks like I need a power Voronoi diagram and not a simple Voronoi diagram, but any information would help.
I am currently looking at maybe linking open-source C++ libraries (e.g. CGAL) into LAMMPS.

Best wishes,

George

I don't think there is anything like it in LAMMPS, but check: http://www.qhull.org/
You may be able to interface it with LAMMPS or call it together with
LAMMPS as libraries from within your own code.
Carlos

There is another code called voro++: http://math.lbl.gov/voro++/
You could try to link this to LAMMPS. This tool is capable of
creating the power diagram.
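
For what it's worth, here is a minimal sketch of what driving voro++ for the power (radical) case might look like, assuming voro++ 0.4's API, where container_poly takes a per-particle radius. The box bounds, grid counts, and input arrays below are placeholders, so treat it as an untested sketch rather than a working recipe:

#include "voro++.hh"
#include <vector>

// Sketch only (untested): power/radical tessellation with voro++ 0.4.
// Box bounds, grid counts, and the input arrays are placeholders.
void power_cells(int n, const double (*x)[3], const double *rad)
{
  // container_poly takes a per-particle radius, which gives the power
  // diagram rather than the plain Voronoi diagram
  voro::container_poly con(0.0,10.0, 0.0,10.0, 0.0,10.0,  // box bounds
                           5,5,5,                          // internal grid
                           false,false,false,              // non-periodic
                           8);                             // init mem/block
  for (int i = 0; i < n; i++)
    con.put(i, x[i][0], x[i][1], x[i][2], rad[i]);

  voro::c_loop_all cl(con);
  voro::voronoicell_neighbor c;
  if (cl.start()) do {
    if (con.compute_cell(c,cl)) {
      double vol = c.volume();     // cell volume
      std::vector<int> neigh;
      c.neighbors(neigh);          // ids of cells sharing a face
      // ... record vol and neigh for particle cl.pid() ...
    }
  } while (cl.inc());
}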

Best,

Rolf

To work best with LAMMPS it needs to be
a parallel Delaunay/Voronoi generator - I don't know
if any of the suggested codes are that. Otherwise you will have
to gather all atom data to one proc and call the library
from that proc.

Steve

I presume, Steve, that since each proc has its own atoms plus ghosts from the neighbour list, each proc could call e.g. qhull independently.

Nigel

This is probably overkill, but there is also the CGAL library. See the following:

http://www.cgal.org/
and
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Triangulation_3_ref/Class_Delaunay_triangulation_3.html

The second link explains how to get a 3D Voronoi diagram with CGAL.

Finally, I found this on the arXiv; I did not read through it, but it might be useful:

http://xxx.lanl.gov/pdf/cond-mat/0301378v1.pdf

It has some code at the end.

Salo

I think Nigel's idea is right on. I haven't tried qhull, but voro++ is fairly simple to use in a compute. It can work in parallel by making the Voronoi tessellation for all the atoms (including ghosts) independently on each proc, where the "container" is the proc domain + ghost cutoff. Some of the ghost cells will be distorted since they are near the container wall, but, assuming the ghost cutoff is set large enough, the cells of local atoms aren't affected by the wall. Each proc can then save various Voronoi data for local atoms to a per-atom array.
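
A rough sketch of that container setup, assuming it lives inside a LAMMPS compute with access to the subdomain bounds and ghost cutoff (the member names below are the usual LAMMPS ones for an orthogonal box, but this is untested, and nx/ny/nz are placeholder grid counts for voro++'s internal blocks):

// Sketch (untested): per-proc tessellation over subdomain + ghost skin.
double *lo = domain->sublo, *hi = domain->subhi;   // orthogonal box
double *cut = comm->cutghost;              // ghost skin per dimension
voro::container con(lo[0]-cut[0], hi[0]+cut[0],
                    lo[1]-cut[1], hi[1]+cut[1],
                    lo[2]-cut[2], hi[2]+cut[2],
                    nx, ny, nz,
                    false, false, false,   // periodicity handled by ghosts
                    8);
double **x = atom->x;
int nall = atom->nlocal + atom->nghost;    // local atoms first, then ghosts
for (int i = 0; i < nall; i++)
  con.put(i, x[i][0], x[i][1], x[i][2]);
// compute cells as usual, but only keep results for i < atom->nlocal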

Tim

This all sounds good - having a compute that wrapped a lib
for doing Voronoi/Delaunay would be a nice addition to
LAMMPS (so long as the lib is either something we can
distribute via GPL, or something that is publicly available).
If someone wants to implement this, that would be
great.

In parallel, you have to account for the possibility that a
proc will have zero or only a couple of atoms, and that no other atoms
in the system are close enough to be in its ghost list. E.g.
when the simulation box is not completely full of atoms,
like a gas above a surface, and some procs own (mostly)
empty space.

You also need to think about how the compute will store
the result, so that the rest of LAMMPS can use it. Is it
a per-atom array of some kind?

Steve

Dear all,

Thank you for all the replies.
What I had in mind was to find an existing library which hopefully has the functions I need. I am still looking carefully at the options, but there is the possibility that I will need to code some of the Voronoi functionality myself too.

I would then call these functions from a fix (or compute). This is similar to what is done with math_extra.

Steve, does this sound like a good way forward? I think what you are implying is a bit different, i.e. that a compute would do all the Voronoi computations and output information (e.g. an array of coordinates) to other fixes.

Thanks again,
George

George,

Presumably there would be one cell for each atom, so the output of the Voronoi compute (or fix) could be a per-atom array containing cell information, like volume, number of faces, etc. In the simplest case, the compute would only provide coordinates, the external library would find the Voronoi cells, and the compute would save various outputs into the array.

There could also be more general input (not just coordinates) to make, e.g., mass-weighted Voronoi cells.
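
In LAMMPS terms that output would be the standard per-atom compute plumbing; a fragment sketch (untested, and the vol/faces arrays plus the nmax bookkeeping variable are placeholders):

// in the constructor:
peratom_flag = 1;
size_peratom_cols = 2;            // col 0: cell volume, col 1: face count

// in compute_peratom():
if (atom->nmax > nmax) {          // grow output array with the atom count
  memory->destroy(array_atom);
  nmax = atom->nmax;
  memory->create(array_atom, nmax, size_peratom_cols, "voro/atom:array");
}
for (int i = 0; i < atom->nlocal; i++) {
  array_atom[i][0] = vol[i];      // volume of atom i's cell
  array_atom[i][1] = faces[i];    // number of faces of atom i's cell
}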

cheers,

Tim

I want to re-emphasize that defining up-front precisely what
the compute will calculate and store as an "output" is very
important. Ideally it will be one or more values per
atom, e.g. a per-atom vector or array, so that it works
cleanly with the rest of LAMMPS. I can imagine this is
too simplistic for a Voronoi cell. E.g. maybe you need
the geometry of the entire faceted cell for some applications,
such as drawing it, in which case you might need to output
something more complicated directly to a file. There could
also be options in the compute so that the user can choose
what quantities they want as "output".

Steve

I wrote a compute to access voro++. It is fairly basic (it computes
Voronoi volumes and the number of faces/neighbors per atom) but should be
a good basis for further extensions (please share).
It uses the local-computation-with-ghost-atoms approach (so be careful if
you have low-density systems where atoms beyond the ghost cutoff could
influence the Voronoi tessellation of local atoms!).
All the necessary info is in the README file. The example image shows
atoms colored by Voronoi volume (Sigma 5 grain boundary in UO2).
Daniel

user-voro-0.1.tgz (3.02 KB)

vorovol.jpg

Sounds great - to release it I need a doc page,
i.e. a compute_voro_atom.txt file for the doc
dir. It should note the parallel issue with the possibility of not having
enough ghost atoms to do the local calculation correctly.

Thanks,
Steve

Here's an idea to generalize the output: there is a user-defined output in voro++ that prints to a file, sort of like fprintf. The "format string" could be user-defined, so that any type of output from voro++ can be saved into the per-atom array. Probably something like open_memstream could be used to write to a buffer in the compute (instead of a file).
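
Something along these lines (open_memstream is POSIX-only; the fprintf below is just a stand-in for whatever voro++ routine writes the formatted per-cell line, so this sketches the buffering idea, not the voro++ call):

#include <cstdio>
#include <cstdlib>

int main()
{
  char *buf = NULL;
  size_t size = 0;
  FILE *f = open_memstream(&buf, &size);  // FILE* backed by memory
  fprintf(f, "%g %d\n", 1.234, 14);       // pretend this is the cell output
  fclose(f);                              // buf/size now hold the text
  double vol; int nfaces;
  sscanf(buf, "%lf %d", &vol, &nfaces);   // parse back into numbers
  free(buf);
  return 0;
}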

Tim

Here's an idea to generalize the output: there is a user-defined output in
voro++ that prints to a file, sort of like fprintf. The "format string" could
be user-defined, so that any type of output from voro++ can be saved into the
per-atom array.

I don't think that is a feasible solution. Apart from the ugliness of
converting numbers and strings back and forth, there are a whole bunch
of voro++ control sequences, such as %p, %P, and %o, that output whole
lists of vertices etc. Some control sequences generate redundant or
useless data, such as %x, %y, %z and %i.

You'd have to do a lot of error checking and parsing of the control
sequence string to make sure it contains only safe codes and to
understand how many output values to expect.
It is probably easier to add custom handling of a subset of useful
voro++ quantities, without the float->string->float round trip.
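
I.e., something like a fixed whitelist read directly off the computed cell; a sketch assuming voro++ 0.4's voronoicell methods:

#include "voro++.hh"

// Sketch: a whitelisted set of quantities read straight from a computed
// cell, avoiding the string round trip (voro++ 0.4 method names).
enum VoroQuantity { VOLUME, SURFACE_AREA, NFACES };

double cell_quantity(voro::voronoicell_neighbor &c, VoroQuantity q)
{
  switch (q) {
    case VOLUME:       return c.volume();
    case SURFACE_AREA: return c.surface_area();
    case NFACES:       return (double) c.number_of_faces();
  }
  return 0.0;
}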

Yes, I agree it would be tedious to convert the strings on the return part of the 'round trip'. And there are indeed a lot of control sequences. For me, the volume and neighbors/faces are the most useful. Surface area could be another good one. The vertices could make a nice visual if they can be saved and somehow made to work with dump_custom.

Tim

Just released this as a 25Jan patch, as a VORONOI package
in src.

Daniel - I made a few bookkeeping changes to the src and doc
files. Please take a look and verify that it builds and runs
correctly. Also, the Voro++ lib does not support 2d systems,
which some LAMMPS users may want it for?

Thanks for creating this compute - I'll also send a note to the
Voro++ author.

Steve

Daniel - I made a few bookkeeping changes to the src and doc
files. Please take a look and verify that it builds and runs correctly.

Two compile errors:
line 85 'local' should be 'nlocal'
line 127 'mask' should be 'atom->mask'

I'll check if it runs correctly and try to address your previous
questions about free boundaries. It probably depends on how LAMMPS
handles sublo/subhi and ghost cutoffs for shrink-wrapped boundaries. But
the results won't be meaningful in any case: in a real system the
Voronoi cell volume for these cases would be infinite. At least the
number of neighbor cells should be correct.

Oops - I made those changes in the src version,
but must have forgotten to copy the file into
the src/VORONOI master copy.

It looks like you are passing the proc bounds +/- cutoff
to Voro++. For non-periodic dimensions, the proc bound will be the box
bound (shrink-wrapped or fixed), which will
be close to the atom extent or possibly far away (for fixed).

If you are saying Voro++ will compute a volume that is
bounded by the domain you pass it, then I think this
is fine (presumably it does nothing in a periodic sense).

It would be more accurate to not add the cutoff
to the box extent for procs at a non-periodic boundary.
However, there is the issue that atoms (owned or ghost) may be slightly
outside the processor box + cutoff between reneighborings.
Does this cause problems for Voro++? I.e., if not all of the atoms
you pass are inside the domain you pass?

Steve

I set up a non-periodic bounding box for voro++. Atoms passed to
voro++ that lie outside the voro++ bounding box are completely ignored
by voro++.

There might be some more bugs. For some reason the include statement
does not get copied into the Makefile.package.settings file (it
previously only compiled because I had an old statement in there).