Announcing Vipster, a molecular editing software with LAMMPS support

Dear all,

I hope it's okay to do a bit of self-promotion here; if not, my apologies.

Over the past few years, I've been working on a small molecular editing package, Vipster, which you can find here:
https://sgsaenger.github.io/vipster
https://github.com/sgsaenger/vipster

It features a cross-platform desktop client, a less featureful browser frontend (https://sgsaenger.github.io/vipster/emscripten), and Python bindings (which are not distributed with the binary releases at the moment).
The main features are:
- easy editing
- fast and dynamic handling of bonds, correct even with regard to periodic boundary conditions
- flexible and simple handling of the supported formats (see the `convert` subcommand on the command line)
- reads and writes LAMMPS data files
- reads dump files

I hope it can be of use to some of you and welcome any feedback (or contributions)!
Sebastian

Not a problem at all - advertising tools that work with LAMMPS
is a good use of the mailing list.

We can add a blurb about your tool to the PrePost page
on the LAMMPS website.

Steve


I would be honored, thank you!

On another note, I have some more or less technical questions and hope
someone can help me out. I'd like to use LAMMPS to implement ad-hoc
minimizations and dynamics, akin to the interactive optimization in
Avogadro.
The current idea is to write a wrapper executable that links against
LAMMPS, runs in normal MPI mode, and is launched via
MPI_Comm_spawn. Is it possible/supported/reliable to call this from an
MPI singleton?
Do you have other ideas or even recommendations for establishing IPC
with comparably low impact on the architecture?

Sebastian

> Not a problem at all - advertising tools that work with LAMMPS
> is a good use of the mailing list.
>
> We can add a blurb about your tool to the PrePost page
> on the LAMMPS website.
>
> Steve
>
> I would be honored, thank you!
>
> On another note, I have some more or less technical questions and hope
> someone can help me out. I'd like to use LAMMPS to implement ad-hoc
> minimizations and dynamics, akin to the interactive optimization in
> Avogadro.
> The current idea is to write a wrapper executable that links against
> LAMMPS, runs in normal MPI mode, and is launched via
> MPI_Comm_spawn. Is it possible/supported/reliable to call this from an
> MPI singleton?

Why so complicated? Why not simply use the library interface and
either link LAMMPS directly or load it dynamically? For dynamic
loading, you only need to handle the Windows platform differently; you
can find code to dynamically load objects across platforms in the
USER-MOLFILE package. Since you are very unlikely to launch a
graphical tool across multiple nodes, using OpenMP parallelism should
work as well. For ad-hoc minimization/sculpting you only need a
minimal force field using, e.g., pair style soft, bond/angle style
harmonic, etc.
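
As a rough illustration, the dynamic-loading pattern looks like this sketch (in Python via ctypes, with libm and its cos function as stand-ins for the LAMMPS shared library and one function of its C interface; the actual USER-MOLFILE code does the equivalent in C with dlopen/LoadLibrary):

```python
import ctypes
import ctypes.util

# find_library hides most of the per-platform naming differences
# (libm.so.6 on glibc Linux, libm.dylib on macOS, a DLL on Windows);
# libm and cos stand in here for liblammps and its C API.
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)

# declare the signature of the looked-up symbol, then call it
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print("cos(0) =", libm.cos(0.0))
```

In C you would hide the dlopen/dlsym vs. LoadLibrary/GetProcAddress split behind the same kind of thin wrapper, which is exactly the Windows special case mentioned above.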

You should also compile the LAMMPS library with exception handling, so
that your GUI doesn't go down when the LAMMPS library fails.
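
The payoff, sketched in Python with a hypothetical `minimize_structure` standing in for a call into a LAMMPS library built with exception support: a failing command becomes a catchable error instead of a process abort.

```python
def minimize_structure(structure):
    # hypothetical stand-in for a call into liblammps built with
    # exception handling enabled: errors surface as exceptions here
    # instead of an abort taking the whole process down
    if not structure:
        raise RuntimeError("LAMMPS error: cannot minimize an empty system")
    return "minimized " + structure

results = []
for system in ["water box", ""]:
    try:
        results.append(minimize_structure(system))
    except RuntimeError as err:
        # the GUI survives, reports the error, and stays interactive
        results.append("failed: " + str(err))
print(results)
```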

> Do you have other ideas or even recommendations for establishing IPC
> with comparably low impact on the architecture?

MPI seems like overkill to me and has many issues, e.g. that you can only
initialize it once, so it is going to be hard to recover from failures
that require calling MPI_Abort() (which happens in Error::one() in
LAMMPS).

axel.

> > The current idea is to write a wrapper executable that links against
> > LAMMPS, runs in normal MPI mode, and is launched via
> > MPI_Comm_spawn. Is it possible/supported/reliable to call this from an
> > MPI singleton?

> Why so complicated? Why not simply use the library interface and
> either link LAMMPS directly or load it dynamically? For dynamic
> loading, you only need to handle the Windows platform differently; you
> can find code to dynamically load objects across platforms in the
> USER-MOLFILE package. Since you are very unlikely to launch a
> graphical tool across multiple nodes, using OpenMP parallelism should
> work as well.
> [...]
> MPI seems like overkill to me and has many issues, e.g. that you can only
> initialize it once, so it is going to be hard to recover from failures
> that require calling MPI_Abort() (which happens in Error::one() in
> LAMMPS).
>
> axel.

Thanks for the input.
Sure, it's complicated, but I don't want to rule it out from the
beginning. There's always something to learn, even if it's just MPI's
(or my own) limits. Falling back to serial/OpenMP is always possible.
The main point of the idea is to decouple the GUI from the
worker processes: yes, you're unlikely to launch the GUI on
your cluster, but having the full parallelism available won't hurt.
The MPI_Abort note is really helpful. Given this, I guess the minimum
requirements would grow to something like this:

GUI process <-[custom IPC]-> connector (MPI singleton) <-[MPI]-> wrapped
LAMMPS processes

which is also a bit less fun than intended because of the custom IPC needed...
But I'm not sure how to read the docs for MPI_Abort. Will it also kill
the connector process if the two are only bound by the intercommunicator
from MPI_Comm_spawn?

Sebastian

[...]

> Thanks for the input.
> Sure, it's complicated, but I don't want to rule it out from the
> beginning. There's always something to learn, even if it's just MPI's
> (or my own) limits. Falling back to serial/OpenMP is always possible.

That seems like inverted logic to me: why try the complicated
solution when the simple one turns out to be sufficient? You are
needlessly entering a world of pain for very little gain. But be my
guest; this is your software and you are free to design it as you
like.

> The main point of the idea is to decouple the GUI from the
> worker processes: yes, you're unlikely to launch the GUI on
> your cluster, but having the full parallelism available won't hurt.
> The MPI_Abort note is really helpful. Given this, I guess the minimum
> requirements would grow to something like this:
>
> GUI process <-[custom IPC]-> connector (MPI singleton) <-[MPI]-> wrapped
> LAMMPS processes
>
> which is also a bit less fun than intended because of the custom IPC needed...

With the library interface, you have well-defined methods to access
the results of the LAMMPS calculation. With an MPI-based approach, you
have to devise a protocol for that on your own. How are you going to do
that? Via MPI (which is fragile)? Via pipes (which can stall the parent
unless you do multiplexing)? Via sockets (which are complex)? And
how efficient is this going to be? For interactive use, you need very
fast exchanges of data.
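
To make the protocol question concrete, here is a minimal Python sketch of the socket variant: a worker thread (standing in for the wrapped LAMMPS side) streams fixed-size binary records of step number plus coordinates, and the GUI side reads them with MSG_WAITALL so it never sees partial records. All the names and the record layout are made up for illustration.

```python
import socket
import struct
import threading

REC = struct.Struct("<i3d")  # step number + x, y, z, packed binary

def worker(conn, steps):
    # stand-in for the wrapped LAMMPS side: after every step, stream
    # the updated (here: fake) coordinates back to the GUI
    for step in range(steps):
        conn.sendall(REC.pack(step, 0.1 * step, 0.2 * step, 0.3 * step))
    conn.close()

gui_end, worker_end = socket.socketpair()
t = threading.Thread(target=worker, args=(worker_end, 3))
t.start()

frames = []
while True:
    buf = gui_end.recv(REC.size, socket.MSG_WAITALL)
    if len(buf) < REC.size:  # worker closed its end
        break
    frames.append(REC.unpack(buf))
t.join()
print(frames)
```

Even this toy version shows where the complexity goes: framing, end-of-stream detection, and keeping the GUI responsive while data arrives.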

You may want to check out the client/server library bundled with LAMMPS
for some inspiration.

> But I'm not sure how to read the docs for MPI_Abort. Will it also kill
> the connector process if the two are only bound by the intercommunicator
> from MPI_Comm_spawn?

Just set up some tests and figure it out.

axel.