Event-driven algorithm?

It looks like all interactions in LAMMPS are time-driven. For absolutely rigid particles, which is a valid approximation for many (if not most) granular systems, an event-driven algorithm would be orders of magnitude faster. Would an event-driven algorithm be compatible with the current LAMMPS architecture/philosophy?

I don’t know enough about such an algorithm to make any specific statement.
How would you make this work in parallel using the domain decomposition used by LAMMPS?
Also, what would you use instead of the force computation and time integration?

LAMMPS does support alternate ways to move atoms and rigid objects with hybrid Monte Carlo / Molecular Dynamics algorithms (e.g. fix gcmc). But those can be very inefficient except in cases where interactions are only atomic and pairwise, and on top of that they don’t parallelize well.

You can learn about the LAMMPS architecture/philosophy from the recent LAMMPS paper and the “Information for Developers” sections of the LAMMPS manual.


An event-driven algorithm is (primarily) used for absolutely rigid (granular) systems,
which interact via a hard potential, i.e. the force is either infinite or zero.
Assuming all particles follow parabolic trajectories between collision events,
the next collision and its time are predicted after each collision.
So the algorithm’s timestep is variable and changes from one collision to the next.
Since conventional integration of such forces is not possible in this case,
a collision operator is used, with normal and tangential restitution coefficients as the integrated
equivalents of the elastic constants and damping coefficients.
The algorithm seems to be parallelizable in the same way as the time-driven one,
with the only apparent consequence that the neighbor list update would have to be adjusted to the variable timestep.
The authors imply it can be 10000 times faster (Pöschel, Schwager, “Computational Granular Dynamics”, section 5.9).
So I am really wondering why nobody has taken advantage of such an algorithm so far?
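To make the two ingredients concrete, here is a minimal Python sketch (not LAMMPS code) of the collision-time prediction and a simple collision operator. It assumes equal masses and a normal restitution coefficient only, and it uses the fact that under uniform gravity the *relative* motion of two particles is linear (gravity cancels), so the parabolic trajectories still yield a quadratic equation for the contact time:

```python
import math

def collision_time(r, v, radius_sum):
    """Time until two spheres touch, given relative position r and
    relative velocity v. Under uniform gravity the relative motion is
    linear, so |r + v*t| = radius_sum is a quadratic in t.
    Returns math.inf if the spheres never collide."""
    b = sum(ri * vi for ri, vi in zip(r, v))       # r . v
    if b >= 0.0:                                   # moving apart
        return math.inf
    v2 = sum(vi * vi for vi in v)
    r2 = sum(ri * ri for ri in r)
    disc = b * b - v2 * (r2 - radius_sum ** 2)
    if disc < 0.0:                                 # closest approach misses
        return math.inf
    return (-b - math.sqrt(disc)) / v2

def collide(v1, v2, n, e_n):
    """Hard-sphere collision operator for equal masses: reverse and
    scale the normal component of the relative velocity by the normal
    restitution coefficient e_n (tangential restitution omitted)."""
    g = [a - b for a, b in zip(v1, v2)]            # relative velocity
    gn = sum(gi * ni for gi, ni in zip(g, n))      # normal component
    dv = 0.5 * (1.0 + e_n) * gn
    v1p = [a - dv * ni for a, ni in zip(v1, n)]
    v2p = [b + dv * ni for b, ni in zip(v2, n)]
    return v1p, v2p
```

For example, a head-on elastic collision (e_n = 1) of two equal spheres simply exchanges their velocities, as expected.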

Perplexity.ai’s answer is shockingly good for an AI, though it is let down by drawing on event-driven network architecture as a source concept.

But the answer is right in the name. LAMMPS is the Large Atomic/Molecular Massively Parallel Simulator. Atoms and molecules are not particularly well-simulated as hard spheres, so a hard-sphere-based optimisation isn’t broadly applicable across LAMMPS’s use cases.

Then again, fix dt/reset already implements a variable timestep by distance criterion, so all anyone would need to do is plug in a quick way to conservatively estimate the distance-to-next-collision to have what you suggest.
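As an illustration of what a conservative estimate could look like, here is a hypothetical helper (not part of LAMMPS or fix dt/reset): if the fastest particle moves at speed vmax and the smallest surface-to-surface gap is min_gap, then no pair can close that gap in less than min_gap / (2 * vmax), so any timestep below that bound cannot step over a collision:

```python
def conservative_dt(min_gap, vmax, safety=0.5):
    """Conservative variable timestep from a distance criterion:
    even if the two closest particles approach each other head-on
    at speed vmax each, they need min_gap / (2*vmax) to touch.
    A safety factor < 1 leaves extra margin."""
    return safety * min_gap / (2.0 * vmax)
```

The bound is deliberately pessimistic; a tighter estimate would use per-pair relative velocities instead of the global vmax.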


Statements like this that sound too good to be true require looking at the fine print: what is compared, how, and under what specific conditions. For example, a setup where many atoms are stationary. In a software like LAMMPS you can optimize for such cases by skipping the force computation and time integration for those atoms, but optimizations of that kind tend not to be done when the goal is to report a larger speedup.

This is a rather strong statement without proof. Given the complexity of DEM models these days, I doubt they can be that well approximated with hard spheres. Once you cannot use that approximation, a lot of the benefits from switching away from a standard MD timestepping setup are gone.

If you look at the LIGGGHTS code, which is a fork of LAMMPS specifically for DEM and maintained by experts in DEM, doesn’t it seem peculiar that those folks decided to use (and optimize) the current LAMMPS method of propagation, if there were such large gains to be had from a different propagation method? I would expect that those folks would love to have their models run that much faster…

But, as the saying goes, “the proof is in the pudding”. Feel free to try implementing an event-driven method. Primarily, it would require creating a new fix style and then setting up the system so that no forces are computed (e.g. with pair style zero).
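For illustration, the event queue such a fix would maintain might look roughly like this (a Python sketch under strong simplifying assumptions; a real event-driven MD code must also invalidate stale events when one partner of a predicted pair collides with someone else first):

```python
import heapq

def event_loop(events, t_end, handle):
    """Skeleton of an event-driven driver: pop the earliest predicted
    collision from a priority queue, advance the clock to it, apply the
    collision operator, and schedule the newly predicted events.
    `events` is a list of (time, pair) tuples; `handle(pair, t)` applies
    the collision and returns a list of new (time, pair) events.
    Returns the time of the last event processed (0.0 if none)."""
    heapq.heapify(events)
    t = 0.0
    while events:
        t_ev, pair = heapq.heappop(events)
        if t_ev > t_end:
            break                      # next event is past the end time
        t = t_ev
        for new_ev in handle(pair, t): # reschedule affected pairs
            heapq.heappush(events, new_ev)
    return t
```

The key design point is that between events nothing is computed at all, which is exactly where the claimed speedup for hard-sphere systems comes from.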
