Lagrangian Binning

Hi LAMMPS community,

I have recently been working on a problem that requires Lagrangian binning of particles in a compressing molecular solid (turning part of a thermalized sample into a static piston and imposing a negative velocity on the rest of the sample so that the two collide). The way I have been attempting to bin the particles (so that the same bins always correspond to the same particles) is as follows:

variable samplebot equal bound(all,zmin)
variable top equal ${samplebot}+30
variable sampletop equal bound(all,zmax)

region piston block EDGE EDGE EDGE EDGE EDGE ${top}
group piston region piston
group piston include molecule
group rest subtract all piston
# I am writing this from memory so the commands might not be exactly right, but this essentially
# separates the sample into a piston region 30 Angstroms tall in the z axis, and a "rest" group
# containing the rest of the sample.

compute rest rest chunk/atom bin/1d z ${top} 6 nchunk once ids once compress yes units box  # compute ID "rest", applied to group "rest"

This seems like it should work fine (though the chunks won't correspond to real distances once the 'rest' portion of the sample starts moving), since each atom is only assigned its chunk ID once. Thus, I would expect that regardless of where the atoms end up over time, they should still be part of the same chunk. However, when running these simulations there is a disturbance in all of the variables that I am computing via chunk averaging that propagates from the free end (i.e. there is an increase in particle velocity near the free end that propagates at the imposed sample velocity, and a decrease near the piston surface). I have been going crazy freezing variables, redefining bins, and attempting to impose a dynamic bin size based on the amount of deformation the sample has endured, but nothing gets rid of these propagating disturbances in velocity, pressure, mass density, or strain near the free end of the sample. Furthermore, these variables return to the expected values once the disturbance has passed.

The disturbance near the piston surface makes sense, since there is probably some slop in how the binning origin is defined that causes the average to always include some atoms with zero velocity (thus raising the average), but I am totally at a loss as to what is happening at the free end.

It seems to me that the intended functionality of the 'nchunk once ids once' keywords is exactly to define bins that follow the particles on which they were initially defined, and this seems to be the case when looking at the ave/chunk output files.
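
One way to sanity-check this directly (a quick sketch; the dump ID, filename, and frequency are arbitrary) is to dump the per-atom chunk IDs and confirm that the c_rest column never changes for a given atom over the run (atoms outside the binned group report chunk ID 0):

dump		chkids all custom 1000 chunkids.dump id c_rest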

Has anyone done anything similar to this and been successful?

This has turned out to be a surprisingly persistent problem whose cause remains unknown. The attached graph shows the mysterious bumps that I am referring to.

And here is the exact code that I am using to:

Define the piston region and define a variable that captures the position of the top of the piston (where the binning should start):

variable xmin equal bound(all,xmin)
variable pisttop equal ${xmin}+30
variable xmax equal bound(all,xmax)

# number of 6-Angstrom-wide bins between the top of the piston and the free surface
variable binc equal floor((${xmax}-${pisttop})/6)

region		pist block EDGE ${pisttop} EDGE EDGE EDGE EDGE
group		Pist region pist
group		Pist include molecule
group		Rest subtract all Pist
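
As a quick sanity check on the bin count (not part of my production script; just a sketch), the variable can be printed at setup and should come out positive:

print		"Lagrangian bin count: ${binc}"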

Compute atom velocities:

compute		vx Rest property/atom vx

Compute chunks (ldz is my main concern here, since this is what is supposed to function as a 'Lagrangian' method of binning):

compute		ldz Rest chunk/atom bin/1d x ${pisttop} 6 ids once nchunk once compress yes units box
compute		edz Rest chunk/atom bin/1d x ${pisttop} 6 nchunk once compress yes units box

Print chunk-average quantities to file:

fix vxprof Rest ave/chunk 10 5 500 ldz c_vx file vxprofile_1kms.data
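
For comparison, the Eulerian counterpart using the edz compute would be the analogous line (a sketch; the fix ID and filename here are just illustrative):

fix vxprofE Rest ave/chunk 10 5 500 edz c_vx file vxprofile_1kms_euler.data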

I am still confused about what could be causing these strange bumps in my graphed data. They do NOT show up when viewing the system in OVITO, so I am confident that it has something to do with the method of binning. That said, all of the atoms in the sample are being binned and they don't change chunks, so no free surfaces should ever be entering or leaving the bins to mess up the chunk averages. Doing a 1d spatial bin using molecular COMs yields a similar result, as do my plots of shear stress, pressure, and density (though, in the case of mass density, the bumps stay fixed in the same location over time).

I feel like I have to be missing something obvious, and I am running out of parameters to tweak in debug runs… Any thoughts?

If you want to get somebody interested in helping you, you have to change your approach. Here are some thoughts as to why this is the case:

  • You are using a “fancy name” as a subject. At best, that will attract people to look at your post because they are curious about what you mean by it. Since there is not much of an explanation, they are not likely to read on or be able to provide assistance. It would be much more appealing if you could describe your problem in simpler terms, or explain at the very beginning (perhaps with a sketch) what exactly differentiates this from “standard” binning, for example.
  • You are not making it easy to reproduce your problem, since you are only providing and discussing isolated input file fragments. Before somebody can look into what could be changed, that person is forced to build a suitable system, but since you are very vague about that, too (see my previous point), that is not easy to do. Making a problem easy to reproduce is often the key step toward getting people to debug your issue.
  • It seems you are discussing and debugging your final simulation system. When you have fundamental problems, as seems to be the case here, the first step is always to strip away all complexities and complications and reproduce the issue with the simplest possible setup (e.g. an atomic system instead of a molecular one; see the sketch after this list). From there you can build up your simulation by adding “complications” in small increments while observing carefully, which should allow you to identify at which step you no longer get the output you expect.
  • You are discussing the situation as if you are discussing it with yourself, i.e. with somebody who knows everything about your research, your simulations, your attempts to rectify the issue, your background knowledge, etc. But nobody else knows all this context, and you are not providing much of it. Thus you make it very difficult to respond to your questions. You have to assume that the people who answer requests for help here know nothing of that kind and have little or no context. So to get them to help you with anything that is not a trivial syntax error, you have to provide that context, background information, and the missing details.
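
To make the third point concrete, a stripped-down atomic test of the same binning scheme could look something like the following sketch (assuming an fcc Lennard-Jones crystal; all numbers - density, temperature, piston thickness, impact velocity, bin width - are placeholders):

units		lj
boundary	p p s
atom_style	atomic
lattice		fcc 0.8442
region		box block 0 10 0 10 0 60
create_box	1 box
create_atoms	1 box
mass		1 1.0
pair_style	lj/cut 2.5
pair_coeff	1 1 1.0 1.0 2.5

velocity	all create 0.1 4928459
variable	pisttop equal bound(all,zmin)+5
region		pist block EDGE EDGE EDGE EDGE EDGE ${pisttop} units box
group		Pist region pist
group		Rest subtract all Pist

# immobile piston at the bottom; the rest of the sample flies into it
velocity	Pist set 0.0 0.0 0.0
fix		hold Pist setforce 0.0 0.0 0.0
velocity	Rest set NULL NULL -2.0 sum yes

# same Lagrangian binning as in the original post, on an atomic system
compute		ldz Rest chunk/atom bin/1d z ${pisttop} 2.0 ids once nchunk once compress yes units box
fix		integrate Rest nve
fix		prof Rest ave/chunk 10 5 500 ldz vz file vz_profile.data
run		5000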

The following essay has been around for quite a while and is quite popular for explaining some of the - perhaps sometimes unexpected - dynamics and behavior that come up when asking for help with open source software: How To Ask Questions The Smart Way. Not all of it applies here, since for scientific software there is the additional complication of having to explain the science on top of the technical issues of input syntax and software behavior (and the sometimes unusual behavior of the people trying to support the software). But it explains very well the strategies that can improve your chances of getting (better) help.


Good advice, certainly. I can definitely understand that my central question has been obfuscated by the lack of context. Since I can no longer edit the original post, allow me to provide some here.

What I am attempting to do is a traditional reverse-ballistic shock wave simulation on a nonreactive molecular crystal. This consists of creating a large, thermalized slab of material oriented so that the long axis is in the x-direction and dividing this slab into two regions: a fixed ‘piston’ region, which acts as an immobile wall, and the actual ‘sample’, which consists of all molecules not in the piston. At the beginning of the simulation, all non-piston molecules are assigned a constant velocity in the -x direction (the particle velocity) so that they collide with the piston and a shock wave propagates through the sample region in the direction opposite the particle velocity.

This is all fine and good, but to characterize the changes in the physical properties of interest (such as particle velocity, pressure, shear strain, temperature, etc.) I am dividing the sample region into thin slabs using the compute chunk/atom bin/1d command so that I can get average measurements of these properties as a function of distance from the piston.

However, when one does this using traditional binning (i.e. the bins maintain the same spatial position over time and atoms can move between bins), the overall length of the sample (and thus the number of bins) decreases as the sample undergoes shock compression, making it difficult to make meaningful plots of these quantities at long times. In other words, as the material is being compressed, the atom count in the bins near the surface of the piston grows as the simulation continues, which causes a problematic decrease in spatial resolution for the averaged quantities. This type of binning is analogous to the Eulerian (control volume) approach to characterizing fluid flows, where material passes in and out of some predefined volume in which the quantities are computed.
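
For concreteness, this traditional Eulerian binning corresponds to the default compute chunk/atom behavior, i.e. something along these lines (the compute ID and bin width are just illustrative):

compute		eul all chunk/atom bin/1d x lower 6.0 units box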

An alternative approach is to define the bins in such a way that they ‘follow’ the same particles over time and effectively do not remain fixed with respect to the simulation box. In this scheme, analogous to the Lagrangian method of characterizing flow, we can avoid the compression of the spatial extent of the bins, which makes analysis of the fine detail of the shock-compressed material easier. This can, I think, effectively be accomplished by defining the bins in such a way that the atoms initially assigned to a bin in the unshocked material remain in the same bin between time steps. In this way, the bins ‘follow’ the same atoms which they initially contained.

The way that I have been implementing this second approach in LAMMPS is my central concern here. If I would like the atoms assigned to a bin to remain the same between each time step in the simulation, I can assign the chunk ID of each atom only once at the start of the simulation, which requires that the number of chunks remains constant as well. From what I understand of the documentation, the way to do this is:

compute		ldz Rest chunk/atom bin/1d x ${pisttop} 6 ids once nchunk once compress yes units box

where the pisttop variable denotes the x-coordinate of the top of the piston region (so that only the sample material is binned).

One day later…

After doing some more testing, it seems that my problem is actually completely unrelated to my method of binning, which works as intended, but is instead due to the scheme I was using to relax the structure. Further testing has shown that NPT relaxation of the replicated supercells produces these weird bumps, while a much longer NVT relaxation does not.
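
For anyone who runs into the same thing, the difference was between equilibrating along these lines (a sketch; the temperature, pressure, and damping values stand in for my actual ones):

# NPT relaxation of the replicated supercell -- this left residual
# compressive waves in the sample
fix		relax all npt temp 300.0 300.0 100.0 iso 1.0 1.0 1000.0

# versus a (much longer) NVT relaxation, which did not
fix		relax all nvt temp 300.0 300.0 100.0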

It turns out that this was an issue with my relaxation method, which was leaving these weird compressive waves in the material.