Lagrangian Binning

Hi LAMMPS community,

I have recently been working on a problem that requires Lagrangian binning of particles in a compressing molecular solid: I make part of a thermalized sample a static piston and impose a negative velocity on the rest of the sample so that the two collide. To bin the particles (and have the same bins always correspond to the same particles), I have been doing the following:

variable samplebot equal bound(all,zmin)
variable top equal ${samplebot}+30
variable sampletop equal bound(all,zmax)

region piston block EDGE EDGE EDGE EDGE EDGE ${top}
group Piston region piston
group Piston include molecule
group Rest subtract all Piston
# I am writing this from memory, so the exact commands might not be right,
# but this essentially separates the sample into a piston region 30 ang tall
# in the z axis and a "rest" group containing the rest of the sample.

compute rest all chunk/atom bin/1d z ${top} 6 nchunk once ids once compress yes units box

This seems like it should work fine (though the chunks won’t correspond to real distances once the ‘rest’ portion of the sample starts moving), since each atom is assigned its chunk ID only once. Thus, I would expect that regardless of where the atoms end up over time, they should remain part of the same chunk. However, when running these simulations, a disturbance appears in all of the variables that I compute using chunk averaging, propagating from the free end (i.e. there is an increase in particle velocity near the free end that propagates at the imposed sample velocity, and a decrease near the piston surface). I have tried freezing variables, redefining bins, and imposing a dynamic bin size based on the amount of deformation the sample has endured, but nothing gets rid of these propagating disturbances in velocity, pressure, mass density, or strain near the free end of the sample. Furthermore, these variables return to their expected values once the disturbance has passed.

The disturbance near the piston surface makes sense, since there is probably some slop in how the binning origin is defined, which causes the average to always include some atoms with zero velocity (thus raising the average), but I am totally at a loss as to what is happening at the free end.
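To illustrate the piston-side effect described above, here is a toy calculation (plain Python, not LAMMPS input; the particle counts and the velocity value are made up for illustration): if a near-piston bin accidentally captures a few static piston atoms (vz = 0) alongside sample atoms moving at the imposed velocity, the bin average is pulled toward zero, i.e. "raised", since the imposed velocity is negative.

```python
import numpy as np

v_imposed = -1.0                       # imposed sample velocity (arbitrary units)
sample_vz = np.full(95, v_imposed)     # 95 sample atoms in the bin
piston_vz = np.zeros(5)                # 5 static piston atoms wrongly included

clean_avg = sample_vz.mean()           # average without contamination
contaminated_avg = np.concatenate([sample_vz, piston_vz]).mean()

print(clean_avg, contaminated_avg)     # -1.0 vs. -0.95
```

A 5% contamination by zero-velocity atoms shifts the bin average by 5% of the imposed velocity, which is the right sign and rough magnitude for a persistent near-piston artifact.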

It seems to me that the intended functionality of the ‘nchunk once ids once’ keywords is exactly to define bins that follow the particles on which they were initially defined, and this seems to be the case when looking at the ave/chunk output files.
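For anyone unfamiliar with the distinction, here is a minimal sketch (plain Python with made-up numbers, not LAMMPS input) of what ‘ids once nchunk once’ is meant to do: chunk IDs are computed once from the initial coordinates and then frozen, so every later average runs over the same set of atoms no matter where they have moved, whereas spatial (Eulerian) rebinning reshuffles the membership.

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.uniform(30.0, 90.0, size=1000)     # initial z above the piston top
origin, delta = 30.0, 6.0                   # bin origin and 6-Angstrom bin width

# assign each atom a chunk ID exactly once, from its *initial* position
chunk_id = np.floor((z0 - origin) / delta).astype(int)
nbins = chunk_id.max() + 1                  # here: 10 bins

# later the sample has compressed, but the Lagrangian chunk membership
# (and hence which atoms enter each average) is unchanged
z_now = origin + (z0 - origin) * 0.8        # fake uniform 20% compression
vz = -1.0 + 0.01 * rng.standard_normal(z0.size)   # imposed velocity + noise

mean_vz = np.array([vz[chunk_id == c].mean() for c in range(nbins)])
counts = np.bincount(chunk_id, minlength=nbins)   # constant at every output

# an Eulerian binning of the *current* positions would reshuffle membership
eulerian_id = np.floor((z_now - origin) / delta).astype(int)
```

Under this scheme `counts` never changes between outputs, which is why no free surface should ever enter or leave a bin; `eulerian_id` differs from `chunk_id` for most atoms as soon as the sample deforms.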

Has anyone done anything similar to this and been successful?

This has turned out to be a surprisingly persistent problem whose cause remains unknown. The attached graph shows the mysterious bumps that I am referring to:

And here is the exact code that I am using to:

Define the piston region and define a variable that captures the position of the top of the piston (where the binning should start):

variable xmin equal bound(all,xmin)
variable pisttop equal ${xmin}+30
variable xmax equal bound(all,xmax)

# number of 6 ang bins between the piston top and the free end
variable binc equal floor((${xmax}-${pisttop})/6)

region		pist block EDGE ${pisttop} EDGE EDGE EDGE EDGE
group		Pist region pist
group		Pist include molecule
group		Rest subtract all Pist

Compute atom velocities:

compute		vx Rest property/atom vx

Compute chunks (ldz is my main concern here, since it is what is supposed to function as a ‘Lagrangian’ method of binning; edz is defined without ‘ids once’ for comparison):

compute		ldz Rest chunk/atom bin/1d x ${pisttop} 6 ids once nchunk once compress yes units box
compute		edz Rest chunk/atom bin/1d x ${pisttop} 6 nchunk once compress yes units box

Print chunk-average quantities to file:

fix vxprof Rest ave/chunk 10 5 500 ldz c_vx file vxprofile_1kms.data
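As a side note on the settings above, the `10 5 500` arguments mean each output line is itself a time average: per the fix ave/chunk conventions, Nrepeat = 5 samples taken every Nevery = 10 steps are averaged into one output at each multiple of Nfreq = 500. A quick sketch of which timesteps contribute (`sample_steps` is a hypothetical helper name, not a LAMMPS command):

```python
Nevery, Nrepeat, Nfreq = 10, 5, 500

def sample_steps(output_step):
    """Timesteps whose per-chunk values are averaged into one output line."""
    return [output_step - (Nrepeat - 1 - i) * Nevery for i in range(Nrepeat)]

print(sample_steps(500))    # [460, 470, 480, 490, 500]
```

So each profile mixes values sampled over a 40-step window; if a front is propagating during that window, the averaged profile is a blend of several front positions. Whether that contributes to the bumps could be checked cheaply by using Nrepeat = 1 (e.g. `ave/chunk 500 1 500`) so each output is an instantaneous profile.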

I am still confused about what could be causing these strange bumps in my graphed data. They do NOT show up when viewing the system in OVITO, so I am confident it has something to do with the method of binning. That said, all of the atoms in the sample are binned and they do not change chunks, so no free surfaces should ever enter or leave the bins and corrupt the chunk averages. A 1d spatial bin using molecular COMs yields a similar result, as do my plots of shear stress, pressure, and density (though, in the case of mass density, the bumps stay fixed at the same location over time).

I feel like I have to be missing something obvious, and I am running out of parameters to tweak in debug runs… Any thoughts?

If you want to get somebody interested in helping you, you have to change your approach. Here are some thoughts as to why this is the case:

  • You are using a “fancy name” as a subject. That will at best attract people to look at your post because they are curious about what you mean by that. There is not much of an explanation, so they are not likely to read on or be able to provide assistance. It would be much more appealing if you could describe what your problem is in simpler terms or provide an explanation at the very beginning what exactly (perhaps with a sketch) differentiates this from “standard” binning, for example.
  • You are not making it easy to reproduce your problem: you provide and discuss only individual input file fragments. Before somebody can look into what could be changed, that person is forced to build a suitable system, but since you are very vague about that, too (see my previous point), that is not easy to do. Making it easy to reproduce is often a key step toward getting people to debug your issue.
  • It seems you are discussing and debugging your final simulation system. When you have fundamental problems, as it seems to be the case here, the first step is always to strip away any complexities and complications and reproduce this with the simplest possible setup (e.g. an atomic system instead of a molecular one). With that you can then build your simulation by adding “complications” in small increments while observing carefully, and that should allow you to identify at what step you no longer get the output you expect.
  • You are discussing the situation as if you are discussing it with yourself, i.e. with somebody who knows everything about your research, your simulations, your attempts to rectify the issue, your background knowledge, etc. But nobody really knows all this context, and you are not providing much. Thus you make it very difficult to respond to your questions. You have to assume that the people who answer requests for help here know nothing of that kind and have little or no context. So to get them to help you with anything that is not a trivial syntax error, you have to provide that context, background information, and the missing details.

The following essay has been around for quite a while and is quite popular for helping people understand some of the - perhaps sometimes unexpected - dynamics and behavior that arise when asking for help with open source software: How To Ask Questions The Smart Way. Not all of it applies here, since for scientific software there is the additional complication of having to explain the science in addition to the technical issues of input syntax and software behavior (and the sometimes unusual behavior of the people trying to support the software). But it explains very well the strategies that can improve your chances of getting (better) help.
