I made a metal slit model (size: xyz = 2501090 Å^3), which I filled with water molecules. What I want to do is calculate the water MSD in different chunks, so I used the msd/chunk command. I tried several times and found that the result of the msd/chunk command was too small.
To check this in detail, I next set the number of chunks to 1 and reran my script. However, the result I got was different from that of the compute msd command: it was again too small.
I expected the two results to be the same.
Could anyone explain the difference? And how can I calculate this correctly using the msd/chunk command?
Here is the relevant part of the script:

#msd/chunk command
compute chunkID water chunk/atom bin/2d x lower 1 z lower 1 units reduced nchunk every ids every
compute myChunk all msd/chunk chunkID
fix 1 all ave/time 1 1 1000 c_myChunk[*] file msd.out mode vector
compute ms water msd
fix 2 all ave/time 1 1 1000 c_ms[*] file msd.dat
Both calculations are performed on the same input file under the same conditions.
If you need additional information, please let me know. Thank you.
Using compute msd/chunk only makes sense if the atoms never switch chunks, as noted in the documentation:
This compute stores the original position (of the center-of-mass) of each chunk. When a displacement is calculated on a later timestep, it is assumed that the same atoms are assigned to the same chunk ID. However LAMMPS has no simple way to insure this is the case, though you can use the ids once option when specifying the compute chunk/atom command. Note that if this is not the case, the MSD calculation does not have a sensible meaning.
This is why most people use chunk/atom with the molecule keyword. You should note, however, that compute msd/chunk with compute chunk/atom molecule will still not give you exactly the same result as compute msd, because the former computes the MSD of the COM of the molecule, while the latter computes the MSD of each atom.
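The COM-versus-atoms distinction is easy to see in a toy example. Here is a minimal Python sketch (plain NumPy, entirely made up for illustration, not part of the original scripts): two equal-mass atoms of one "molecule" vibrate in antiphase, so every atom is displaced while the molecule's center of mass never moves.

```python
import numpy as np

# Two equal-mass atoms of one "molecule" vibrating in antiphase:
# each atom moves, but the molecule's centre of mass stays put.
t = np.linspace(0.0, 10.0, 101)
atom1 = 0.5 * np.sin(t)       # x(t) of atom 1
atom2 = -0.5 * np.sin(t)      # x(t) of atom 2 (mirror image)
com = 0.5 * (atom1 + atom2)   # centre of mass (equal masses)

# squared displacement from the t=0 position, per frame
sd_atoms = 0.5 * ((atom1 - atom1[0]) ** 2 + (atom2 - atom2[0]) ** 2)
sd_com = (com - com[0]) ** 2

print(sd_atoms.max())  # > 0: the atoms clearly move
print(sd_com.max())    # 0.0: the COM never moves at all
```

A per-atom average (what compute msd reports) picks up the intramolecular vibration, while the COM displacement (what compute msd/chunk uses per chunk) does not, so the two numbers differ even with a perfectly consistent chunk assignment.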
You mention, however, that
I have no idea what this means, so you should explain more precisely, maybe with a diagram. Computing the MSD of atoms (or molecules) assigned to a spatial bin doesn't make sense, because atoms move between bins over time.
Thank you for your answer. I understand now that my current approach didn't make sense. But then another question occurs to me: why does LAMMPS have the "ids every" option in the chunk/atom command? Is there any situation where this keyword is useful?
The model is such that the water molecules are packed between two parallel walls, like a sandwich.
What I would like to do is, for example, if there are three chunks between the walls, divide the water molecules by position and calculate the MSD in each chunk: one near the wall, one in the middle, and one far from the wall (near the opposite wall).
I would then like to obtain the diffusion coefficient in each chunk and compare the diffusion coefficient of water near the wall with that of water away from the wall. If there is a better way to accomplish this, please let me know.
There are several other compute */chunk commands which make good use of spatial binning.
In terms of what you want to accomplish, one option could be to select a group of atoms near each wall initially, spread that group to include whole molecules, and compute the MSD of that group of molecules over a short burst of time. Then you could repeat the process, redefining the groups and computes in a loop.
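That loop might look roughly like this in a LAMMPS input script. This is only a hedged sketch: the region bounds, group/compute names, run length, and loop count are placeholders I made up, not values from your system.

```lammps
# -- sketch: windowed MSD of molecules initially near the lower wall --
variable i loop 10
label loop

region rlow block INF INF INF INF 0.0 5.0      # thin slab near the wall
group nearwall region rlow                     # atoms currently in the slab
group nearwall include molecule                # grow to whole molecules
compute msdlow nearwall msd com yes
fix avg nearwall ave/time 1 1 100 c_msdlow[4] file msd_low.${i}.out

run 5000                                       # short burst

uncompute msdlow
unfix avg
group nearwall delete
region rlow delete

next i
jump SELF loop
```

Because compute msd stores the reference positions when it is defined, redefining the group and compute at the top of each iteration restarts the MSD window from the particles' current positions.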
There is a problem with this on the conceptual level. The Einstein relation used to derive D from averaging over the MSD of individual particles only holds in the long time limit. However, if you divide your system into spatial segments, particles will move in and out of those segments. In the long time limit, if you keep following particles based on their initial assignments, this will become an increasingly inconsistent assignment to segments, and thus the resulting MSD has to eventually become the same for each segment and for the entire system. If you, on the other hand, add a constraint to only consider particles that remain in the same spatial segment, you are adding a bias toward a lower value of D, since only particles that diffuse less will conform to that condition.
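The bias from keeping only "stayers" is easy to demonstrate numerically. A minimal sketch with 1D random walks (all parameters invented for illustration; this is not a water simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D random walks: 5000 particles, 200 unit steps, all starting at x = 0
nwalk, nstep = 5000, 200
steps = rng.choice([-1.0, 1.0], size=(nwalk, nstep))
pos = np.cumsum(steps, axis=1)

# MSD at the final time, averaged over all particles
msd_all = np.mean(pos[:, -1] ** 2)

# same MSD, but only over particles that never left the slab |x| <= 10
stayed = np.all(np.abs(pos) <= 10.0, axis=1)
msd_stay = np.mean(pos[stayed, -1] ** 2)

print(msd_all)   # ~ nstep = 200, the unbiased diffusive value
print(msd_stay)  # much smaller: the "stayers" are the slow movers
```

Restricting the average to particles that never leave the slab systematically selects the least mobile trajectories, so the apparent D in the segment is biased low, exactly as argued above.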
Thus computing D as a property for this kind of thing is conceptually flawed. You would have to come up with some different property to get a measure for mobility, and be aware that you have to make choices in how you define and limit this, and how well you can identify this property with individual segments as particles move in and out of them. One example that does not require a "history" would be monitoring the kinetic energy. You can use time averaging to remove noise. Another property to look at could be the velocity auto-correlation function (VACF). Strictly speaking, it has the same long time behavior (in fact, you can transform a VACF into an MSD by repeated integration, or an MSD into a VACF by repeated differentiation), but it has a somewhat different convergence behavior with respect to the history. The VACF becomes increasingly noisy with correlation time, so if you stick to short correlation times, you will have some error in the result, but not as bad as for MSDs.
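For reference, the MSD-VACF connection used above is the standard textbook relation (written here for a 3D system; nothing here is specific to this thread):

```latex
% displacement is the time integral of velocity, which gives
\left\langle \left| \mathbf{r}(t)-\mathbf{r}(0) \right|^{2} \right\rangle
  = 2 \int_{0}^{t} (t-\tau)\, C_{v}(\tau)\, \mathrm{d}\tau ,
\qquad
C_{v}(\tau) = \left\langle \mathbf{v}(0)\cdot\mathbf{v}(\tau) \right\rangle ,

% so the diffusion coefficient can be obtained either way
% (Einstein relation and Green--Kubo relation, respectively):
D = \lim_{t\to\infty}
      \frac{\left\langle \left| \mathbf{r}(t)-\mathbf{r}(0) \right|^{2} \right\rangle}{6t}
  = \frac{1}{3} \int_{0}^{\infty} C_{v}(\tau)\, \mathrm{d}\tau .
```

Differentiating the first relation twice recovers the VACF from the MSD, which is the transformation mentioned above.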
The standard (starting) reference for doing this sort of calculation is Liu, Harder and Berne: https://pubs.acs.org/doi/abs/10.1021/jp0375057 . Please note that this is easiest to do in post-processing (and only requires positions), but you will definitely have to write your own analysis code on top of a suitable toolkit (e.g. Python with MDAnalysis, or TRAVIS).