I’m currently working on a simulation to plot the pressure distribution of an Ar liquid-vapor system, and I’m encountering some discrepancies between two different methods for calculating the pressure profile.
Here are the two methods I came across in the documentation and while searching:
Approach 1:
compute myChunk all chunk/atom bin/1d y lower 2 units box
compute stress all stress/atom NULL
compute binpress all reduce/chunk myChunk sum c_stress[*]
fix avg all ave/time 1 20000 20000 c_binpress[*] mode vector file ave_stress.txt
Approach 2:
compute myChunk all chunk/atom bin/1d y lower 2 units box
compute stTensors all stress/atom NULL
fix avg_stress all ave/chunk 1 60000 60000 myChunk c_stTensors[1] c_stTensors[2] c_stTensors[3] c_stTensors[4] c_stTensors[5] c_stTensors[6] file stTen.txt
The first approach (using reduce/chunk) gives results that are fairly close to what I found in the literature, but the second approach (using ave/chunk) produces results that are quite different, nowhere near the literature values.
So, my questions are:
Is the discrepancy between these two methods likely due to something wrong in my post-processing, or are there fundamental differences between the two approaches that could explain the variation?
From reading the documentation for ave/chunk, it seems to do something similar to reduce/chunk, so I’m wondering why the results are so different between the two. Is my understanding wrong?
Any advice or insights would be really helpful in understanding this better. I’m just trying to make sure I’m using the right approach, and any clarification would be much appreciated!
Here are two ways I get the pressure profile for a planar liquid-vapor interface. I’ve never used reduce/chunk (your approach #1).
compute xbins all chunk/atom bin/1d x lower ${delbin} units box
compute speratom all stress/atom NULL
variable PN atom c_speratom[1]
variable PT atom 0.5*(c_speratom[2]+c_speratom[3])
fix stressprof all ave/chunk ${nevery} ${nrepeat} ${nfreq} xbins density/mass v_PN v_PT norm none ave running file stressprof_H_${JOBID}.dat
Then you need to scale the values in a post-processing step: with norm none the fix writes per-bin sums of the per-atom stress (which is in pressure*volume units and has the opposite sign of pressure), so divide by the bin volume and flip the sign to get the pressure:
vol = Lx*Ly*Lz
scale = vol/nbins
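In case it helps, here is a minimal Python sketch of that post-processing step. It assumes the column layout the fix above writes (Chunk Coord1 Ncount density/mass v_PN v_PT); the box dimensions, bin count, and file name are example values you would replace with your own:

import numpy as np

# Example values -- replace with your box lengths and chunk count
Lx, Ly, Lz = 40.0, 40.0, 40.0
nbins = 80
binvol = Lx * Ly * Lz / nbins    # volume of one bin

# Read the last frame of the ave/chunk file, skipping '#' headers
# and the per-frame "timestep number-of-chunks" lines.
rows = []
with open("stressprof_H_1.dat") as f:   # example file name
    for line in f:
        if line.startswith("#"):
            continue
        parts = line.split()
        if len(parts) == 2:      # frame header: keep only the last frame
            rows = []
            continue
        rows.append([float(v) for v in parts])

data = np.array(rows)
x = data[:, 1]                   # bin center coordinate
# per-atom stress is in pressure*volume units with the opposite sign
# of pressure, so divide the bin sums by binvol and negate
PN = -data[:, 4] / binvol        # normal pressure profile
PT = -data[:, 5] / binvol        # tangential pressure profile
np.savetxt("pressure_profile.txt", np.column_stack([x, PN, PT]))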
Note that the stress/cartesian method below is the Irving-Kirkwood method, which satisfies mechanical equilibrium, while the ave/chunk binning (your approach #2) is not technically correct (though it may still be useful).
compute p all stress/cartesian x ${delbin} NULL 0
fix 3 all ave/time ${nevery} ${nrepeat} ${nfreq} c_p[*] ave running file stressprof_IK_${JOBID}.dat mode vector
I would suggest starting with a bulk liquid system and making sure you can match the value of your pressure profile output to the average pressure reported by LAMMPS, then moving on to a two-phase system.
I am not surprised. In the first case you compute the sum of the stress components over each chunk (which is correct); in the second case, the average over each chunk (which is not). Check out the compute stress/atom documentation, which explains how to compute the total pressure from stress/atom, and compare this to the directly computed pressure. You then have to do the same thing per chunk instead of for the entire box.
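For reference, the whole-box recipe from the compute stress/atom doc page looks like this (compute IDs are arbitrary; v_press should reproduce the thermo press for a group spanning the entire box):

compute peratom all stress/atom NULL
compute p all reduce sum c_peratom[1] c_peratom[2] c_peratom[3]
variable press equal -(c_p[1]+c_p[2]+c_p[3])/(3*vol)
thermo_style custom step temp etotal press v_press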
If I read the docs correctly, the difference is the “norm none” setting, which turns the average into a sum:
If the norm setting is none, a similar computation as for the sample setting is done, except the individual “average sample values” are “summed sample values”. A summed sample value is simply the chunk value summed over atoms in the sample, without dividing by the number of atoms in the sample.
Been there, done that, can confirm. Since I struggled a bit with this a couple of years back, I think a couple of formulas can help illustrate the docs.
The default all setting does the “total sum/total count” thing, which is something like \frac{\sum_{i=1}^N P_i}{N}, with P_i the contribution from atom i to the chunk value and N the total number of contributing atoms across all evaluations of the chunk.
The sample mode would then be \frac{1}{K}\sum_{k=1}^K\frac{\sum_{i=1}^{N_k}P_i}{N_k}, where K is the number of Nrepeat evaluations and N_k the number of atoms in the chunk at each evaluation. Hence the “average of an average” name.
In the none mode, the formula simply becomes \frac{1}{K}\sum_{k=1}^K\sum_{i=1}^{N_k}P_i. It is then obvious that compute reduce is equivalent to the inner sum in the formula, and why ave/time then does the averaging correctly.
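To make it concrete, here is a toy example with made-up numbers: say K = 2 samples, where sample 1 has N_1 = 2 atoms with values \{1, 3\} and sample 2 has N_2 = 4 atoms with values \{2, 2, 2, 2\}. Then:
all: \frac{1+3+2+2+2+2}{6} = 2
sample: \frac{1}{2}\left(\frac{1+3}{2} + \frac{2+2+2+2}{4}\right) = 2
none: \frac{1}{2}\left((1+3) + (2+2+2+2)\right) = 6
So none gives you (sample-averaged) per-chunk sums, which is what you want when you plan to divide by the bin volume yourself.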