# Repeated calculations of pressure, stress yield different results. Why?

Hi colleagues:
Please forgive the new-user-ish nature of this question. I am not sure how the back end computes elastic properties, and I am puzzled that issuing two commands like

```
run 200
run 200
```

and then comparing the last step of the first 200-step run with the first step of the second run yields:

- the same positions, velocities, forces, and strain (we are straining at a fixed rate)
- different pressure and components of the averaged shear tensor.

The difference is in the 4th significant figure, but c’mon … these should be identical.

If I haven’t bored anyone with this question, read on for a few details of our script:

1. find shear stress via

```
compute shearStr all stress/atom thermo_temp virial
```

2. for a group of particles “midparticles” find elements of the shear tensor (we are in 2d)

```
compute shearPerAtommid midparticles reduce ave c_shearStr[1] c_shearStr[2] c_shearStr[3] c_shearStr[4] c_shearStr[5] c_shearStr[6]
```

3. store some values, e.g. “shearXYmid”

```
variable shearXYmid equal c_shearPerAtommid[4]
```

4. prepare to write some variables during the run … you can see the variable “shearXYmid” among others

```
thermo_style custom step temp pe etotal v_posxpart v_posypart v_vxpart v_vypart v_fxpart v_fypart press v_pviaSxxSyymid v_pviaSxxSyymidmove v_strain v_shearXYmid v_shearXYmidmove v_shearXYpins
```

5. The pair of successive run commands that I issued to try to debug

```
run 200
#Amy hack June 17 do another run immediately following the first
run 200
```

6. The baffling results for step 200, from the end of the first run and the start of the second run. From the pressure onward, the quantities that depend on the shear calculation differ slightly. Why is this OK?

From the end of the first run of 200 steps:

```
200 7.1804652e-06 0.0010179681 0.0010251388 24.382689 4.289681 -2.6758269e-05 0 0.0010188402 -0.00041719303 0.0041372664 0.0037520348 0.00370207 2e-06 0.00044548529 0.00047612634 0.00020610211
```

From the start of the second run of another 200 steps:

```
200 7.1804652e-06 0.0010179681 0.0010251388 24.382689 4.289681 -2.6758269e-05 0 0.0010188402 -0.00041719303 0.0041370008 0.0037516941 0.0037017325 2e-06 0.00044539039 0.00047602809 0.00020603331
```

There is not enough general information here, and too many details without sufficient context, to be able to tell anything.

LAMMPS has a large variety of potentials, and there are many different ways to set up a simulation, so the inconsistency you are seeing can have many different explanations.

For a case like yours it would be best if you could construct a minimal test case showing exactly (and only) the inconsistency you are seeing. If it is easy to review the entire calculation and rerun it from scratch, it will be easy to provide advice and/or an explanation.

Could you perhaps give a couple of “explanations” for a case where thermo would report that a group of particles has the same x, y, vx, vy, fx, fy but a different press?

Please note that nothing has been redefined … we just ask for this information twice.

That is, the input script has two commands that follow each other directly:

```
run <some number of steps, N>
run <some other number of steps, M>
```

The difference in the variable press is between the Nth step of the first run and the first step of the second run. Our positions, velocities, forces … are identical to 10 significant figures, but press agrees to only 5 significant figures.
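For concreteness, the press we are comparing is the standard kinetic-plus-virial pressure. Here is a minimal 2d sketch of that textbook formula (the numbers are purely illustrative, not from our script), which makes explicit that pressure depends on velocities as well as on positions and forces:

```python
# Textbook virial pressure in d dimensions (illustrative sketch, not
# LAMMPS source code):
#   P = ( sum_i m_i |v_i|^2 + sum_i r_i . f_i ) / (d * V)

def pressure(masses, velocities, positions, forces, volume, dim=2):
    kinetic = sum(m * (vx * vx + vy * vy)
                  for m, (vx, vy) in zip(masses, velocities))
    virial = sum(x * fx + y * fy
                 for (x, y), (fx, fy) in zip(positions, forces))
    return (kinetic + virial) / (dim * volume)

# Made-up 2-particle, 2d state.
masses = [1.0, 1.0]
velocities = [(0.1, 0.0), (0.0, 0.1)]
positions = [(1.0, 0.0), (0.0, 1.0)]
forces = [(-1.0, 0.0), (0.0, -1.0)]

print(pressure(masses, velocities, positions, forces, volume=4.0))
```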

Take care!

Not without seeing a representative input deck and having a closer look. There are too many possible reasons, and I don’t want to speculate, be wrong, and give others who look this up later the wrong idea.

To demonstrate my point: when I modify the “melt” example to add these commands after the `fix nve` line:

```
thermo_style custom step pxx pyy pzz pxy pxz pyz fmax
run             200 post no
run             200 post no
```

I get the output:

```
Setting up Verlet run ...
Unit style    : lj
Current step  : 0
Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2.706 | 2.706 | 2.706 Mbytes
Step          Pxx            Pyy            Pzz            Pxy            Pxz            Pyz            Fmax
0  -3.6758692     -3.74368       -3.6905021     -8.4991896e-05 -0.0045207203   0.0057723796   9.7505337e-14
200   5.7410369      6.074666       5.8259265     -0.081604507   -0.10256553    -0.18256999     154.06744
Loop time of 0.115732 on 4 procs for 200 steps with 4000 atoms

Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule
Setting up Verlet run ...
Unit style    : lj
Current step  : 200
Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2.706 | 2.706 | 2.706 Mbytes
Step          Pxx            Pyy            Pzz            Pxy            Pxz            Pyz            Fmax
200   5.7410369      6.074666       5.8259265     -0.081604507   -0.10256553    -0.18256999     154.06744
400   5.9896497      5.8283778      5.809605       0.17546034     0.01533225    -0.090485448    204.48186
Loop time of 0.106521 on 4 procs for 200 steps with 4000 atoms
```

Which shows that the specific settings matter: with this standard setup, the step-200 values are identical at the end of the first run and the start of the second.

If you claim that something is wrong, you have to at the very least provide enough evidence that it can be independently confirmed.

Thank you, Axel, for this demonstration of when things “go right”. We so appreciate your effort!

I have not closely followed the “melt” example, but I get the gist. I dived into our input and turned our specialized dpd pair_style into a good old lj pair_style. Voilà, the problem went away and we saw what you did: consistent pressures when runs restarted.

My wise colleague Katharina Vollmayr-Lee remarked that because our potential depends on velocity, there is some subtlety in when the pressure and stress calculations are done. Half-time-step velocity values are used, which is not obvious to a user who only looks at the full-time-step values. This produced the baffling result that the same positions, velocities, and forces gave a different pressure.
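A toy sketch of that subtlety, assuming a velocity-Verlet-style update where the end-of-step force (and hence the virial) is evaluated with the half-step velocity; all names and constants here are illustrative, not LAMMPS internals:

```python
# Toy 1d velocity-Verlet integrator with a velocity-dependent (DPD-like)
# drag force, sketching why restarting a run can recompute a slightly
# different virial even though positions and full-step velocities match.
# Illustrative constants only, not LAMMPS internals.

K = 1.0      # spring constant of the conservative force
GAMMA = 0.5  # drag coefficient of the velocity-dependent force
DT = 0.005   # timestep

def force(x, v):
    """Conservative spring plus velocity-dependent drag."""
    return -K * x - GAMMA * v

def vv_step(x, v, f):
    """One velocity-Verlet step. The end-of-step force is evaluated with
    the HALF-step velocity, since the full-step velocity is not known yet."""
    v_half = v + 0.5 * DT * f
    x_new = x + DT * v_half
    f_new = force(x_new, v_half)       # force sees v at the half step
    v_new = v_half + 0.5 * DT * f_new  # full-step velocity
    return x_new, v_new, f_new

x, v = 1.0, 0.2
f = force(x, v)
for _ in range(200):
    x, v, f = vv_step(x, v, f)

# End of the first run: the virial-like term r.f uses the force that
# was evaluated with the half-step velocity.
virial_end_of_run = x * f

# Start of a second run: setup re-evaluates the force from the stored
# FULL-step velocity, so the velocity-dependent part shifts slightly,
# even though x and v are bit-for-bit the same.
virial_start_of_run = x * force(x, v)

print(virial_end_of_run, virial_start_of_run)
```

The two printed virials agree to several significant figures but are not identical, which mirrors the 4th-significant-figure discrepancy we saw.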

Again, we are so very grateful for your attention to our issue! Take care!

Had you reported your full input, or at least the fact that you are using DPD, you could have saved us both some time and effort.