What precision does Berendsen thermostat use?

Dear All

Using LAMMPS 5th June 2019

I have an APL program which does velocity Verlet integration on an LJ melt with a Berendsen thermostat, along the lines of

http://nznano.blogspot.com/2017/11/molecular-dynamics-in-python.html
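For concreteness, the thermostat step is the standard Berendsen weak-coupling rescale (sketched here in Python rather than my actual APL; reduced units with kB = 1 and the coupling constant `tau` are assumptions of the sketch):

```python
import numpy as np

def berendsen_rescale(vel, T_target, T_current, dt, tau):
    """Standard Berendsen weak-coupling velocity rescale, applied once
    per timestep: lambda = sqrt(1 + (dt/tau) * (T_target/T_current - 1))."""
    lam = np.sqrt(1.0 + (dt / tau) * (T_target / T_current - 1.0))
    return lam * vel

# usage: after the velocity-Verlet update, rescale all velocities
vel = np.array([[0.5, -0.2], [-0.5, 0.2]])
vel = berendsen_rescale(vel, T_target=1.44, T_current=1.2, dt=0.005, tau=0.1)
```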

I have modified the LAMMPS script provided at the bottom of the page linked above to more precisely match the description above. My modified script is attached to this email.

In an attempt to verify my results, I begin by creating randomly positioned atoms in LAMMPS. I then use the output of a high-precision dump for step 0 as the starting positions in my APL version. For a non-thermostatted simulation I get identical numbers (position dump x y for a 2D sim) well past 5000 steps (using 64-bit binary floats). However, when I implement a thermostat my results deviate after only a handful of steps.

One thing I noticed is that despite believing I am using the same method to calculate the temperature, I seem to have some kind of precision/rounding error when comparing to the temperature dump from LAMMPS (thermo).

E.g. calculating the temperature for step 0:

APL: 1.440000000000000155291799654865146 (using ⎕FR←1287, i.e. 128-bit decimal floats)

LAMMPS: 1.44000000000000016875389974302379414 (Where do the additional digits come from?)

From my searches LAMMPS uses double-precision floats by default, so where does this extra precision come from? How is the temperature calculated from the velocity in the Berendsen thermostat?


I also noticed that the help page says that compute temp uses KE = (dim/2) N k T, but the output I get matches KE = (dim/2) (N-1) k T

(i.e. when calculating T, I use T←(+/,real_vel*2)÷dim×(n-1)). Why does the temperature calculation divide by N-1 instead of N? Is it something to do with root mean square?

I apologise for the weird formatting of this email.

Kind regards,

Rich Park


ljmelt.in (2.13 KB)

> Dear All
>
> Using LAMMPS 5th June 2019
>
> I have an APL program which does velocity Verlet integration on an LJ melt with a Berendsen thermostat, along the lines of

APL? Wow! Now there is a “blast from the past”. :wink:

> http://nznano.blogspot.com/2017/11/molecular-dynamics-in-python.html

> I have modified the LAMMPS script provided at the bottom of the page linked above to more precisely match the description above. My modified script is attached to this email.
>
> In an attempt to verify my results, I begin by creating randomly positioned atoms in LAMMPS. I then use the output of a high-precision dump for step 0 as the starting positions in my APL version. For a non-thermostatted simulation I get identical numbers (position dump x y for a 2D sim) well past 5000 steps (using 64-bit binary floats). However, when I implement a thermostat my results deviate after only a handful of steps.
>
> One thing I noticed is that despite believing I am using the same method to calculate the temperature, I seem to have some kind of precision/rounding error when comparing to the temperature dump from LAMMPS (thermo).
>
> E.g. calculating the temperature for step 0:
>
> APL: 1.440000000000000155291799654865146 (using ⎕FR←1287, i.e. 128-bit decimal floats)
>
> LAMMPS: 1.44000000000000016875389974302379414 (Where do the additional digits come from?)
>
> From my searches LAMMPS uses double-precision floats by default, so where does this extra precision come from? How is the temperature calculated from the velocity in the Berendsen thermostat?

There is no extra precision; the numbers are bogus. With 64-bit floating point you can represent only a little over 15 decimal digits of precision; the remaining digits are just artifacts of converting base-2 numbers to base 10. Not all numbers in base-10 scientific format can be exactly represented in base 2; the classical example is 0.1. The APL output doesn’t actually look that much different. Could it be that the numbers are converted to doubles before they are printed?
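To see this concretely (a quick Python illustration, nothing LAMMPS-specific):

```python
from decimal import Decimal

# A 64-bit double cannot store most decimal fractions exactly. Converting
# one to Decimal shows the exact binary value that is actually stored:
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# Printing 1.44 to 35 digits likewise just exposes the nearest double;
# everything past ~17 significant digits is a conversion artifact,
# not extra precision.
print(f"{1.44:.35f}")
```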

The temperature in the Berendsen thermostat comes from an instance of compute temp (unless replaced by a different compute instance via fix_modify).


> I also noticed that the help page [https://lammps.sandia.gov/doc/compute_temp.html](https://lammps.sandia.gov/doc/compute_temp.html) says that compute temp uses KE = (dim/2) N k T, but the output I get matches KE = (dim/2) (N-1) k T
>
> (i.e. when calculating T, I use T←(+/,real_vel*2)÷dim×(n-1)). Why does the temperature calculation divide by N-1 instead of N? Is it something to do with root mean square?

A bulk system is invariant to translation, and thus the number of degrees of freedom is 3N-3 and not 3N; compute temp takes that into consideration.
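In other words (a minimal Python sketch of that dof accounting, not the LAMMPS source; reduced units with kB = 1 are assumed, and dim = 2 matches the 2D case above where dof = dim*N - dim = dim*(N-1)):

```python
import numpy as np

def compute_temp_style(vel, masses, dim=2, kB=1.0):
    """Temperature the way compute temp defines it for a bulk system:
    KE = (dof/2) * kB * T with dof = dim*N - dim, because the
    centre-of-mass translation carries no thermal energy."""
    N = len(vel)
    ke2 = np.sum(masses[:, None] * vel**2)   # twice the kinetic energy
    dof = dim * N - dim                       # = dim * (N - 1)
    return ke2 / (dof * kB)

# tiny check: two unit masses moving apart at +1/-1 along x in a 2D box
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(compute_temp_style(vel, np.ones(2)))   # prints 1.0
```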

axel.