Hi,

I am testing the thermal conductivity example in.langevin from the KAPPA examples directory, with some appropriate changes.

The details are as follows:

I have defined the lattice and region:

lattice sc 1.20

region box block 0 10 0 10 0 50
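For reference, the full setup implied by these two lines might look like the following sketch (units, atom style, mass, pair style, and coefficients are assumptions carried over from in.langevin, not stated in this mail):

```
# assumed setup around the lattice/region commands above (after in.langevin)
units lj
atom_style atomic
lattice sc 1.20
region box block 0 10 0 10 0 50
create_box 1 box
create_atoms 1 box
mass 1 1.0
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0
```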

Then the heat layers:

region hot block INF INF INF INF 0 5

region cold block INF INF INF INF 25 30

compute Thot all temp/region hot

compute Tcold all temp/region cold

Then I run for two million steps (default timestep dt=0.005) with fix nvt at T=1.0:

fix 1 all nvt temp 1.0 1.0 0.5

run 2000000

To establish the heat current between the heat layers, I thermostat the hot and cold regions with the following fixes for 5 million steps:

fix 1 all nve

fix hot all langevin 1.05 1.05 1.0 59804 tally yes

fix cold all langevin 0.95 0.95 1.0 287859 tally yes

fix_modify hot temp Thot

fix_modify cold temp Tcold

run 5000000

Then I assume that a steady state has been reached and start collecting data. Hence, I reset the fixes hot & cold and their fix_modify commands and run for a further ten million steps:
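Spelled out, the thermostat reset at the start of this averaging stage might look like the following sketch (the v_tdiff definition is an assumption, following in.langevin, as the difference of the two region temperatures):

```
# re-apply the slab thermostats for the averaging stage
fix hot all langevin 1.05 1.05 1.0 59804 tally yes
fix cold all langevin 0.95 0.95 1.0 287859 tally yes
fix_modify hot temp Thot
fix_modify cold temp Tcold
# temperature difference averaged by fix ave/time (assumed, as in in.langevin)
variable tdiff equal c_Thot-c_Tcold
```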

fix ave all ave/time 5 100 500 v_tdiff ave running

thermo_style custom step temp c_Thot c_Tcold f_hot f_cold v_tdiff f_ave

run 10000000

I calculate the averages of c_Thot and c_Tcold over the last 10 million steps. I expect the averages to be close to 1.05 and 0.95, but they come out to be 1.04038672 and 0.963709826.

Can anyone explain why the average values are so far from the expected values? And could you please suggest what I can do to get more precise averages?

N.B. For any point not mentioned in this mail, assume it is the same as in the example in.langevin. I also tried this example with the Berendsen thermostat in place of the Langevin thermostat; in that case the averages of Thot and Tcold are a little better, but still far from 1.05 and 0.95.

Vinay Vaibhav

PhD Student, Theoretical Physics

The Institute of Mathematical Sciences Chennai, India


you are doing a *non-equilibrium* simulation and a 1% deviation is not that much for the choice of parameters. the averages *are* precise.

apart from that, why do you worry? for the computation of the thermal conductivity you are to use the averaged *actual* temperatures anyway.

axel.

Hi Vinay,

For a local Langevin thermostat in NEMD, I also noticed a small mismatch between the specified target temperature and the NEMD average of the respective slab. From my experience this difference depends on the choice of your damping parameter. Try using a smaller value for damp to see if you get better agreement.

However, if you want to specify the temperature exactly, you might want to consider temp/rescale, applied locally using dynamic groups, instead of fix langevin. This basically corresponds to a local Gaussian thermostat where you fix the kinetic energy exactly (apart from some higher-order terms in dt).
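A minimal sketch of this suggestion, with local temp/rescale applied through dynamic groups (group names, update interval, temperature window, and rescale fraction are illustrative, not from the original thread):

```
# rescale each slab's kinetic energy locally via dynamic region groups
group hotatoms dynamic all region hot every 1
group coldatoms dynamic all region cold every 1
# window 0.0 and fraction 1.0 rescale fully on every application
fix rhot hotatoms temp/rescale 1 1.05 1.05 0.0 1.0
fix rcold coldatoms temp/rescale 1 0.95 0.95 0.0 1.0
```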

Peter


temp/rescale is a bad idea. it gives the group you thermostat a "kick" every time it hits.

axel.

Hi

As Axel said, a 1 percent error is not that much, especially in MD, which suffers from statistical uncertainties. Peter's suggestions are very helpful.

I also suggest using "compute temp/region" for calculating the temperatures.

In fact, if you compare the results of the method used in in.langevin (the "compute ke/atom" command) with the "compute temp/region" command, you will notice that the result of the former is smaller by about (1/Natoms)*Temp (which for a region of 100 atoms creates about a 1% error). This is because the latter, by treating "fix nve" as a constraint, reduces the degrees of freedom of the region by one, which is not accounted for in in.langevin.

Or equivalently, you could multiply "variable temp" by Natoms and divide by (Natoms-1), which reflects the real number of degrees of freedom.

However, I am not completely sure about the above statements, so any corrections will be appreciated.

regards

I should add:

Maybe the difference between the results of the two commands is due to the three constraints imposed by conserving the linear momentum of the system along the 3 coordinate axes. This causes the number of degrees of freedom (dof) to be reduced by three; subtracting them from 3N gives dof = 3N - 3 = 3(N-1), which looks as if Natoms were reduced by one.

Therefore, multiplying "variable temp" by Natoms and then dividing it by (Natoms-1) would be equivalent to using the "compute temp/region" command.
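As a quick numeric sanity check of this N/(N-1) correction (the slab size of 100 atoms and the uncorrected temperature are illustrative values, not from the thread):

```python
# The per-atom KE route averages over 3*N degrees of freedom, while
# compute temp/region with the momentum constraint effectively uses
# 3*(N-1); the two temperatures therefore differ by a factor N/(N-1).
N = 100                # atoms in the slab (illustrative)
T_ke_atom = 0.99       # temperature from the ke/atom route (illustrative)
T_corrected = T_ke_atom * N / (N - 1)
print(T_corrected)     # about 1% larger for N = 100
```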

Hi,

As per LAMMPS documentation section 3.8, the pair style lj/smooth/linear cannot be accelerated using the GPU package. Currently, I am using this pair potential for my simulations with LAMMPS on CPUs.

Can anyone suggest a way to accelerate this pair potential on a GPU?

How difficult would it be to write pair_lj_smooth_linear_gpu.cpp and pair_lj_smooth_linear_gpu.h, following pair_lj_cut_gpu.cpp and pair_lj_cut_gpu.h respectively?

With thanks and regards,

Vinay Vaibhav

Ph.D. Student, Theoretical Physics

The Institute of Mathematical Sciences Chennai, India


you could try using the pair_write command to write out your potential to a table file and then use the GPU-accelerated table style.
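A minimal sketch of that workflow (the number of table points, inner/outer cutoffs, and the file and section names are illustrative):

```
# 1) in a CPU run with the original potential, tabulate it:
pair_style lj/smooth/linear 2.5
pair_coeff 1 1 1.0 1.0
pair_write 1 1 2000 r 0.8 2.5 lj_smooth.table LJSL

# 2) in the production input, read the table back with the GPU style:
pair_style table/gpu linear 2000
pair_coeff 1 1 lj_smooth.table LJSL
```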


as for writing the GPU version of the pair style yourself: that depends entirely on your programming abilities.

axel.

You could also write a Kokkos version of the lj/smooth/linear pair style. There are several examples to follow, such as /src/KOKKOS/pair_lj_cut_kokkos.cpp. You write the pair style in C++, and then the Kokkos library will map it onto CUDA so it can run on GPUs.
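Once such a Kokkos pair style is compiled in, it is selected at run time via the KOKKOS package command-line switches; a typical invocation (executable name, GPU count, and input file are illustrative) looks like:

```
lmp -k on g 1 -sf kk -in in.langevin
```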

Stan

Stan’s suggestion is super simple if you follow this document.

porting-pair-styles-to-kokkos.txt (13.5 KB)