thermal conductivity calculation in LAMMPS

Hi,
We are having trouble applying the thermal conductivity calculation to our bulk ZnO system in LAMMPS. We have studied the example input for the solid Argon thermal conductivity calculation. Solid Argon is a homogeneous system, while ZnO is not, so I don't think we can compute myKE with ke/atom any more; the same reasoning applies to myPE and myStress. So now I'm wondering: should we treat this in some alternative way? Could anyone please help us with this?

------------------------example for Argon-------------------------

# Sample LAMMPS input script for thermal conductivity of solid Ar 
units       real
variable    T equal 70
variable    V equal vol
variable    dt equal 4.0
variable    p equal 200     # correlation length
variable    s equal 10      # sample interval
variable    d equal $p*$s   # dump interval 
# convert from LAMMPS real units to SI 
variable    kB equal 1.3806504e-23    # [J/K] Boltzmann
variable    kCal2J equal 4186.0/6.02214e23
variable    A2m equal 1.0e-10
variable    fs2s equal 1.0e-15
variable    convert equal ${kCal2J}*${kCal2J}/${fs2s}/${A2m} 
# setup problem 
dimension    3
boundary     p p p
lattice      fcc 5.376 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1
region       box block 0 4 0 4 0 4
create_box   1 box
create_atoms 1 box
mass        1 39.948
pair_style   lj/cut 13.0
pair_coeff   * * 0.2381 3.405
timestep     ${dt}
thermo      $d 
# equilibration and thermalization 
velocity     all create $T 102486 mom yes rot yes dist gaussian
fix          NVT all nvt temp $T $T 10 drag 0.2
run          8000 
# thermal conductivity calculation, switch to NVE if desired 
#unfix       NVT
#fix         NVE all nve 
reset_timestep 0
compute      myKE all ke/atom
compute      myPE all pe/atom
compute      myStress all stress/atom virial
compute      flux all heat/flux myKE myPE myStress
variable     Jx equal c_flux[1]/vol
variable     Jy equal c_flux[2]/vol
variable     Jz equal c_flux[3]/vol
fix          JJ all ave/correlate $s $p $d &
             c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt.dat ave running
variable     scale equal ${convert}/${kB}/$T/$T/$V*$s*${dt}
variable     k11 equal trap(f_JJ[3])*${scale}
variable     k22 equal trap(f_JJ[4])*${scale}
variable     k33 equal trap(f_JJ[5])*${scale}
thermo_style custom step temp v_Jx v_Jy v_Jz v_k11 v_k22 v_k33
run          100000
variable     k equal (v_k11+v_k22+v_k33)/3.0
variable     ndens equal count(all)/vol
print        "average conductivity: $k[W/mK] @ $T K, ${ndens} /A^3" 
------------------------end of example------------------------

Thanks!

Zhou

You can look at Section 6.20 of the manual. It describes 3 ways
to compute thermal conductivity with LAMMPS. Maybe another method
would work better for your system.

Not sure why you say compute ke/atom or stress/atom or pe/atom
won't work for ZnO. Ke/atom is 1/2 m v^2, and that uses the
mass of Zn or O as appropriate. Ditto for the other computes.

Steve

Hi Zhou,

I do not understand why ZnO isn't a homogeneous system, in contrast to Ar.
It doesn't depend on how many basis atoms you have in the unit cell;
it's still a periodic system.
How to apply the GK formalism to calculate thermal conductivity in a
non-uniform system (boundaries, cracks, ...) is a real question, but
that's not the case here.

Best,

German

Hi Steve
I'm just wondering: when we use ke/atom, for example, how can we define which atom (Zn or O) to use in the calculation? Or do we need to use a weighted average of them?

Zhou

Hi German
The reason I have a problem with this is that we are getting a pretty inaccurate result from this calculation. In a ZnO system there are three types of interactions: Zn-Zn, Zn-O, and O-O, while in Ar there is only one type of interaction, Ar-Ar. I don't know whether LAMMPS accounts for this sort of difference automatically. It may or may not affect the final result.

Regards,
Zhou

> Hi Steve
> I'm just wondering: when we use ke/atom, for example, how can we define
> which atom (Zn or O) to use in the calculation? Or do we need to use a
> weighted average of them?

i don't understand your question. ke/atom is collected - like the documentation
says - independently for *each* *single* atom. how you group and/or combine
it depends solely on how *you* do it. see the docs for the computes reduce
and reduce/region for more details.
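
a minimal sketch of such a grouping, assuming atom type 1 is Zn and
type 2 is O (the group and compute names are made up):

---------------------sketch: per-type kinetic energy sums---------------------
# assumes type 1 = Zn and type 2 = O; the names are arbitrary
group        Zn type 1
group        O  type 2
compute      keAll all ke/atom
# sum the same per-atom kinetic energy separately over each group
compute      keZn Zn reduce sum c_keAll
compute      keO  O  reduce sum c_keAll
thermo_style custom step temp c_keZn c_keO
-------------------------------end of sketch----------------------------------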

axel.

> Hi German
> The reason I have a problem with this is that we are getting a pretty
> inaccurate result from this calculation. In a ZnO system there are three
> types of interactions: Zn-Zn, Zn-O, and O-O, while in Ar there is only one
> type of interaction, Ar-Ar. I don't know whether LAMMPS accounts for this
> sort of difference automatically. It may or may not affect the final result.

what does the way you compute the individual force components
have to do (directly) with the kinetic energy and how it is transferred
from atom to atom? if you compute the flux of kinetic energy, then that
is simply the property you are looking at. it doesn't care which kinds
of atoms are causing it.
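
for reference, compute heat/flux combines exactly those per-atom pieces;
in the notation of its docs,

    J = \frac{1}{V} \left[ \sum_i e_i \mathbf{v}_i - \sum_i \mathbf{S}_i \cdot \mathbf{v}_i \right]

where e_i is the per-atom energy (the ke/atom plus pe/atom inputs), S_i is
the per-atom stress tensor, and v_i the velocity - all per-atom quantities,
defined the same way for every species.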

please make sure that you don't mix up the different methods
to compute transport of kinetic energy. in some the mass of
an atom matters, in others it doesn't.

axel.

Hi Zhou,

The number of interatomic pair interactions has nothing to do with the feasibility of the Green-Kubo method. First of all, you should check whether your adopted potential is able to reproduce some simple but important properties, like the structural parameters, before you use the potential to explain the thermal conductivity. For the GK calculation you may refer to our paper, Computational Materials Science 60 (2012) 123–129; one part of it discusses the GK method.
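
As an illustration of that first step, here is a minimal sketch of a
structural sanity check; data.ZnO and potential.ZnO are hypothetical files
standing in for your actual structure and force-field definition:

---------------------sketch: structural sanity check---------------------
# data.ZnO and potential.ZnO are placeholders: the structure file and a
# file containing the pair_style/pair_coeff lines of your ZnO potential
units        metal
boundary     p p p
atom_style   charge
read_data    data.ZnO
include      potential.ZnO
# relax atoms and cell, then compare lattice parameters with experiment
fix          relax all box/relax aniso 0.0
minimize     1.0e-8 1.0e-10 10000 100000
variable     a equal lx
variable     c equal lz
print        "relaxed cell: a = ${a} A, c = ${c} A"
-----------------------------end of sketch--------------------------------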

Hope it helps.
c

Hi Zhou,

It can also be a problem with the post-processing of your data.

1) Did you use enough "samples" to calculate the averaged value <JJ>?
Usually you need about 50000 "samples" (see the sketch below).
2) Is the time shift between "samples" large enough?
3) Did you take the "samples" from data in thermal equilibrium?

I did calculations for SiC with the MEAM potential and it worked.
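
For scale, a hedged sketch of what point 1 means for the Ar script above
(the 50000 target is German's rule of thumb, not a hard requirement):

---------------------sketch: run length vs. sample count---------------------
# with the Ar script's sample interval s = 10, a run of N steps supplies
# roughly N/10 samples of the heat flux; ~50000 samples therefore needs
# about 50000 * 10 = 500000 production steps
run          500000        # instead of "run 100000"
------------------------------end of sketch----------------------------------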

Best,

German

Dear German,

I am particularly interested in your second suggestion. How did you implement the time shift between 'samples' in LAMMPS?
Thanks.

Regards,
Christopher

Dear Christopher,

I cut the first 10^6 iterations - the equilibration.

From the rest of the J data, each set shifted by 200 iterations (in the
case of SiC with MEAM and a 1 fs timestep) was interpreted as a new
"sample"; this shift was enough for the "samples" to be treated as
independent. 25000 "samples" were used for the averaging.
Thus the "samples" overlap.

In other words (see also the sketch below):

from 10^6 to the end - sample 1
from 10^6 + 200 to the end - sample 2
from 10^6 + 400 to the end - sample 3
....
....
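
This overlapping-origin averaging is also what LAMMPS's fix ave/correlate
does internally when fed the heat flux: every stored sample serves as a new
time origin. A hedged sketch with German's numbers (the 200-step shift is
his; the window length of 2000 samples is an assumption):

---------------------sketch: overlapping origins in LAMMPS---------------------
# Nevery = 200 reproduces the 200-iteration shift between "samples";
# Nrepeat = 2000 and Nfreq = 400000 satisfy Nevery*Nrepeat <= Nfreq
fix JJ all ave/correlate 200 2000 400000 &
    c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt.dat ave running
--------------------------------end of sketch----------------------------------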

Best,

German

Hi Christopher
I'm quite interested in your paper. However, I don't have access to it through my school network. Would you mind sending it to me? Thanks!

Zhou

On Aug 1, 2012, 1:58 AM, Christopher Goh <christopher_gohwf@…8…> wrote:

Hi German
Do we still need to cut the first 10^6 iterations in the GK method? If so, how can we cut them?
Thanks!

Regards,
Zhou
On Aug 1, 2012, 3:19 PM, German Samolyuk <[email protected]...> wrote:

Hi Zhou,

It was in my case. If you start to save the J values only after your
system has reached equilibrium, you don't need to.
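
In script form this is the same pattern as the equilibration block of the
Ar example above; a minimal sketch (the 10^6-step run length is German's
number, the thermostat settings are placeholders):

---------------------sketch: equilibrate, then sample---------------------
fix          NVT all nvt temp $T $T 10   # equilibration; nothing is saved
run          1000000
reset_timestep 0
# only now define compute heat/flux and fix ave/correlate, so every
# stored J value comes from an equilibrated trajectory
-----------------------------end of sketch--------------------------------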

Best,
German

Dear German,

Thank you for your explanation. Does this mean that your sample size decreases with the sample number? As far as I understand, each sample will be 200 iterations shorter than the preceding one. Shouldn't we maintain a constant sample size throughout?

Actually, I have never implemented the concept of a time shift between samples in my simulations. I used, for example, after equilibration: 1 to 20000 = sample 1, 20001 to 40000 = sample 2, and so on. Did you check your result without the time shift?

Also, as far as I know, they tried to make the time shift as small as possible. For example, see [D. Nevins et al., Accurate computation of shear viscosity from equilibrium molecular dynamics simulations, Molecular Simulation, Vol. 33, No. 15, December 2007, 1261–1266]; ts in their Figure 2 is the 'time shift' you mean, and they used a constant sample size throughout the simulation. The choice of time shift is discussed in their Section 3.3, where they show that the time shift should be < 100 fs (in their simulation) to keep the error below 2% (see also their Figure 4).

Thanks.

Regards,
Christopher

Dear Christopher,

> Dear German,
>
> Thank you for your explanation. Does this mean that your sample size
> decreases with the sample number? As far as I understand, each sample
> will be 200 iterations shorter than the preceding one. Shouldn't we
> maintain a constant sample size throughout?

Yes, sure. I keep the length of all the samples equal to that of the
shortest one. I took this approach from the following publication, which
has a good illustration of overlapping samples:
H. Kaburaki, in Handbook of Materials Modeling, edited by S. Yip
(Springer, Dordrecht, 2005), p. 763.

> Actually, I have never implemented the concept of a time shift between
> samples in my simulations. I used, for example, after equilibration:
> 1 to 20000 = sample 1, 20001 to 40000 = sample 2, and so on. Did you
> check your result without the time shift?

If you use overlapping samples with a reasonable time shift, you can get
many more samples from one run.
I did not check the results with non-overlapping samples, but I always
check the dependence of the thermal conductivity on the time shift. For
SiC with MEAM, 200 * 1 fs is enough.
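
One way to run that check within a single simulation, as a hedged sketch:
accumulate two correlation functions with different time shifts and compare
the resulting conductivities (the fix IDs and numbers are illustrative):

---------------------sketch: two time shifts in one run---------------------
# same heat-flux compute, correlated with two different Nevery values
fix JJa all ave/correlate 10  200 2000  &
    c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt_s10.dat  ave running
fix JJb all ave/correlate 200 200 40000 &
    c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt_s200.dat ave running
-----------------------------end of sketch----------------------------------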

> Also, as far as I know, they tried to make the time shift as small as
> possible. For example, see [D. Nevins et al., Accurate computation of
> shear viscosity from equilibrium molecular dynamics simulations,
> Molecular Simulation, Vol. 33, No. 15, December 2007, 1261–1266]; ts in
> their Figure 2 is the 'time shift' you mean, and they used a constant
> sample size throughout the simulation. The choice of time shift is
> discussed in their Section 3.3, where they show that the time shift
> should be < 100 fs (in their simulation) to keep the error below 2%
> (see also their Figure 4).

I should read it; I was thinking that a larger shift is better.

Best,

German

Dear German,

Thank you for your useful explanation. Now I understand that we can get many more samples from a single run with the overlapping-sample concept; that's why you have 25000 samples for averaging, which is quite a lot. I should implement overlapping samples with shifting in the future. Also, the paper is worth a read.

Thanks.

Regards,
Christopher