Please always remember to CC the list so that the discussion is fully recorded for the future benefit of other people.

Adding to Carlos’s comments,

Michael, do you have a physical interpretation of why the thermal conductivity of a moving Cu particle should be different from that of a stationary one? I would like to know.

The overall thermal transport may be different, since convection effects are included for the moving particle. But I am still trying to understand why you need to move the particle to calculate its thermal conduction properties, which by definition (as in solids) assume the atoms are vibrating about their mean positions and not diffusing.

Best Regards,

Vikas

Mike, you might also have a look at the decay of your autocorrelation and convergence of its integral. That should give you hints about the goodness of your sampling settings.

Tim

Dear all,

I (foolishly) thought that my extremely simplistic description would suffice. I apologize for any time wasted due to that.

What I am actually simulating is a copper particle dispersed in argon (a nanofluid), and I am trying to calculate the thermal conductivity of the liquid atoms, of the copper particle, and of the whole system. When I only have the argon atoms, the thermal conductivity is very close (less than 5% error) to the experimental data. When I add the particle, its thermal conductivity is extremely high, as is that of the whole nanofluid.

In calculating the thermal conductivity, I simply followed the instructions given on the compute heat/flux manual page. However, in the case of the particle this doesn't really work, as the compute also includes heat transferred by convection (as you have mentioned in your comments). Does anyone know how to subtract this convective term from the heat flux?

The formula on the heat/flux manual page will work well for a single-component fluid like argon. For multi-component fluids (nanofluids), directly integrating the autocorrelation isn't very reliable. There are several papers in the literature that describe corrections, but I would still start by looking closely at the correlation and its integral. Otherwise, it won't be clear how sensitive kappa is to your correlation time. The same settings that work for argon aren't guaranteed to work for the nanofluid. You'll probably need to experiment to find a good sampling procedure for both systems.
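To make the "look closely at the correlation and integral" advice concrete, here is a small sketch (not from the thread) of the post-processing side: compute the heat-flux autocorrelation and its running Green-Kubo integral, then check that the correlation decays toward zero and the integral reaches a plateau. The flux series here is synthetic (an exponentially correlated random signal); in practice you would use the J components written out by compute heat/flux.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Unnormalized autocorrelation <x(0) x(t)> for lags 0..max_lag-1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.dot(x[: n - lag], x[lag:]) / (n - lag)
                     for lag in range(max_lag)])

def running_integral(acf, dt):
    """Cumulative trapezoidal integral of the ACF; proportional to kappa(t)."""
    acf = np.asarray(acf, dtype=float)
    return np.concatenate(([0.0], np.cumsum(0.5 * (acf[1:] + acf[:-1]) * dt)))

# Synthetic "heat flux": an AR(1) process whose ACF decays like exp(-t/tau),
# standing in for real simulation output.
rng = np.random.default_rng(0)
tau, dt, n = 5.0, 0.1, 200_000
a = np.exp(-dt / tau)
noise = rng.standard_normal(n)
flux = np.empty(n)
flux[0] = noise[0]
for i in range(1, n):
    flux[i] = a * flux[i - 1] + np.sqrt(1.0 - a * a) * noise[i]

acf = autocorrelation(flux, max_lag=1000)
kappa_t = running_integral(acf, dt)
# A healthy calculation: acf decays toward zero, kappa_t plateaus.
```

For a well-sampled system the plateau value is what you would multiply by the V/(kB*T^2) prefactor; if the curve drifts instead of flattening, the sampling settings need work.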

Tim

Thanks Tim. This might be a silly question but what exactly should my correlation and integral look like?

Furthermore, from going through some papers in the literature, I realized that the heat flux formula should also subtract the average enthalpy. Is it possible that this term makes such a huge difference? I still get the feeling that the fact that the particle is moving interferes with the calculation of the heat flux.

Mike

There are so many points of failure in this calculation that you have no chance of getting it right by just adding various corrections that you happen to stumble upon. In order to have any hope of coming close to the right answer, you should set up a suitable test: for example, put a sphere of copper in vacuum, give it a non-zero velocity, and calculate its thermal conductivity. Do this initially for a tiny sphere, so you can fail fast. Then try successively larger spheres as you get more confident. You should see the conductivity converge accurately to the value for bulk copper *calculated with the same potential*. If you don't get a good match, find the next error. Forget about comparing to experiments; that comes when you write the paper.

Aidan

As Aidan said, there are a few layers to this type of G-K calculation. For the argon case: a time correlation function must be found for the heat flux of the fluid system, and it needs to have properties friendly to the G-K formula. It should decay to zero (or close to it), and it should ideally be smooth and well resolved so that it can be numerically integrated. The integral of the correlation should reach a plateau (or nearly so) with increasing time. Now, if particles are introduced into the fluid, the partial enthalpy has to be subtracted from the heat flux before creating the correlation function. Otherwise your correlation will not decay to zero, and its integral (the thermal conductivity) will be huge. It's a very important correction. This is only a starting point, and applies to fluids or amorphous solids.

HTH,

Tim

I started tweaking my sampling rates for the correlation function, and indeed it makes a massive difference to the results. Is there a way to determine these values, or is it simply trial and error? Also, what are the units of the correlation function? I am using LJ units, and although the function now seems to decay quite rapidly (with the changes I have made), it doesn't seem to get very close to zero.

Have you tried your simulation with a few different box sizes? Finite size effects can introduce an extra, very slowly decaying, component which prevents your HCACF from reaching zero.

Also, what are the units of the correlation function?

Units of what parameter? If you mean Nevery, Nrepeat, Nfreq, then as the doc page says, those are counts of timesteps, so they are unitless. Your script defines what units you use (e.g. LJ) and what the timestep size is. But Nevery is just 10 timesteps (or whatever).

Steve

No. I am referring to the result of the autocorrelation, the one that is supposed to be converging to zero.

Are there any empirical rules on how to go about sampling for the correlation function?

Thanks again

Here is one empirical rule that will help: start by reproducing a reliable published result.

Based on your help, I started paying close attention to my correlation values. What I am trying to do now is to calculate the thermal conductivities of the argon and the copper particle separately and see if they match the equivalent bulk properties. What I have noticed is that the thermal conductivities (especially that of the copper) do not actually converge at all; instead they jump randomly between values (jumps of up to 500 W/mK in the case of the particle).

Following this, I checked whether the correlation function decays to zero. When I plot it, I get a very good decay, converging to zero, up until the last timesteps, where it goes crazy and jumps (discontinuously) to extremely high or extremely low values. What I noticed is that after the correlation completes all the specified timesteps (Nfreq), the file then repeats some of the last timesteps with different values. My initial thought was that, for some reason, more than one processor is attempting to write to the file, mixing everything up. Could that be the case? And if so, would that also be an issue in the actual calculation, or is it only relevant when printing?

Thanks again for any help

Mike… My two cents…

In principle, when the correlation goes to zero (which I am assuming happens well before the last timestep), the thermal conductivity value should saturate. You should use that value. The large fluctuations at the very end could be associated with the very small number of data points available at large time-difference intervals.

If I am not wrong, it is an unwritten rule that for Green-Kubo calculations, in order to get good statistics, you have to run simulations much longer than usual and over several independent cases. For example, if your heat-flux correlation decays to zero at ~100 ps, it is advised to run the simulations for ~1-2 ns to get a well-converged time correlation. I hope this helps a bit.
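The "several independent cases" part is usually handled by averaging the correlation functions from runs with different initial velocities before integrating, rather than integrating each noisy curve on its own. A minimal sketch of that post-processing step (array shapes and the unit prefactor are assumptions, not LAMMPS output):

```python
import numpy as np

def average_hcacf(acfs):
    """Average a list of equal-length HCACF arrays from independent runs."""
    return np.mean(np.stack([np.asarray(a, dtype=float) for a in acfs]), axis=0)

def kappa_from_acf(acf, dt, prefactor=1.0):
    """Green-Kubo estimate: prefactor times the trapezoidal integral of the
    HCACF. 'prefactor' stands in for V/(kB*T^2) and any unit conversion."""
    acf = np.asarray(acf, dtype=float)
    return prefactor * dt * (0.5 * acf[0] + acf[1:-1].sum() + 0.5 * acf[-1])
```

Averaging first suppresses the run-to-run noise in the tail, which is exactly where the integral otherwise picks up spurious contributions.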

Regards

Vikas

Could you also give an example of the ave/correlate output that you think is strange, and the command that you used to get it?

This sounds like the problem with the keyword overwrite that Steve fixed back in June. If you don't have these lines in fix_ave_correlate.cpp, you will need to update your code:

+ if (overwrite) {
+   long fileend = ftell(fp);
+   ftruncate(fileno(fp),fileend);
+ }