# How to implement compute heat/flux properly

I am trying to calculate the heat flux across a thin SiO2 slab using the Tersoff potential in metal units. I define two small control volumes: a source thermostatted at 1,000 K and a sink at 200 K, both with NVT, while the intermediate transport region is integrated with NVE. I use compute heat/flux as shown in the user guide, namely the example for solid Ar.
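For reference, a minimal sketch of that setup, closely following the solid-Ar example from the compute heat/flux documentation. The group names (`hot`, `cold`, `middle`) and the potential file name are assumptions; adapt them to your geometry:

```
# Sketch of a NEMD heat-flux setup (group names and file name are assumed)
units       metal
pair_style  tersoff
pair_coeff  * * SiO.tersoff Si O     # hypothetical parameter file

# thermostatted source/sink control volumes, NVE in the transport region
fix  hotT   hot    nvt temp 1000.0 1000.0 0.1
fix  coldT  cold   nvt temp 200.0  200.0  0.1
fix  trans  middle nve

# per-atom quantities feeding the heat-flux compute (as in the manual)
compute  myKE     all ke/atom
compute  myPE     all pe/atom
compute  myStress all stress/atom NULL virial
compute  flux     all heat/flux myKE myPE myStress
```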

From my understanding, this should give the net transport of heat, taken as a volume average. However, when I look at the per-timestep results, the total flux (convective plus virial terms) fluctuates strongly, swinging from positive to negative and back, even though the local temperatures in my two control volumes stay close to their setpoints.

Note that I use the Tersoff potential and have a vacuum gap in the z-direction. I also observe that the SiO2 slab drifts somewhat over the course of the simulation.

My question is whether we should expect the total flux to stabilize to a consistent value, negative or positive depending on the direction of the flux vector, once the system reaches steady state under a stable driving force, e.g. two fixed temperatures or a fixed heat input/output rate at the two control volumes.

Are there known mistakes in setting up such a simulation that would explain wild fluctuations in the fluxes? I noticed that the average KE/atom and PE/atom look fairly stable, but the average stress/atom fluctuates a lot in both magnitude and sign.
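One thing worth checking before interpreting per-timestep values: the instantaneous flux is very noisy by construction, and it is normally averaged over time with fix ave/time. A hedged sketch (the fix ID, output file name, and averaging window are assumptions, not from the original post):

```
# Accumulate the flux vector from compute heat/flux over time windows:
# sample every 10 steps, 100 samples per window, output every 1000 steps.
# Divide by the volume of the transport region (not the whole box, which
# includes the vacuum gap) to convert to a flux.
fix aveJ all ave/time 10 100 1000 c_flux[1] c_flux[2] c_flux[3] file flux.out
```

If the windowed averages still flip sign at steady state, that points to a setup problem rather than ordinary statistical noise.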

First off, I think the SiO2 Tersoff potential is only suitable for bulk systems: it was not designed to model surfaces, with the resulting problems of describing them correctly for oxides. You would need a more complex potential like COMB to model those correctly.

The drift can be addressed by restraining the slab with fix spring.
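For example, tethering the slab's center of mass along z (the direction with the vacuum gap) would look roughly like the following; the group name `slab` and the spring constant are assumptions to be tuned for your system:

```
# Tether the slab's center of mass in z only (NULL = unrestrained dimension).
# 10.0 is an assumed spring constant in metal units (eV/Angstrom^2); 0.0 0.0
# are the tether z-coordinate and the equilibrium distance R0.
fix hold slab spring tether 10.0 NULL NULL 0.0 0.0
```

A gentle spring constant is usually preferable, since a stiff restraint adds its own force contribution to the dynamics.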

Yes, but you have added multiple complications: you have a stiffer medium, and you have reduced the cross-sectional area through which the flux can pass. So you have larger fluctuations to begin with and less data to average over, which means it can take much longer, or require a much larger system, to obtain converged, averaged results.