Temperature in FixGCMC

Hello LAMMPS users, (version: 7Aug19)

I am comparing the temperature specified as a parameter to fix gcmc with the run average of the temperature calculated by compute temp. I expected the average to approach the specified temperature, but it does not, and the deviation increases with decreasing atom count.

From what I can tell, all inserted atoms are assigned velocities drawn from a Gaussian distribution corresponding to the specified temperature.
The compute temp then uses the velocities to compute the temperature from

KE = (3/2) (N - 1) k T,

where the N - 1 accounts for the three degrees of freedom removed for zero linear momentum. I tried to normalize by N instead of N-1, but there is still deviation.
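To illustrate how sensitive the result is to the degree-of-freedom count, here is a small Python sketch (not LAMMPS code; it assumes unit mass and LJ units with k_B = 1, matching the test case below). Velocities are drawn from the Maxwell-Boltzmann Gaussian at the set temperature, as fix gcmc does on insertion, and the kinetic energy is converted to a temperature with both 3N and 3(N-1) degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(12345)
kB = 1.0      # LJ units
T_set = 2.0   # matches the TEMP variable in the script below
N = 5         # small atom count, where the effect is largest
samples = 200_000

# Each velocity component is drawn from the Maxwell-Boltzmann Gaussian
# at T_set (mass = 1), i.e. the distribution sampled on insertion.
v = rng.normal(0.0, np.sqrt(kB * T_set), size=(samples, N, 3))
ke = 0.5 * (v ** 2).sum(axis=(1, 2))   # kinetic energy per configuration

T_from_3N = 2.0 * ke.mean() / (3 * N * kB)          # normalize by 3N
T_from_3Nm1 = 2.0 * ke.mean() / (3 * (N - 1) * kB)  # normalize by 3(N-1)
# T_from_3N recovers T_set; T_from_3Nm1 is inflated by a factor N/(N-1),
# because no center-of-mass momentum was removed from the sampled velocities.
```

With N = 5 the 3(N-1) normalization overestimates the temperature by about 25%, so at small atom counts the choice of Ndof matters a great deal.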

Should the temperatures not be the same?
Thank you in advance,


Test case:

# Control variables

variable MU equal -4.0
variable TEMP equal 2.0
variable TRANS_MAX equal 1.0
variable NTRANS equal 40
variable NEXCH equal 40
variable EQRUN equal 10000
variable PRODRUN equal 40000
variable NEVERY_THERMO equal 1
variable NFREQ_THERMO equal 1000


pair_style lj/cut 3.0
region box block 0 5 0 5 0 5
create_box 1 box
pair_coeff 1 1 1.0 1.0
mass 1 1.0

group g_gcmc type 1

fix gcmc g_gcmc gcmc 1 ${NTRANS} ${NEXCH} 1 29494 ${TEMP} ${MU} ${TRANS_MAX}

run ${EQRUN}

unfix gcmc
reset_timestep 0

variable T equal c_gcmc_temp
variable T2 equal v_T^2
variable T_block2 equal f_block[1]^2
variable T_ave equal f_block_ave[1]
variable T_ave_var equal f_block2_ave[1]-v_T_ave^2
variable T_ave_std equal sqrt((v_T_ave_var>0)*v_T_ave_var)

variable N equal count(g_gcmc)
variable N2 equal v_N^2
variable N_block2 equal f_block[2]^2
variable N_ave equal f_block_ave[2]
variable N_ave_var equal f_block2_ave[2]-v_N_ave^2
variable N_ave_std equal sqrt((v_N_ave_var>0)*v_N_ave_var)

variable trans_accept equal f_gcmc[2]/f_gcmc[1]
variable insert_accept equal f_gcmc[4]/f_gcmc[3]
variable delete_accept equal f_gcmc[6]/f_gcmc[5]
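The block-averaging machinery encoded in the variables above can be sketched in plain Python (a hypothetical `block_stats` helper, not part of LAMMPS): split the series into 10 blocks, average each block, and estimate the spread from var = <b^2> - <b>^2 over the block means, which is the same expression used for v_T_ave_var and v_N_ave_var.

```python
import numpy as np

def block_stats(series, nblocks=10):
    """Mean and spread estimate from block averages.

    Mirrors the fix ave/time setup below: split the time series into
    nblocks equal blocks, average each block, then estimate the spread
    from <b^2> - <b>^2 over the block means (the same expression the
    v_T_ave_var variable computes from f_block2_ave and f_block_ave)."""
    x = np.asarray(series, dtype=float)
    usable = len(x) // nblocks * nblocks        # drop any trailing remainder
    block_means = x[:usable].reshape(nblocks, -1).mean(axis=1)
    mean = block_means.mean()
    var = (block_means ** 2).mean() - mean ** 2
    return mean, np.sqrt(max(var, 0.0))
```

Note that, like the script's variables, this reports the standard deviation of the block means; dividing var by (nblocks - 1) would give the standard error of the grand mean instead.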

compute gcmc_temp g_gcmc temp
compute_modify gcmc_temp dynamic yes

# Averages and error estimates (10 blocks)

fix block all ave/time ${NEVERY_THERMO} $(v_PRODRUN/10/v_NEVERY_THERMO) $(v_PRODRUN/10) v_T v_N
fix block_ave all ave/time $(v_PRODRUN/10) 10 ${PRODRUN} f_block[1] f_block[2]
fix block2_ave all ave/time $(v_PRODRUN/10) 10 ${PRODRUN} v_T_block2 v_N_block2

fix gcmc g_gcmc gcmc 1 ${NTRANS} ${NEXCH} 1 29494 ${TEMP} ${MU} ${TRANS_MAX}

thermo_style custom step v_T v_N
thermo ${NFREQ_THERMO}

run ${PRODRUN}

print "-----------------------------------------"
print "Run Summary"
print "-----------------------------------------"
print "N_ave: ${N_ave} +- ${N_ave_std}"
print "T_ave: ${T_ave} +- ${T_ave_std}"
print "trans_accept: ${trans_accept}"
print "insert_accept: ${insert_accept}"
print "delete_accept: ${delete_accept}"
print "-----------------------------------------"

I’m CCing Aidan in case he has a comment about the fix gcmc context.

It looks like you have flagged the temp compute as dynamic which
is necessary if N is changing. You should do that for any compute
temp command which you are using to monitor the temp of the added/deleted
atoms/molecules. Is it possible the temperature is changing so rapidly
in your model that the time average is deceptive? Can you turn off
the GCMC fix and see if a snapshot of the simulation equilibrates
to the target temperature?


Thank you for the reply.

I am not sure what you mean by: “Is it possible the temperature is changing so rapidly
in your model that the time average is deceptive?”
I suppose the temperature will change rapidly when the atom count is low because the kinetic energy of an added/deleted atom is a larger fraction of the total kinetic energy. But why do you think that would affect the time average?

“Can you turn off the GCMC fix and see if a snapshot of the simulation equilibrates
to the target temperature?”.

I assume you mean with fix nve instead of fix gcmc. The temperature does not equilibrate to the set temperature when I do this.

I ran multiple simulations and plotted the average temperature as a function of the average number of particles. For particle counts larger than about 30, the temperature is approximately equal to the set temperature. Below that, however, the temperature increases rapidly until it is about 25% higher than the set temperature. So you would not notice the difference between the calculated and set temperatures until the particle count is low, but then it is very noticeable.


Aidan can comment on the GCMC part. If you run with just fix nve
there is no “set” temperature, so the system can equilibrate to
whatever temperature equipartitions the KE and PE.

Can you run your tiny systems (N < 30 atoms) with a thermostat
(e.g. Langevin) and get your set temp? Then try turning on fix gcmc
and see if the temp stays there (even with no insertions or deletions).


Without seeing actual data, I can’t know what the magnitude of the effect is. Possible issues are:

  1. Statistical fluctuations

  2. Equilibration

  3. O(1/N) effects, like the one associated with the Ndof used to define temperature

  4. Gross errors due to using the wrong atom count when computing temperature

In your case, you seem to have eliminated 4, so that is good. The closest you get to addressing 3 is:

“I tried to normalize by N instead of N-1, but there is still deviation.”

If the overall effect you are seeing is O(1/N) then it may be something subtle, like deleted atoms having slightly different kinetic energy than inserted atoms, on average (although in the absence of dynamics, that should not be possible). Note that in true MC, velocity is not considered at all, so as soon as you try to compare MD quantities with MC quantities, you should not be surprised to see small differences. Whether or not you care about those differences and what to do about them will depend on the science question.


Although some deviation remains, the values are much closer when the degrees of freedom in the temperature calculation are changed from 3(N-1) to 3N. I would not know where to start looking for the cause of the remaining deviation, and it is probably not worth the effort.
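For completeness, the magnitude of the dof effect matches the observations earlier in the thread. Assuming the inserted velocities carry kinetic energy for the full 3N degrees of freedom while the temperature is normalized by 3(N-1), the apparent temperature is inflated by N/(N-1), which simple arithmetic confirms:

```python
# Back-of-the-envelope check: if the kinetic energy corresponds to 3N
# degrees of freedom but the temperature is normalized by 3(N-1), the
# apparent temperature is T_set * N / (N - 1).
T_set = 2.0
for N in (5, 10, 30, 100):
    T_apparent = T_set * N / (N - 1)
    excess = 100.0 * (T_apparent / T_set - 1.0)
    print(f"N = {N:3d}: T_apparent = {T_apparent:.3f} ({excess:.1f}% high)")
```

At N = 5 the excess is 25%, and by N = 30 it has dropped to about 3%, consistent with the reported trend of the deviation becoming visible only at low particle counts.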

Thank you for your help and suggestions!