Issue in ramping temperature from 0K to 100K using NVT

Dear all, I am trying to ramp the temperature of a system from 0 K to 100 K. I assume the starting structure is at 0 K, since it is the result of a minimization. I ramp by:
a) assigning velocities corresponding to a small temperature of 0.1 K to all atoms
b) ramping from 0.1 K to 100 K using NVT.

However, I observe that the temperature remains very close to 0 K throughout. I am not sure if I am doing something wrong.

The relevant part of the code is:

# Initial temperature
velocity all create 0.1 88 mom yes rot yes

# Equilibration settings
timestep 0.0001
fix 1 all nvt temp 0.1 100.0 0.1
thermo 100
thermo_style custom step temp pe ke etotal press

run 100000 

I think your Tdamp parameter is too high; the suggested value is 100*dt and yours is 10 times larger.

Link: fix nvt command — LAMMPS documentation

EDIT: Corrected the logic of the comment.


Hi @mkanski, thanks for your reply! I don’t think Tdamp is the factor, as I have also tried Tdamp = 1. I am attaching the log file; please see that the temperature stays close to 0 throughout. Is it because the initial temperature is being reset to 0.1 at every step?
combined_temp_100K.log (12.7 KB)

Argh, my bad. I meant that your Tdamp value is too high and the value you used is 10 times higher than the recommended one. The larger the Tdamp value is, the longer it takes to relax your system to the desired temperature.
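
With the 0.0001 timestep used above, the 100*dt rule of thumb from the fix nvt documentation would give Tdamp = 0.01, so the corrected line would read (same temperature ramp, only the damping changed):

fix 1 all nvt temp 0.1 100.0 0.01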

Additionally, your system is far from equilibrium (which can be deduced by a high starting pressure and quick potential energy drop at the beginning) making thermostatting even more difficult.

Hi! I have now used the version of the structure I get from “minimize” with tolerances of order 1e-8, and I have set Tdamp to 0.01. (I think the high pressure stems from the fact that I froze some atoms during the minimization, i.e., selective dynamics, so the structure is not in its most relaxed state.) Yet the problem persists! Is the code I am using correct for ramping the temperature?

combined_temp_100K.log (12.7 KB)

The fix nvt command works just fine when I add it to the in.eam example in the bench folder:

# bulk Cu lattice

variable        x index 1
variable        y index 1
variable        z index 1

variable        xx equal 20*$x
variable        yy equal 20*$y
variable        zz equal 20*$z

units           metal
atom_style      atomic

lattice         fcc 3.615
region          box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box      1 box
create_atoms    1 box

pair_style      eam
pair_coeff      1 1 Cu_u3.eam

thermo          50

reset_timestep 0

velocity        all create 0.1 88 mom yes rot yes dist uniform

neighbor        1.0 bin
neigh_modify    every 1 delay 5 check yes

fix             1 all nvt temp 0.1 100.0 0.01

timestep        0.0001

run             10000

log.eam (20.1 KB)

So the logical explanation would be that there is something else draining kinetic energy from your system.

Thank you for your reply! Are there any pointers you can suggest that I should particularly look into to fix this draining of KE?

The system you are simulating is initially very highly strained, losing about 25 keV of potential energy in 200 femtoseconds. You are then imposing a very low temperature Nose-Hoover thermostat on top of the dynamics (the kinetic energy doesn’t ever exceed 1 eV – ten thousand times smaller than the potential energy lost!).

My initial guess is that the NH thermostat’s internal velocity becomes extremely negative during these first 200 femtoseconds and does not have sufficient time over a 10,000 step run to return to equilibrium. With a negative velocity, the thermostat will keep draining kinetic energy from the system regardless of the setpoint – the setpoint only determines how long it takes the thermostat to return to equilibrium.

You can verify this for yourself by printing out the thermostat’s eta and eta_dot variables in the thermo output, as described in the documentation. For your simulation this may look something like:

thermo_style custom step temp pe ke f_1[1] f_1[4]

(since the LAMMPS default is a three-chain thermostat). In the broader picture, your simulation method does not seem to correspond to any physically plausible scenario, and you should have a long and serious think about the methodology. 🙂


In addition to @srtee’s comments, please also note that a Nose-Hoover thermostat is optimized for sampling the correct thermodynamic ensemble for a system in equilibrium, so doing a ramp or having to work through a significant change at the beginning of the run can be very challenging. In that sense, you may be lucky that the fictitious degrees of freedom can still do time integration without divergence. We see often enough that this can cause crashes, too.

A better choice for moving kinetic energy in and out of a system is a dissipative thermostat like fix langevin (or one of its many variants). These tend to be much more resilient, since excess kinetic energy is simply dissipated.
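
A minimal sketch of that approach for the ramp above (the damping time and seed are illustrative placeholders, not tuned values): fix langevin only adds friction and random forces, so it must be paired with a time integrator such as fix nve:

fix 1 all nve
fix 2 all langevin 0.1 100.0 0.01 88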


Hi,
I have the same problem with my system and could not solve it with the comments here.
@rik_ghosh May I kindly ask if you found a solution?