I want my temperature for the simulation to increase linearly

I am performing a melting simulation of cdh23 over the range 298 K to 400 K. I initially simulated at fixed temperatures from 300 K to 400 K at 5 K intervals, but there was no change in the protein structure. For those simulations I used fix nvk to keep the kinetic energy constant while observing the various conformations and the change in potential energy. Now I want a single simulation that heats the system from 298 K to 400 K, but there are far too many fluctuations, and I want the temperature to increase linearly.
I am attaching the code below

units real
boundary p p p
atom_style bond
bond_style fene/sop
pair_style hybrid lj/cut/sop/native 200 lj/cut/sop/nn 200
special_bonds lj 0.0 1.0 1.0
read_data p47_new.data

neighbor 3.0 bin
neigh_modify delay 0 every 5 check yes
compute native all pair lj/cut/sop/native epair
compute nonnative all pair lj/cut/sop/nn epair
fix 1 all nve
velocity all create 298.0 123456 dist uniform
fix 2 all langevin 298 400 1.0 123456

dump 2 all atom 1000 cdh23.lammpstrj
timestep 1.0
thermo_style custom step pe ke temp ebond c_nonnative c_native
log log_375.lammps
thermo 10000
run 1000000

Outside the NVK ensemble, temperature fluctuates for any nanosystem (like many intensive properties of interest), and the smaller the system the larger the fluctuation.
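To put a rough number on this (a standard canonical-ensemble result, assuming the instantaneous temperature is defined from 3N unconstrained momentum degrees of freedom):

```latex
\frac{\sigma_T}{\langle T \rangle} = \sqrt{\frac{2}{3N}}
```

so a coarse-grained model with on the order of a thousand beads shows relative temperature fluctuations of a few per cent regardless of how the thermostat is tuned.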

It is certainly possible to rig up equations of motion to constrain the system’s KE to be a specified value that slowly increases over time. (As Von Neumann famously didn’t say: with atom-style variables I can model an elephant, and with fix addforce I can make it wiggle its trunk.)

But that approach is error-prone and difficult to describe or troubleshoot remotely, and it’s hard to argue that it’s worthwhile. You’ve already seen the same phenomenon (your force field doesn’t capture thermal denaturation) using both equilibration at different temperatures and a temperature ramp. Why do you think a perfectly smooth temperature ramp will make much difference?

(Apologies in advance if these questions seem a little sharp – it’s useful to defend your ideas against some arguments, which is how good peer review should also work.)

For anyone who does want an exact ramping kinetic thermostat and can justify it: you subtract the rate at which the existing forces do work on the system (\sum \dot{r_i} F_i, computable with atom-style variables) from the desired KE ramp rate, divide the result by \sum \dot{r_i}^2, and use the resulting coefficient to damp (or boost) each particle’s velocity as a frictional contribution via fix addforce. The derivation below is pretty standard (conceptually inspired by Evans and Morriss’s Statistical Mechanics of Nonequilibrium Liquids, chapter 3).

I’ve left the implementation details vague (and not bothered checking for what’s certain to be a sign error) because if you know enough to justify it you know enough to implement it, and vice versa.

Derivation follows:

Suppose we altered our system’s equations of motion by adding an additional force f_i to each particle:

m_i \ddot{r_i} = F_i + f_i

where F_i are the “original” forces. Ideally our system is “minimally” constrained. One possible interpretation is that we want the set of f_i that satisfies our constraint with the smallest possible total square magnitude – that is, we want a constrained minimisation of the sum of squared constraint forces \frac{1}{2} \sum f_i^2. We can do this by the usual method of Lagrange multipliers.

We need to state the constraint. Let’s say we want the rate of change of KE to be some function of time h(t). The KE of the system is \frac{1}{2} \sum m_i \dot{r_i}^2, so its rate of change (assuming masses don’t change) is \dot{KE}= \sum m_i \dot{r_i} \ddot{r_i} = \sum \dot{r_i} (F_i + f_i). The constraint on KE rate of change is thus

0 \overset{!}= G \equiv h(t) - \sum \dot{r_i} F_i - \sum \dot{r_i} f_i.

We thus want to minimise the Lagrangian extension

L(f_i, \lambda) \equiv \frac{1}{2} \sum f_i^2 + \lambda G(f_i)

in both f_i and \lambda. Minimising against each f_i gives

f_i = \lambda \dot{r_i}

and enforcing the constraint (or minimising against \lambda) then gives

0 = h(t) - \sum \dot{r_i} F_i - \lambda \sum \dot{r_i}^2 \Rightarrow \lambda = \frac{h(t) - \sum \dot{r_i} F_i}{\sum \dot{r_i}^2}.
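For what it’s worth, the final expression maps fairly directly onto LAMMPS atom-style variables and fix addforce. The sketch below is untested and deliberately schematic: the variable names are mine, the constant hdot stands in for h(t), and unit-conversion factors (e.g. the mass–velocity-to-energy factor mvv2e in real units) are ignored, so check the arithmetic before trusting it:

```lammps
# target rate of KE change h(t); constant here, could be an equal-style function of time
variable hdot  equal 0.01                 # assumed value, energy/time in your unit system
# sum_i v_i . F_i : rate of work done by the existing forces
variable vdotf atom vx*fx+vy*fy+vz*fz
compute  sumvf all reduce sum v_vdotf
# sum_i v_i^2
variable vsq   atom vx*vx+vy*vy+vz*vz
compute  sumv2 all reduce sum v_vsq
# lambda = ( h(t) - sum_i v_i.F_i ) / sum_i v_i^2
variable lam   equal (v_hdot-c_sumvf)/c_sumv2
# constraint force f_i = lambda * v_i, applied every step
variable frx   atom v_lam*vx
variable fry   atom v_lam*vy
variable frz   atom v_lam*vz
fix      keramp all addforce v_frx v_fry v_frz
```

Note that fix addforce applies this as an extra force on top of fix nve, so the resulting dynamics are no longer Hamiltonian; that is the price of the constraint.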

If your purpose is to determine the melting point, you are in trouble anyway, because this approach will not work. Melting (like freezing) is an activated process, so you get hysteresis due to the lack of nucleation. Who has not seen the classic physics experiment of the water bottle from the freezer, where the water remains liquid until the cap is opened or the bottle is shaken, despite temperatures well below the freezing point?

That said, there is a simple way to reduce temperature fluctuations: increase your system size. Temperature is only well defined in the large-system limit anyway. In most cases this is easily done with the replicate command (see the LAMMPS documentation).
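As a sketch (the replication factors are arbitrary, and whether multiple periodic copies of the molecule make physical sense depends on your model):

```lammps
read_data p47_new.data
replicate 2 2 2    # 8 periodic images; relative T fluctuations shrink roughly as 1/sqrt(N)
```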

Thank you, everyone, for your replies. My final purpose is to determine the melting temperature, and I am performing MD simulations to that end. I shall try troubleshooting the problem and get back to you.
Thank you everyone!

Dear sir,
Can you elaborate on why this approach will not work? I am using a coarse-grained structure with the SOP-SC model. My mentor did not suggest making the system larger, as it would defeat the purpose of coarse-graining. I would really appreciate a more well-defined way of approaching this problem. My ultimate goal is to determine the melting temperature.

You cannot have a small system, weak thermostat coupling, and no fluctuations of the kinetic energy all at once. That is determined by statistical thermodynamics, and no clever programming trick can work around it without violating the laws of physics.

Discussing how to do your science has nothing to do with LAMMPS and is thus off-topic for the LAMMPS forum categories, even more so since you are using models that are not part of the official LAMMPS distribution. How to do your research is a topic to discuss with your mentor.

There is a wealth of publications on determining melting points. A popular method is to do coexistence simulations.

…and to mention the elephant in the room: is there any value to a melting point for a coarse-grained system? Even all-atom models are notoriously bad at this unless the system under investigation is trivially simple.

The oxDNA CG model is pretty good for DNA “melting” (really, double-helix hybridisation) temperatures, but it was fitted to experimental thermodynamics, so that rather proves your point.