[lammps-users] Why does temperature of copper plate rise?

I am trying to simulate a copper plate at 300 K, but after 250 fs (timestep 0.001 fs) the temperature rises to 400 K.
Why?

Is it normal that I use the MEAM potential to simulate a copper crystal?
And is it normal that it deforms?

The input file is attached.

in2.test (811 Bytes)

I added

min_style cg
minimize 1e-25 1e-25 1000 1000

Now the timestep is 0.05 fs, and after 500 fs (10000 steps) the temperature is 157 K. That is very low.

Does NVT forcibly fix the temperature, or not? If so, it does not suit me, because I need to simulate heat transfer. Or am I missing something?

Sun, 25 Jul 2021 at 20:39, a.pramos <[email protected]>:

It is likely that the lattice parameter you assigned doesn't match the one estimated by the potential. Since you didn't minimize and used an NVE ensemble, your system is deforming and increasing its temperature to adjust to these mismatched conditions. Try using an NVT ensemble or minimizing beforehand.

NVT and NPT fix the temperature. If you want to simulate heat transfer, take a look at section 8.3.5, "Calculate thermal conductivity", in the LAMMPS documentation.
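To make the distinction concrete, here is a minimal sketch of the two choices; the fix ID, group, and damping value are illustrative, and metal units are assumed:

```
# plain time integration with no temperature control (microcanonical):
# fix integrate all nve

# Nose-Hoover thermostat at 300 K; the damping parameter is in
# time units (ps for metal units), here 0.1 ps
fix integrate all nvt temp 300.0 300.0 0.1
```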

Yes, that is useful, but I also need to simulate two real copper plates.

Sun, 25 Jul 2021 at 21:32, a.pramos <[email protected]>:

Simulating the copper plates and applying heat transfer are not independent processes. That document lists several ways to simulate your conditions, but depending on your system one may turn out better than the others. I suggest you first perform a minimization at the desired temperature and then calculate the heat transfer with any of them.

I see that in my simulation the atoms scatter and don't stay in the lattice. But why?
I use the MEAM potential; must I use static bonds anyway? Or is the problem something else?

Sun, 25 Jul 2021 at 22:03, a.pramos <[email protected]>:

in3.test (682 Bytes)

You should analyze your dumps with OVITO or a similar program and check how your structure changes. I expect it is suffering some distortion because the lattice parameter you assigned differs from the one estimated by your potential at your given temperature, but it is always better to see it with your own eyes. Such changes during minimization don't mean your simulation is invalid, just that it is "adjusting" itself.

Yes, I watched it in OVITO. The plate does not just deform: the atoms fly apart and behave like a gas.

Sun, 25 Jul 2021, 22:36 a.pramos <[email protected]>:

Now it's getting cold.
I use minimization and then run with a 0.001 ps timestep.
After 1000 steps the temperature is 100 K, although at first it was 300 K.

Sun, 25 Jul 2021 at 22:44, a.pramos <[email protected]>:

Units are metal

Sun, 25 Jul 2021 at 23:04, Егор Перевощиков <[email protected]>:

Now that you have changed your units, check the system's evolution during minimization. You can also try allowing small volume changes during the minimization with fix box/relax.
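For reference, a minimization that also allows small isotropic volume changes might look like the following sketch; the fix ID, tolerances, and vmax are illustrative:

```
# allow isotropic box relaxation toward zero pressure during minimization
fix relax all box/relax iso 0.0 vmax 0.001
min_style cg
minimize 1.0e-12 1.0e-12 1000 10000
# remove the relaxation fix before starting dynamics
unfix relax
```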

Here is the full LAMMPS output. First the temperature goes down, then it goes up.
This is with minimization using fix 5 all box/relax iso 0.0 vmax 0.001.

LAMMPS (3 Mar 2020)
Created orthogonal box = (0 0 0) to (100 40 100)
1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 3.65 3.65 3.65
Created 8112 atoms
create_atoms CPU = 0.0146144 secs
Reading potential file library.meam with DATE: 2012-06-29
Reading potential file Cu.meam with DATE: 2007-06-11
WARNING: Using 'neigh_modify every 1 delay 0 check yes' setting during minimization (src/min.cpp:190)
Neighbor list info …
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6
ghost atom cutoff = 6
binsize = 3, bins = 34 14 34
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Setting up cg style minimization …
Unit style : metal
Current step : 0
Per MPI rank memory allocation (min/avg/max) = 17.67 | 17.67 | 17.67 Mbytes
Step Temp PotEng TotEng
0 300 -26475.106 -26160.577
10 300 -26509.079 -26194.55
20 300 -26511.229 -26196.7
30 300 -26512.711 -26198.183
40 300 -26513.644 -26199.116
50 300 -26515.063 -26200.534
60 300 -26516.35 -26201.821
70 300 -26517.26 -26202.731
80 300 -26517.963 -26203.434
90 300 -26518.626 -26204.097
100 300 -26519.628 -26205.099
110 300 -26520.426 -26205.897
120 300 -26521.24 -26206.712
130 300 -26522.418 -26207.889
140 300 -26523.16 -26208.631
150 300 -26524.143 -26209.614
160 300 -26524.92 -26210.392
170 300 -26525.665 -26211.136
180 300 -26526.809 -26212.28
190 300 -26526.843 -26212.314
200 300 -26526.845 -26212.317
210 300 -26526.846 -26212.317
220 300 -26526.846 -26212.317
230 300 -26526.846 -26212.317
240 300 -26526.846 -26212.317
250 300 -26526.846 -26212.317
258 300 -26526.846 -26212.317
Loop time of 169.956 on 1 procs for 258 steps with 8112 atoms

96.9% CPU use with 1 MPI tasks x no OpenMP threads

Minimization stats:
Stopping criterion = max force evaluations
Energy initial, next-to-last, final =
-26475.1055741 -26526.846123 -26526.8461231
Force two-norm initial, final = 11.917 3.32555
Force max component initial, final = 0.349315 0.443255
Final line search alpha, max atom move = 0.0564009 0.025
Iterations, force evaluations = 258 1003

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

Pair | 169.34 | 169.34 | 169.34 | 0.0 | 99.64
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.0043665 | 0.0043665 | 0.0043665 | 0.0 | 0.00
Output | 0.0067887 | 0.0067887 | 0.0067887 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.6088 | | | 0.36

Nlocal: 8112 ave 8112 max 8112 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 242122 ave 242122 max 242122 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 484244 ave 484244 max 484244 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 484244
Ave neighs/atom = 59.6948
Neighbor list builds = 0
Dangerous builds = 0
Setting up Verlet run …
Unit style : metal
Current step : 0
Time step : 0.001
Per MPI rank memory allocation (min/avg/max) = 18.47 | 18.47 | 18.47 Mbytes
Step Temp PotEng TotEng
0 300 -26526.846 -26212.317
1000 199.05683 -26420.984 -26212.287
2000 224.73523 -26447.913 -26212.294
3000 233.83423 -26457.456 -26212.298

Sun, 25 Jul 2021 at 23:40, a.pramos <[email protected]>:

I see no big energy changes, so I assume your system is behaving like a solid and everything is going as planned, but check the dump files. If you want to start your actual simulation at 300 K, you can let the system rest at 300 K for a while under an NPT or NVT ensemble (depending on your objective) after the minimization.
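As a sketch, such a rest period after minimization could look like this; the velocity seed, damping constants, and run length are illustrative and assume metal units with a 0.001 ps timestep:

```
# give the atoms initial velocities corresponding to 300 K
velocity all create 300.0 482748 mom yes rot yes dist gaussian

# let the system equilibrate at 300 K and zero pressure for 10 ps
fix eq all npt temp 300.0 300.0 0.1 iso 0.0 0.0 1.0
run 10000
unfix eq

# then continue the production run, e.g. with fix nve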

I cannot figure out why the plate is cooling and then heating. It should not change its temperature so much if nothing acts on it.

Mon, 26 Jul 2021 at 00:05, a.pramos <[email protected]>:

It is normal that some quantities undergo relatively big changes when you change the ensemble, conditions, and so on. Expect something similar, to a greater or lesser degree, in most simulations you perform. That's why it is sometimes better to let the system relax with a given ensemble after the minimization.

it is called equilibration. you really need some tutoring in basic MD simulation methodology and best practices.
your recent emails demonstrate how many mistakes one can make without proper training and an understanding of the basic principles.

a) if you want to study transfer of heat, you best avoid any kind of irrelevant boundary and surface effects. that means, you’ll be best off using periodic boundaries, not walls
b) if you create a system geometry, you must make certain that you don't create overlapping positions, and that includes overlaps across periodic boundaries. when you define your box geometry to include lattice points, it is almost guaranteed that you get such close contacts, causing extremely high forces that will catapult atoms through the system at extremely high velocity. this can be avoided either by defining the box and region boundaries with a small offset or by redefining the lattice with a shifted origin
c) when you create a geometry and want the optimal positions for a given material, its potential and parameters and the conditions you want to study, you best use a geometry that represents the equilibrium at those conditions. this can be determined by doing a simulation of the bulk system with variable cell (fix npt) at the desired temperature and then use the averaged lattice constant once the system has equilibrated.
d) when you create initial positions they are likely not at the minimum of the potential energy, so even without adding some kinetic energy through the velocity command, the system will try to equilibrate which means that some of the potential energy will be converted to kinetic energy. if you create a geometry with high potential energy, there is going to be correspondingly more kinetic energy created while the system equilibrates
e) when, on the other hand, you run a minimization (and including relaxing of the box dimensions), then you are removing potential energy from your system. thus if you add kinetic energy with the velocity command, some of it will be converted to potential energy during equilibration.
f) when you have a 2d system (a plate), it makes no sense at all to do an isotropic box relaxation. you only want to relax the directions in which the plate is periodic.
g) when you want to study heat transfer, you ideally want to have a near-infinite bulk with a surface, then your second material. however, this is not easily possible with the time and length scales available. thus you use a periodic system with a sufficiently thick slab, so you can attach a (dissipative?) thermostat to the "core" of the metal plate and thus mimic the behavior of a much bigger slab where kinetic energy can be exchanged with the bulk. how to proceed from there depends on the method you choose to determine the heat exchange.
h) keep in mind that when you have relaxed your system to the bulk lattice at the desired temperature, you will still have relaxation (and possibly reconstruction) of the layers at the surface. thus even if you would theoretically have perfect positions, you will still see some excess kinetic energy generated because of that. starting with a minimization would be counterproductive in this case, as it removes potential energy, and thus re-equilibrates while consuming some of the kinetic energy added. in short, it is extremely difficult to predict the exact amount of kinetic energy to be added, and thus it is common practice to run for a while with a thermostat (and LAMMPS offers a large variety of those) until the system is in equilibrium.
i) when a system is in equilibrium and has the proper simulation settings, continuing from the equilibration run with a thermostat with just fix nve should preserve the (average) temperature and total energy. if not, then either the system is not fully equilibrated or the simulation settings are such that the necessary energy conservation is not given.
j) there are more points to discuss about accurately measuring heat flux, but it is pointless to discuss those for as long as you have not managed to set up a meaningful simulation of just your base system that is maintaining the expected geometry, has proper dimensions and sizes and can conserve energy to a sufficient degree.
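Points (b), (c), and (f) above can be sketched in a LAMMPS input like the following. This is only an illustration: the pair style name depends on your LAMMPS version (meam vs. meam/c), and the lattice constant, region size, seed, damping, and run length are all placeholder values to be adapted:

```
units        metal
boundary     p p p

# point (b): shift the lattice origin so no lattice point falls exactly
# on a periodic box boundary and creates overlapping atoms
lattice      fcc 3.615 origin 0.25 0.25 0.25
region       box block 0 10 0 10 0 10
create_box   1 box
create_atoms 1 box

pair_style   meam/c
pair_coeff   * * library.meam Cu Cu.meam Cu

# point (c): equilibrate the bulk with a variable cell at the target
# temperature; the time-averaged box length then gives the 300 K
# lattice constant to use when building the slab
velocity     all create 300.0 87287 dist gaussian
fix          npt1 all npt temp 300.0 300.0 0.1 iso 0.0 0.0 1.0
run          20000

# point (f): for a plate periodic only in x and y, relax only those
# directions during minimization instead of relaxing isotropically:
# fix rel all box/relax x 0.0 y 0.0 vmax 0.001
```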

most of the points mentioned above have been discussed in some form or another, many of them multiple times, on this mailing list. more importantly, I would expect that somebody aiming to do an advanced study using MD methodology (and studying heat transfer qualifies as such) has learned those beforehand, most appropriately through tutoring by a competent person - usually the adviser or a senior/experienced colleague. expecting to learn this through the mailing list is not likely to work, since people usually don't have the time to study your inputs and monitor your actions at the level of detail necessary.

furthermore, you have so far only noticed problems because they were very obvious, caused by rather obvious mistakes or misconceptions. however, doing accurate studies with advanced MD simulations requires much more attention to detail, and you may get completely bogus results from simulations that appear to have run correctly. MD simulations are as much a craft as they are a science, and thus - like in any other craft - a certain type of apprenticeship and transfer of knowledge and experience (especially of the kind that cannot easily be written on paper) is required to be successful. otherwise you will see problems where there are none, or not notice when things go wrong.

HTH,
Axel.