Minimising monolayer graphene with vacancy defects

Dear all,

For the past couple of days, I have been running MD simulations to calculate the Young’s modulus of graphene with various percentages of vacancy defects.

Prior to the deformation stage, I ran NVT followed by NPT. Although the fluctuations in the x and y dimensions (z was fixed) were within an acceptable range, all the structures relaxed to practically the same box lengths.

This surely is wrong, since previous studies have indicated that increasing the defect percentage should induce more ripples in the sheet, thereby reducing the x and y dimensions [1].

Yesterday, I thought that maybe I had to resort to the minimize command prior to the NPT stage, but this was of no use; I have tried the minimisation with and without the fix box/relax command and have also varied the dmax parameter. I have now run out of options, at least given my current understanding of the software.

The pertinent commands are:

# generate initial velocities and rescale to exactly 300 K
velocity all create 300 10248676 dist gaussian

run 0
velocity all scale 300

# let the minimizer also relax the in-plane box dimensions to zero stress
fix 1 all box/relax x 0.0 y 0.0 couple none vmax 0.01 nreset 100

min_style cg
min_modify dmax 2.0 line quadratic
minimize 0 0 1000 100000

# "NVT" stage: limited-displacement integration with a Langevin thermostat
fix 1 all nve/limit 0.1
fix 2 all langevin ${init_temp} 300 0.0006 123457

# NPT stage: thermostat plus independent zero-pressure barostatting of x and y
fix 1 all npt temp ${init_temp} 300 0.1 tchain 6 x ${p_x} 0.0 1.0 y ${p_y} 0.0 1.0 couple none pchain 6 drag 1.0 nreset 1
fix 2 all momentum 1 linear 1 1 1 rescale

The tested timesteps were 0.5 fs and 1 fs, which produced similar results, i.e. 201.4 Å and 198.6 Å for the x and y dimensions, respectively. The modelled systems had approximately 15,000 atoms and the AIREBO force field was used.

Regards,
Michael

In other words, how may I be sure that a graphene sheet with certain atoms missing is properly minimized if I am getting practically the same sheet size irrespective of how many atoms I’m deleting? More deleted atoms means more buckling in the sheet which theoretically corresponds to a decrease in the sheet’s length and width.
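One check I am considering is to monitor the out-of-plane rippling itself rather than only the box lengths. A rough sketch of what I have in mind is below; the compute and variable names are just illustrative:

# per-atom z coordinate and its mean
compute zc all property/atom z
compute zavg all reduce ave c_zc
# mean of z squared
variable zsq atom c_zc*c_zc
compute zsqavg all reduce ave v_zsq
# RMS out-of-plane displacement; abs() guards against tiny negative round-off
variable zrms equal sqrt(abs(c_zsqavg-c_zavg*c_zavg))
thermo_style custom step temp lx ly v_zrms

If the RMS z-displacement does not grow with the defect percentage either, that would at least tell me the sheets really are staying flat rather than the box simply not responding.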

Regards,
Michael

> In other words, how may I be sure that a graphene sheet with certain atoms
> missing is properly minimized if I am getting practically the same sheet
> size irrespective of how many atoms I'm deleting?

most likely you need to change your simulation setup and/or protocol.
it is a well-known property of minimizers that they do not guarantee
finding the global minimum but can get trapped in local minima,
especially on shallow potential hypersurfaces.

> More deleted atoms means more buckling in the sheet which theoretically corresponds to a decrease in the sheet's length and width.

but how can you be certain that this is not an activated process
and/or something only observed as an average at temperatures different
from 0 K (which is what a minimization corresponds to)?

it looks a lot like you are looking at your simulations from the
macroscopic perspective and disregarding how things may be different on
atomic length scales and typical time scales.

axel.

Dear Dr Kohlmeyer,

Thank you for your reply. I have tried not using the minimize command and went straight to NVT followed by NPT, but the sheet size still came out the same.

What I noticed was that when I used the minimize command after NVT + NPT, the system shrank at first under NPT but then returned to its original size prior to the second NPT. I analyzed the data as an average taken over 10 ps.
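For reference, the 10 ps averages of the box lengths were taken with a setup roughly like the following (the variable names are illustrative, and the step counts assume the 1 fs timestep, so 10000 steps span the 10 ps window):

# time-average the in-plane box lengths over a 10 ps window
variable xlen equal lx
variable ylen equal ly
fix boxavg all ave/time 100 100 10000 v_xlen v_ylen file box_avg.txt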

If my simulation protocol is incorrect, what are your suggestions for ensuring that the system is adequately minimized? Also, in response to your reply, should a minimization be carried out both before and after initializing the system at a certain temperature?

Regards,
Michael

you are missing my point. you are obsessing about small details, but
it looks to me that you are not getting the big picture right. under
these circumstances it is always best to take a step back and start
over.
thus i think that, rather than exploring on your own and making
assumptions that don't seem to be working, you should start by
*exactly* reproducing the simulated system and simulation protocol
described in the paper you were referencing, and observe and learn
from that. if you do not reproduce the results, you know that you are
not following the prescribed procedure closely enough.

axel.

Dear Dr Kohlmeyer,

Your reply was very insightful. I now believe that the reason my system is not responding to the varying defect percentages is that the minimizer finds the same local minimum each time and gets stuck there. Hence, a possible way forward would be to carry out a quenching procedure, as sketched below.
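A rough outline of the quench cycle I have in mind follows; the temperatures, damping constants and run lengths are placeholders and would need tuning:

# heat the sheet so it can escape the flat local minimum
fix heat all nvt temp 300.0 1200.0 0.1
run 50000
unfix heat

# cool back down to the target temperature
fix cool all nvt temp 1200.0 300.0 0.1
run 100000
unfix cool

# final relaxation of the atoms and the in-plane box dimensions
fix relax all box/relax x 0.0 y 0.0 couple none
minimize 1.0e-8 1.0e-10 10000 100000
unfix relax

The cycle could be repeated a few times and the lowest-energy configuration kept.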

Since my intention is to simulate bulk graphene rather than nanoribbons, I chose to follow a different methodology from the one described in the paper.

Regards,
Michael