Molecular crystal relaxation

Dear GULP users and developers,

I’ve run into a problem with the relaxation of a molecular crystal structure (simultaneous optimisation of atomic positions and cell parameters) using the Dreiding force field.

Execution of my input (0.gin) with the keyword combination (“optimise molecule”) fails with the error
“!! ERROR : Coulomb subtraction within periodic molecules is not allowed”.

Relaxation with the same input but without the “molecule” keyword does run, but gives an unreasonable mess of atoms and broken bonds, probably caused by the inclusion of the 1-2/1-3 Coulomb terms.

I checked the connectivity and everything is fine: the atoms of different molecules are sufficiently far apart.
A single-point (“single molecule”) calculation fails with the same error
“!! ERROR : Coulomb subtraction within periodic molecules is not allowed”.

But when I perform the (“single”) calculation alone, the code runs successfully and the output structure matches what I intended.

This seems a bit strange, because I understand the Dreiding force field performs well on molecular crystals.

Thank you in advance for your help!

Best,
Alexey
0.gin (4.5 KB)

Hi Alexey,
The problem is that you put “molecule” as a keyword in your input file with a periodic molecule. If you look in the Dreiding library file you should see that the keyword is already specified there and should be “molmec”.
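As a sketch, the corrected keyword line would simply read (the comments are my reading of the two keywords; check the GULP manual for the exact definitions):

```
# "molecule" subtracts all intramolecular Coulomb terms, which GULP cannot
# do for a molecule that is periodic; "molmec" excludes only the 1-2/1-3
# terms in the molecular-mechanics style expected by the Dreiding library.
optimise molmec
```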
Regards,
Julian


Dear Prof. Gale,

Thank you for your answer and for GULP in general! 🙂

I am sorry for two more questions, but I could not find any information about them anywhere.

I have noticed unusual behaviour of the default optimisation algorithm in the parallel version of GULP (compiled with Intel Parallel Studio 2020.4).

When I run the same input file (constrained molecular crystal relaxation) on different numbers of cores, I get different results (energy and geometry).

The difference appears after 3–4 relaxation steps; for instance, here is a small comparison of the output from runs on 8 and 10 cores:

8 cores:
Cycle: 0 Energy: 220.895317 Gnorm: 0.158248 CPU: 0.531
Hessian calculated
Cycle: 1 Energy: 220.893199 Gnorm: 0.155718 CPU: 0.569
Hessian calculated
Cycle: 2 Energy: 220.807090 Gnorm: 0.173120 CPU: 0.626
Hessian calculated
Cycle: 3 Energy: 220.574387 Gnorm: 0.172520 CPU: 0.680
Hessian calculated
Cycle: 4 Energy: 220.232424 Gnorm: 0.116935 CPU: 0.727
Hessian calculated
Cycle: 5 Energy: 218.856032 Gnorm: 0.152991 CPU: 0.791

10 cores:
Cycle: 0 Energy: 220.895317 Gnorm: 0.158248 CPU: 0.252
Hessian calculated
Cycle: 1 Energy: 220.893199 Gnorm: 0.155718 CPU: 0.287
Hessian calculated
Cycle: 2 Energy: 220.807090 Gnorm: 0.173120 CPU: 0.336
Hessian calculated
Cycle: 3 Energy: 220.246853 Gnorm: 0.169928 CPU: 0.384
Hessian calculated
Cycle: 4 Energy: 220.036451 Gnorm: 0.144967 CPU: 0.422
Hessian calculated
Cycle: 5 Energy: 217.328074 Gnorm: 0.276808 CPU: 0.489

I can’t understand what happens after Cycle 2, because up to that step the energy and gnorm are identical in both cases.

Similar differences occur between the serial and parallel implementations, but there it seems logical, given the difference between the full 2D Hessian and the lower-triangular form.

May I ask what the possible reason is, and what the most reliable settings are for parallel calculations?

Keywords of my calculations:
optimise molmec rigid kcal
stepmx 0.1

My other question concerns rigid relaxation with the same input settings as above.
Very often when I perform such calculations, the optimisation appears to be converging to a certain energy value, but then the energy suddenly increases enormously and the optimiser struggles to escape from there.

Here is an example:
Cycle: 137 Energy: 210.087631 Gnorm: 0.003501 CPU: 19.464
Cycle: 138 Energy: 210.087504 Gnorm: 0.003393 CPU: 19.611
Cycle: 139 Energy: 210.087478 Gnorm: 0.003382 CPU: 19.733
Cycle: 140 Energy: 287842468.380507 Gnorm:************** CPU: 20.092
Hessian calculated
Cycle: 141 Energy: 286876014.965150 Gnorm:************** CPU: 20.364
Hessian calculated

Hessian calculated
Cycle: 152 Energy: 187110.248387 Gnorm: 507962.052378 CPU: 23.428
Hessian calculated
Cycle: 153 Energy: 3650.966662 Gnorm: 7442.229716 CPU: 23.818
Hessian calculated
Cycle: 154 Energy: 312.351900 Gnorm: 105.817730 CPU: 24.209
Hessian calculated
Cycle: 155 Energy: 247.417067 Gnorm: 19.156643 CPU: 24.425

Cycle: 241 Energy: 201.442604 Gnorm: 0.299935 CPU: 38.733
Cycle: 242 Energy: 200.534914 Gnorm: 0.194973 CPU: 38.908
Cycle: 243 Energy: 200.046791 Gnorm: 0.140701 CPU: 39.034

I tested this with a series of different input crystal geometries for the same compound, and very often the optimisation gets stuck in such a “high-energy” configuration.
Sometimes the method can jump away from it.
In general, it is related to overlapping molecules, but I thought that by choosing rigid-body relaxation the program would avoid such trapping. The input geometries are far from overlapping.

Could you give some insight into how to avoid this?

Sorry again for the long message.
Thank you in advance!

Best,
Alexey

Hi Alexey,

The first part of your message is easy to address. It’s important to remember that what matters in a minimisation is that the end point is the same regardless of how you run it; there is no special significance to the pathway that gets you there. The reason the pathway can vary is that during minimisation one atom has to be fixed due to translational invariance, and depending on which one is chosen the sequence of energies and forces will differ. In serial, GULP fixes the first atom by default, whereas in parallel it was easier to fix the last atom. If you want the same behaviour in both, just add the following to your input:

fix_atom last

For the second part, it’s hard to know what is going on without having your input files. Usually when this kind of thing happens it means you have a discontinuous energy surface (e.g. due to potential cutoffs). Minimisers assume that the PES is smooth and continuous, so if you give them something that isn’t, they will struggle and may get stuck when a line search jumps over the discontinuity. Make sure you use tapers to smooth any cutoffs, etc.
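As a hedged sketch (the option name and taper syntax below are from memory of the GULP manual; verify the exact form for your GULP version and potential types), a polynomial taper over the last 2 Å of a 12 Å potential cutoff might look like:

```
# taper the interatomic-potential cutoff so the energy goes smoothly
# to zero instead of being truncated abruptly at r = 12 Angstrom
cutp 12.0 poly 2.0
```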
Regards,
Julian


Dear Julian,

Thank you a lot!

Yes, I thought fix_atom was set automatically. Now it works.
By the way, fix_atom is not applied if the flags for the geometric variables are set explicitly, for example (1 1 0), and then the code behaviour still differs with the number of CPUs used.

Regarding the taper: yes, I had set improper values. Now it works perfectly!

Regards,
Alexey

Dear Alexey,
Glad that things are better now. If the user sets the flags manually then the fix_atom option doesn’t apply, as it handles GULP’s choice of flags with conv/conp. If there are different results in parallel (i.e. the final energy is different), then if you post the input I can take a look. As mentioned before, if it’s just the pathway to the same value that changes, that’s not really an issue.
Regards,
Julian

Dear Julian,

I would be very glad if you looked at the files.
Here is my input file. To be clear, I have also attached my results showing how the final optimisation step, as well as the geometry (unit-cell volume), differs as the number of cores (-np) changes.

Best regards,
Alexey

input (2.9 KB)

results (4.7 KB)

Dear Alexey,
I’ve had a quick look, and I think the issue may lie in your system itself. There seem to be a lot of instabilities (imaginary modes) and soft modes, which make the convergence behaviour odd. From what I see, the runs track each other for several cycles regardless of the number of cores and then start to deviate due to the numerics in the line search. A red flag is that in some runs, even at the end, the gnorm rises slightly from the last cycle to the final point (even though the energy has gone down and the gnorm is small enough to satisfy the default tolerances). This suggests the energy surface is not well behaved in the region being probed. If you add the “rfo” keyword, which is designed to check the energy surface and handles soft modes better, then I find that all my runs go to the same answer regardless of the number of cores, and the energy is the lowest value that you previously found (for 17 cores). That said, the system still has several imaginary phonons at gamma, and so the symmetry needs to be broken and the structure re-minimised to find the true minimum.
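Combined with the keyword line quoted earlier in the thread, that would give (the stepmx value is simply kept from the earlier post):

```
# original keywords plus "rfo" to better handle the soft modes
optimise molmec rigid kcal rfo
stepmx 0.1
```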
Hope that goes some way to clearing things up.
Regards,
Julian


Dear Julian,

Thank you so much for your help and explanation!
I’ve already tested it with different systems, and it works just as you described.
Yes, to some extent such behaviour can be expected.
In any case, it’s more than enough for my purposes. I would never have understood how to do this properly without you.
Thank you a lot again!

Best regards,
Alexey

Dear Alexey,

Glad that works for you. Just a further note: if you have a system where the gnorm is initially large, it’s best to switch to “rfo” later in the optimisation, which can be achieved with an option like:

switch rfo gnorm 0.1

rather than adding the keyword.
Best regards,
Julian