[lammps-users] Low density problem - polymer - NPT - Dreiding

Dear all,

I have a problem with low density while running NPT on a polymer system.

I am running a small simulation of a bulk polymer using a 3D periodic cell. The system has 3 chains of 50 monomers each, for a total of 2256 atoms. The force field is Dreiding, with harmonic bonds, angles, torsions, and impropers, and a buck/coul/cut pair style for the van der Waals and Coulomb interactions.

Samples are created in C2 and equilibrated with a series of NVT compressions --> minimizations --> dynamics (i.e., I believe I have fairly stable samples). I stop when the sample reaches the density of bulk PMMA +15% (hence a high internal pressure), then switch to LAMMPS for the production NPT runs. At the bottom you will find the script with the simulation set-up, the first pressure cycle (a decompression to atmospheric pressure), and the first temperature cycle. This is followed by 2 more pressure cycles and a final T cycle.
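Schematically, the set-up and the first decompression stage look like this (this is not my actual script; the file name, cutoff, target pressures, and damping constants below are only placeholders):

    units           real                         # kcal/mol, angstroms, fs, atm
    atom_style      full
    pair_style      buck/coul/cut 10.0           # placeholder cutoff
    bond_style      harmonic
    angle_style     harmonic
    dihedral_style  harmonic
    improper_style  harmonic
    read_data       polymer.data                 # hypothetical data file name

    # first pressure cycle: decompress the over-pressurized sample to 1 atm
    fix             1 all npt temp 300.0 300.0 100.0 iso 25000.0 1.0 1000.0 drag 1.0
    thermo_style    custom step temp press density pe
    thermo          100
    run             50000
    unfix           1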

My question is: the density from the simulation is ~25% lower than expected across the whole pressure range. I should be getting 1.30 g/cc at 10,000 atm, but I only reach that value at 25,000 atm. The density at atmospheric pressure should be 1.15 g/cc; I get 0.8 g/cc.

I played with the Pdamp factor (from 10.0 to 5000.0) and with the drag factor (from 0.2 to 14!), and though I got some improvement, the density is still ~25% off. I recently tried resetting the velocities in every run and it helped: initially the density was dropping to 0.1 g/cc during the T cycles because of the uncontrolled expansion generated while decompressing; now it only goes down to 0.7 g/cc.
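Concretely, the velocity reset plus the barostat settings I am experimenting with look something like this at the start of each cycle (the temperature, seed, and damping values shown are placeholders, not my exact numbers):

    # re-initialize velocities at the start of the cycle
    velocity        all create 300.0 482748 dist gaussian
    # slower barostat coupling plus drag to damp the pressure/volume ringing
    fix             1 all npt temp 300.0 300.0 100.0 iso 1.0 1.0 5000.0 drag 2.0
    run             20000
    unfix           1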

I have triple-checked my FF parameters and they are right. I went back through the LAMMPS mailing-list archives to 2006 and found some tips, but nothing specifically about low density or problems with the buck potential. I am now running the same simulation in C2 in order to compare the energies and decide whether the problem is in the model or in the algorithm.

Funny thing: I created a coarse-grained FF based on the same Dreiding parameters, replaced the atoms in my sample with beads, and ran the same simulation using the coarse-grained model/FF. The densities are much closer to the experimental values.

Any help would be greatly appreciated.

CAS

Script:

Don't see any obvious problem with what you're doing.
The choice of Pdamp factor shouldn't affect your final
state point, just how long it takes to get there.

For a cutoff Coulomb potential, there's a big contribution
to the pressure from interactions beyond the cutoff. Are you
taking that into account in your target pressure or density?

You could also run NVE or NVT at the desired density
and see what the pressure is.
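Something like this, holding the box at the experimental density
(the group, temperature, and run length are placeholders):

    # fixed cell at the target density; just monitor the pressure
    fix             1 all nvt temp 300.0 300.0 100.0
    thermo_style    custom step temp press density pe
    thermo          100
    run             20000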

Steve

Thanks for the suggestions. I will look into that.

In fact, I am already running samples with a longer cutoff and with k-space (long-range) electrostatics, but I am not expecting big changes.
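For those runs I switch the pair interaction to the long-range variant, along these lines (the cutoff and k-space tolerance are placeholders):

    # longer real-space cutoff plus a k-space solver for the electrostatics
    pair_style      buck/coul/long 12.0
    kspace_style    pppm 1.0e-4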

I got my results from C2 and saw something interesting: both LAMMPS and C2 calculate the same kinetic energy (expected, since the temperatures are the same), but the potential energy calculated by LAMMPS is ~10x higher. Each of the components of the potential energy (bonds, angles, VdW, etc.) is ~10x higher in LAMMPS than in C2.

I also noticed that the C2 minimization is much more effective: C2 halves the potential energy in a hundred steps, while the LAMMPS minimization only lowers the energy by 10% (with my current convergence criteria). Could it be that I need to drastically tighten the minimization criteria in LAMMPS?
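By tightening the criteria I mean lowering the stopping tolerances of the minimize command, e.g. (the numbers are only illustrative, not my current settings):

    # tighter energy/force tolerances and larger iteration limits
    minimize        1.0e-6 1.0e-8 10000 100000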

Thanks again,
CAS

Re: the 10x potential energy - hmmm.... perhaps the two codes have
different conventions for specifying units and you picked the wrong
ones in LAMMPS.
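e.g., Dreiding-style parameters are usually tabulated in kcal/mol
and angstroms, which in LAMMPS corresponds to:

    units           real    # kcal/mol, angstroms, femtoseconds, atmospheres

a mismatch there (or parameters entered in the wrong units) would
scale all the potential-energy components by the same factor, which
matches the pattern you describe.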

Re: minimization - if your forces are off, everything is off...

cheers,
   axel.

Re: 10x factor - it does sound like a units issue - you'd need to
check that in the 2 codes.

Re: minimization - are you using the most current (fully patched)
LAMMPS? Some changes were made to the minimizer recently, though
they were for robustness more than for speed. The only parameter
(in the current version) that might help is min_modify dmax, which
will allow atoms to move farther per iteration. That might help if
your system has strong overlaps initially, but it won't matter at
the end when it's converging.
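i.e. something like this before the minimize command (0.3 is just
an example; the default dmax is 0.1):

    # allow larger per-atom displacements per minimizer iteration
    min_modify      dmax 0.3
    minimize        1.0e-4 1.0e-6 1000 10000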

Steve

Thanks for the suggestions.

I've found the problem: a wrong implementation of the torsions, impropers, and VdW terms. Let me explain for the record.

Torsions: Dreiding specifies the total torsion energy barrier for the central bond. You need to divide it by the number of dihedrals that share that bond. For example, in a torsion I-J-K-L where J and K are both C_3 (sp3 carbons), there are 3 x 3 = 9 such dihedrals, so the total barrier has to be divided by 9.
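In LAMMPS terms, with dihedral_style harmonic (E = K[1 + d*cos(n*phi)]), that works out to something like this for a C_3-C_3 central bond, assuming the nominal Dreiding barrier of 2.0 kcal/mol (check the value against your own FF table):

    # Dreiding C_3-C_3 torsion: total barrier V = 2.0 kcal/mol, n = 3, phi0 = 180 deg
    # E = (V/2)[1 + cos(3*phi)], and the barrier is shared by 3 x 3 = 9 dihedrals
    dihedral_style  harmonic
    dihedral_coeff  1  0.1111  1  3     # K = (2.0/2)/9 kcal/mol, d = +1, n = 3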

Impropers: the Dreiding angle coefficients are enough to take care of most of the inversions (impropers). I was using an inversion coefficient meant for planar centers, which generated energies ~350 times higher than necessary.
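For the record, the Dreiding inversion form corresponds to improper_style umbrella in LAMMPS; if you do keep it for genuinely planar centers, the coefficient line looks like this (the value below is only a placeholder to show the form, not a number from my input):

    # inversions only on planar centers (omega0 = 0); sp3 centers get no improper term
    improper_style  umbrella
    improper_coeff  1  40.0  0.0        # K in kcal/mol (placeholder), omega0 in degrees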

VdW: Dreiding has two VdW forms, LJ and exp-6. The latter also has two variants: one in which the third coefficient (Xi) ranges from 12.5 to 14, and a second in which the third coefficient is fixed at 12, so that the long-range interactions match the LJ form. I was using the first variant with no success; I then switched to the second and it is working.
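To spell out the mapping for the second variant: with the third coefficient fixed at 12, the exp-6 energy E = D0*[exp(12*(1 - r/R0)) - 2*(R0/r)^6] can be rewritten in the Buckingham form A*exp(-r/rho) - C/r^6 used by buck/coul/cut, with A = D0*e^12, rho = R0/12, and C = 2*D0*R0^6. For example, using the nominal Dreiding carbon values D0 = 0.0951 kcal/mol and R0 = 3.8983 A (check them against your own table; the cutoff below is a placeholder):

    # exp-6 with exponent 12, mapped onto the Buckingham form (values rounded)
    pair_style      buck/coul/cut 10.0
    pair_coeff      1 1  15478.0  0.3249  667.5    # A = D0*e^12, rho = R0/12, C = 2*D0*R0^6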

All of these errors generated higher energies, which led to lower densities. Now the energies and densities are right on.

CAS