Possible bug when using tip4p together with verlet/split

Hi,

Here I want to report a problem (possibly a bug) in using tip4p together with verlet/split in LAMMPS version 24 May 2013 (I've also tried 11Jul13, and it behaves the same).
Before showing the input file (listed below, with the coordinate file in the attachment), I'll describe the problem in general.
The system is a water droplet on graphene. With a Dreiding-like force field and a rigid water model, I tried the following three setups:

  1. tip3p for water, with verlet/split.
  2. tip4p for water, with just verlet
  3. tip4p for water, with verlet/split.

using the following run command:
mpirun /apps/lammps/24May13/lmp_openmpi -screen scr.log -in in -reorder nth 4 -partition 4 4

(for the run with run_style verlet, the following command was used:
mpirun /apps/lammps/24May13/lmp_openmpi -screen scr.log -in in )

While the first two setups ran fine, for the last one I got the following error in scr.log.1 after the minimization finished:

ERROR on proc 0: TIP4P hydrogen has incorrect atom type (…/pppm_tip4p.cpp:490)

I think this is probably a bug, as the first two setups work well.

Here is the input file for tip4p water with verlet/split; the settings for the first two setups are included as comments:

# water between two graphite layers: looking for the minimum pressure situation

# System and potential definition

units metal
boundary p p p
atom_style full
processors * * * part 1 2 multiple
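# couple the two partition grids for verlet/split (each dimension of
# partition 1's grid is a multiple of partition 2's)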

# the above line wasn't used when run without partition

read_data ./water-filled-filtered

pair_style lj/cut/tip4p/long 1 2 2 2 0.1546 10.0
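# arguments above: O type, H type, O-H bond type, H-O-H angle type, O-M site distance, cutoff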

# for tip3p:
# pair_style lj/cut/coul/long 10.0

bond_style hybrid morse harmonic

# C-C bond

bond_coeff 1 morse 4.964 2.1867 1.418

# O-H bond

bond_coeff 2 harmonic 19.5138 0.9572

angle_style hybrid cosine/squared harmonic
angle_coeff 1 cosine/squared 2.9084 120.0
angle_coeff 2 harmonic 2.3850 104.52

dihedral_style charmm
dihedral_coeff 1 0.1302 2 180 0.0
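# arguments above: K (energy), n, d (degrees), 1-4 LJ weighting factor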

# pair_coeff for units metal

pair_coeff * * 0.0 0.0
pair_coeff 1 1 8.0338e-3 3.1589
pair_coeff 1 3 3.0e-3 4.01
pair_coeff 1 2 0.0 0.0
pair_coeff 2 2 0.0 0.0
pair_coeff 2 3 0.0 0.0
pair_coeff 3 3 4.21e-3 3.3611

special_bonds dreiding
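# equivalent to special_bonds lj/coul 0.0 0.0 1.0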

mass 1 15.9994
mass 2 1.0080
mass 3 12.0

kspace_style pppm/tip4p 1.0e-5

# for tip3p:
# kspace_style pppm 1.0e-5

neighbor 2.0 bin

# global group definition

group gra type 3
group oxygen type 1
group hydrogen type 2
group flow type 1 2

# global output definition

compute tmpall all temp
compute flow_temp flow temp
compute gra_temp gra temp
compute flow_comtemp flow temp/com

# variable definitions

# nowm : number of water molecules

variable nowm equal count(oxygen)

# Set gra to be rigid

thermo_style custom step cpu etotal pe ke
thermo 2000
thermo_modify flush yes

# minimization

minimize 1.0e-15 1.0e-15 10000 10000

# equilibrate

velocity gra create 298.0 35281345 temp gra_temp
velocity flow create 298.0 35281345 temp flow_temp
fix 3 flow nvt temp 298.0 298.0 0.2
fix_modify 3 temp flow_comtemp
fix 4 gra nvt temp 298.0 298.0 0.2
fix_modify 4 temp gra_temp
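# the two thermostats above each use their own group temperature compute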
fix g1 gra momentum 1 linear 1 1 1
fix 31 flow shake 0.001 20 1000000 b 2 a 2
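# constrain the O-H bonds (bond type 2) and H-O-H angles (angle type 2) of the water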

dump 1 all atom 100 atom.lammpstrj
thermo 100

timestep 2.0e-3
run_style verlet/split

# for verlet:
# run_style verlet

run 200

Best
Ming

tip4p.rar (481 KB)

Hi,

Here I want to report a problem (possibly a bug) in using tip4p together
with verlet/split in LAMMPS version 24 May 2013 (I've also tried 11Jul13,
and it behaves the same).

the "bug" is not in verlet split, but in how you use it.

you first do a minimization (which creates class instances the regular
way) and *then* do an MD with verlet/split run style. this is not a
supported use and thus you get the error. this will also run your
minimization *replicated* on the two partitions, which is very likely
very inefficient.
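one way around this (just a sketch, not tested with your data; min.restart is
a placeholder name) would be to split the job into two inputs: minimize without
any partitions first, then start the verlet/split MD from the saved restart:

# input 1: minimization only, run without -partition
minimize 1.0e-15 1.0e-15 10000 10000
write_restart min.restart

# input 2: MD only, run with -partition 4 4
read_restart min.restart
# re-specify what is not stored in the restart
# (kspace_style, fixes, computes, dumps, thermo settings, ...)
run_style verlet/split
run 200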

axel.

BTW: it would have been a *really* big help if your input wasn't so convoluted.

i have one more question on your input. after visualizing the data
file, i am asking myself, why do you use kspace at all?

you could just set a cutoff of ~50 angstrom without using kspace at
all and have all coulomb interactions included *without* any spurious
contributions from periodic images.
if the graphene starts to become a problem, use a hybrid pair style
and neighbor list style multi.

that should allow you to run even faster.
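for illustration only (the 50 angstrom coulomb cutoff and the neighbor
settings below are assumptions; check them against your box size), the
pppm/tip4p and lj/cut/tip4p/long lines could be replaced by something like:

kspace_style none
pair_style   lj/cut/tip4p/cut 1 2 2 2 0.1546 10.0 50.0   # LJ cutoff 10, coulomb cutoff ~50
# if the long cutoff makes the graphene neighbor lists too expensive,
# switch to a hybrid pair style with a short cutoff for the C-C pairs and use:
# neighbor 2.0 multi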

axel.

Hi Axel,

Thanks a lot for all your professional suggestions; I really
appreciate them! I will try all your suggestions and send you my statistics
(including to the mailing list, as you suggested).

Best
Ming

Hi Axel,

Thanks a lot for all your professional suggestions; I really
appreciate them! I will try all your suggestions and send you my statistics
(including to the mailing list, as you suggested).

there is one more thing. if you want a *real* droplet, you must not
use periodic boundaries in z-direction, and you probably also want to
use processors * * 1
and perhaps fiddle with the balance command to optimize your load balance.
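a rough sketch (the balance settings are placeholders, and this assumes the
cutoff-only setup from above, since pppm would need a slab correction with a
non-periodic z boundary):

boundary   p p f                    # non-periodic z; must come before read_data
processors * * 1                    # also before read_data
balance    1.1 shift xy 10 1.05     # after the box exists; see the balance doc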

axel.

Hi Axel,

Thanks a lot for all your professional suggestions; I really
appreciate them! I will try all your suggestions and send you my statistics
(including to the mailing list, as you suggested).

i did some quick checks with your input and have some good and some
bad news for you.

the most efficient way to run your input "as is", i.e. with kspace, is
actually using PPPM/disp/omp and lj/long/tip4p/long/omp
using 6 threads per MPI task with 32 MPI tasks. the major problem is that
you have a lot of empty space and many particles without a charge, but
with pppm you have to go over all of them. with pppm/disp at least, you
can make the real space cutoff short. kudos to the excellent work of
rolf isele-holder and his student that contributed those styles.
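as a rough sketch of that route (the accuracy, cutoff, and thread counts are
placeholders; adapt them to your LAMMPS build and machine):

kspace_style pppm/disp/tip4p 1.0e-4
pair_style   lj/long/tip4p/long long long 1 2 2 2 0.1546 8.0
# run with the USER-OMP suffix and several OpenMP threads per MPI task, e.g.
# mpirun -np 32 .../lmp_openmpi -sf omp -in in   (with OMP_NUM_THREADS=6)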

using lj/cut/tip4p/cut seems to have problems with threads, and after
looking at the specifics of your input, it would be best to run it yet
differently, but the pair style that you need for that doesn't exist
(yet). i have run something similar in the past with SPC/E and that
would beat everything hands down.
https://www.youtube.com/watch?v=_iRo_UNHgrw
this was deliberately using a (sufficiently but not too long) cutoff
only, because i didn't want the periodic images of the droplet to
interact.

i cannot make any promises right now, but to do some realistic
testing, i would need to know what kind of machine you are running
your simulations on, and how many processors overall you can or want to
use at most. you also need to think very carefully about
what level of accuracy and what boundary conditions you want to model
for the graphene sheet. sticking to the regular LJ cutoff scheme may
not be accurate enough, as that usually depends on error cancellation
in dense systems. this doesn't hold for your system and that is one of
the applications of treating even LJ interactions with long-range
solvers. another question is how flexible you want the graphene sheet
to be. right now you keep it stretched due to the boundary conditions.
if that is in accord with the system that you want to model, you are
good. otherwise you need to think about how you include the fact that
sticking a water droplet on a "free" (small) piece of graphene can
induce a dent or a fold.