Potential Bug in fix bond/create

Dear LAMMPS Users,

I recently built a gel system of LJ beads and let them crosslink using fix bond/create to produce a highly crosslinked polymer gel. However, I found an interesting result: the simulation box is divided into a lattice-like structure, and the total number of cells is exactly equal to the number of CPUs I use (please see the attached snapshots).

The running command I use is:

mpirun -np 36 lmp_ompi_g++ -sf gpu -pk gpu 4 -in system.in

My input script is:

----------------- Init Section -----------------

units lj
atom_style full
pair_style lj/cut 2.5
pair_modify shift yes
bond_style harmonic
boundary p p p
timestep 0.001
neighbor 2.5 bin
neigh_modify every 1 one 10000
special_bonds lj 0 1 1 extra 50

comm_modify cutoff 5

----------------- Atom Definition Section -----------------

read_data "system.data"
pair_coeff * * 1.5 1.0 2.5 #epsilon sigma cutoff 1KT KJ/mol in unit
bond_coeff * 30 1.0

----------------- Run Section -----------------

# equilibration

group monomer type 1 2 3 4 5 6

group ms type 1 2 3 4 5 6

fix 1 ms nve
fix 2 ms langevin 1.0 1.0 10.0 1234 #KT=2910

thermo 1

fix bl all balance 1000 1.1 shift z 100 1.1

min_style fire
minimize 1.0e-4 1.0e-6 1000 1000

timestep 0.005

reset_timestep 0

#M1 + M1 = M2 + M2
fix p1 all bond/create 1 1 1 1.12246 1 iparam 1 2 jparam 1 2 prob 1 2
#M2 + M1 = M3 + M2
fix p2 all bond/create 1 2 1 1.12246 1 iparam 1 3 jparam 1 2 prob 1 3
#M3 + M1 = M4 + M2
fix p3 all bond/create 1 3 1 1.12246 1 iparam 1 4 jparam 1 2 prob 1 4
#M4 + M1 = M5 + M2
fix p4 all bond/create 1 4 1 1.12246 1 iparam 1 5 jparam 1 2 prob 1 5

#====M2 + M2 = M3 + M3
fix p5 all bond/create 1 2 2 1.12246 1 iparam 1 3 jparam 1 3 prob 1 6
#M3 + M2 = M4 + M3
fix p6 all bond/create 1 3 2 1.12246 1 iparam 1 4 jparam 1 3 prob 1 7
#M4 + M2 = M5 + M3
fix p7 all bond/create 1 4 2 1.12246 1 iparam 1 5 jparam 1 3 prob 1 8

#====M3 + M3 = M4 + M4
fix p8 all bond/create 1 3 3 1.12246 1 iparam 1 4 jparam 1 4 prob 1 9
#M4 + M3 = M5 + M4
fix p9 all bond/create 1 4 3 1.12246 1 iparam 1 5 jparam 1 4 prob 1 10

#====M4 + M4 = M5 + M5
fix p10 all bond/create 1 4 4 1.12246 1 iparam 1 5 jparam 1 5 prob 1 11

dump myDump all atom 1 dump.atom

run 2
write_data final.data.*

bottom.jpg

front.jpg


do you get the same result when not using GPUs?

axel.

Hi Axel,

I tested the same system with CPUs only; the result looks the same as before (see the attachment).
The LAMMPS version I am using is 10Aug15.
Thanks.

Zilu

front-CPU.jpg


ok, this was a wild guess on my part: with the GPU command line flags you have to build neighbor lists for both the GPU and the CPU, and for the latter the pair cutoff would be ignored. i now see that you are using an extremely large (for lj units) neighbor list skin; that will slow things down needlessly. i highly recommend sticking to running on the CPU while using LAMMPS to build the system (you can switch back to the GPU once you have created the final interlinked system and written it out to a data file). that way you can keep a normal neighbor list skin and matters are simpler.
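
for reference, a minimal sketch of what i mean for the build stage (the skin value and neigh_modify settings below are only illustrative, not taken from your script):

mpirun -np 36 lmp_ompi_g++ -in system.in        # no -sf gpu / -pk gpu while crosslinking

# in the Init Section, instead of the very large skin:
neighbor 0.3 bin                                # a typical skin for lj units
neigh_modify every 1 delay 0 check yes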

without spending significant effort on debugging your input piece by piece (and having your data file at hand as well), it will be difficult to track this down. however, you should note that there have been some updates to the fix bond/create code in late october 2015, so before you try to entice somebody to take a closer look, you should verify that the issue persists with the latest patch level.

axel.

OK, I see. Let me try and dig out more information. Thank you Axel.

Tim Sirk (CCd) can probably comment on your strategy of using multiple fix bond/create commands. I think what you are saying is that the resulting connectivity of the bonded structure has artifacts from the processor boundaries. The multiple fix bond/create commands do not share info with each other, so it could be that this is causing bonds to form, or not form, at processor boundaries.

Steve

Hi Steve, Axel and Sirk,

I did another test, replacing the multiple fix bond/create commands with the single command below:

fix p1 all bond/create 1 1 1 1.12246 1 iparam 4 5 jparam 4 5 prob 1 2

Then I wrote a program to change each atom's type according to how many bonds it has.

The result looks reasonable, and no such processor-boundary pattern is observed (see the attached snapshot).

However, I still don't know what the difference is between the multiple fix bond/create commands and the single one mentioned above. I assumed that once a fix is done, all data would be synchronized across all processors, so the two setups should give the same results. If not, then something must be missing in the fix_bond_create code.

Am I right? Thanks.

Zilu Wang

Department of Polymer Science
University of Akron

single-cmd.jpg

Hi Zilu,

It looks like you want a set of coupled reactions. There are other examples of this on the mailing list. In my opinion, running multiple fix bond/create (or bond/break) commands that operate on the same atoms should be considered undefined behavior. Each of these commands tracks its bond counts independently, so bonding decisions based on a current bond count are potentially flawed. I'm not sure why there would be bond differences at the domain boundaries, but I wouldn't assume the bonding at the domain edges is the incorrect part; it could also be the other way around.
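
As a minimal sketch of what I mean (hypothetical atom types and limits, not taken from your script), each of the two fixes below can create bonds to type-1 atoms, but each one enforces its iparam/jparam limits using only its own bond count:

# hypothetical example: both fixes involve type-1 atoms
fix r1 all bond/create 1 1 1 1.12246 1 iparam 1 2 jparam 1 2
fix r2 all bond/create 1 1 2 1.12246 1 iparam 1 3 jparam 1 3
# each fix tracks the bonds it creates separately, so the maxbond value of 1
# in iparam/jparam is enforced per fix, not across the two fixes combined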

The first thing I would check, from a serial run, is the number of new bonds and the number of new bonds per atom in your final structure. These might be higher than you expect, given your iparam/jparam maxbond settings.
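
Something along these lines would do as a sketch, using the single fix p1 from your test (if I remember correctly, fix bond/create exposes its created-bond counts as a 2-element global vector):

mpirun -np 1 lmp_ompi_g++ -in system.in          # same input, single process

# added to the Run Section after fix p1 is defined:
thermo_style custom step atoms f_p1[1] f_p1[2]   # bonds created on the last creation step / cumulative total
write_data serial_check.data                     # then count bonds per atom in this file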

Tim