Suitable neighbor setting in confined box


Thanks for the quick reply.

I'm simulating a very long polymer (my goal is an N=2000 chain, with the box
extended only in the x direction: 2200(X) x 20(Y) x 20(Z)) in a channel with
long-range electrostatic interactions.

This raises more questions than it answers:

Let me try to make it clearer.
This input already runs with the nsq neighbor style, and I am trying to
optimize it further.

What does N=2000 mean? 2000 atoms in total? 2000 atoms per polymer? How
many polymers are in your system? Are they aligned in parallel or
entangled? What does "long range electrostatics" mean in this context, just
a very long cutoff?

Just 1 polymer, and with a very long cutoff.
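Whether the nsq neighbor style or the default binned style is faster for a
single chain with such a long cutoff is hard to guess up front; it is the
kind of thing a short timing run can settle. A minimal comparison, assuming
everything else in the input stays the same, would be to swap in

neighbor 0.4 bin
neigh_modify delay 0 every 1 check yes

and compare the Pair and Neigh entries in the timing breakdown against the
nsq run.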

Any suggestion to boost the performance is welcome.
Thanks.

This is my input:
(pair_style CGlongrange 18.20 4.3186 19.53 is a custom pair style I have
added, which needs a cutoff of around 19.53.)

#Semi-flexible in low ionic strength
#Use FENE beadspring

units lj
atom_style full
special_bonds fene
boundary p s s

Like I mentioned, if this is a 1-d system, you can just use fixed
boundaries.

This is still a 3d system; thanks for your suggestion.
After checking the docs, I think I should use "p f f" instead.
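With the box made periodic only along the channel axis, the reflecting
walls can then be tied to the (now fixed) box faces instead of hard-coded
coordinates. A rough sketch of how those lines might look, assuming the box
already spans 0 to 20 in y and z:

boundary p f f
fix ywall mobile wall/reflect ylo EDGE yhi EDGE
fix zwall mobile wall/reflect zlo EDGE zhi EDGE

With fixed boundaries LAMMPS treats atoms that leave the box as lost, so
the walls need to sit at or inside the box faces.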

bond_style fene
angle_style cosine

read_data system.data

bond_coeff 1 30.0 1.5 1.0 1.0
angle_coeff 1 4

atom_modify sort 0 100
neighbor 0.4 nsq
neigh_modify every 1 delay 1

#1:monomer, 2:Post
pair_style CGlongrange 18.20 4.3186 19.53

Because you use a custom pair style, nobody can debug it.
Try to reproduce the same issue with lj/cut and somebody may take a closer
look.

It is very similar to pair_style coul/debye.
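If the custom style really behaves like a screened Coulomb interaction,
the stock style can serve as a stand-in for benchmarking, so others can
reproduce the neighbor-list behaviour without the custom code. A sketch,
assuming charges are defined in the data file and guessing that the 4.3186
in the custom style is a Debye length (so kappa would be roughly 1/4.3186,
about 0.23; that mapping is an assumption, not something stated here):

pair_style coul/debye 0.23 19.53
pair_coeff * *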

pair_coeff 1 1

pair_modify shift yes

#group
group mobile type 1

fix 1 mobile nve
fix 2 mobile langevin 1.0 1.0 10.0 2501
fix ywall mobile wall/reflect ylo 0.0 yhi 20
fix zwall mobile wall/reflect zlo 0.0 zhi 20

I don't understand the point of using shrink-wrapped boundaries with walls.
If you want the polymer to only move in one dimension, just set the forces
and velocities for the other dimensions to zero. You may not want to use
fix langevin as is, but rather a customized version of it, or write
something similar to fix enforce2d that just acts in 1d.
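The "zero the forces and velocities" part can be done with stock commands.
A sketch (the fix ID hold1d is just an illustrative name; NULL leaves the x
component untouched, and the setforce fix should be defined after fix
langevin so the thermostat's random kicks in y and z are removed as well):

velocity mobile set NULL 0.0 0.0
fix hold1d mobile setforce NULL 0.0 0.0

Note that the thermostat then only acts along x, so the default
three-dimensional temperature will read low unless the temperature compute
is adjusted accordingly.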

Your system is rather small, and I don't quite see where you are running
into performance bottlenecks.

Before worrying too much about it, do some testing and profiling.

I have tested this code: it would take about 12 days for an N=640 single
chain if I use OpenMP with 8 threads. I am trying to go roughly 3 times
larger, so I guess I am running into a performance bottleneck.

Thanks.

Just doing one test run like you describe is not even remotely close to
proper performance testing/profiling. Even if you had implemented OpenMP
support in your custom pair style, I doubt that it would scale well to that
many threads for your kind of system.

But a simple look at the timing output will tell you where all the time is
spent.
For demonstration purposes, I just set up a simple box with 640 atoms in a
linear chain, each 2.5 sigma apart, to get a box of length 1600 sigma (by
20 by 20 sigma) with fixed boundaries and reflecting walls.
I just use a simple lj/cut potential with a cutoff of 30.0 sigma.
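The chain.data file itself is not shown; as a rough sketch, an equivalent
640-atom starting configuration could also be built directly in LAMMPS
with create_atoms instead of read_data. The lattice value 0.064 below is
my own choice, not taken from the message: in lj units the lattice
argument is a reduced density, and (1/0.064)^(1/3) = 2.5 reproduces the
2.5 sigma spacing described above.

units lj
boundary p f f

lattice sc 0.064
region box block 0 1600 0 20 0 20 units box
create_box 1 box
# pick out a single row of lattice points at y = 10, z = 10
region chainline block 0 1599 9 11 9 11 units box
create_atoms 1 region chainline
mass 1 1.0

With that in place, the read_data line in the input below would simply be
replaced by these commands.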

Here is the input:
units lj
boundary p f f

read_data chain.data

pair_style lj/cut 30.0
pair_coeff * * 1.0 1.0

neighbor 0.4 nsq
neigh_modify delay 0 every 1 check yes

fix f1 all nve
fix f2 all langevin 1.0 1.0 10.0 23123
fix f3 all wall/reflect ylo EDGE yhi EDGE
fix f4 all wall/reflect zlo EDGE zhi EDGE

timestep 0.01

run 100000

Now when I run this on my (5 year old!) desktop with 1 processor, I get the
following performance data:

Loop time of 36.9152 on 1 procs for 100000 steps with 640 atoms
100.1% CPU use with 1 MPI tasks x 1 OpenMP threads

MPI task timings breakdown:
Section | min time | avg time | max time |%varavg| %total