Simulation gets killed due to huge memory demand when using wall/srd with a mixture of SRD and colloidal particles

Dear experts and my fellow LAMMPS users,
I am simulating a pseudo-2d system composed of an SRD fluid and colloidal particles. The mixture is confined in the z direction by walls. I do not understand why my simulations get killed if I use wall/srd to model the walls. I am pasting here a minimal working input deck:

# 3d confined PBPs and fluid
# with WCA potential

variable 	rand equal ceil(random(1423,8674437,756349))
variable        temp string 1.0
units           lj
atom_style      sphere   
atom_modify	first big
dimension       3
boundary	p p f
#newton off

region          box block 0 128 0 128 0 12
region		plane block 0 128 0 128 6 6
create_box      2 box
create_atoms    1 random 65 ${rand} plane overlap 11
set             type  * diameter 10.0
set             type  * mass 75.0
group           big type 1
velocity        big create ${temp} ${rand} loop geom
velocity        big set NULL NULL 0.0

# more careful with neighbors since higher diffusion in abps
neighbor        1.0 bin
neigh_modify    every 1 delay 1 check yes

# WCA potential (purely repulsive)
pair_style hybrid/overlay lj/cut 4.489848 yukawa/colloid 1.4 35 colloid 35 #11.762734
pair_coeff 1 1 colloid 7.3 1 10 10 35 #11.762734
pair_coeff 1 1 yukawa/colloid 20 35 #11.762734
pair_coeff 1 2 lj/cut 1.0 4 4.489848
pair_coeff 2 2 lj/cut 0.0 1.0 
pair_modify shift yes

# overdamped brownian dynamics time-step
fix         langT all langevin ${temp} ${temp} $(100*dt) ${rand} #omega yes
fix		z0 big setforce NULL NULL 0.0
fix	 	step big nve

#equilibration
timestep        0.01
thermo          10000
run             500000
unfix 		langT
unfix 		step
unfix		z0

#inserting SRD
region          fluid block 0 128 0 128 0.1 11.9
create_atoms    2 random 3100 ${rand} fluid
set             type 2 mass 1.0
set             type 2 diameter 0.0

group           small type 2

velocity        small create 1.0 ${rand} loop geom

# delete overlaps
# must set 1-2 cutoff to non-zero value

delete_atoms    overlap 5 small big
write_dump 	all atom dump.atom

reset_timestep 	0
neighbor        0.1 multi
neigh_modify    delay 0 every 1 check yes

comm_modify     mode multi group big vel yes
neigh_modify    include big

timestep        0.01
fix             3 small srd 10 big ${temp} 2.0 ${rand} &
                  radius 0.88 shift yes ${rand} search 0.2 overlap yes collision noslip inside warn tstat yes

fix		zwalls small wall/srd zlo EDGE zhi EDGE
# main run
fix		z0 big setforce NULL NULL 0.0
#velocity        big set NULL NULL 0.0
fix             step big nve
run             100000
reset_timestep 	0
thermo          10000
dump            dump3 big custom 1000 pbp_prod.lammpstrj id type x y  
run		2000000

Can somebody point out the reason for this?

With what error message? And which LAMMPS version?

Actually, I could not retrieve any error message. My workstation stalls, and after coming back to life the screen simply displays “BAD TERMINATION OF YOUR APPLICATION”. I am still trying to figure out the error. The LAMMPS version is 28 Mar 2023.

I ran your simulation using LAMMPS (7 Feb 2024 - Update 1) and found a huge increase in memory usage at the point where the simulation creates the SRD particles:

[screenshot of memory usage omitted]
From that point onwards, the simulation uses almost 60 GB of RAM on my system with a single processor. That is probably why yours gets killed.
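
If it helps to see where the memory goes, LAMMPS can also report its own allocation. A minimal sketch of what could be dropped into the input right after the create_atoms command for the SRD particles (the info command and the run-setup header are the two places I know of that print this; I have not checked the exact output format in your version):

# sketch: print LAMMPS' own view of its memory usage once the SRD particles exist
info            memory
# the setup of any run (even a zero-step one) also prints a
# "Per MPI rank memory allocation (min/avg/max)" line
run             0 post no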


I can now see the memory issue, but why does it happen? Is it related to neighbor lists? Without the walls there are no such memory issues, so creating the SRD particles cannot by itself be the problem.

Switching to a wall/reflect style, setting a processors * * 1 grid, and running on 4 cores decreases the memory usage to 20 GB.
Also, on the main run, you may want to apply the NVE integrator to all particles, instead of just the group big.
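For reference, a rough sketch of the wall and processor-grid changes I made (note that the processors command has to appear before the simulation box is created):

# sketch of the test described above: reflecting walls instead of wall/srd,
# plus a 2d processor decomposition so the thin z slab is not split
processors      * * 1
fix             zwalls small wall/reflect zlo EDGE zhi EDGE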
Here is the breakdown of time after a short run with 2000 steps:

Loop time of 74.4415 on 4 procs for 2000 steps with 2622 atoms

Performance: 23212.874 tau/day, 26.867 timesteps/s, 70.445 katom-step/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.028419   | 0.030002   | 0.033324   |   1.1 |  0.04
Neigh   | 0.018435   | 0.019085   | 0.019639   |   0.4 |  0.03
Comm    | 0.044021   | 0.047419   | 0.049811   |   1.1 |  0.06
Output  | 0.0024935  | 0.0025975  | 0.0027012  |   0.2 |  0.00
Modify  | 74.336     | 74.337     | 74.338     |   0.0 | 99.86
Other   |            | 0.00583    |            |       |  0.01

I have to impose no-slip boundary conditions on my SRD particles.

My main run corresponds to the SRD+colloid mixture, where the colloids need an integrator for their position updates while the small SRD particles are updated by fix srd itself.
I think I need to learn more about how the walls are implemented here! Why the memory demand explodes just by including the walls is something I do not understand at the moment.

Thanks for the clarification. I am by no means an expert on granular simulations, but I am happy to learn something about them. My point here is not to tinker with your simulation, but to understand what causes the problems you reported. From what I see:

  1. The wall/srd fix significantly increases the memory consumption.
  2. The breakdown of time usage shows that the Modify section (i.e. the fixes) is eating almost all of the computing time.

The rest is up to you, I am afraid.

Many thanks @hothello for your genuine efforts. I posted this problem in haste, hoping to get some direct help. Thank you for pointing out the stats; I will now try to solve this problem.

Well, I am explicitly including hydrodynamics in my colloidal dispersion using the SRD technique. I am trying to model an experimental observation on a colloidal dispersion in an aqueous solution.

Thanks for your help.
