Simulation gets stuck with ReaxFF

Dear LAMMPS users,

I’m using LAMMPS (23 Jun 2022) to simulate CVD on graphene. In my system there is a flat graphene layer onto which benzene molecules are deposited continuously. I submitted the job with 56 cores and, after 7500 steps, the calculation got stuck without generating any error. The top command shows the job is still running and the CPUs are fully occupied, but the log.lammps file stops updating.

I then tried running the job on a single core, and it completes successfully.

Thank you for any advice on solving the issue.

Here is my input file:

# input file
units			real
dimension		3
processors		* * *
boundary		p p p 
box 			tilt large


# read data
atom_style		full
read_data		layer.data 


# potential
pair_style		reax/c NULL
pair_coeff	 	* * ffield.reax.002.CHO C H

# md parameters
neighbor		2 bin
neigh_modify	every 10 delay 0 check yes	
fix             	qeq all qeq/reax 1 0.0 10.0 1e-6 reax/c


# outputs
thermo		100
thermo_style	custom step atoms temp pe press cpu cpuremain
dump			dump_traj all custom 100 md.lammpstrj id type x y z


minimize 		1.0e-4 1.0e-6 100 1000
velocity 		all create 300.0 48459 


fix			simu all nvt temp 1300 1300 10
molecule 		benenze benenze.txt

region 		slab	block 0 42.6 0 49.2 5 60
fix			depo all deposit 80 0 500 29423294 region slab near 1.5  mol benenze vz -0.02 -0.03

timestep		1	
run			40000
unfix			simu

Regards

Xia

The ReaxFF implementation in LAMMPS makes some assumptions about how systems behave during simulations. One of them is that the number of atoms, bonds, hydrogen bonds, angles, dihedrals, and so on per sub-domain does not change much over the course of a simulation. With the default domain decomposition for your kind of system this assumption does not hold when using many MPI processes, since the sub-domains above the graphene layer start out with few or no atoms and only fill up as molecules are deposited.

You can try to minimize the impact by making sure that there are no sub-domains without atoms initially. That can be done with a processors * * 1 setting, so there is no domain decomposition in the z-direction; this should also help with parallel efficiency and load imbalance. Please also note that the KOKKOS implementation of ReaxFF has a modified memory allocation algorithm that is more robust (KOKKOS can also be compiled for serial or OpenMP multi-threaded operation, so it does not require a GPU). Finally, the optional pair style keywords safezone, mincap, and minhbonds can be used to adjust the memory allocation heuristics. A sketch of these changes is given below.
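As an illustration only (the safezone, mincap, and minhbonds values below are placeholder assumptions, not tuned recommendations), the relevant lines of the input could look like this:

# avoid empty sub-domains: no domain decomposition along z
processors		* * 1

# relax the ReaxFF memory allocation heuristics (placeholder values)
pair_style		reax/c NULL safezone 1.5 mincap 100 minhbonds 50
pair_coeff	 	* * ffield.reax.002.CHO C H

If the binary was built with the KOKKOS package, the same input can be run through the KOKKOS version of ReaxFF by adding the -k on -sf kk command line switches.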

If you could use a different force field (AIREBO?), things would be much easier, though.
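In case you want to explore that route, here is a minimal, hypothetical sketch of the force field section with AIREBO instead of ReaxFF. The CH.airebo file ships with LAMMPS in the potentials directory and is parameterized for metal units, so the units, timestep, and deposit velocities would have to be converted, and the data file would need to match the new atom style; no charge equilibration fix is needed.

units			metal
atom_style		atomic
read_data		layer.data

# AIREBO with an LJ cutoff of 3.0 sigma, LJ and torsion terms enabled
pair_style		airebo 3.0 1 1
pair_coeff		* * CH.airebo C H
# no fix qeq/reax is needed, since AIREBO does not use charge equilibration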

Thank you for your kind and professional reply, it is of great help. I will follow your suggestions and try to solve the problem. :blush: