reax/c running on HPC

Dear all,

I have a problem using the reax/c potential on an HPC cluster. When I run my simulation on one node it works without any error, but when I increase the number of nodes I receive this error:

p39: not enough space for bonds! total=231336 allocated=230956
application called MPI_Abort(MPI_COMM_WORLD, -14) - process 39
/apps/rrze/bin/mpirun_rrze-intelmpd: line 679: 10241 Segmentation fault ${I_MPI_ROOT}/bin64/mpiexec -configfile $TMPDIR/mpiexec-cfg.$$

This HPC has 560 compute nodes, each with two Xeon 2660v2 "Ivy Bridge" chips (10 cores per chip + SMT) running at 2.2 GHz, with 25 MB shared cache per chip and 64 GB of RAM. The LAMMPS version is 31Mar17. It was compiled with the Intel compiler (version 17.0.2), with GCC 4.7 available in the background.

How can I solve this problem? I would really appreciate any comments.


What is your system size, how many cores per node, and how many nodes are you running on?


Unless you tell us exactly how you run it on multiple nodes, I don’t think anyone can help you. Also, your HPC administrator might be a better person to talk to than the mailing list.

Dear Stefan,

With USER-OMP and USER-INTEL installed, I run my simulation on the HPC like this:

module load intel64/17.0up05

N=2    # number of nodes
NP=80  # total number of cores
NPN=40 # number of cores per node

mpirun_rrze $LMP -sf omp -pk omp 1 -in $PARAM
mpirun_rrze -np $NP -npernode $NPN $LMP -sf intel -in $PARAM

If I exclude USER-OMP and USER-INTEL, this is the command for running LAMMPS:

module load intel64/17.0up05

mpirun $LMP -in $PARAM

I have also attached the Makefile for the LAMMPS build with USER-OMP and USER-INTEL.



Makefile.intel_cpu_intelmpi (3.63 KB)

You could try the Kokkos version of ReaxFF; it is more robust and works in places where the USER-REAXC code fails.
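For reference, a minimal sketch of such a run, assuming a LAMMPS binary built with the KOKKOS package; the rank/thread counts are taken from the commands above, and the -k/-sf switches follow the LAMMPS command-line options for enabling Kokkos with one OpenMP thread per MPI rank:

mpirun -np 80 -npernode 40 $LMP -k on t 1 -sf kk -in $PARAM   # -sf kk substitutes the Kokkos variants of supported styles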


FYI the USER-INTEL package doesn’t have support for ReaxFF yet, so I’d just use USER-OMP alone.
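A USER-OMP-only invocation could then look like the following sketch, combining the two command lines shown earlier in the thread (rank counts and the one-thread-per-rank setting are assumptions carried over from there):

mpirun -np 80 -npernode 40 $LMP -sf omp -pk omp 1 -in $PARAM   # pure MPI: 1 OpenMP thread per rank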