[lammps-users] No output after setting up

Hello All,

I am trying to run MD of electrolytes interacting with a cathode using ReaxFF. I created the input data file by converting a POSCAR relaxed with VASP. LAMMPS runs fine until it prints the following, after which it produces no further output.

Neighbor list info …
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 12
ghost atom cutoff = 12
binsize = 6, bins = 2 3 7
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair reax/c/omp, perpetual
attributes: half, newton off, ghost, omp
pair build: half/bin/newtoff/ghost/omp
stencil: half/ghost/bin/3d/newtoff
bin: standard
(2) fix qeq/reax/omp, perpetual, copy from (1)
attributes: half, newton off, ghost
pair build: copy
stencil: none
bin: none
Setting up cg style minimization …
Unit style : real
Current step : 0

I have attached the input. I have also attached a stack trace, as suggested in one of the earlier posts; however, I do not know how to interpret this information myself. I tried LAMMPS (30 Jun 2020), LAMMPS (24 Dec 2020), and the recent Windows GPU binary, and got the same result every time.

(gdb) where
#0 0x00002aaaaad2503f in ompi_request_default_wait () from /cm/shared/apps/openmpi/intel/4.0.2/lib/libmpi.so.40
#1 0x00002aaaaad69891 in PMPI_Wait () from /cm/shared/apps/openmpi/intel/4.0.2/lib/libmpi.so.40
#2 0x00000000005aa4c2 in LAMMPS_NS::CommBrick::reverse_comm (this=0xe4d580) at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/comm_brick.cpp:541
#3 0x000000000049a18f in LAMMPS_NS::Min::setup(int) () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/min.cpp:319
#4 0x000000000049bb6c in LAMMPS_NS::Minimize::command(int, char**) () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/minimize.cpp:57
#5 0x0000000000455076 in LAMMPS_NS::Input::command_creator<LAMMPS_NS::Minimize> (lmp=, narg=4, arg=0xe7c200)
at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:871
#6 0x00000000004531f6 in LAMMPS_NS::Input::execute_command() () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:857
#7 0x00000000004538eb in LAMMPS_NS::Input::file() () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:230
#8 0x000000000044bf26 in main () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/main.cpp:62
#9 0x00002aaaabd19555 in __libc_start_main () from /lib64/libc.so.6
#10 0x000000000044bf7f in _start ()

Even if I skip the minimization and the NVT and NPT runs and go straight to an NVE run, I still get this, with more or less the same final output.
Setting up Verlet run …
Unit style : real
Current step : 0
Time step : 0.01

(gdb) where
#0 ucs_callbackq_dispatch (cbq=0x1) at /cm/shared/apps/openmpi/intel/ucx/contrib/…/src/ucs/datastruct/callbackq.h:210
#1 uct_worker_progress (worker=) at /cm/shared/apps/openmpi/intel/ucx/contrib/…/src/uct/api/uct.h:1917
#2 ucp_worker_progress (worker=0xd99f90) at /cm/shared/apps/openmpi/intel/ucx/contrib/…/src/ucp/core/ucp_worker.c:1746
#3 0x00002aaabd677c94 in mca_pml_ucx_progress () from /cm/shared/apps/openmpi/intel/4.0.2/lib/openmpi/mca_pml_ucx.so
#4 0x00002aaaac3c3124 in opal_progress () from /cm/shared/apps/openmpi/intel/4.0.2/lib/libopen-pal.so.40
#5 0x00002aaaaad2503f in ompi_request_default_wait () from /cm/shared/apps/openmpi/intel/4.0.2/lib/libmpi.so.40
#6 0x00002aaaaad69891 in PMPI_Wait () from /cm/shared/apps/openmpi/intel/4.0.2/lib/libmpi.so.40
#7 0x00000000005aa4c2 in LAMMPS_NS::CommBrick::reverse_comm (this=0xe4d570) at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/comm_brick.cpp:541
#8 0x000000000053f0b4 in LAMMPS_NS::Verlet::setup(int) () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/verlet.cpp:151
#9 0x00000000005033f0 in LAMMPS_NS::Run::command(int, char**) () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/run.cpp:177
#10 0x0000000000454f86 in LAMMPS_NS::Input::command_creator<LAMMPS_NS::Run> (lmp=, narg=1, arg=0xe7c100) at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:871
#11 0x00000000004531f6 in LAMMPS_NS::Input::execute_command() () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:857
#12 0x00000000004538eb in LAMMPS_NS::Input::file() () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/input.cpp:230
#13 0x000000000044bf26 in main () at /data/home/mpeiris1/SOFTWARE/LAMMPS/Projects/lammps5/lammps/src/main.cpp:62
#14 0x00002aaaabd19555 in __libc_start_main () from /lib64/libc.so.6
#15 0x000000000044bf7f in _start ()

Can someone point out an obvious cause for this? Thank you.

Regards,
Chathuranga

INPUT.lmp (5.33 KB)

B.data (27.3 KB)

Unfortunately, the ReaxFF force field file is missing, so it is not possible to test your input deck.

I suggest trying this input without OpenMP and without fix reax/c/species (or any other diagnostic computes and fixes).
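A stripped-down test along these lines might look as follows. This is only a sketch: the pair_coeff element list, the force-field file name, and the qeq settings are placeholders that must match your own data file.

```
# minimal test: plain reax/c (no /omp suffix), no diagnostic
# computes or fixes such as fix reax/c/species
units           real
pair_style      reax/c NULL
# element order after the file name must match the atom types
# in your data file (the list below is hypothetical)
pair_coeff      * * ffield_Islam C H O Li S
# charge equilibration is still required with reax/c
fix             qeq all qeq/reax 1 0.0 10.0 1.0e-6 reax/c
minimize        1.0e-4 1.0e-6 100 1000
```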

axel.

Prof. Axel,

Thank you for the suggestions. I removed all the diagnostic computes and fixes and reran the script (as attached). Now I see that the pressure comes out as NaN, which means I cannot proceed past that point. I tried different minimization algorithms (cg, hftn, and fire) and the line-search styles quadratic and backtrack. While some ran for more steps than cg, they all still produced NaN pressures at every step. I also tried a looser minimization tolerance, and I now get the first step as:

Neighbor list info …
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 12
ghost atom cutoff = 12
binsize = 6, bins = 17 17 17
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair reax/c/omp, perpetual
attributes: half, newton off, ghost, omp
pair build: half/bin/newtoff/ghost/omp
stencil: half/ghost/bin/3d/newtoff
bin: standard
(2) fix qeq/reax/omp, perpetual, copy from (1)
attributes: half, newton off, ghost
pair build: copy
stencil: none
bin: none
Setting up cg style minimization …
Unit style : real
Current step : 0
WARNING: Energy due to 1 extra global DOFs will be included in minimizer energies
Per MPI rank memory allocation (min/avg/max) = 2.628 | 6.583 | 20.66 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 0 -48599.261 0 -48599.261 -nan 1000000

I also tried varying the initial geometry by perturbing the initial positions of the electrolyte molecules (the system had been relaxed by DFT with VASP), but the NaN pressure persisted. Is there anything else I could try to alleviate the issue?

Thank you!

Regards,
Chathuranga

output.text (2.1 KB)

ffield_Islam (36 KB)

1.data (28.3 KB)

INPUT1.lmp (4.39 KB)

You didn’t mention the following “BIG FAT WARNING” that showed up in your output:

WARNING: VdWaals-parameters for element LI indicate inner wall+shielding, but earlier atoms indicate different vdWaals-method. This may cause division-by-zero errors. Keeping vdWaals-setting for earlier atoms. (src/USER-REAXC/reaxc_ffield.cpp:259)

This is a serious concern and would certainly explain the bad forces and thus the bad pressure. "nan" happens when you divide by zero.
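As a quick aside (nothing LAMMPS-specific): once a single NaN enters the per-atom forces, it poisons every quantity derived from them, including the summed virial/pressure. A minimal Python illustration with a hypothetical list of per-atom force components:

```python
import math

# A per-atom force that came out of a division by zero.
# (In C/Fortran, 0.0/0.0 silently evaluates to NaN; Python's float
# division raises instead, so we construct the NaN directly.)
bad_force = float("nan")

# Summing contributions that contain one NaN yields a NaN total,
# which is why the reported pressure is NaN for every step.
forces = [1.2, -0.7, bad_force, 3.4]
total = sum(forces)

print(math.isnan(total))        # True: one NaN makes the sum NaN
print(bad_force == bad_force)   # False: NaN compares unequal to itself
```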
Where/how did you get the force field file?

Axel.

Prof. Axel,

Absolutely. I received this force field from Prof. Adri van Duin when I asked him about ReaxFF parameter sets suitable for my type of work; it was published in Islam, M., Bryantsev, V.S. and van Duin, A.C.T. (2014) "ReaxFF Reactive Force Field Simulations on the Influence of Teflon on Electrolyte Decomposition During Li/SWCNT Anode Discharge in Lithium-Sulfur Batteries," Journal of the Electrochemical Society 161, E3009-E3014. I had used this force field in other work involving the same electrolyte with a pure Li cathode and was able to run simulations despite the warnings, which led me to believe they were inconsequential. This certainly looks to be the issue now.

Is there something I could attempt on my side before I contact Prof. Adri?
Thank you.

Regards,
Chathuranga

I cannot comment on the details of the parameters or why they may have worked in one case but not another. I simply don't have the knowledge of ReaxFF's inner details.
However, please see the attached dump file, created before minimization with a simple "run 0" followed by write_dump.
In that file a substantial number of atoms have NaN forces. Upon visualizing and selecting the atoms with NaN forces, it appears that those atoms form the solid layer or sit in close proximity to it, so it could be that one of those atoms has inconsistent parameters and triggers the failure.
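For reference, that diagnostic amounts to an input fragment like the following (a sketch; the dump file name matches the attachment, everything else is generic):

```
# evaluate forces once without any time integration, then write the
# per-atom forces so atoms with NaN values can be found visually
run             0
write_dump      all custom init.lammpstrj id type x y z fx fy fz
```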

Please check carefully against the referenced publication which elements the force field file has been parameterized for. The file may contain entries for elements that are not usable but were simply left over from a different parameterization.

Axel.

init.lammpstrj.gz (10.1 KB)