I am trying to run LAMMPS simulations of melts with long polymer chains, and I have hit a strange problem that is reproducible on multiple platforms. When I run certain long-chain geometries (say 100K or 1M monomers per chain), the simulation never makes it past the “Setting up run …” stage; it keeps calling the grow() function until it eventually runs out of memory. Here is the call stack that leads to the failure:
LAMMPS_NS::Memory::grow () at …/memory.h:159
LAMMPS_NS::AtomVecMolecular::grow () at …/atom_vec_molecular.cpp:108
LAMMPS_NS::AtomVecMolecular::unpack_border_vel () at …/atom_vec_molecular.cpp:554
LAMMPS_NS::CommBrick::borders () at …/comm_brick.cpp:830
LAMMPS_NS::Verlet::setup () at …/verlet.cpp:107
LAMMPS_NS::Run::command () at …/run.cpp:170
LAMMPS_NS::Input::command_creator<LAMMPS_NS::Run> () at …/input.cpp:631
LAMMPS_NS::Input::execute_command () at …/input.cpp:614
LAMMPS_NS::Input::file () at …/input.cpp:225
in main () at …/main.cpp:31
I tried to figure out the problem myself, but that code is hard to follow. On a single process, I can reproduce the issue with the attached def.chain2 file (typing “./script.sh” should run everything). Note that the def.chain1 file, which contains one fewer atom, works fine. I’ve verified this on my home desktop, my university cluster, and Hopper at NERSC.
Can you please help me figure out what the problem is? Ideally, I’d like to run something like one 100K-monomer chain per compute node, which should be well below the 32K atoms per process that was achieved here:
PS - Also note that I changed line 24 of chain.f. Is that a bug, or is it intended to prevent creating too large a bead-chain system?
long_chain_issue.tgz (3.06 KB)