DPD for crosslinked polymer networks?

Hello all.

I’ve been using LAMMPS to run rheological simulations of crosslinked polymer networks, both passive and motor-activated (through fix bond/react). Currently, I’ve developed a bead-spring polymer model with Langevin dynamics, applying displacements through fix move to perform my numerical rheological experiments.
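For reference, a stripped-down sketch of the kind of input I mean (all coefficients, the group name, and the wiggle parameters below are placeholders, not my actual values):

```
# bead-spring network with Langevin dynamics (placeholder values)
units           lj
special_bonds   fene
bond_style      fene
bond_coeff      1 30.0 1.5 1.0 1.0
pair_style      lj/cut 1.122462
pair_coeff      1 1 1.0 1.0

# "plate" is a hypothetical boundary group; integrate only the rest
group           mobile subtract all plate
fix             1 mobile nve
fix             2 mobile langevin 1.0 1.0 1.0 48279
# oscillatory displacement of the boundary group for the rheology runs
fix             3 plate move wiggle 0.1 0.0 0.0 100.0 units box
```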

This is in the service of gathering data to develop a macroscale model of polymer networks as a viscoelastic material. However, I’m having trouble reaching longer timescales: the system is too stiff to take sufficiently large timesteps, and the run command is limited by the size of a 32-bit integer. I’ve been trying to find appropriate mesoscale methods in LAMMPS to build a more coarse-grained model of my system so I can reach those longer timescales, and I’m wondering if DPD is appropriate.

Papers using DPD tend to model dense fluid phases of complex materials like lipid bilayers and micelles. Can it also be used in practice as an “upscaled” version of Brownian/Langevin dynamics? Can I use DPD to model bead-spring polymers where each link/spring represents several of my original springs in my Langevin model? Can I still treat the background solvent as a continuum interacting with the DPD beads through the DPD drag and random terms?

I’m sorry to be asking so many questions, but these considerations about physical validity are a little opaque to me. I’m not a physicist by training; the focus of my research is the process of building data-driven models, not statistical mechanics and molecular dynamics. However, I need to be sure that I can use MD tools to gather data from physically valid simulations. I’ve been haphazardly teaching myself statistical mechanics and MD as I need it (and probably not all that effectively).

Hopefully the LAMMPS user base can help me here. Thanks in advance!

If the timestep limit imposed by the bonds is the restriction you are talking about, it can be addressed by using run_style respa. That way you run the (cheap) bond/angle/dihedral calculations more frequently than the (expensive) pairwise and/or kspace calculations. Depending on the specifics of the model (i.e. the softness of the pairwise interactions), you may increase your efficiency in simulation time per CPU time by up to an order of magnitude. If the systems are sufficiently large and the pair styles support it, using GPUs may be an option for further speedup.
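A minimal sketch of a two-level rRESPA setup (the timestep, loop factor, and level assignments below are placeholders to be tuned for your model):

```
# two-level rRESPA: bonded terms on the inner loop, pair forces on the outer
timestep        0.008                  # outer timestep; inner is 0.008/4
run_style       respa 2 4 bond 1 angle 1 dihedral 1 pair 2
```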

The 32-bit limit applies to the individual run command. You can easily extend the simulation by using a loop and thus issuing multiple run commands that continue the simulation transparently. The timestep counter and the arguments to the start/stop keywords are 64-bit integers.
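For example (the per-run length and loop count are arbitrary; each individual run just has to stay below 2^31 - 1 steps):

```
# one long trajectory issued as multiple consecutive run commands
variable        i loop 100
label           runloop
run             1000000000
next            i
jump            SELF runloop
```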

Axel.