Hi everyone,
I am simulating some systems containing various amorphous polymers, and I immediately get “out of range atoms - cannot compute PPPM” errors with both “minimize” and “fix nve/limit” when running in parallel. Both work without parallelization, although “fix nve/limit” works better. However, I would like to take advantage of parallelization, since my systems are quite large (>240k atoms).
While I know why this is happening (extremely bad initial geometry), due to the very nature of my systems I cannot possibly fix the geometry by hand in any reasonable amount of time.
Do you know why this happens only if I use parallelization? Please excuse me if this has already been addressed in another thread or in the docs, but I have not been able to find this information on my own.
I attach the output of “lmp_mpi -h”.
Best,
Paolo
PS
I normally use “mpirun”; my machine can handle at most 6 parallel tasks. After roughly 10k steps of “fix nve/limit” I am usually able to restart with parallelization.
I define the system using the “region”, “create_box”, and “molecule” commands, followed by “create_atoms” with the “random” option.
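For reference, the relevant part of the setup looks roughly like this (everything below is a simplified placeholder with made-up names and numbers, not my actual input):

```
# simplified placeholder, not the actual input
units           real
atom_style      full                 # assumed: bonds + charges, since PPPM is used

region          simbox block 0 200 0 200 0 200
create_box      4 simbox bond/types 3 angle/types 5 dihedral/types 6 &
                extra/bond/per/atom 4 extra/angle/per/atom 8 &
                extra/dihedral/per/atom 12 extra/special/per/atom 20

molecule        chain polymer.mol    # placeholder molecule template file
create_atoms    0 random 300 87287 simbox mol chain 52394

# force-field and pair_coeff settings omitted

fix             relax all nve/limit 0.1    # serial pre-equilibration
run             10000
```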
lmp.h (10.4 KB)
There are two main approaches I have used to set up an initial configuration of amorphous polymers. The first is using lj/cut/soft, which slowly “turns on” the interactions with a soft core while maintaining the desired polymer conformations. The second, more complicated method is to simulate a single chain in a smaller box, then replicate that box and push the chains together with fix deform or npt.
Generally I prefer the first method, though I still wouldn’t start with the desired final density. You need to give the polymers some room to uncross themselves.
Additionally, I wouldn’t use PPPM when finding the initial configuration. At best, it’s just wasted computation, as the pairwise potentials dominate.
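A rough sketch of the first route, assuming the FEP package (which provides lj/cut/soft) is installed; the epsilon/sigma values, cutoff, lambda schedule, and run lengths are placeholders, and long-range electrostatics are deliberately left out at this stage:

```
# soft-core packing stage; all numbers are placeholders
pair_style      lj/cut/soft 2.0 0.5 12.0        # n, alpha_LJ, cutoff
pair_coeff      * * 0.1 3.5 0.2                 # epsilon sigma lambda (start weak)

timestep        1.0
fix             relax all nve

# ramp lambda toward 1 in stages (fix adapt/fep can do this continuously instead)
variable        i loop 8
variable        lam equal 0.2+0.1*v_i
label           ramp
pair_coeff      * * 0.1 3.5 ${lam}
run             5000
next            i
jump            SELF ramp
```

Once lambda reaches 1, switch back to the full pair style (and PPPM, if you need it) and continue equilibrating.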
Thank you very much, Michael! This is very useful.
I had already thought of the replication strategy but it didn’t entirely fit my needs at the time.
By the way, I did not specify this, but I am already starting at a density well below the target. The geometry is still bad because some of these polymers are extremely long, and the probability of overlapping atoms is significant even with large boxes. That is why the approach with “fix nve/limit” works.
But do you (or anyone who is reading) know why this command works only with 1 task and not with MPI?
Unfortunately I don’t know the cause. It may be that nve/limit combines badly with the long-range electrostatics, so I suggest trying it in parallel without PPPM to narrow things down. With lj/cut/soft, though, you can simply use nve.
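For the test, something along these lines should take PPPM out of the picture (this assumes your production setup uses a coul/long pair style; the cutoffs and the nve/limit cap are placeholders):

```
# hypothetical production setup:
#   pair_style    lj/cut/coul/long 12.0
#   kspace_style  pppm 1.0e-4

# same run without long-range electrostatics, for the parallel test
pair_style      lj/cut/coul/cut 12.0 12.0
kspace_style    none

fix             relax all nve/limit 0.1
run             10000
```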
You are creating huge forces when your polymer atoms are in close contact. Those lead to large velocities, and those in turn to large displacements. The PPPM algorithm is much more sensitive to excessive atom displacement than simple pairwise potentials, and running in parallel creates more subdomain boundaries across which atoms must be communicated, which increases the probability that some atoms are displaced too far.
As @Michael_Jacobs already mentioned, using a cutoff-only pair style for the initial equilibration is the smart approach here. It is much faster, and the difference in accuracy does not matter at this stage. In fact, if your system is this problematic, starting with a soft pair style and using fix adapt to gradually “turn on” the pairwise interactions is the way to go; see examples/micelle for a minimal demonstration. Minimization and fix nve/limit alone are not sufficient in complex cases to properly remove the overlaps between molecules.
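A condensed version of that idea (the values below are illustrative and in reduced LJ units, in the spirit of examples/micelle; they would need adjusting for a real-units force field):

```
# push overlapping atoms apart with a bounded soft repulsion
pair_style      soft 1.12246
pair_coeff      * * 0.0 1.12246              # prefactor A starts at zero

# ramp the prefactor up linearly over the course of the run
variable        prefactor equal ramp(0,30)
fix             push all adapt 1 pair soft a * * v_prefactor

fix             integrate all nve
run             10000

# afterwards: unfix, switch to the real pair style (and PPPM), and equilibrate
```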
@Michael_Jacobs @akohlmey
Thank you both very much for sharing your expertise, and for doing it so promptly!
Best wishes,
Paolo