[lammps-users] Optimising code to improve for polymer system simulation...

Hello,

I have problems with the speed of my simulations of a polymer system with some ions and want to improve it.

The basis for my simulation is as follows:

All ions are charged and the overall charge is 0. The simulation runs in principle on my CPU-based computers, and I tried to improve its speed with the following steps. I used the Dreiding potential with the following input and output:

Dreiding potential information

neighbor 0.4 bin
units real
neigh_modify delay 5 every 1
atom_style full
bond_style harmonic
angle_style cosine/squared
dihedral_style opls
pair_style lj/cut/coul/long 10.0
pair_modify mix arithmetic
kspace_style pppm 1e-4
improper_style harmonic
read_data ${fname}
run_style respa 3 4 2 bond 1 pair 2 kspace 3
special_bonds lj/coul 0.0 0.0 0.0

fix 3 all nve/limit 0.05
run 150
min_style cg
minimize 1.0e-10 1.0e-10 10000000 10000
timestep 0.1
run 2000000

And the following MPI task timing breakdown:

Section | min time | avg time | max time |%varavg| %total

Michael,

You seem to be doing things in the wrong order and are disregarding some fundamental issues.

You are running with run_style respa and fix nve/limit. Start your experiments with run_style verlet and fix nve or fix nvt, and also use the most conservative neighbor list settings, i.e. neigh_modify delay 0 every 1 check no.
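For example, a minimal baseline for such experiments could look like the lines below (just a sketch; the fix ID and run length are placeholders to adapt), replacing the respa and nve/limit commands in your input:

run_style verlet
neigh_modify delay 0 every 1 check no
fix 1 all nve
run 10000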

If you cannot get a stable and energy conserving trajectory with fix nve, it is not worth looking into any other options, since then the problem is with your model.

For a system in real units with hydrogen as the lightest atom, you should be able to get very good energy conservation with fix nve and a timestep of 0.25 to 0.5 fs. Since you are reporting difficulties with r-RESPA multi-timestepping, your effective timestep is even shorter, and if you need fix nve/limit on top of that, then something very bad is happening: either your initial geometry is very bad, or your force field settings are bogus, or both. That strongly hints that this is predominantly an issue that has to be solved on the physics/model side first. It is completely pointless to discuss how to optimize the performance of a calculation that is simulating a bogus model.
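A simple way to check this (again only a sketch; the run length and output frequency are arbitrary) is to monitor the total energy during a plain fix nve run with the baseline settings above; with a sound model the etotal column should stay essentially flat:

timestep 0.5
thermo 100
thermo_style custom step temp pe ke etotal
run 20000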

Some of your questions also exhibit a lack of expertise in basic molecular dynamics methodology. Now I know for a fact that there are several research groups at your institution that do have that expertise and people who could advise you and help you put together meaningful MD simulations, and thus avoid some of these fundamental issues. They can also answer some of your questions about basic MD issues that go beyond what is appropriate for this mailing list. So my suggestion is to reach out to those groups and ask for assistance in putting together a meaningful and physically sound simulation input, so that you get a data file that is well equilibrated and suitable for production simulations; then we can discuss how to make it run more efficiently with LAMMPS in parallel.

Finally, some answers to your questions:

  1. if your temperature is rising with fix nve, then one of two scenarios is most likely: your initial geometry has very high potential energy, or your force field parameters are bad/bogus. in the first case, doing a minimization can help, but it may only lead to a metastable state, so you may need to do multiple minimize commands with a short run in between (see the first sketch after this list). however, constructing a more physical initial geometry is usually the better approach.

  2. what is worrying about your timing logs is the high percentage of “Comm”. that suggests that you may be using too many MPI processes for the number of atoms in your system. if using coul/wolf is not faster than coul/long with otherwise identical settings, something else is very wrong. however, this is impossible to discuss without seeing the exact input decks.

  3. fix shake does not create any speedup by itself. it can make simulations more efficient by allowing the use of a larger timestep, e.g. 1 fs to 2 fs for typical molecular systems in real units (see the second sketch after this list). but if you cannot get your basic system with basic settings and regular verlet to run at a timestep larger than 0.1 fs, then you have to sort out why that is the case first.

  4. with cutoff based pair styles you have to choose the length of the cutoff based on the physics of your problem. only with a long-range solver can you choose the cutoff of the coulomb interaction so as to balance the computational work between real space and kspace. typically the required cutoff is given with the force field parameters. there are numerous publications discussing the impact of cutoff coulomb methods on the correctness of results and which approximation/damping method has the least negative impact on correctness for which systems. it is far beyond the scope of this mailing list to discuss this here. again, consult with your local experts.

  5. I don’t understand what you are asking here. what does it mean to “use several clusters at the same time”? and how is this supposed to improve the efficiency of your simulations?
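as a sketch for point 1 (the tolerances, step counts, and fix ID are only placeholders you would need to adapt), alternating minimizations with short relaxation runs could look like this:

min_style cg
minimize 1.0e-6 1.0e-8 10000 100000
fix relax all nve/limit 0.1
run 1000
unfix relax
minimize 1.0e-6 1.0e-8 10000 100000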
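as a sketch for point 3 (the SHAKE tolerance, the bond and angle type numbers, and the fix IDs are only placeholders for whatever corresponds to bonds and angles involving hydrogen in your data file), using fix shake to enable a larger timestep could look like this:

fix rigidH all shake 0.0001 20 0 b 1 a 1
fix 1 all nve
timestep 2.0
run 100000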

axel.

BTW: while writing this response, something seemed familiar, and after searching through my e-mail archive I found that I had already made very similar suggestions about a very similar looking input file on this list in February.

this is becoming quite irritating: why are you asking for advice when you are not following it? at the very least you should provide a justification for disregarding the advice and sticking with choices in your input that have been noted as problematic.