I am running NVT simulations of SPC-water vapor at different densities.
I pack 1000 water molecules into cubic cells of varying size. For a 10 × 10 × 10 nm³ box I get 20 ns/day, but for a 40 × 40 × 40 nm³ box the performance drops to 1.8 ns/day, although the particle number is the same in both cases.
Why is the performance degrading?
Larger simulation boxes increase the cost of long-range electrostatics and neighbor list searches.
Is there a way to adjust the neighbor list construction or the electrostatics settings to maintain performance?
When I compare the timing breakdowns, the difference is a higher percentage of time spent in KSpace for the larger cell.
You can experiment with the various options of the neigh_modify command, but this is risky.
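For reference, this is the sort of tuning meant; a minimal sketch with illustrative values (the defaults are usually sensible already, which is part of why changing them is risky):

```
# Illustrative neigh_modify settings -- example values, not recommendations
neigh_modify    delay 0 every 1 check yes   # rebuild the list only when an atom
                                            # has moved more than half the skin
```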
As you have noted, the extra cost does not come from the pairwise real-space part but from the KSpace part.
PPPM, for example, scales with the size of the FFT grid: at a fixed accuracy the grid spacing stays roughly constant, so the number of grid points grows with the box volume, and the 3D FFTs scale as O(N log N) in the total number of grid points N.
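To make the magnitude concrete, here is a small back-of-the-envelope sketch. It assumes a fixed grid spacing of 0.1 nm (an illustrative value; the actual PPPM grid depends on your accuracy setting and cutoff) and compares the relative FFT cost of the two boxes:

```python
import math

def fft3d_cost(box_length_nm, grid_spacing_nm=0.1):
    """Relative cost of one 3D FFT: O(N log N) in the total grid points N.

    Assumes a fixed grid spacing, which is roughly what PPPM gives you
    at a fixed accuracy and cutoff (0.1 nm here is an illustrative value).
    """
    n_per_dim = round(box_length_nm / grid_spacing_nm)
    n_total = n_per_dim ** 3
    return n_total * math.log2(n_total)

small = fft3d_cost(10.0)   # 100^3 = 1e6 grid points
large = fft3d_cost(40.0)   # 400^3 = 6.4e7 grid points, 64x more
print(f"FFT cost ratio (40 nm vs 10 nm box): {large / small:.0f}x")
```

So even with the same 1000 molecules, the 64-fold increase in grid points makes the KSpace work roughly 80 times more expensive under this assumption, which is consistent with the shift you see in the timing breakdown.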
The only knob you have to improve performance is to change (only!) the Coulomb cutoff. A larger cutoff increases the cost of the Pair part of the calculation and lowers the cost of the KSpace part, while the total accuracy of the forces remains the same.
If you lower the density, however, the number of pairs in the neighbor list becomes smaller, and thus the Pair part of the calculation becomes faster; by increasing the cutoff you can compensate for that. There is an optimal cutoff value that gives the best performance, and it will depend on the system size and the number of MPI ranks.
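In a LAMMPS input this tradeoff lives in two lines; a minimal sketch, assuming a long-range LJ/Coulomb pair style with PPPM (the cutoff and accuracy values are illustrative):

```
# Raise only the Coulomb cutoff; the LJ cutoff stays put.
pair_style      lj/cut/coul/long 10.0 20.0   # LJ cutoff, Coulomb cutoff
kspace_style    pppm 1.0e-4                  # same accuracy; PPPM then picks a coarser grid
```

With a larger real-space cutoff at the same requested accuracy, PPPM can use a coarser grid, so the FFTs get cheaper while the Pair part gets more expensive.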
Thanks for the explanation, this makes sense.
Changing the Coulomb cutoff from 11 to 20 improved the performance from 1.8 to 10 ns/day.
I guess now I have to find the highest cutoff that still gives reasonable results. Any hints are appreciated.