uncharged system error

Using kspace_style pppm, I’m getting an odd error whenever the largest charge in the system is less than approximately 1e-5.

WARNING: System is not charge neutral, net charge = 0.000562708 (…/kspace.cpp:304)
ERROR: Must use kspace_modify gewald for uncharged system (…/pppm.cpp:1002)

The warning comes from the absolute value of the sum of all charges in the system exceeding the tolerance SMALL.
The error comes from a zero value for the sum of the squares of all charges in the system.
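
Paraphrasing the two checks as I understand them (the SMALL tolerance, the placement, and the message formatting below are my assumptions, not the verbatim source):

#include <cmath>
#include <cstdio>

// assumed tolerance; the real constant is defined in the LAMMPS source
static const double SMALL = 0.00001;

// sketch of the two checks: warn on a non-neutral net charge,
// error out when the sum of squared charges is exactly zero
void check_charges(double qsum, double qsqsum) {
  if (fabs(qsum) > SMALL)
    printf("WARNING: System is not charge neutral, net charge = %g\n", qsum);
  if (qsqsum == 0.0)
    printf("ERROR: Must use kspace_modify gewald for uncharged system\n");
}

int main() {
  check_charges(0.000562708, 0.0);  // the values my run apparently produced
  return 0;
}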

For some reason:

for (int i = 0; i < nlocal; i++) {
  qsum_local += q[i];
  qsqsum_local += q[i]*q[i];
}

MPI_Allreduce(&qsum_local,&qsum,1,MPI_DOUBLE,MPI_SUM,world);
MPI_Allreduce(&qsqsum_local,&qsqsum,1,MPI_DOUBLE,MPI_SUM,world);

is evaluating qsum to a relatively large number (~100 times the charge contribution of the largest charges in the system), while qsqsum is becoming zero. I don’t see why this should be a roundoff error: qsqsum_local and qsqsum are both doubles, which can represent values down to ~1e-300, and even in single precision they could handle ~1e-40, while the largest q[i]*q[i] should be ~1e-10.
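
To put numbers on that, here is a minimal standalone check (not LAMMPS code; the atom count and charge values are just representative assumptions) showing that accumulating q[i]*q[i] at these magnitudes stays far above the underflow limit of a double:

#include <cstdio>

int main() {
  // representative assumptions: ~4000 small charges of 1e-8 plus one large charge of 5e-6
  const int nsmall = 4000;
  const double qsmall = 1.0e-8, qlarge = 5.0e-6;

  double qsum = qlarge, qsqsum = qlarge * qlarge;
  for (int i = 0; i < nsmall; i++) {
    qsum += qsmall;
    qsqsum += qsmall * qsmall;
  }

  // prints qsum ~ 4.5e-05 and qsqsum ~ 2.5e-11, both comfortably representable
  printf("qsum = %g  qsqsum = %g\n", qsum, qsqsum);
  return 0;
}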

Thanks,

Please provide a minimal working example that demonstrates this behavior so it is easier to discuss.

Thanks,
axel.

Here’s an example script. I found another curious behavior while playing with it: if the variable lCharge is above ~1e-5, the script works, as I mentioned before. In addition, if sCharge is above ~1e-7, it also works, even with a low lCharge.

What’s strange is that changing the size of lCharge has very little effect on the total charge, while changing sCharge has no effect on the size of the largest charge. That rules out both a rounding error in the individual squared-charge calculations and a roundoff error in the total of the squared charges (which I had already ruled out on programmatic grounds, but this confirms it).
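
For scale, a rough worked example (the ~4000-atom cloud count is my estimate of what the creation region yields on this lattice): qsum ≈ 1×5e-6 + 4000×1e-8 ≈ 4.5e-5, so the net charge is dominated by the many small cloud charges rather than by the single seed charge.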

variable lCharge equal 5e-6
variable sCharge equal 1e-8

atom_style hybrid sphere charge
boundary p p p
newton off
atom_modify sort 1000 10
comm_modify mode multi vel yes

units si
processors 2 2 2
lattice sc 2

region reg block -100 100 -100 100 -100 100 units box
create_box 2 reg
neighbor 0.25 bin
neigh_modify delay 0

pair_style hybrid/overlay granular coul/long 3
#pair_style granular
pair_coeff * * granular dmt 1e6 0.1 .3 0.00286 tangential linear_nohistory 1 0.4 rolling sds 10 0.4 0.8 damping tsuji
pair_coeff * * coul/long

region core sphere 0 0 0 5.5 side out
region cloud sphere 0 0 0 20 side in
region creation intersect 2 core cloud

create_atoms 1 single 0 0 0
set type 1 diameter 5 density 3000
create_atoms 2 region creation
set type 2 diameter 0.5 density 3000
group cloudG type 2
group seedG type 1

dielectric 1
kspace_style pppm 50
kspace_modify mesh 100 100 100

set group seedG charge ${lCharge}
set group cloudG charge ${sCharge}

run_style respa 2 150 kspace 2 pair 1
fix int all nve/sphere
timestep .5

run 100

I have no problem running this script with the latest LAMMPS stable release (7Aug2019), compiled with either Clang or GCC using a CMake build, on a Fedora 29 x86_64 machine.
What is your LAMMPS version, platform, compilers, and settings?
Axel.

P.S.: While it does not affect this particular case, I hope you are aware that a PPPM convergence setting of 50 will result in very inaccurate Coulomb forces. A value between 1.0e-4 and 1.0e-6 is more typical.
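
For example (picking 1.0e-5 arbitrarily from that range), the kspace_style line would become:

kspace_style pppm 1.0e-5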