Ewald and "address not mapped" error

Dear all,

I'm experiencing a segmentation fault with an "address not mapped" error message using the latest stable version of LAMMPS. As I understand it, this means the program tries to access memory that is not allocated. I suspect the Ewald summation is the culprit, since the crash started when I significantly increased the size of my system in one direction. I've run similar simulations with more atoms and they worked fine, so it seems related to the geometry rather than to the number of atoms. The error goes away when I loosen the Ewald accuracy from 1.0e-6 to 1.0e-5.
My guess is that somewhere in the Ewald setup a vector exceeds some size limit, or something along those lines. Has anyone experienced something like this? Am I looking in the right direction?
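For reference, the only thing I change to make the crash go away is the accuracy argument of the kspace command; the pair style and cutoff below are placeholders, not my actual settings:

  pair_style   lj/cut/coul/long 10.0   # real-space part of the Coulomb interaction (placeholder cutoff)
  kspace_style ewald 1.0e-6            # segfaults; with 1.0e-5 the same run completes
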
PS: Sorry for not attaching a simplified example script that reproduces the error; I'll make one if necessary.

Regards.

If you post a simple, small-problem input script that exhibits the problem, someone will look into it.

Steve

> Dear all,
>
> I'm experiencing a segmentation fault with an "address not mapped"
> error message using the latest stable version of LAMMPS. As I
> understand it, this means the program tries to access memory that is
> not allocated. I suspect the Ewald summation is the culprit, since
> the crash started when I significantly increased the size of my
> system in one direction. I've run similar simulations with more atoms
> and they worked fine, so it seems related to the geometry rather than
> to the number of atoms. The error goes away when I loosen the Ewald
> accuracy from 1.0e-6 to 1.0e-5.
> My guess is that somewhere in the Ewald setup a vector exceeds some
> size limit, or something along those lines. Has anyone experienced
> something like this? Am I looking in the right direction?

actually, i would also be worried at this point about accuracy and
artifacts. 3d ewald summation can be problematic with boxes whose
shape differs significantly from a cube. another thing that comes to
mind is the density: the default estimator formula for the ewald
parameters assumes a homogeneous distribution of particles and a
"regular" density. if your system strays from that, then, contrary to
your assertion above, having fewer atoms in the same volume can be
more problematic.
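
fwiw, if you suspect the automatic estimate is off for a box like
yours, you can also set the ewald damping parameter by hand via
kspace_modify and compare against the automatic choice; just a sketch,
the value below is a placeholder and not a recommendation:

  kspace_style  ewald 1.0e-6
  kspace_modify gewald 0.30   # override the auto-estimated ewald G parameter (1/distance units); placeholder value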

> PS: Sorry for not attaching a simplified example script that
> reproduces the error; I'll make one if necessary.

it is necessary. there is too much detail missing about your simulation setup.

axel.

Axel,

Thank you for replying. In the meantime I noticed that my real-space cutoff for Coulomb was way too small; as a result, kspace time was around 90% of the total time. I found the optimal value and ran the simulation again without any problem. So I assume too much work was being demanded of the Ewald sum when the seg fault happened. Should there be some kind of warning for these situations?
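
For completeness, the change on my side was only the Coulomb cutoff in the pair style; the numbers below are illustrative rather than my actual values:

  # before: very short Coulomb cutoff, so ~90% of the run time went into kspace
  pair_style lj/cut/coul/long 10.0 4.0
  # after: longer real-space Coulomb cutoff rebalances pair vs. kspace work
  pair_style lj/cut/coul/long 10.0 10.0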

> actually, i would also be worried at this point about accuracy and artifacts. 3d ewald summation can be problematic with boxes whose shape differs significantly from a cube.

Yes, my system is on a hexagonal lattice and the box is significantly longer in the z direction. Does pppm behave better than 3d ewald in cases like this?

> another thing that comes to mind is the density: the default estimator formula for the ewald parameters assumes a homogeneous distribution of particles and a "regular" density. if your system strays from that, then, contrary to your assertion above, having fewer atoms in the same volume can be more problematic.

Since the system is a crystal at low temperature, the density is more or less homogeneous, so that shouldn't be a problem. But I will keep this caveat in mind when working with fluids.

Thanks again for the advice.

> Axel,
>
> Thank you for replying. In the meantime I noticed that my real-space
> cutoff for Coulomb was way too small; as a result, kspace time was
> around 90% of the total time. I found the optimal value and ran the
> simulation again without any problem. So I assume too much work was
> being demanded of the Ewald sum when the seg fault happened.

this reasoning doesn't make sense. a code is either correct or
incorrect for a given set of (correct) parameters; whether it is
efficient or inefficient is a different matter. now, if the
implementation of a method chooses not to support some extreme
situations, then it should have safeguards. but then again, that is
only meaningful if it has no negative impact on regular, meaningful
inputs.

> Should there be some kind of warning for these situations?

how should we even think about implementing such a warning when none
of us has yet been able to reproduce the problem?
in general, LAMMPS is very flexible, which has the consequence that
users have to know what they are doing; what can be detected as
problematic is therefore limited. very few people use the ewald kspace
style, since in most cases it is not very efficient compared to pppm.
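
if you want to try it, switching solvers is a one-line change in the
input; a sketch, assuming you keep your existing coul/long pair style
and pick whatever accuracy you actually need:

  kspace_style pppm 1.0e-6   # particle-particle particle-mesh instead of conventional ewald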

>> actually, i would also be worried at this point about accuracy and
>> artifacts. 3d ewald summation can be problematic with boxes whose
>> shape differs significantly from a cube.

> Yes, my system is on a hexagonal lattice and the box is significantly
> longer in the z direction. Does pppm behave better than 3d ewald in
> cases like this?

no. the basic principle is the same. out of the options supported by
LAMMPS, msm is the only one that would not have a problem, since it
uses entirely real-space terms. but then again, it is difficult to get
msm to compute forces at high accuracy, so it is hard to say which of
the two is the lesser evil.
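
a sketch of what that would look like; note that msm needs a coul/msm
pair style, and the cutoff and accuracy values here are placeholders:

  pair_style   lj/cut/coul/msm 10.0   # msm requires a coul/msm pair style variant
  kspace_style msm 1.0e-4             # msm typically runs at looser accuracy than ewald/pppm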

axel.

> So I assume too much work was being demanded of the Ewald sum when
> the seg fault happened. Should there be some kind of warning for
> these situations?

Again, if you post a simple script that triggers the problem,
we will look at it. It could be a bug, but if we can’t reproduce
it, we can’t fix it.

Steve