High memory requirement

Hi,
I am running a LAMMPS job with ~400,000 atoms.

Surprisingly, this job is not running; it fails with the error: Failed to allocate $N bytes for array pair:setflag (memory.cpp)

I have seen that this particular job needs ~385 GB of memory per processor!

The input geometry is a SiC crystal rotated about the [110] axis.

I have run jobs with more atoms (~4 million) on our supercomputer without such a problem.

I could not find any obvious mistake in the input file.

Is there anyone who has faced similar problems and possibly solved them?

Thanks,

Each pair style has its own memory requirement, and you are probably
using one that demands too much. However, 385 GB for 400,000 atoms
does not sound right. The Interatomic potential comparisons section
of the benchmark page compares the memory usage of several pair
styles.
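
As a quick check, a throwaway input along the following lines lets you swap in the suspect pair style and compare the memory line LAMMPS prints at setup (the box size and LJ parameters below are arbitrary placeholders, not from the original input):

  # throwaway input to compare pair-style memory footprints
  # (box size and LJ parameters are arbitrary placeholders)
  units        metal
  lattice      fcc 4.05
  region       box block 0 20 0 20 0 20
  create_box   1 box
  create_atoms 1 box
  mass         1 27.0

  pair_style   lj/cut 8.5        # swap in the pair style you suspect
  pair_coeff   * * 0.40 2.62

  run          0                 # setup only; compare the memory line in the log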

Ray

> Hi,
> I am running a LAMMPS job with ~400,000 atoms.
>
> Surprisingly, this job is not running; it fails with the error: Failed to
> allocate $N bytes for array pair:setflag (memory.cpp)
>
> I have seen that this particular job needs ~385 GB of memory per processor!
>
> The input geometry is a SiC crystal rotated about the [110] axis.
>
> I have run jobs with more atoms (~4 million) on our supercomputer without
> such a problem.

it is not only the number of atoms by itself that determines
the memory requirements of a simulation. the choice of pair
style, cutoff, fixes, and computes can also require
significant storage.
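
for instance (all numbers below are placeholders, not from your input): neighbor-list storage grows roughly with the cube of the cutoff, so doubling the cutoff costs about eight times the neighbor memory. two setup-only runs make that visible:

  # same box, two cutoffs: compare neighbor count and memory at setup
  units        lj
  lattice      fcc 0.8442
  region       box block 0 20 0 20 0 20
  create_box   1 box
  create_atoms 1 box
  mass         1 1.0

  pair_style   lj/cut 2.5        # re-run with e.g. 5.0 and compare
  pair_coeff   * * 1.0 1.0

  run          0                 # the log reports total neighbors and memory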

> I could not find any obvious mistake in the input file.

how about the non-obvious (to you) ones?

> Is there anyone who has faced similar problems and possibly solved them?

the way to a solution is a very simple one,
and it is the same as for many other problems:

make a copy of your input and strip it down
as much as you can, so that you have only
the absolute minimum. if it still consumes
the large amount of RAM and you can't find
an explanation, then post it to the list again.

otherwise, put back the parts of the input
that you removed one by one and check the
memory consumption at each step. that should
help you identify the cause of the large
memory consumption; then you can think again
about whether it is to be expected, and if you
cannot find an explanation, get back to the list.
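
a sketch of what the stripped-down input might look like (the data file name and the pair_coeff element order below are assumptions, not taken from your input):

  # hypothetical bare-minimum input: geometry and pair style only,
  # every fix, compute, and dump removed
  units      metal
  boundary   p p p
  read_data  data.SiC              # hypothetical data file name
  pair_style tersoff               # assuming a tersoff description of SiC
  pair_coeff * * SiC.tersoff Si C  # element order must match your atom types

  run 0    # setup alone already shows the reported memory allocation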

simple enough, isn't it?

axel.

In addition to Axel's suggestions, I would run a small enough
problem that it does fit in memory and see what LAMMPS tells
you about how much memory it is using. E.g. does a 200K or 100K
or 50K atom problem run? The memory cost
should scale linearly with the number of atoms, so
you should be able to figure out what is taking so much memory,
e.g. neighbor lists, since the neighbor count will also
be printed when you make a successful run.
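
One way to do such a sweep without editing the input each time is to start from a reduced data file and replicate it up. A sketch, assuming a hypothetical reduced data file and a Tersoff description of SiC (the executable name varies by build):

  # run as: lmp -in in.memtest -var rep 1   (then 2, 3, ...)
  variable  rep index 1            # replication factor per dimension

  units     metal
  read_data data.SiC.small         # hypothetical reduced data file
  replicate ${rep} ${rep} ${rep}

  pair_style tersoff
  pair_coeff * * SiC.tersoff Si C  # element order assumed

  run 0    # compare memory and neighbor-count lines across sizes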

Steve

Hi,

Thank you everyone for the replies.

Finally, I was able to pin down the “mistake”.

There was one “obvious” silly mistake in the input configuration file.

The mistake was the following: