[lammps-users] Large virtual memory use

Hi,

I'm running LAMMPS on an SGI Altix system, using 24 CPUs. The strange thing is that LAMMPS is using
an enormous amount of virtual memory (134 GB!) and only a very small amount of real memory (27 MB):

PID USER      VIRT  RES  %CPU    TIME+  COMMAND
11261 dacj1984  134g  27m  100    1311:39 lmp_altix

This seems strange for an MD simulation of 1050 particles using a MEAM potential.

Any idea as to what might be going on?

Maurice

hi maurice,

please check the MPI manpage. i remember that at least for some
versions the MPI library was allocating the maximum combined
address space of all nodes for each MPI task (i.e. the total
virtual address space grows with the number of nodes you use).

i saw this before with CPMD a long time ago, so the resolution
might be in the CPMD mailing list archive somewhere. if i remember
correctly, the solution was to set a specific environment variable
and then everything was fine.

cheers,
   axel.
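
for what it's worth, a minimal sketch of what such a workaround could
look like with SGI MPT. the variable name MPI_MEMMAP_OFF and the input
file name in.meam are assumptions here, so check the mpi(1) manpage on
your system before relying on this:

  # disable MPT's cross-mapping of the other ranks' address spaces
  # (assumed variable name -- verify against the mpi(1) manpage)
  export MPI_MEMMAP_OFF=1
  # then launch as usual, e.g.
  mpirun -np 24 ./lmp_altix < in.meam

this only changes how the library maps memory internally; it should not
change the results of the run, though it may affect communication
performance.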

This is not uncommon if you're using MPT (the Altix MPI library). We see similar things all the time (not with LAMMPS, but with other codes).

You don't need to worry about this, and you _DO_ want to be using MPT as your MPI library on the Altix.

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
[email protected]...
(734)936-1985

The amount of memory LAMMPS thinks it is using per processor
is printed to the screen when a run begins. If this is wildly different
from the numbers you quote (which it probably is for 1000 atoms),
then it's not LAMMPS. Maybe your MPI?
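
For reference, that per-processor figure can be pulled back out of the
log with something like the following (assuming the default log file
name log.lammps; the number shown is purely illustrative):

  grep "Memory usage per processor" log.lammps
  # Memory usage per processor = 2.1 Mbytes

Comparing a number on that scale against the 134 GB VIRT figure makes
it clear the virtual memory is coming from outside LAMMPS itself.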

Steve

Thanks Axel,

Indeed there seems to be an issue with SGI's MPI; see e.g.

