LAMMPS cannot run in parallel

Dear all:

I have compiled LAMMPS on our university cluster; however, I notice that it cannot run in parallel, even when I specify a command such as "mpirun -np 4 lmp_hotfoot < in.liquid".
In the log file, it always says the processor grid is 1 by 1 by 1.

BTW, this is the first time that I have compiled and run it on our university's platform.

I would appreciate it very much if someone could give me some suggestions.

Best

Lingqi Yang

There are two possible explanations:
- you compiled with the LAMMPS-provided MPI stub library rather than with the system-provided MPI, or
- you compiled against a different MPI library than the one you are using to run the executable.

You'd best talk to a local expert who knows your cluster well to get this sorted out.
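A quick way to distinguish these two cases is to inspect the binary itself (a sketch; the binary name comes from the command line above, and the exact library names depend on the MPI installation):

    # List the shared libraries the LAMMPS binary was linked against;
    # if it was built with the STUBS dummy library, no MPI entries appear.
    ldd ./lmp_hotfoot | grep -i mpi

    # Check which mpirun is first in your PATH; it must belong to the
    # same MPI installation whose headers/libs were used at compile time.
    which mpirun

If the `ldd` output points to a different MPI installation than the one `which mpirun` reports, you have the second problem.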

axel.

Thank you so much for your reply. I am contacting a local expert now, but in the meantime, would you help me take a look at my Makefile? I want to make sure that I did everything correctly.

Regarding MPICH: I installed mpich2-1.4 in a local folder, since our university cluster does not have MPICH installed. During the installation and configuration of mpich2, I used the following commands:
$ ./configure --prefix=/hpc/astro/users/ly2282/local/mpich2
$ make
$ make install

I did not install an FFT library.

Thank you so much!

Lingqi Yang

# hotfoot = RedHat Linux box, g++4, MPICH2, FFTW

SHELL = /bin/sh

# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler

CC = mpiCC
CCFLAGS = -O2 -fPIC
SHFLAGS = -fPIC
DEPFLAGS = -M

LINK = mpiCC
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size

ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared

# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"

# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)

LMP_INC = -DLAMMPS_GZIP

# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library

MPI_INC = -DMPICH_SKIP_MPICXX -I/hpc/astro/users/ly2282/local/mpich2/include
MPI_PATH = -L/hpc/astro/users/ly2282/local/mpich2/lib
MPI_LIB = -lmpich -lfmpich -lmpl -lpthread

# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library

FFT_INC = -DFFT_NONE
FFT_PATH =
FFT_LIB =

# JPEG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG listed with LMP_INC
# INC = path for jpeglib.h
# PATH = path for JPEG library
# LIB = name of JPEG library

JPG_INC =
JPG_PATH =
JPG_LIB =

# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section

include Makefile.package.settings
include Makefile.package

EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)

# Path to src files

vpath %.cpp ..
vpath %.h ..

# Link target

$(EXE):	$(OBJ)
	$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
	$(SIZE) $(EXE)

# Library targets

lib:	$(OBJ)
	$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)

shlib:	$(OBJ)
	$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
		$(OBJ) $(EXTRA_LIB) $(LIB)

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)

sinclude $(DEPENDS)

Makefile.hotfoot (2.67 KB)

Hello,

I'm not a cluster admin, but a couple of comments.

07.07.2013, 02:17, "Lingqi Yang" <[email protected]>:

Regarding the mpich, I installed mpich2-1.4 in a local folder since our
university cluster does not have mpich installed.

Are you sure? Very unlikely. If you only did what you describe in your message, you don't even have your local MPICH binaries in your PATH, yet you are compiling with mpiCC. Most probably you do have some MPI implementation installed: you built LAMMPS against your local files, but you are calling the pre-installed mpiexec (Dr. Kohlmeyer's second suggestion). What can be done:

Try calling mpiexec without arguments, or "mpicc -v". That may give you a clue whether you have Open MPI or MPICH. Then call ompi_info (in the case of Open MPI) or mpichversion (in the case of MPICH). It will show you the path to the folder where MPI is installed on your system. Go there, look for the headers and libraries, and point to them in the makefile.
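The steps above can be sketched as follows (which of these commands exists depends on the MPI flavor actually installed on the cluster):

    mpicc -v          # often reveals the underlying compiler and MPI flavor
    ompi_info | head  # Open MPI only: prints the version and install prefix
    mpichversion      # MPICH only: prints the version and configure options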

An even simpler way is to just leave the MPI-related lines in the makefile blank:

MPI_INC =
MPI_PATH =
MPI_LIB =

If you compile with mpicc, it should already know where to look for all necessary components.
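You can confirm this: most MPI compiler wrappers will print the full compile/link line they would pass to the underlying compiler (a sketch; the exact flag differs between implementations):

    mpiCC -show      # MPICH: prints the real compiler command with -I/-L/-l flags
    mpiCC --showme   # Open MPI's equivalent

The printed line shows exactly which include and library paths the wrapper supplies, so the explicit MPI_INC/MPI_PATH/MPI_LIB settings are redundant.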

Regards,
Oleg.

Dear Sergeev:

Thank you so much! I followed your suggestion, and it works!

It turns out our university cluster does have Open MPI installed, although it is not listed among the available software on the website. I installed mpich2 myself and built LAMMPS against the library in my local folder. However, when I submitted the job via mpirun, it called the system-installed MPI, and that is why LAMMPS could not run in parallel.

I really appreciate it!

Best

Lingqi