[lammps-users] running job on only one processor

Good day, dear LAMMPS users!
Thank you for your helpful answers.

My system administrator tells me that I am running the program on only one processor, although in my PBS script I wrote:
#PBS -l walltime=10:00:00,nodes=8:ppn=4,pvmem=550mb
#PBS -M
#PBS -m abe
#PBS -N job_name
#!/bin/sh
mpirun ./lmp_linux < in.Ne > log
But according to his notes:

Job ID        Username  Queue  Jobname  SessID  NDS  TSK  Memory  Time   S  Time
------------  --------  -----  -------  ------  ---  ---  ------  -----  -  -----
58713.master  tmog      workq  App0061  22125   8    1    --      10000  R  00:00
    node129+node129+node129+node129+node131+node131+node131+node131+node130
    +node130+node130+node130+node128+node128+node128+node128+node127+node127
    +node127+node127+node126+node126+node126+node126+node125+node125+node125
$ cexec :123-131 uptime
************************* oscar_cluster *************************
--------- node123---------
23:56:08 up 44 days, 11:39, 0 users, load average: 0.00, 0.00, 0.00
--------- node124---------
23:56:08 up 12 days, 7:26, 0 users, load average: 0.00, 0.00, 0.00
--------- node125---------
23:56:08 up 52 days, 12:06, 0 users, load average: 0.00, 0.00, 0.00
--------- node126---------
23:56:08 up 52 days, 12:05, 0 users, load average: 0.00, 0.00, 0.00
--------- node127---------
23:56:08 up 52 days, 12:05, 0 users, load average: 0.00, 0.00, 0.00
--------- node128---------
23:56:08 up 52 days, 12:05, 0 users, load average: 0.00, 0.00, 0.00
--------- node129---------
23:56:08 up 52 days, 11:55, 0 users, load average: 1.00, 1.00, 1.00
--------- node130---------
23:56:08 up 52 days, 11:58, 0 users, load average: 0.00, 0.00, 0.00
--------- node131---------
23:56:08 up 52 days, 12:01, 0 users, load average: 0.00, 0.00, 0.00

Here is the modified Makefile.linux I used to compile the program:

# linux = RedHat Linux box, Intel icc, Intel ifort, MPICH2, FFTW

SHELL = /bin/sh

# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler

CC =		mpicxx
CCFLAGS =	-O
DEPFLAGS =	-M
LINK =		mpicxx
LINKFLAGS =	-O
LIB =		-lstdc++
ARCHIVE =	ar
ARFLAGS =	-rc
SIZE =		size

# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use

# LAMMPS ifdef options, see doc/Section_start.html

LMP_INC =

# MPI library, can be src/STUBS dummy lib
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library

MPI_INC =	-DMPICH_MPICXX
MPI_PATH =
MPI_LIB =	-lmpich -lpthread

# FFT library, can be -DFFT_NONE if not using PPPM from KSPACE package
# INC = -DFFT_FFTW, -DFFT_INTEL, -DFFT_NONE, etc, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library

FFT_INC =	-DFFT_FFTW -I/home/tmog/fftw2/include
FFT_PATH =	-L/home/tmog/fftw2/lib
FFT_LIB =	-lfftw
#-lfftw

# additional system libraries needed by LAMMPS package libraries
# these settings are IGNORED if the corresponding LAMMPS package
# (e.g. gpu, meam) is NOT included in the LAMMPS build
# SYSLIB = names of libraries
# SYSPATH = paths of libraries

gpu_SYSLIB =		-lcudart
meam_SYSLIB =		-lifcore -lsvml -lompstub -limf
reax_SYSLIB =		-lifcore -lsvml -lompstub -limf
user-atc_SYSLIB =	-lblas -llapack

gpu_SYSPATH =		-L/usr/local/cuda/lib64
meam_SYSPATH =		-L/opt/intel/fce/10.1.015/lib
reax_SYSPATH =		-L/opt/intel/fce/10.1.015/lib
user-atc_SYSPATH =

# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section

include Makefile.package

EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(PKG_SYSLIB)

# Link target

$(EXE):	$(OBJ)
	$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
	$(SIZE) $(EXE)

# Library target

lib:	$(OBJ)
	$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)
include $(DEPENDS)
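
I build it from the LAMMPS src directory in the usual way (the path below is only an example of where the source tree might live):

cd /home/tmog/lammps/src    # example path to the LAMMPS source tree
make linux                  # uses MAKE/Makefile.linux shown above and produces ./lmp_linux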

Thank you for your attention!

Apparently, the system administrator did not tell you how to specify how many processors your job should actually use. The request nodes=8:ppn=4 only says how many nodes and cores you have "reserved" for the job; the MPI processes do not automatically spread across all of them. That is what you are seeing here: 32 cores are reserved, but only one is doing any work (note TSK = 1 in the queue listing, and only node129 shows a load average of 1.00). You need to change the last line to something like mpirun -machine vapi ./lmp_linux < in.Ne > log, and in particular tell mpirun how many processes to start and on which nodes. The exact flags depend on the MPI installation on your system; your system administrator should be able to give you the correct invocation.
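
For example, with a plain MPICH-style mpirun the script could look roughly like this (a sketch only; $PBS_NODEFILE is the standard PBS list of assigned cores, but the exact mpirun options on your cluster may be different, so please check with your administrator):

#!/bin/sh
#PBS -l walltime=10:00:00,nodes=8:ppn=4,pvmem=550mb
#PBS -m abe
#PBS -N job_name

cd $PBS_O_WORKDIR                # directory the job was submitted from
NPROCS=`wc -l < $PBS_NODEFILE`   # number of cores PBS assigned, here 8*4 = 32
mpirun -np $NPROCS -machinefile $PBS_NODEFILE ./lmp_linux < in.Ne > log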

Cheers,

Ajing
