Problem in Parallel version of LAMMPS

Hi all,

I installed the parallel version of LAMMPS on a cluster, but it seems to be running in serial. Below is part of the output.

-------------------------------------------------
Lattice spacing in x,y,z = 5.431 5.431 5.431
Created orthogonal box = (0 0 0) to (21.724 21.724 21.724)
  1 by 1 by 1 MPI processor grid
Created 512 atoms
Setting up run ...
------------------------------------------------
All the processors seem to be active, but the speed is even slower than the same simulation on my personal laptop.

How can this problem be fixed?

you are not running in parallel. the output indicates that you are using
only one processor.

axel.
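
For reference, a quick way to confirm whether a LAMMPS run is actually parallel is to launch it with an explicit rank count and look at the processor-grid line near the top of the output. A minimal sketch, using the binary and input names that appear later in this thread:

# request 8 MPI ranks explicitly; lmp_g++ and in.lj are the names used in this thread
mpirun -np 8 ./lmp_g++ -in in.lj

# a run that is really parallel decomposes the simulation box across the ranks
# and reports something like "2 by 2 by 2 MPI processor grid";
# "1 by 1 by 1 MPI processor grid" means a single rank is doing all the work.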

How are you running the code (which command are you using)? Hopefully you don't expect your system to read your mind and decide how many procs are needed for the run.

Carlos

Hi all,

Below is the script I am using:

#!/bin/bash
#PBS -N H-silicon
#PBS -l nodes=1:ppn=12
#PBS -l walltime=272:00:00
#PBS -m bea

cd /home

LM="/home//lmp/lammps-1Aug13/bench/lmp_g++"

#mpirun -machinefile $PBS_NODEFILE -np 1 ${LM} < in.lj > lj2.out

mpirun -machinefile $PBS_NODEFILE -np 1 ${LM} < in.Ar > Ar.out

mpirun -machinefile $PBS_NODEFILE -np 12 ${LM} < in.test1 > test1.out
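
One thing worth verifying with a script like this is that the scheduler actually granted the requested slots and that the job picks up the intended mpirun. A minimal sketch of checks that could be added to the job script, assuming a standard PBS/Torque environment:

# nodes=1:ppn=12 should give 12 entries in the node file
wc -l < $PBS_NODEFILE

# show which mpirun the batch environment resolves to
which mpirun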

Dear Dr. Alex

I am still having the same problem running the parallel version.

I requested multiple processors, but it is running on only one processor.

mpirun -np 8 lmp_g++ -var x 2 -var y 2 -var z 2 < in.lj

and part of the result is the same as before.

that means that you either compiled a serial executable, or that your
mpirun program does not match the MPI library that you used to compile
LAMMPS. neither is a LAMMPS issue, but a user/machine issue. best to find a
local person with some MPI experience. near impossible to solve from remote.

axel.
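
A few commands can help distinguish the two cases described above (a serial executable vs. an mpirun that does not match the MPI library used to build LAMMPS). The binary name lmp_g++ is the one from this thread; exact library names vary by machine:

# which mpirun is first in the PATH, and which MPI distribution it comes from
which mpirun
mpirun --version

# a dynamically linked parallel build normally shows an MPI library here;
# no MPI-related line usually means the executable was built serial
# (i.e. against the bundled MPI STUBS library)
ldd ./lmp_g++ | grep -i mpi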