Runtime errors in the LAMMPS GPU bench test

Dear Alex,

Yes, I can run the parallel simulation successfully without the gpu package. The previous problem was solved after I re-configured /etc/ld.so.conf by adding /usr/local/cuda/lib64.

However, when I tried to test a simple input file (attached), the following error occurred:

wuchao@…4131…:~/software/LAMMPS_GPU_Install/Drivers/lammps-24Apr13/bench/GPU> mpirun -np 8 ./lmp_g++ -sf gpu -c off -v g 1 -echo screen < in.test

LAMMPS (24 Apr 2013)
package gpu force/neigh 0 0 1

3d Lennard-Jones melt

newton off

units metal
boundary p p p

atom_style atomic

package gpu force/neigh 0 0 1

lattice fcc 3.615
Lattice spacing in x,y,z = 3.615 3.615 3.615
region box block 0 10 0 10 0 10
create_box 1 box
Created orthogonal box = (0 0 0) to (36.15 36.15 36.15)
2 by 2 by 2 MPI processor grid

pair_style eam/gpu
pair_coeff 1 1 Cu_u3.eam

create_atoms 1 box
Created 4000 atoms

velocity all create 1.44 87287 loop geom

neighbor 2.0 bin
neigh_modify delay 0 every 20 check no

fix 1 all nve

run 100

in.test (394 Bytes)

Dear Alex,

> Yes, I can run the parallel simulation successfully without gpu package. The
> previous problem was solved after I re-configured the /etc/ld.so.conf by
> adding the /usr/local/cuda/lib64.

why not use LD_LIBRARY_PATH?

> However, when I tried to test a simple input file (attached), the following
> error occurred:
>
> [email protected]...:~/software/LAMMPS_GPU_Install/Drivers/lammps-24Apr13/bench/GPU>
> mpirun -np 8 ./lmp_g++ -sf gpu -c off -v g 1 -echo screen < in.test

it works for me.

axel.

Yes, I have also exported LD_LIBRARY_PATH in /etc/profile, as export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64, but it didn't take effect until I added the path to /etc/ld.so.conf.

I am glad to know that the simple input file works for you, but I am still confused about why it doesn't work for me. Do you have any clue about my problem? Thanks in advance!

Junjie

> Yes, I have also already exported the LD_LIBRARY_PATH in the /etc/profile,
> as export
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64, but it didn't work
> until I specified it in the /etc/ld.so.conf.

just editing /etc/profile doesn't automatically change your current session; it is only read by new login shells.
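[Editor's note: a minimal shell sketch of the two approaches discussed above, assuming /usr/local/cuda/lib64 is the CUDA library path as in this thread; the exact location may differ on other systems.]

```shell
# Editing /etc/profile only affects *new* login shells. For the current
# session, export the variable directly (or re-source the profile):
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/usr/local/cuda/lib64"

# System-wide alternative (requires root): register the path with the
# dynamic loader cache instead of the environment:
#   echo /usr/local/cuda/lib64 >> /etc/ld.so.conf
#   ldconfig

# Either way, the executable should then resolve the CUDA runtime:
#   ldd ./lmp_g++ | grep libcudart
```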

> I am glad to know that the simple input file works for you, but still
> confused why it doesn't work for me. Is there any clue for my problem?

no clue. it must be something in your machine setup or your hardware.

axel.

Hi axel,

I re-configured the machine again, this time using MPICH2-1.2.1p1 and FFTW 2.1.5. There seems to be some progress, as the segmentation fault has disappeared, but the simulation still fails with the new error message below:

wuchao@…4131…:~/software/LAMMPS_GPU_Install/Drivers/lammps-24Apr13/bench/GPU> mpirun -np 8 ./lmp_g++ -sf gpu -c off -v g 1 < in.test

LAMMPS (24 Apr 2013)
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (36.15 36.15 36.15)
2 by 2 by 2 MPI processor grid
Created 4000 atoms